id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
529,476 | https://en.wikipedia.org/wiki/Sirolimus | Sirolimus, also known as rapamycin and sold under the brand name Rapamune among others, is a macrolide compound that is used to coat coronary stents, prevent organ transplant rejection, treat a rare lung disease called lymphangioleiomyomatosis, and treat perivascular epithelioid cell tumour (PEComa). It has immunosuppressant functions in humans and is especially useful in preventing the rejection of kidney transplants. It is a mammalian target of rapamycin (mTOR) kinase inhibitor that reduces the sensitivity of T cells and B cells to interleukin-2 (IL-2), inhibiting their activity.
This compound also has a use in cardiovascular drug-eluting stent technologies to inhibit restenosis.
It is produced by the bacterium Streptomyces hygroscopicus and was isolated for the first time in 1972, from samples of Streptomyces hygroscopicus found on Easter Island. The compound was originally named rapamycin after the native name of the island, Rapa Nui. Sirolimus was initially developed as an antifungal agent. However, this use was abandoned when it was discovered to have potent immunosuppressive and antiproliferative properties due to its ability to inhibit mTOR. It was approved by the U.S. Food and Drug Administration (FDA) in 1999. Hyftor (sirolimus gel) was approved for topical treatment of facial angiofibroma in the European Union in May 2023.
Medical uses
Sirolimus is indicated for the prevention of organ transplant rejection and for the treatment of lymphangioleiomyomatosis (LAM).
Sirolimus (Fyarro), as protein-bound particles, is indicated for the treatment of adults with locally advanced unresectable or metastatic malignant perivascular epithelioid cell tumour (PEComa).
In the EU, sirolimus, as Rapamune, is indicated for the prophylaxis of organ rejection in adults at low to moderate immunological risk receiving a renal transplant and, as Hyftor, is indicated for the treatment of facial angiofibroma associated with tuberous sclerosis complex.
Prevention of transplant rejection
The chief advantage sirolimus has over calcineurin inhibitors is its low toxicity toward kidneys. Transplant patients maintained on calcineurin inhibitors long-term tend to develop impaired kidney function or even kidney failure; this can be avoided by using sirolimus instead. It is particularly advantageous in patients with kidney transplants for hemolytic-uremic syndrome, as this disease is likely to recur in the transplanted kidney if a calcineurin-inhibitor is used. However, on 7 October 2008, the FDA approved safety labeling revisions for sirolimus to warn of the risk for decreased renal function associated with its use. In 2009, the FDA notified healthcare professionals that a clinical trial conducted by Wyeth showed an increased mortality in stable liver transplant patients after switching from a calcineurin inhibitor-based immunosuppressive regimen to sirolimus. A 2019 cohort study of nearly 10,000 lung transplant recipients in the US demonstrated significantly improved long-term survival using sirolimus + tacrolimus instead of mycophenolate mofetil + tacrolimus for immunosuppressive therapy starting at one year after transplant.
Sirolimus can also be used alone, or in conjunction with a calcineurin inhibitor (such as tacrolimus), and/or mycophenolate mofetil, to provide steroid-free immunosuppression regimens. Impaired wound healing and thrombocytopenia are possible side effects of sirolimus; therefore, some transplant centers prefer not to use it immediately after the transplant operation, but instead administer it only after a period of weeks or months. Its optimal role in immunosuppression has not yet been determined, and it remains the subject of a number of ongoing clinical trials.
Lymphangioleiomyomatosis
In May 2015, the FDA approved sirolimus to treat lymphangioleiomyomatosis (LAM), a rare, progressive lung disease that primarily affects women of childbearing age. This made sirolimus the first drug approved to treat this disease. LAM involves lung tissue infiltration with smooth muscle-like cells with mutations of the tuberous sclerosis complex gene (TSC2). Loss of TSC2 gene function activates the mTOR signaling pathway, resulting in the release of lymphangiogenic growth factors. Sirolimus blocks this pathway.
The safety and efficacy of sirolimus treatment of LAM were investigated in clinical trials that compared sirolimus treatment with a placebo group in 89 patients for 12 months. The patients were observed for 12 months after the treatment had ended. The most commonly reported side effects of sirolimus treatment of LAM were mouth and lip ulcers, diarrhea, abdominal pain, nausea, sore throat, acne, chest pain, leg swelling, upper respiratory tract infection, headache, dizziness, muscle pain and elevated cholesterol. Serious side effects including hypersensitivity and swelling (edema) have been observed in renal transplant patients.
While sirolimus was under consideration for the treatment of LAM, it received orphan drug designation because LAM is a rare condition.
The safety of sirolimus treatment of LAM in people younger than 18 years has not been tested.
Coronary stent coating
The antiproliferative effect of sirolimus has also been used in conjunction with coronary stents to prevent restenosis in coronary arteries following balloon angioplasty. The sirolimus is formulated in a polymer coating that affords controlled release through the healing period following coronary intervention. Several large clinical studies have demonstrated lower restenosis rates in patients treated with sirolimus-eluting stents when compared to bare-metal stents, resulting in fewer repeat procedures. A sirolimus-eluting coronary stent was marketed by Cordis, a division of Johnson & Johnson, under the tradename Cypher. However, this kind of stent may also increase the risk of vascular thrombosis.
Vascular malformations
Sirolimus is used to treat vascular malformations. Treatment with sirolimus can decrease pain and the fullness of vascular malformations, improve coagulation levels, and slow the growth of abnormal lymphatic vessels. In recent years, sirolimus has emerged as a medical treatment option for both vascular tumors and vascular malformations: as an inhibitor of the mammalian target of rapamycin (mTOR), a kinase that integrates signals from the PI3K/AKT pathway to coordinate proper cell growth and proliferation, it acts as an antiproliferative agent and can control the tissue overgrowth caused by inappropriate activation of the PI3K/AKT/mTOR pathway, which makes it well suited to "proliferative" vascular tumors.
Angiofibromas
Sirolimus has been used as a topical treatment of angiofibromas with tuberous sclerosis complex (TSC). Facial angiofibromas occur in 80% of patients with TSC, and the condition is very disfiguring. A retrospective review of English-language medical publications reporting on topical sirolimus treatment of facial angiofibromas found sixteen separate studies with positive patient outcomes after using the drug. The reports involved a total of 84 patients, and improvement was observed in 94% of subjects, especially if treatment began during the early stages of the disease. Sirolimus treatment was applied in several different formulations (ointment, gel, solution, and cream), ranging from 0.003 to 1% concentrations. Reported adverse effects included one case of perioral dermatitis, one case of cephalea, and four cases of irritation.
In April 2022, sirolimus was approved by the FDA for treating angiofibromas.
Adverse effects
The most common adverse reactions (≥30% occurrence, leading to a 5% treatment discontinuation rate) observed with sirolimus in clinical studies of organ rejection prophylaxis in individuals with kidney transplants include: peripheral edema, hypercholesterolemia, abdominal pain, headache, nausea, diarrhea, pain, constipation, hypertriglyceridemia, hypertension, increased creatinine, fever, urinary tract infection, anemia, arthralgia, and thrombocytopenia.
The most common adverse reactions (≥20% occurrence, leading to an 11% treatment discontinuation rate) observed with sirolimus in clinical studies for the treatment of lymphangioleiomyomatosis are: peripheral edema, hypercholesterolemia, abdominal pain, headache, nausea, diarrhea, chest pain, stomatitis, nasopharyngitis, acne, upper respiratory tract infection, dizziness, and myalgia.
A range of further adverse effects occurred in 3–20% of individuals taking sirolimus for organ rejection prophylaxis following a kidney transplant.
Diabetes-like symptoms
While sirolimus inhibition of mTORC1 appears to mediate the drug's benefits, it also inhibits mTORC2, which results in diabetes-like symptoms. This includes decreased glucose tolerance and insensitivity to insulin. Sirolimus treatment may additionally increase the risk of type 2 diabetes. In mouse studies, these symptoms can be avoided through the use of alternate dosing regimens or analogs such as everolimus or temsirolimus.
Lung toxicity
Lung toxicity is a serious complication associated with sirolimus therapy, especially in the case of lung transplants. The mechanism of the interstitial pneumonitis caused by sirolimus and other macrolide mTOR inhibitors is unclear and may be unrelated to the mTOR pathway. The interstitial pneumonitis is not dose-dependent, but is more common in patients with underlying lung disease.
Lowered effectiveness of immune system
There have been warnings about the use of sirolimus in transplants, where it may increase mortality due to an increased risk of infections.
Cancer risk
Sirolimus may increase an individual's risk for contracting skin cancers from exposure to sunlight or UV radiation, and risk of developing lymphoma. In studies, the skin cancer risk under sirolimus was lower than under other immunosuppressants such as azathioprine and calcineurin inhibitors, and lower than under placebo.
Impaired wound healing
Individuals taking sirolimus are at increased risk of experiencing impaired or delayed wound healing, particularly if they have a body mass index in excess of 30 kg/m2 (classified as obese).
Interactions
Sirolimus is metabolized by the CYP3A4 enzyme and is a substrate of the P-glycoprotein (P-gp) efflux pump; hence, inhibitors of either protein may increase sirolimus concentrations in blood plasma, whereas inducers of CYP3A4 and P-gp may decrease sirolimus concentrations in blood plasma.
Pharmacology
Pharmacodynamics
Unlike the similarly named tacrolimus, sirolimus is not a calcineurin inhibitor, but it has a similar suppressive effect on the immune system. Sirolimus inhibits IL-2 and other cytokine receptor-dependent signal transduction mechanisms, via action on mTOR, and thereby blocks activation of T and B cells. Ciclosporin and tacrolimus inhibit the secretion of IL-2, by inhibiting calcineurin.
The mode of action of sirolimus is to bind the cytosolic protein FK-binding protein 12 (FKBP12) in a manner similar to tacrolimus. Unlike the tacrolimus-FKBP12 complex, which inhibits calcineurin (PP2B), the sirolimus-FKBP12 complex inhibits the mTOR (mammalian Target Of Rapamycin, rapamycin being another name for sirolimus) pathway by directly binding to mTOR Complex 1 (mTORC1).
mTOR has also been called FRAP (FKBP-rapamycin-associated protein), RAFT (rapamycin and FKBP target), RAPT1, or SEP. The earlier names FRAP and RAFT were coined to reflect the fact that sirolimus must bind FKBP12 first, and only the FKBP12-sirolimus complex can bind mTOR. However, mTOR is now the widely accepted name, since Tor was first discovered via genetic and molecular studies of sirolimus-resistant mutants of Saccharomyces cerevisiae that identified FKBP12, Tor1, and Tor2 as the targets of sirolimus and provided robust support that the FKBP12-sirolimus complex binds to and inhibits Tor1 and Tor2.
Pharmacokinetics
Sirolimus is metabolized by the CYP3A4 enzyme and is a substrate of the P-glycoprotein (P-gp) efflux pump. It has linear pharmacokinetics. In studies with N=6 and N=36 subjects, peak concentration was reached in 1.3 ± 0.5 hours, and terminal elimination was slow, with a half-life of around 60 ± 10 hours. Sirolimus was not found to affect the concentration of ciclosporin, which is also metabolized primarily by the CYP3A4 enzyme.
The bioavailability of sirolimus is low, and the absorption of sirolimus into the bloodstream from the intestine varies widely between patients, with some patients having up to eight times more exposure than others for the same dose. Drug levels are, therefore, taken to make sure patients get the right dose for their condition. This is determined by taking a blood sample before the next dose, which gives the trough level. However, good correlation is noted between trough concentration levels and drug exposure, known as area under the concentration-time curve, for both sirolimus (SRL) and tacrolimus (TAC) (SRL: r2 = 0.83; TAC: r2 = 0.82), so only one level need be taken to know its pharmacokinetic (PK) profile. PK profiles of SRL and of TAC are unaltered by simultaneous administration. Dose-corrected drug exposure of TAC correlates with SRL (r2 = 0.8), so patients have similar bioavailability of both.
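As a rough illustration of what the roughly 60-hour terminal half-life reported above implies for repeated dosing and trough sampling, the sketch below applies simple first-order elimination; the starting concentration and the once-daily interval are made-up illustrative values, not taken from product labeling.

import math

half_life_h = 60.0                       # approximate terminal half-life cited above (hours)
k_el = math.log(2.0) / half_life_h       # first-order elimination rate constant (1/h)

c_peak = 15.0                            # hypothetical post-dose concentration (ng/mL)
tau = 24.0                               # hypothetical once-daily dosing interval (hours)

# Fraction of drug remaining at the trough, C(t) = C0 * exp(-k * t)
fraction_remaining = math.exp(-k_el * tau)
print(f"fraction remaining after {tau:.0f} h: {fraction_remaining:.2f}")   # about 0.76

# With so little eliminated per interval, drug accumulates over repeated doses
accumulation_ratio = 1.0 / (1.0 - fraction_remaining)
print(f"steady-state accumulation ratio: {accumulation_ratio:.1f}")        # about 4.1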
Chemistry
Sirolimus is a natural product and macrocyclic lactone.
Biosynthesis
The biosynthesis of the rapamycin core is accomplished by a type I polyketide synthase (PKS) in conjunction with a nonribosomal peptide synthetase (NRPS). The domains responsible for the biosynthesis of the linear polyketide of rapamycin are organized into three multienzymes, RapA, RapB, and RapC, which contain a total of 14 modules (figure 1). The three multienzymes are organized such that the first four modules of polyketide chain elongation are in RapA, the following six modules for continued elongation are in RapB, and the final four modules to complete the biosynthesis of the linear polyketide are in RapC. Then, the linear polyketide is modified by the NRPS, RapP, which attaches L-pipecolate to the terminal end of the polyketide, and then cyclizes the molecule, yielding the unbound product, prerapamycin.
The core macrocycle, prerapamycin (figure 2), is then modified (figure 3) by an additional five enzymes, which lead to the final product, rapamycin. First, the core macrocycle is modified by RapI, a SAM-dependent O-methyltransferase (MTase), which O-methylates at C39. Next, a carbonyl is installed at C9 by RapJ, a cytochrome P-450 monooxygenase (P-450). Then, RapM, another MTase, O-methylates at C16. Finally, RapN, another P-450, installs a hydroxyl at C27, immediately followed by O-methylation by RapQ, a distinct MTase, at C27 to yield rapamycin.
The biosynthetic genes responsible for rapamycin synthesis have been identified. As expected, three extremely large open reading frames (ORFs) designated as rapA, rapB, and rapC encode three extremely large and complex multienzymes, RapA, RapB, and RapC, respectively. The gene rapL has been established to code for an NAD+-dependent lysine cycloamidase, which converts L-lysine to L-pipecolic acid (figure 4) for incorporation at the end of the polyketide. The gene rapP, which is embedded between the PKS genes and translationally coupled to rapC, encodes an additional enzyme, an NRPS responsible for incorporating L-pipecolic acid, chain termination, and cyclization of prerapamycin. In addition, genes rapI, rapJ, rapM, rapN, rapO, and rapQ have been identified as coding for tailoring enzymes that modify the macrocyclic core to give rapamycin (figure 3). Finally, rapG and rapH have been identified as coding for enzymes that have a positive regulatory role in the preparation of rapamycin through the control of rapamycin PKS gene expression.
Biosynthesis of this 31-membered macrocycle begins as the loading domain is primed with the starter unit, 4,5-dihydroxycyclohex-1-ene-carboxylic acid, which is derived from the shikimate pathway. Note that the cyclohexane ring of the starting unit is reduced during the transfer to module 1. The starting unit is then modified by a series of Claisen condensations with malonyl or methylmalonyl substrates, which are attached to an acyl carrier protein (ACP) and extend the polyketide by two carbons each. After each successive condensation, the growing polyketide is further modified according to the enzymatic domains that are present to reduce and dehydrate it, thereby introducing the diversity of functionalities observed in rapamycin (figure 1). Once the linear polyketide is complete, L-pipecolic acid, which is synthesized from L-lysine by a lysine cycloamidase, is added to the terminal end of the polyketide by an NRPS. Then, the NRPS cyclizes the polyketide, giving prerapamycin, the first enzyme-free product. The macrocyclic core is then customized by a series of post-PKS enzymes through methylations by MTases and oxidations by P-450s to yield rapamycin.
Research
Cancer
The antiproliferative effects of sirolimus may have a role in treating cancer. When dosed appropriately, sirolimus can enhance the immune response to tumor targeting or otherwise promote tumor regression in clinical trials. Sirolimus seems to lower the cancer risk in some transplant patients.
Sirolimus was shown to inhibit the progression of dermal Kaposi's sarcoma in patients with renal transplants. Other mTOR inhibitors, such as temsirolimus (CCI-779) or everolimus (RAD001), are being tested for use in cancers such as glioblastoma multiforme and mantle cell lymphoma. However, these drugs have a higher rate of fatal adverse events in cancer patients than control drugs.
A combination therapy of doxorubicin and sirolimus has been shown to drive Akt-positive lymphomas into remission in mice. Akt signalling promotes cell survival in Akt-positive lymphomas and acts to prevent the cytotoxic effects of chemotherapy drugs, such as doxorubicin or cyclophosphamide. Sirolimus blocks Akt signalling and the cells lose their resistance to the chemotherapy. Bcl-2-positive lymphomas were completely resistant to the therapy; eIF4E-expressing lymphomas are not sensitive to sirolimus.
Tuberous sclerosis complex
Sirolimus also shows promise in treating tuberous sclerosis complex (TSC), a congenital disorder that predisposes those afflicted to benign tumor growth in the brain, heart, kidneys, skin, and other organs. After several studies conclusively linked mTOR inhibitors to remission in TSC tumors, specifically subependymal giant-cell astrocytomas in children and angiomyolipomas in adults, many US doctors began prescribing sirolimus (Wyeth's Rapamune) and everolimus (Novartis's RAD001) to TSC patients off-label. Numerous clinical trials using both rapamycin analogs, involving both children and adults with TSC, are underway in the United States.
Effects on longevity
mTOR, specifically mTORC1, was first shown to be important in aging in 2003, in a study on worms; sirolimus was shown to inhibit and slow aging in worms, yeast, and flies, and then to improve the condition of mouse models of various diseases of aging. Sirolimus was first shown to extend lifespan in wild-type mice in a study published by NIH investigators in 2009; the studies have been replicated in mice of many different genetic backgrounds. A study published in 2020 found late-life sirolimus dosing schedules enhanced mouse lifespan in a sex-specific manner: limited rapamycin exposure enhanced male but not female lifespan, providing evidence for sex differences in sirolimus response. The results are further supported by the finding that genetically modified mice with impaired mTORC1 signalling live longer.
Sirolimus has potential for widespread use as a longevity-promoting drug, with evidence pointing to its ability to prevent age-associated decline of cognitive and physical health. In 2014, researchers at Novartis showed that a related compound, everolimus, increased elderly patients' immune response on an intermittent dose. This led to many in the anti-aging community self-experimenting with the compound. However, because of the different biochemical properties of sirolimus, the dosing is potentially very different from that of everolimus. Ultimately, due to known side-effects of sirolimus, as well as inadequate evidence for optimal dosing, it was concluded in 2016 that more research was required before sirolimus could be widely prescribed for this purpose. Two human studies on the effects of sirolimus (rapamycin) on longevity did not show statistically significant benefits. However, due to limitations in the studies, further research is needed to fully assess its potential in humans.
Sirolimus has complex effects on the immune system—while IL-12 goes up and IL-10 decreases, which suggests an immunostimulatory response, TNF and IL-6 are decreased, which suggests an immunosuppressive response. The duration of the inhibition and the exact extent to which mTORC1 and mTORC2 are inhibited play a role, but were not yet well understood according to a 2015 paper.
Topical administration
Researchers have shown that rapamycin, applied as a topical preparation, can regenerate collagen and reverse clinical signs of aging in elderly patients. The concentrations used are far lower than those used to treat angiofibromas.
SARS-CoV-2
Rapamycin has been proposed as a treatment for infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), insofar as its immunosuppressive effects could prevent or reduce the cytokine storm seen in very serious cases of COVID-19. Moreover, inhibition of cell proliferation by rapamycin could reduce viral replication.
Atherosclerosis
Rapamycin can accelerate degradation of oxidized LDL cholesterol in endothelial cells, thereby lowering the risk of atherosclerosis. Oxidized LDL cholesterol is a major contributor to atherosclerosis.
Lupus
As of 2016, studies in cells, animals, and humans had suggested that mTOR activation is a process underlying systemic lupus erythematosus and that inhibiting mTOR with rapamycin may be a disease-modifying treatment. As of 2016, rapamycin had been tested in small clinical trials in people with lupus.
Lymphatic malformation (LM)
Lymphatic malformation (LM), also known as lymphangioma or cystic hygroma, is an abnormal growth of lymphatic vessels that usually affects children around the head and neck area and more rarely involves the tongue, causing macroglossia. LM is caused by a PIK3CA mutation during lymphangiogenesis early in gestation, leading to the malformation of lymphatic tissue. Treatment often consists of removal of the affected tissue via excision, laser ablation, or sclerotherapy, but the rate of recurrence can be high and surgery can have complications. Sirolimus has shown evidence of being an effective treatment, alleviating symptoms and reducing the size of the malformation by altering the mTOR pathway in lymphangiogenesis. Although this is an off-label use of the drug, sirolimus has been shown to be an effective treatment for both microcystic and macrocystic LM. More research is, however, needed to develop targeted, effective treatment therapies for LM.
Graft-versus-host disease
Due to its immunosuppressant activity, rapamycin has been assessed as a prophylactic or treatment agent for graft-versus-host disease (GVHD), a complication of hematopoietic stem cell transplantation. While clinical trials have yielded mixed results, pre-clinical studies have shown that rapamycin can mitigate GVHD by increasing the proliferation of regulatory T cells, inhibiting cytotoxic T cells, and lowering the differentiation of effector T cells.
Applications in biology research
Rapamycin is used in biology research as an agent for chemically induced dimerization. In this application, rapamycin is added to cells expressing two fusion constructs, one of which contains the rapamycin-binding FRB domain from mTOR and the other of which contains an FKBP domain. Each fusion protein also contains additional domains that are brought into proximity when rapamycin induces binding of FRB and FKBP. In this way, rapamycin can be used to control and study protein localization and interactions.
Veterinary uses
A number of veterinary medicine teaching hospitals are participating in a long-term clinical study examining the effect of rapamycin on the longevity of dogs.
References
Further reading
External links
Anti-aging substances
Immunosuppressants
Lactams
Macrolides
Orphan drugs
Drugs developed by Pfizer
Polyenes
Drugs developed by Wyeth
Ophthalmology drugs
Chemical biology | Sirolimus | [
"Chemistry",
"Biology"
] | 5,796 | [
"Senescence",
"Anti-aging substances",
"Chemical biology",
"nan"
] |
309,304 | https://en.wikipedia.org/wiki/Propellant%20mass%20fraction | In aerospace engineering, the propellant mass fraction is the portion of a vehicle's mass which does not reach the destination, usually used as a measure of the vehicle's performance. In other words, the propellant mass fraction is the ratio between the propellant mass and the initial mass of the vehicle. In a spacecraft, the destination is usually an orbit, while for aircraft it is their landing location. A higher mass fraction represents less weight in a design. Another related measure is the payload fraction, which is the fraction of initial weight that is payload. It can be applied to a vehicle, a stage of a vehicle or to a rocket propulsion system.
Formulation
The propellant mass fraction is given by:
\zeta = \frac{m_p}{m_0} = \frac{m_0 - m_f}{m_0}
where:
\zeta is the propellant mass fraction
m_0 is the initial mass of the vehicle
m_p is the propellant mass
m_f is the final mass of the vehicle
Significance
In rockets for a given target orbit, a rocket's mass fraction is the portion of the rocket's pre-launch mass (fully fueled) that does not reach orbit. The propellant mass fraction is the ratio of just the propellant to the entire mass of the vehicle at takeoff (propellant plus dry mass). In the cases of a single-stage-to-orbit (SSTO) vehicle or suborbital vehicle, the mass fraction equals the propellant mass fraction, which is simply the fuel mass divided by the mass of the full spaceship. Rockets employing staging, which are the only designs to have reached orbit, have a mass fraction higher than their propellant mass fraction because parts of the rocket itself are dropped off en route. Propellant mass fractions are typically around 0.8 to 0.9.
In aircraft, mass fraction is related to range: an aircraft with a higher mass fraction can go farther. Aircraft mass fractions are typically around 0.5.
When applied to a rocket as a whole, a low mass fraction is desirable, since it indicates a greater capability for the rocket to deliver payload to orbit for a given amount of fuel. Conversely, when applied to a single stage, where the propellant mass fraction calculation doesn't include the payload, a higher propellant mass fraction corresponds to a more efficient design, since there is less non-propellant mass. Without the benefit of staging, SSTO designs are typically designed for mass fractions around 0.9. Staging increases the payload fraction, which is one of the reasons SSTOs appear difficult to build.
For example, the complete Space Shuttle system has:
fueled weight at liftoff: 1,708,500 kg
dry weight at liftoff: 342,100 kg
Given these numbers, the propellant mass fraction is (1,708,500 kg - 342,100 kg) / 1,708,500 kg ≈ 0.80.
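A minimal sketch of the same arithmetic, using the Space Shuttle figures quoted above:

def propellant_mass_fraction(m_initial, m_final):
    # Propellant mass divided by initial (fully fueled) mass.
    return (m_initial - m_final) / m_initial

m0 = 1_708_500   # fueled weight at liftoff (kg)
mf = 342_100     # dry weight at liftoff (kg)
print(f"propellant mass fraction: {propellant_mass_fraction(m0, mf):.3f}")   # ~0.800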
The mass fraction plays an important role in the rocket equation:
\Delta v = -v_e \ln\left(\frac{m_f}{m_0}\right)
Where m_f / m_0 is the ratio of final mass to initial mass (i.e., one minus the mass fraction), \Delta v is the change in the vehicle's velocity as a result of the fuel burn and v_e is the effective exhaust velocity (see below).
The term effective exhaust velocity is defined as:
v_e = I_{sp} \cdot g_n
where I_{sp} is the fuel's specific impulse in seconds and g_n is the standard acceleration of gravity (note that this is not the local acceleration of gravity).
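Combining the two relations above gives a quick estimate of the ideal delta-v for a given mass fraction. In the sketch below the specific impulse is an arbitrary placeholder chosen only for illustration, not a value from this article:

import math

G_N = 9.80665   # standard acceleration of gravity (m/s^2)

def delta_v(m_initial, m_final, isp_seconds):
    # Ideal rocket equation: delta-v = v_e * ln(m0 / mf), with v_e = Isp * g_n.
    v_e = isp_seconds * G_N
    return v_e * math.log(m_initial / m_final)

# A vehicle with a 0.80 propellant mass fraction and an assumed Isp of 450 s
print(f"delta-v: {delta_v(1.0, 0.20, 450.0):.0f} m/s")   # roughly 7,100 m/s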
To make a powered landing from orbit on a celestial body without an atmosphere requires the same mass reduction as reaching orbit from its surface, if the speed at which the surface is reached is zero.
See also
Fuel fraction
Mass ratio
References
Astrodynamics
Mass
Single-stage-to-orbit
Rocket propulsion
| Propellant mass fraction | [
"Physics",
"Mathematics",
"Engineering"
] | 712 | [
"Scalar physical quantities",
"Astrodynamics",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Aerospace engineering",
"Wikipedia categories named after physical quantities",
"Matter"
] |
309,536 | https://en.wikipedia.org/wiki/Str%C3%A4hle%20construction | Strähle's construction is a geometric method for determining the lengths for a series of vibrating strings with uniform diameters and tensions to sound pitches in a specific rational tempered musical tuning. It was first published in the 1743 Proceedings of the Royal Swedish Academy of Sciences by Swedish master organ maker Daniel Stråhle (1700–1746). The Academy's secretary Jacob Faggot appended a miscalculated set of pitches to the article, and these figures were reproduced by Friedrich Wilhelm Marpurg in Versuch über die musikalische Temperatur in 1776. Several German textbooks published about 1800 reported that the mistake was first identified by Christlieb Benedikt Funk in 1779, but the construction itself appears to have received little notice until the middle of the twentieth century when tuning theorist J. Murray Barbour presented it as a good method for approximating equal temperament and similar exponentials of small roots, and generalized its underlying mathematical principles.
It has become known as a device for building fretted musical instruments through articles by mathematicians Ian Stewart and Isaac Jacob Schoenberg, and is praised by them as a unique and remarkably elegant solution developed by an unschooled craftsman.
The name "Strähle" used in recent English language works appears to be due to a transcription error in Marpurg's text, where the old-fashioned diacritic raised "e" was substituted for the raised ring.
Background
Daniel P. Stråhle was active as an organ builder in central Sweden in the second quarter of the eighteenth century. He had worked as a journeyman for the important Stockholm organ builder Johan Niclas Cahman and, in 1741, four years after Cahman's death, Stråhle was granted his privilege for organ making. Under the system in force in Sweden at the time, a privilege (a granted monopoly held by only a few of the most established makers of each type of musical instrument) gave him the legal right to build and repair organs, as well as to train and examine workers, and it also served as a guarantee of the quality of the work and education of the maker. An organ by him from 1743 is preserved in its original condition at the chapel at Strömsholm Palace; he is also known to have made clavichords, and a notable example with an unusual string scale and construction signed by him and dated 1738 is owned by the Stockholm Music Museum. His apprentices included his nephew Petter Stråhle and Jonas Gren, partners in the famous Stockholm organ builders Gren & Stråhle, and according to Abraham Abrahamsson Hülphers in his book Historisk Afhandling om Musik och Instrumenter published in 1773, Stråhle himself had studied mechanics (which has been assumed to have included mathematics) with Swedish Academy of Science founding member Christopher Polhem. He died in 1746 at Lövstabruk in northern Uppland.
Stråhle published his construction as a "new invention, to determine the Temperament in tuning, for the pitches of the clavichord and similar instruments" in an article that appeared in the fourth volume of the proceedings of the newly formed Royal Swedish Academy of Sciences, which included articles by prominent scholars and Academy members Polhem, Carl Linnaeus, Carl Fredrik Mennander, Augustin Ehrensvärd, and Samuel Klingenstierna. According to organologist Eva Helenius musical tuning was a subject of intense debate in the Academy during the 1740s, and though Stråhle himself was not a member his was the third article on practical musical topics published by the Academy—the first two were by amateur musical instrument maker, minister, and Academy member Nils Brelin which related inventions applicable to harpsichords and clavichords.
Stråhle wrote in his article that he had developed the method with "some thought and a great number of attempts" for the purpose of creating a gauge for the lengths of the strings in the temperament which he described as that which made the tempering ("sväfningar") mildest for the ear, as well as comprising the most useful and even arrangement of the pitches. His instructions produce an irregular tuning with a range of tempered intervals similar to better known tunings published during the same period, but he provided no further comments or description about the tuning itself; today it is generally considered to be an approximation of equal temperament. He also did not elaborate upon any advantages of his construction, which can produce accurate and repeatable results without calculations or measurement with only a straightedge and dividers; he described the construction in only five steps, and it is less iterative than the arithmetic methods described by Dom Bédos de Celles for determining organ pipe lengths in just intonation or by Vincenzo Galilei for determining string fret positions in approximate equal temperament, and than geometrical methods such as those described by Gioseffo Zarlino and Marin Mersenne—all of which are much better known than Stråhle's. Stråhle concluded by stating that he had applied the system to a clavichord, although the tuning as well as the method of determining a set of sounding lengths can be used for many other musical instruments, but there is little evidence showing whether it was put into more widespread practice other than the two examples described in the article, whose whereabouts today are unknown.
Construction
Stråhle instructed first to draw a line segment QR of a convenient length divided in twelve equal parts, with points labeled I through XIII. QR is then used as the base of an isosceles triangle with sides OQ and OR twice as long as QR, and rays drawn from vertex O through each of the numbered points on the base. Finally a line is drawn from vertex R at an angle through a point P on the opposite leg of the triangle seven units from Q to a point M, located at twice the distance from R as P. The length of MR gives the length of the lowest sounding pitch, and the length of MP the highest of the string lengths generated by the construction, and the sounding lengths between them are determined by the distances from M to the intersections of MR with lines O I through O XII, at points labeled 1 through 12.
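The construction is easy to check numerically. The sketch below is a straightforward coordinate rendering of the instructions above (the coordinate frame and the unit of length are arbitrary choices of mine, not part of the original description); it intersects each ray with the string line and reports the resulting intervals in cents alongside equal temperament.

import math

# Stråhle's instructions in coordinates: QR = 12 units divided into twelve equal parts,
# OQ = OR = 24 units, P on leg OQ with QP = 7, and M on line RP extended so that MR = 2 * RP.
Q = (0.0, 0.0)
R = (12.0, 0.0)
O = (6.0, math.sqrt(24.0 ** 2 - 6.0 ** 2))       # apex of the isosceles triangle
P = (O[0] * 7.0 / 24.0, O[1] * 7.0 / 24.0)       # 7 units from Q along QO
M = (R[0] + 2.0 * (P[0] - R[0]), R[1] + 2.0 * (P[1] - R[1]))

def intersect(p1, p2, p3, p4):
    # Intersection of the infinite lines p1-p2 and p3-p4.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

MR = dist(M, R)
for n in range(13):
    division = (12.0 - n, 0.0)                   # point XIII for n = 0 down to point I for n = 12
    hit = intersect(M, R, O, division)           # ray from O meets the string line MR
    cents = 1200.0 * math.log2(MR / dist(M, hit))
    print(f"semitone {n:2d}: {cents:7.2f} cents (equal temperament: {100 * n})")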
Stråhle wrote that he had named the line PR "Linea Musica", which Helenius noted was a term Polhem had used in an undated but earlier manuscript now located at the Linköping Stifts- och Landsbibliotek and which is accompanied by notes from composer and geometer Harald Vallerius (1646–1716) and Stråhle's former employer J. N. Cahman.
Stråhle also showed line segments parallel to MR through points NHS, LYT, and KZV in order to illustrate how once created the construction could be scaled to accommodate different starting pitches.
Stråhle stated at the conclusion of the article that he had implemented the string scale in the highest three octaves of a clavichord, although it is unclear whether this section would have been strung all with the same gauge wire under equal tension like the monochord which he wrote it resembled, and whose construction he described in more detail. He only described an indirect method of setting its tuning, however, requiring that he first establish reference pitches by transferring the corresponding string lengths to the movable bridges on a keyed thirteen string monochord whose open strings had been previously tuned in unison.
Faggot's numerical representation
The article following Stråhle's was a mathematical treatment of it by Jacob Faggot (1699–1777), then secretary of the Academy of Sciences and future director of the Surveying Office, who in the same volume also contributed articles on a weight measure for lye and methods for calculating the volume of barrels. Faggot was one of the first members of the Academy, and had also been member of a special commission on weights and measures. He apparently was not a musician, though Helenius described he was interested in musical topics from a mathematical perspective and documented that he periodically came in contact with musical instrument makers through the Academy. Helenius also presented a theory that Faggot had a more active, if indirect and posthumous influence on the construction of musical instruments in Sweden, claiming that he may have suggested the long tenor strings used in two experimental instruments built by Johan Broman in 1756 which she proposed influenced the type of clavichord built in Sweden in the late eighteenth and early nineteenth centuries.
In his analysis of Stråhle's article Faggot outlined the trigonometric steps he had used to calculate the sounding lengths of the individual pitches, for the purpose of comparing the new tuning produced by Stråhle's method against a tuning with pure thirds, fourths and fifths (labeled "N. 1" in the table), and equal temperament, which he called only "an older temperament and [which] is introduced in Mr. Mattheson's Critica Musica" ("N. 2"). He intended the resulting set of figures to show "whether the tuning of the pitches, following the previously described invention, satisfies the ear with more pleasant sounds and with better evenness in the musical pitches on a keyboard instrument than the old and hitherto known manners of tuning, of that the understanding will better be able to judge when the eye can see what the ear hears."
Both articles were reproduced in a German edition of the Academy's proceedings published in 1751, and a table of Faggot's calculated string lengths was subsequently included by Marpurg on his 1776 Versuch über die musikalische Temperatur, who wrote that he accepted their accuracy but that rather than accomplishing "Strähle"'s stated goal, the tuning represented an unequal temperament "not even of the tolerable type."
The sounding lengths calculated by Faggot are substantially different from what would be produced according to Stråhle's instructions, a fact which appears to have been first published by Christlieb Benedikt Funk in Dissertatio de Sono et Tono in 1779, and the tuning he created includes intervals tuned outside of the range conventionally used in Western art music. Funk is credited with the observation of this discrepancy in Gehler's Physikalisches Wörterbuch in 1791 and Fischer's Physikalisches Wörterbuch in 1804, and the error was pointed out by Ernst Chladni in Die Akustik in 1830. No similar comments appear to have been published in Sweden during the same period.
These works report Faggot's mistake as the result of having used a value from the tangent instead of the sine column from the logarithmic tables. The error itself consisted of making the angle of RP about seven degrees too great, which caused the effective length of QP to increase to 8.605. This greatly exaggerated the errors of the temperament compared to the tunings he presented alongside it, although it is not clear whether Faggot observed these apparent defects as he made no further comments about Stråhle's construction or temperament in the article.
The tuning
The tuning produced following Stråhle's instructions is a rational temperament with a range of fifths from 696 to 704 cents, which is from about one cent flatter than a meantone fifth to two cents sharp of just 3:2; the range of major thirds is from 396 cents to 404 cents, or ten cents sharp of just 5/4 to three cents flat of Pythagorean 81/64. These intervals fall within what is considered to have been acceptable but there is no distribution of better thirds to more frequently used keys that characterize what are today the most popular of the tunings published in the seventeenth and eighteenth centuries, which are known as well temperaments. The best fifth is pure in the key of F♯—or the pitch given by MB—which has a 398 cent third, and the best third is in the key E, which has a 697 cent fifth; the best combination of the two intervals is in the key of F and the worst combination is in the key of B♭.
Barbour's algebraic representation and geometric construction
J. Murray Barbour brought new attention to Stråhle's construction along with Faggot's treatment of it in the 20th century. Introduced in the context of Marpurg, he included an overview of it alongside the more famous methods of determining string lengths in his 1951 book Tuning and Temperament where he characterized the tuning as an "approximation for equal temperament". He also demonstrated how close Stråhle's construction was to the best approximation the method could provide, which reduces the maximum errors in major thirds and fifths by about half a cent and is accomplished by substituting 7.028 for the length of QP.
Barbour presented a more complete analysis of the construction in "A Geometrical Approximation to the Roots of Numbers" published six years later in American Mathematical Monthly. He reviewed Faggot's error and its consequences, and then derived Stråhle's construction algebraically using similar triangles. This takes the generalized form
Using the values from Stråhle's instructions this becomes
Letting so that leads to a form of the first formula that is more useful for calculation
Barbour then described a generalized construction using the easily obtained mean proportional for the length of MB that avoids most of the specific angles and lengths required in the original. For musical applications it is simpler and its results are slightly more uniform than Stråhle's, and it has the advantage of producing the desired string lengths without additional scaling.
He instructed to first draw the line MR corresponding to the larger of the two numbers with MP the smaller, and to construct their mean proportional at MB. The line that will carry the divisions is drawn from R at any acute angle to MR, and perpendicular to it a line is drawn through B, which intersects the line to be divided at A, and RA is extended to Q such that RA=AQ. A line is drawn from Q through P, intersecting the line through BA at O, and a line drawn from O to R. The construction is completed by dividing QR and drawing rays from O through each of the divisions.
Barbour concluded with a discussion of the pattern and magnitude of the errors produced by the generalized construction when used to approximate exponentials of different roots, stating that his method "is simple and works exceedingly well for small numbers". For roots from 1 to 2 the error is less than 0.13%—about 2 cents when N=2— with maxima around m=0.21 and m=0.79. The error curve appears roughly sinusoidal and for this range of N can be approximated by about 99% by fitting the curve obtained for N=1, . The error increases rapidly for larger roots, for which Barbour considered the method inappropriate; the error curve resembles the form with maxima moving closer to m= 0 and m=1 as N increases.
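Barbour's generalized construction can also be checked numerically. Projectively it amounts to the unique fractional-linear function that agrees with N^m at m = 0, 1/2, and 1 (the mean proportional assigned to the midpoint); the sketch below is my own algebraic rendering of that correspondence, not Barbour's published formula, and it reproduces the order of magnitude of the error figures quoted above.

import math

def projective_root_approx(N, m):
    # The fractional-linear function fixed by f(0) = 1, f(1/2) = sqrt(N), f(1) = N,
    # used here as an approximation to N**m on the interval [0, 1].
    s = math.sqrt(N)
    return s * ((s - 1.0) * m + 1.0) / (s - (s - 1.0) * m)

for N in (1.5, 2.0, 3.0, 4.0):
    worst = max(abs(projective_root_approx(N, k / 1000.0) / N ** (k / 1000.0) - 1.0)
                for k in range(1001))
    print(f"N = {N}: maximum relative error about {100.0 * worst:.2f}%")
# For the octave (N = 2) the worst case is on the order of a tenth of a percent,
# i.e. roughly two cents, occurring near m = 0.21 and m = 0.79.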
Schoenberg's refinements of Barbour's methods
The paper was published with two notes added by its referee, Isaac Jacob Schoenberg. He observed that the formula derived by Barbour was a fractional linear transformation and so called for a perspectivity, and that since three pairs of corresponding points on the two lines uniquely determined a projective correspondence Barbour's condition that OA be perpendicular to QR was irrelevant. The omission of this step allows a more convenient selection of length for QR, and reduces the number of operations.
Schoenberg also noted that Barbour's equation could be viewed as an interpolation of the exponential curve through the three points m=0, m=1/2, and m=1, which he expanded upon in a short paper titled "On the Location of the Frets on the Guitar" published in American Mathematical Monthly in 1976. This article concluded with a brief discussion of Stråhle's fortuitous use of for the half-octave, which is one of the convergents of the continued fraction expansion of the , and the best rational approximation of it for the size of the denominator.
Stewart and continued fractions
The use of fractional approximations of in Stråhle's construction was expanded upon by Ian Stewart, who wrote about the construction in "A Well Tempered Calculator" in his 1992 book Another Fine Math You've Got Me Into... as well as "Faggot's Fretful Fiasco" included in Music and Mathematics published in 2006. Stewart considered the construction from the standpoint of projective geometry, and derived the same formulas as Barbour by treating it from the start as a fractional linear function, of the form , and he pointed out that the approximation for implicit in the construction is , which is the next lower convergent from the half octave it produces. This is the consequence of the function simplifying to for m=0.5 where is the generating approximation.
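For readers unfamiliar with the continued-fraction vocabulary used here, the short sketch below generates the first convergents of the square root of 2; which of these Stråhle's construction exploits is discussed in the sources cited in this section.

from fractions import Fraction
import math

def convergents(x, count):
    # Convergents of the simple continued fraction expansion of x (floating-point driven,
    # fine for the first handful of terms).
    h_prev, h = 1, int(x)      # numerator recurrence h_n = a_n * h_(n-1) + h_(n-2)
    k_prev, k = 0, 1           # denominator recurrence k_n = a_n * k_(n-1) + k_(n-2)
    out = [Fraction(h, k)]
    for _ in range(count - 1):
        x = 1.0 / (x - int(x))
        a = int(x)
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
        out.append(Fraction(h, k))
    return out

for c in convergents(math.sqrt(2.0), 6):
    print(c, float(c))
# Prints 1, 3/2, 7/5, 17/12, 41/29, 99/70: each is the best rational approximation
# of sqrt(2) among fractions with a denominator no larger than its own.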
Similar methods applied to musical instruments
The geometric and arithmetic methods for dividing monochords as well as musical instrument fretboards compiled by Barbour were for the stated purpose of illustrating the different tunings each represents or implies, and Schoenberg's and Stewart's works retained similar focus and references. Three textbooks on piano building that are not included by them show similar constructions to Stråhle's for designing new instruments but treat the tuning of their pitches independently; both constructions employ a non-perpendicular form as suggested by Schoenberg's observation in Barbour's "A Geometrical approximation to the Roots of Numbers", and one achieves optimal results while the other demonstrates an application with a root other than 2.
Kützing
Carl Kützing, an organ and piano maker in Bern during the middle of the 19th century wrote in his first book on piano design, Theoretisch-praktisches Handbuch der Fortepiano-Baukunst from 1833, that he devised a simple method of determining the sounding lengths in an octave after reading of the different geometric constructions described in an issue of Marpurg's Historisch-kritischen Beitragen zur Aufnahme der Musik; he stated that the divisions would be very accurate and that the construction could be used for fretting guitars.
Kützing introduced the construction following a description of a large sector to be made for the same purpose. He did not include either method in Das Wissenschaftliche der Fortepiano-Baukunst published eleven years later, where he calculated lengths using approximately 18:35 ratios between octave lengths and proposed a new method with a non-continuous curve adjusted for actual wire diameters in order to reduce tonal differences from jumps in tension.
Kützing instructed to extend a line segment bc—representing a known sounding length—at 45 degrees to the line ba, and from its octave at point d located midway between b and c, to extend a line perpendicular to ba intersecting it at e, then to divide de into 12 equal parts. The point a on ab is located by transferring the lengths of de, db, from e away from b, and rays extended from a through the points dividing de and intersecting bc to locate the different endpoints of the string lengths from c.
This arrangement is equivalent to using the mean proportional to locate a.
A re-labeled diagram with instructions was included in a pamphlet printed by England's largest piano manufacturers John Broadwood & Sons to accompany their display at the 1862 International Exhibition in London, where they described it as "a practical method of finding the lengths of Strings, for every note of the Octave on equal temperament; so that with wire of the same size the tension on each note shall be the same."
It was also reproduced, alongside a sector, by Giacomo Sievers, a Russian-born piano maker working in Naples, in his 1868 book Il Pianoforte, where he claimed it was the best practical method for determining sounding lengths of strings in a piano. Like Broadwood, Sievers did not describe its source or extent of its use, and did not explain any theory behind it. He also did not suggest it had any use beyond designing pianos.
Wolfenden
English piano maker Samuel Wolfenden presented a construction for determining all but the lowest sounding plain string lengths in a piano in A Treatise on the Art of Pianoforte Construction published in 1916; like Sievers, he did not explain whether this was an original procedure or one in common use, commenting only that it was "a very practical method of determining string lengths, and in past years I used it altogether". He added that at the time of writing he found calculating the lengths directly "somewhat easier" and had preceded the description with a table of computed lengths for the top five octaves of a piano. He included frequencies in equal temperament, but only published aural tuning instructions in his 1927 supplement.
Wolfenden explicitly advocated equalizing the tension of the plain strings which he proposed to accomplish in the upper range by combining a 9:17 ratio between octave lengths with a uniform change in string diameters (achieving slightly more consistent results over the otherwise similar system published by Siegfried Hansing in 1888), in contrast to Sievers scale whose stringing schedule results in higher tension for the thicker, lower sounding pitches.
Like Sievers, Wolfenden constructed all of the sounding lengths on a single segment at 45 degrees from the base lines for the rays, starting with points located for each C in the range designed at 54, 102, 192.5, 364, and 688mm from the upper point. The four vertices for the rays are then located by the intersections of the horizontal base lines extended from the lower C in each octave with a second line angled from the upper starting point for the string line, however, which he specified should both be at 51.5 degrees to the base lines and that the base lines have a 35:13 ratio with the difference between the two octave lengths.
Wolfenden's method approximates with roughly 1.3775, and is equivalent to in Barbour's form. Compensating for its smaller octaves this produces 596 cent half octaves, an error of about 1mm at note F4 (f′) compared with his calculated figures.
Notes
Stråhle (1743) p. 285-286:
"According to this invention, I have built a monochord, in such a manner that it actually has 13 strings, and should therefore rather be called a tredecachord; but as all the strings are of one gauge, length and pitch, I keep the old name.
"To these thirteen strings is fitted an ordinary manual of one octave; but under each string, once they have been carefully tuned in unison, I place loose bridges at the points, and at the lengths from the crepines, that my now described Linea Musica requires: whereby each string receives its proper pitch.
"The clavichord that I have made for this purpose is likewise, in the three higher octaves, carefully adjusted according to my Linea Musica as to the lengths and differences of the strings: and so that the tuning may be done without trouble, my monochord is made so that it can be placed on top of the clavichord; one octave of the clavichord is then tuned, pitch by pitch, against its corresponding pitches on the monochord, after which all the other pitches on the clavichord are tuned by octaves; that tuning is also the easiest for the ear to carry out, since it should be free of beats."
Faggot (1743) p. 286: "Whether the tuning of the pitches, following the previously described invention, satisfies the ear with more pleasant sounds and with better evenness in the musical pitches on a keyboard instrument than the old and hitherto known manners of tuning, of that the understanding will better be able to judge when the eye can see what the ear hears."
Marpurg (1776) p. 167-168: "I must confess that this essay can be read with pleasure, and that I am fully convinced of the correctness of the numbers found by Mr. Jacob Faggot through a very laborious trigonometric calculation of Strähle's lines. I must only add that the numbers found do not give what they are supposed to give, and what Mr. Strähle sought, namely a temperament which makes the beating mildest for the ear and sets all tones in proper evenness. For they contain nothing other than an unequally beating temperament, and not even of the most tolerable kind."
References
Daniel P. Stråhle "Nytt Påfund, at finna Temperaturen i stämningen, för thonerne å Claveret ock dylika Instrumenter" Kongliga Swenska Wetenskaps Academiens Handlingar för Månaderne October, November, ock December, vol. IV, Lorentz Ludewig Grefing, Stockholm 1743 p. 281-285
Jacob Faggot "Trigonometrisk uträkning på en ny Temperatur, för thonernes stämning å Claveret" Kongliga Swenska Wetenskaps Academiens Handlingar för Månaderne October, November, ock December vol. IV, Lorentz Ludewig Grefing, Stockholm 1743 p. 286-291
Ian Stewart "Faggot's Fretful Fiasco" John Fauvel, Raymond Flood, Robin Wilson, ed. Music and Mathematics Oxford University Press 2006 p. 68-75
J. Murray Barbour Tuning and Temperament: A Historical Survey Michigan State College College Press, East Lansing 1951 p. 65-68
Geometry
Musical tuning
String instrument construction | Strähle construction | [
"Mathematics"
] | 5,530 | [
"Geometry"
] |
309,620 | https://en.wikipedia.org/wiki/Trisodium%20phosphate | Trisodium phosphate (TSP) is an inorganic compound with the chemical formula . It is a white, granular or crystalline solid, highly soluble in water, producing an alkaline solution. TSP is used as a cleaning agent, builder, lubricant, food additive, stain remover, and degreaser.
As an item of commerce, TSP is often partially hydrated and may range from anhydrous Na3PO4 to the dodecahydrate Na3PO4·12H2O. Most often it is found in white powder form. It can also be called trisodium orthophosphate or simply sodium phosphate.
Production
Trisodium phosphate is produced by neutralization of phosphoric acid using sodium carbonate, which produces disodium hydrogen phosphate. The disodium hydrogen phosphate is reacted with sodium hydroxide to form trisodium phosphate and water.
Na2CO3 + H3PO4 → Na2HPO4 + CO2 + H2O
Na2HPO4 + NaOH → Na3PO4 + H2O
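For a rough sense of the proportions implied by the two equations above, the sketch below computes molar masses from standard atomic weights and the corresponding stoichiometric mass ratios, assuming ideal 1:1:1 consumption and complete conversion.

W = {"H": 1.008, "C": 12.011, "O": 15.999, "Na": 22.990, "P": 30.974}   # g/mol

def molar_mass(counts):
    # counts: element symbol -> number of atoms in the formula
    return sum(W[el] * n for el, n in counts.items())

na2co3 = molar_mass({"Na": 2, "C": 1, "O": 3})   # sodium carbonate
h3po4 = molar_mass({"H": 3, "P": 1, "O": 4})     # phosphoric acid
naoh = molar_mass({"Na": 1, "O": 1, "H": 1})     # sodium hydroxide
tsp = molar_mass({"Na": 3, "P": 1, "O": 4})      # trisodium phosphate

print(f"Na3PO4 molar mass: {tsp:.2f} g/mol")     # about 163.94
# One mole each of Na2CO3, H3PO4 and NaOH is consumed per mole of Na3PO4 produced:
for name, mass in (("Na2CO3", na2co3), ("H3PO4", h3po4), ("NaOH", naoh)):
    print(f"{name}: {mass / tsp:.2f} kg per kg of Na3PO4")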
Uses
Cleaning
Trisodium phosphate was at one time extensively used in formulations for a variety of consumer-grade soaps and detergents, and the most common use for trisodium phosphate has been in cleaning agents. The pH of a 1% solution is 12 (i.e., very basic), and the solution is sufficiently alkaline to saponify grease and oils. In combination with surfactants, TSP is an excellent agent for cleaning everything from laundry to concrete driveways. This versatility and low manufacturing price made TSP the basis for a plethora of cleaning products sold in the mid-20th century.
TSP is still sold and used as a cleaning agent, but since the late 1960s, its use has diminished in the United States and many other parts of the world because, like many phosphate-based cleaners, it is known to cause extensive eutrophication of lakes and rivers once it enters a water system.
Although it is still the active ingredient in some toilet bowl-cleaning tablets, TSP is generally not recommended for cleaning bathrooms because it can stain metal fixtures and can damage grout.
Chlorination
The material called chlorinated trisodium phosphate is used as a disinfectant and bleach, like sodium hypochlorite. It is prepared using NaOCl in place of some of the base to neutralize phosphoric acid.
Flux
In the U.S., trisodium phosphate is an approved flux for use in hard soldering joints in medical-grade copper plumbing. The flux is applied as a concentrated water solution and dissolves copper oxides at the temperature used in copper brazing. Residues are water-soluble and can be rinsed out before plumbing is put into service.
TSP is used as an ingredient in fluxes designed to deoxygenate nonferrous metals for casting. It can be used in ceramic production to lower the flow point of glazes.
Painting enhancement
TSP is still in common use for the cleaning, degreasing, and deglossing of walls prior to painting. TSP breaks the gloss of oil-based paints and opens the pores of latex-based paint, providing a surface better suited for the adhesion of the subsequent layer.
Food additive
Sodium phosphates including monosodium phosphate, disodium phosphate, and trisodium phosphate are approved as food additives in the EU. They are commonly used as acidity regulators and have the collective E number E339. The United States Food and Drug Administration lists sodium phosphates as generally recognized as safe.
Exercise performance enhancement
Trisodium phosphate has gained a following as a nutritional supplement that can improve certain parameters of exercise performance. The basis of this belief is the fact that phosphate is required for the energy-producing Krebs cycle central to aerobic metabolism. Phosphates are available from a number of other sources that are much milder than TSP.
Regulation
In the Western world, phosphate usage has declined because of damage it causes to lakes and rivers through eutrophication.
Substitutes
By the end of the 20th century, many products that formerly contained TSP were manufactured with TSP substitutes, which consist mainly of sodium carbonate along with various admixtures of nonionic surfactants and a limited percentage of sodium phosphates.
Products sold as TSP substitutes, containing soda ash and zeolites, are promoted as direct substitutes. However, sodium carbonate is not as strongly basic as trisodium phosphate, making it less effective in demanding applications. Zeolites, which are microporous aluminosilicate minerals, are added to laundry detergents as water softening agents and are essentially non-polluting; however, zeolites do not dissolve and can deposit a fine, powdery residue in the wash tub. Cleaning products labeled as TSP may contain other ingredients, with perhaps less than 50% trisodium phosphate.
References
External links
Safety data from IPCS INCHEM
International Chemical Safety Card 1178
Cleaning product components
Food additives
Phosphates
Sodium compounds
Photographic chemicals
Edible thickening agents
E-number additives | Trisodium phosphate | [
"Chemistry",
"Technology"
] | 1,053 | [
"Phosphates",
"Components",
"Cleaning product components",
"Salts"
] |
309,801 | https://en.wikipedia.org/wiki/Information%20asymmetry | In contract theory, mechanism design, and economics, an information asymmetry is a situation where one party has more or better information than the other.
Information asymmetry creates an imbalance of power in transactions, which can sometimes cause the transactions to be inefficient, causing market failure in the worst case. Examples of this problem are adverse selection, moral hazard, and monopolies of knowledge.
A common way to visualise information asymmetry is with a scale, with one side being the seller and the other the buyer. When the seller has more or better information, the transaction will more likely occur in the seller's favour ("the balance of power has shifted to the seller"). An example is the sale of a used car: the seller is likely to have a much better understanding of the car's condition, and hence its market value, than the buyer, who can only estimate the market value from the information provided by the seller and their own assessment of the vehicle. The balance of power can, however, also be in the hands of the buyer. When buying health insurance, the buyer is not always required to provide full details of future health risks; by not providing this information to the insurance company, the buyer will pay the same premium as someone much less likely to require a payout in the future. Under perfect information, all parties have complete knowledge and the scale is balanced. If the buyer has more information, the power to manipulate the transaction is represented by the scale leaning towards the buyer's side.
Information asymmetry extends to non-economic behaviour. Private firms have better information than regulators about the actions that they would take in the absence of regulation, and the effectiveness of a regulation may be undermined. International relations theory has recognized that wars may be caused by asymmetric information and that "Most of the great wars of the modern era resulted from leaders miscalculating their prospects for victory". Jackson and Morelli wrote that there is asymmetric information between national leaders, when there are differences "in what they know [i.e. believe] about each other's armaments, quality of military personnel and tactics, determination, geography, political climate, or even just about the relative probability of different outcomes" or where they have "incomplete information about the motivations of other agents".
Information asymmetries are studied in the context of principal–agent problems, where they are a major cause of misinformation and arise in essentially every communication process. Information asymmetry contrasts with perfect information, which is a key assumption in neo-classical economics.
In 1996, a Nobel Memorial Prize in Economics was awarded to James A. Mirrlees and William Vickrey for their "fundamental contributions to the economic theory of incentives under asymmetric information". This led the Nobel Committee to acknowledge the importance of information problems in economics. They later awarded another Nobel Prize in 2001 to George Akerlof, Michael Spence, and Joseph E. Stiglitz for their "analyses of markets with asymmetric information". The 2007 Nobel Memorial Prize in Economic Sciences was awarded to Leonid Hurwicz, Eric Maskin, and Roger Myerson "for having laid the foundations of mechanism design theory", a field dealing with designing markets that encourage participants to honestly reveal their information.
History
The puzzle of information asymmetry has existed for as long as the market itself but remained largely unstudied until the post-WWII period. It is an umbrella term that can contain a vast diversity of topics.
Greek Stoics (2nd century BCE) treated the advantage that sellers derive from privileged information in the story of the Merchant of Rhodes. In the story, a famine has broken out on the island of Rhodes and several grain merchants in Alexandria set sail to deliver supplies. One of these merchants, who arrives ahead of his competitors, faces a choice: should he let the Rhodians know that more grain supplies are on the way, or keep this knowledge to himself? Either decision will determine his profit margin. Cicero related this dilemma in De Officiis and agreed with the Greek Stoics that the merchant had a duty to disclose. Thomas Aquinas overturned this consensus and considered that disclosure was not obligatory.
The work on adverse selection, signalling, and screening discussed below drew on some important predecessors. Joseph Stiglitz considered the work of earlier economists, including Adam Smith, John Stuart Mill, and Max Weber. He ultimately concludes that though these economists seemed to have an understanding of the problems of information, they largely did not consider the implications of them, and tended to minimize the impact they could have or consider them merely secondary issues.
One exception to this is the work of economist Friedrich Hayek. His work with prices as information conveying relative scarcity of goods can be noted as an early form of acknowledging information asymmetry, but with a different name.
2001 Nobel Prize Inspirations
Information problems have always affected human life, yet they were not studied with any seriousness until around the 1970s, when three economists fleshed out models that revolutionized the way we think about information and its interaction with the market. George Akerlof's paper The Market for Lemons introduced a model to help explain a variety of market outcomes when quality is uncertain. Akerlof's primary model considers the automobile market, where the seller knows the exact quality of a car while the buyer only knows the probability that a vehicle is good or bad (a lemon). Since the buyer pays the same price (based on expected quality) for good cars and bad cars, sellers with high-quality cars may find the transaction unprofitable and leave, resulting in a market with a higher proportion of bad cars. The pathological path can continue as buyers adjust their quality expectations downward and offer even lower prices, further driving out cars of not-so-bad quality. This results in a market failure driven purely by information asymmetry, since under perfect information all cars could be sold according to their quality. Akerlof extends the model to explain other phenomena: why raising premiums cannot make medical insurance available to the elderly, and why employers may rationally refuse to hire members of minority groups. Through various applications, Akerlof developed the importance of trust in markets and highlighted the "cost of dishonesty" in insurance markets, credit markets, and developing areas. Around the same time, Michael Spence wrote on the topic of job-market signaling, introduced in a work of the same name. The final topic is Stiglitz's work on the mechanism of screening. These three economists helped to clarify a variety of economic puzzles of the time and would go on to win a Nobel Prize in 2001 for their contributions to the field. Since then, several economists have followed in their footsteps to solve more pieces of the puzzle.
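Akerlof's unraveling argument can be illustrated with a small numerical sketch. The assumptions below (quality uniform on [0, 1], sellers valuing a car at its quality, buyers valuing it at 1.5 times its quality, and buyers offering the expected value of the cars still on the market) are illustrative choices, not parameters from Akerlof's paper.

```python
# A minimal, hypothetical sketch of the "market for lemons" unraveling.

def lemons_market(rounds: int = 10, buyer_premium: float = 1.5) -> None:
    # Buyers initially offer the premium times the unconditional mean quality (0.5).
    price = buyer_premium * 0.5
    for t in range(1, rounds + 1):
        # Only sellers whose quality (= reservation value) is below the price stay,
        # so the average quality on offer falls to price / 2 under the uniform assumption.
        avg_quality_on_offer = min(price, 1.0) / 2
        price = buyer_premium * avg_quality_on_offer
        print(f"round {t:2d}: price offered = {price:.4f}")

lemons_market()
# The offered price shrinks by a factor of 0.75 each round and tends to zero:
# trade collapses even though every car is worth more to a buyer than to its seller.
```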
Akerlof
Akerlof drew heavily from the work of economist Kenneth Arrow. Arrow, who was awarded a Nobel Prize in Economics in 1972, studied uncertainty in the field of medical care, among other things (Arrow 1963). His work highlighted several factors which became important to Akerlof's studies. First, is the idea of moral hazard. By being insured, customers may be inclined to be less careful than they otherwise would without insurance because they know the costs will be covered. Thus, an incentive to be less careful and increase risk exists. Second, Arrow studied the business models of insurance companies and noted that higher-risk individuals are pooled with lower-risk individuals, but both are covered at the same cost. Third, Arrow noted the role of trust in the relationship between doctor and patient. Medical providers only get paid when a patient is sick, and not when a person is healthy. Because of this, there is a great incentive for doctors to not provide the quality of care they could. A patient must defer to the doctor and trust that the doctor is using their knowledge to their best advantage to provide the patient with the best care. Thus, a relationship of trust is established. According to Arrow, the doctor relies on the social obligation of trust to sell their services to the public, even though the patients do not or cannot inspect the quality of a doctor's work. Last, he notes how this unique relationship demands that high levels of education and certification be attained by doctors in order to maintain the quality of medical service provided by doctors. These four ideas from Arrow contributed largely to Akerlof's work.
Spence
Spence cited no sources for his inspiration. However, he did acknowledge Kenneth Arrow and Thomas Schelling as helpful in discussing ideas during his pursuit of knowledge. He was the first to coin the term "signaling", and encouraged other economists to follow in his footsteps because he believed he had introduced an important concept in economics.
Stiglitz
Most of Stiglitz's academic inspirations were from his contemporaries. Stiglitz primarily attributes his thinking to articles by Spence, Akerlof, and a few earlier works by him and his co-author Michael Rothschild (Rothschild and Stiglitz 1976), each discussing various aspects of screening and the role of education. Stiglitz's work was a complement to the works of Spence and Akerlof and thus drew from some of the same inspirations from Arrow as Akerlof had.
The discussion of information asymmetry came to the forefront of economics in the 1970s when Akerlof introduced the idea of a "market for lemons" in a paper by the same name (Akerlof 1970). In this paper, Akerlof introduced a fundamental concept that certain sellers of used cars have more knowledge than the buyers, and this can lead to what is known as "adverse selection". This idea may be one of the most important in the history and understanding of asymmetric information in economics.
Spence introduced the idea of "signaling" shortly after the publication of Akerlof's work.
Stiglitz expanded upon the ideas of Spence and Akerlof by introducing an economic function of information asymmetry called "screening". Stiglitz's work in this area referred to the market for insurance, which is rife with information asymmetry problems to be studied.
Impact of 2001 Nobel Work
These three economists' simple yet revolutionary work birthed a movement in economics that changed how the field viewed the market. Perfect information could no longer be assumed, as it is in most neoclassical models, and information asymmetry grew in prevalence in the academic literature. In 1996, a Nobel Prize was given to James Mirrlees and William Vickrey for their research in the 1960s and 1970s on incentive problems under uncertainty and asymmetric information; the impact of such academic work can go unrecognized for decades. Differing from the topics presented by Akerlof, Spence and Stiglitz, Mirrlees and Vickrey focused on how income taxation and auctions can be used as mechanisms to draw out information from market participants efficiently. This award marked the importance of information asymmetry in economics and began a greater discussion on the topic that later led the Nobel committee to award three economists again in 2001 for significant contributions to the aforementioned topics.
These economists continued after the 1970s to contribute to the field of economics and develop their theories, and they have all had significant impacts. Akerlof's work had more impact than just the market for used cars. The pooling effect in the used car market also happens in the employment market for minorities.
One of the most notable impacts of Akerlof's work is its impact on Keynesian theory. Akerlof argues that the Keynesian theory of unemployment being voluntary implies that quits would rise with unemployment. He argues against his critics by drawing upon reasoning based on psychology and sociology rather than pure economics. He supplemented this with an argument that people do not always behave rationally, but rather information asymmetry leads to only "near rationality", which causes people to deviate from optimal behavior regarding employment practices.
Akerlof continues to champion behavioral economics, arguing that these forays into the fields of psychology and sociology are profound extensions of work on information asymmetry.
Stiglitz wrote that the trio's work has created a substantial wave in the field of economics. He notes how he explored the economies of third-world countries, and they seemed to exhibit behavior consistent with their theories. He noted how other economists have referred to gaining information as a transaction cost. Stiglitz also attempts to narrow down the sources of information asymmetries. He ties it back to the nature of each individual having information that others do not. Stiglitz also mentions how information asymmetry can be overcome. He believes there are two crucial things to consider: first, the incentives, and second, the mechanisms for overcoming information asymmetry. He argues that the incentives will always be there because markets are inherently informationally inefficient. If there is an opportunity to profit from gaining knowledge, people will do so. If there is no profit to be had, then people will not do so.
Spence's work on signaling was extended in the 1980s and helped spawn the study of signaling games within game theory.
The idea of information asymmetry has also had a significant effect on management research. It continues to offer additional improvements and opportunities as scholars continue their work.
Models
Information asymmetry models assume one party possesses some information that other parties have no access to. Some asymmetric information models can also be used in situations where at least one party can enforce, or effectively retaliate for breaches of, certain parts of an agreement, whereas the other(s) cannot.
Adverse selection
Akerlof suggested that information asymmetry leads to adverse selections. In adverse selection models, the ignorant party lacks or has differing information while negotiating an agreed understanding of or contract to the transaction. An example of adverse selection is when people who are high-risk are more likely to buy insurance because the insurance company cannot effectively discriminate against them, usually due to lack of information about the particular individual's risk but also sometimes by force of law or other constraints.
Credence goods fit the adverse selection model of information asymmetry. These are goods whose quality the buyer cannot judge even after the product is consumed, or for which the buyer does not know what quality is needed. An example is a complex medical treatment such as heart surgery.
Moral hazard
Moral hazard occurs when the ignorant party lacks information about the performance of the agreed-upon transaction or lacks the ability to retaliate for a breach of the agreement. This can result in a situation where a party is more likely to take risks because they are not fully responsible for the consequences of their actions. An example of moral hazard is when people are more likely to behave recklessly after becoming insured, either because the insurer cannot observe this behaviour or cannot effectively retaliate against it, for example, by failing to renew the insurance.
Moral hazard is not limited to individuals: firms can act more recklessly if they know they will be bailed out. For example, banks may allow parties to take out risky loans if they know that the government will bail them out.
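The incentive shift behind moral hazard can be shown with a toy expected-cost calculation. The loss size, accident probabilities and cost of care below are made-up assumptions chosen only to make the effect visible.

```python
# Hypothetical moral-hazard sketch: a fully insured agent no longer bears the loss,
# so costly care stops paying off for them even though it lowers total expected cost.

LOSS, P_CARE, P_NO_CARE, CARE_COST = 10_000, 0.10, 0.30, 100

def own_expected_cost(insured: bool, careful: bool) -> float:
    p = P_CARE if careful else P_NO_CARE
    borne_loss = 0 if insured else p * LOSS      # full insurance covers the whole loss
    return borne_loss + (CARE_COST if careful else 0)

for insured in (False, True):
    cost, choice = min((own_expected_cost(insured, c), c) for c in (True, False))
    label = "insured  " if insured else "uninsured"
    print(f"{label}: chooses {'care' if choice else 'no care'} (own expected cost {cost:.0f})")
# The uninsured agent takes care; the insured agent does not, raising the insurer's
# expected payout per policy from 0.10 * 10,000 to 0.30 * 10,000.
```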
Monopolies of knowledge
In the model of monopolies of knowledge, the ignorant party has no right to access all the critical information needed for decision-making, meaning one party has exclusive control over the information. This type of information asymmetry can be seen in government. An example of a monopoly of knowledge is an enterprise in which only high-level management can fully access the corporate information provided by a third party, while lower-level employees are required to make important decisions with only limited information.
Solutions
Countermeasures to reduce information asymmetry have been widely discussed. The classic paper on adverse selection is George Akerlof's "The Market for Lemons" from 1970, which brought informational issues to the forefront of economic theory. Two primary solutions to this problem, signalling and screening, are explored below. A related concept is moral hazard, which differs from adverse selection in its timing: while adverse selection affects parties before the interaction, moral hazard affects parties after the interaction. Regulatory instruments such as mandatory information disclosure can also reduce information asymmetry, and warranties can further help mitigate the effect of asymmetric information.
Signalling
Michael Spence originally proposed the idea of signalling. He suggested that in a situation with information asymmetry, it is possible for people to signal their type, thus believably transferring information to the other party and resolving the asymmetry.
This idea was initially studied in the context of matching in the job market. An employer is interested in hiring a new employee who is "skilled in learning". Of course, all prospective employees will claim to be "skilled in learning", but only they know if they really are. This is an information asymmetry.
Spence proposes, for example, that going to college can function as a credible signal of an ability to learn. Assuming that people who are skilled in learning can finish college more easily than people who are unskilled, then by finishing college, the skilled people signal their skills to prospective employers. No matter how much or how little they may have learned in college or what they studied, finishing functions as a signal of their capacity for learning. However, finishing college may merely function as a signal of their ability to pay for college; it may signal the willingness of individuals to adhere to orthodox views, or it may signal a willingness to comply with authority.
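The self-selection logic behind signalling can be sketched numerically. The figures below (wages of 2 and 1, education costing half as much per year for skilled workers, and education adding no productivity) are illustrative assumptions in the spirit of Spence's model, not values from his paper.

```python
# Hypothetical check of which education thresholds support a separating equilibrium.

W_HIGH, W_LOW = 2.0, 1.0                     # wages paid to "skilled" / "unskilled" workers
cost_skilled   = lambda years: 0.5 * years   # education is assumed cheaper for skilled workers
cost_unskilled = lambda years: 1.0 * years

def separates(y_star: float) -> bool:
    """Employers pay W_HIGH iff education >= y_star; check both self-selection constraints."""
    skilled_signal    = W_HIGH - cost_skilled(y_star) >= W_LOW     # skilled prefer to acquire the signal
    unskilled_abstain = W_LOW >= W_HIGH - cost_unskilled(y_star)   # unskilled prefer not to mimic
    return skilled_signal and unskilled_abstain

for y_star in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"threshold y* = {y_star}: separating equilibrium = {separates(y_star)}")
# Any threshold between 1 and 2 years works: education credibly signals skill only
# because it is cheaper for the skilled to obtain, not because it raises productivity.
```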
Signalling theory can be used in e-commerce research. Information asymmetry in e-commerce comes from information distortion that leads to the buyer's misunderstanding of the seller's true characteristics before the contract. Mavlanova, Benbunan-Fich and Koufaris (2012) noticed that signalling theory explains the relation between signals and qualities, illustrating why some signals are trustworthy and others are not. In e-commerce, signals deliver information about the characteristics of the seller. For instance, high-quality sellers are able to show their identity to buyers by using signs and logos, and then buyers check these signals to evaluate the credibility and validity of a seller's qualities. The study of Mavlanova, Benbunan-Fich and Koufaris (2012) also confirmed that signal usage is different between low-quality and high-quality online sellers. Low-quality sellers are more likely to avoid using expensive, easy-to-verify signals and tend to use fewer signals than high-quality sellers. Thus, signals help reduce information asymmetry.
Screening
Joseph E. Stiglitz pioneered the theory of screening, in which the underinformed party induces the other party to reveal its information. The underinformed party can offer a menu of choices designed so that the choice made depends on the private information of the other party.
The asymmetry can favour either the buyer or the seller. For example, sellers with better information than buyers include used-car salespeople, mortgage brokers and loan originators, financial institutions and real estate agents. Alternatively, situations where the buyer usually has better information than the seller include estate sales as specified in a last will and testament, life insurance, or sales of old art pieces without a prior professional assessment of their value. This situation was first described by Kenneth J. Arrow in an article on health care in 1963.
George Akerlof, in The Market for Lemons, notes that in such a market the average value of the commodity tends to go down, even for goods of perfectly good quality. Because of information asymmetry, unscrupulous sellers can sell "forgeries" (like replica goods such as watches) and defraud the buyer. Meanwhile, buyers usually do not have enough information to distinguish lemons from quality goods. As a result, many people who are not willing to risk getting ripped off will avoid certain types of purchases or will not spend as much for a given item. Akerlof demonstrates that it is even possible for the market to decay to the point of nonexistence.
An example of adverse selection and information asymmetry causing market failure is the market for health insurance. Policies usually pool subscribers into groups that members can leave but that no one can join once the group is set. As health conditions are realized over time, information about health costs emerges, and low-risk policyholders recognize the mismatch between their premiums and their health conditions. Healthy policyholders are therefore incentivized to leave and reapply for a cheaper policy that matches their expected health costs, which causes premiums to rise. As high-risk policyholders are more dependent on insurance, they are stuck with higher premium costs as the group shrinks, causing premiums to increase even further. The cycle repeats until the high-risk policyholders also find similar policies with cheaper premiums, at which point the initial group disappears. This concept is known as the death spiral and has been researched since at least 1988.
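The death spiral can be made concrete with a toy simulation. The cost distribution, the premium rule (next year's premium equals the current pool's average expected cost) and the exit rule (anyone expecting to cost less than the premium leaves) are illustrative assumptions, not data from an actual insurance pool.

```python
# Hypothetical adverse-selection "death spiral": low-risk members keep leaving,
# so the premium chases the average cost of an ever riskier, ever smaller pool.

def death_spiral(members: int = 100, max_years: int = 15) -> None:
    # Expected annual costs spread evenly from 100 to 10,000.
    pool = [100 + i * (10_000 - 100) / (members - 1) for i in range(members)]
    premium = sum(pool) / len(pool)
    previous_size = len(pool)
    for year in range(1, max_years + 1):
        pool = [cost for cost in pool if cost >= premium]    # low-risk members exit
        if not pool:
            print(f"year {year}: pool is empty -- the market has unravelled")
            return
        premium = sum(pool) / len(pool)
        print(f"year {year:2d}: {len(pool):3d} insured, premium = {premium:8.0f}")
        if len(pool) == previous_size:
            print("pool has stabilised with only the highest-risk members remaining")
            break
        previous_size = len(pool)

death_spiral()
```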
Akerlof also suggests different methods with which information asymmetry can be reduced. One of those instruments that can be used to reduce the information asymmetry between market participants is intermediary market institutions called counteracting institutions, for instance, guarantees for goods. By providing a guarantee, the buyer in the transaction can use extra time to obtain the same amount of information about the good as the seller before the buyer takes on the complete risk of the good being a "lemon". Other market mechanisms that help reduce the imbalance in information include brand names, chains and franchising that guarantee the buyer a threshold quality level. These mechanisms also let owners of high-quality products get the full value of the goods. These counteracting institutions then keep the market size from reducing to zero.
Warranty
Warranties are used as a method of verifying the credibility of a product: a guarantee issued by the seller promising to replace or repair the good should the quality not be sufficient. Product warranties are often requested by buying parties or financial lenders and have been used as a form of mediation dating back to the Babylonian era. Warranties can come in the form of insurance and can also come at the expense of the buyer. The implementation of "lemon laws" has reduced the effect of information asymmetry on customers who have received a faulty item; essentially, these laws allow customers to return a defective product within a certain time period, regardless of circumstances.
Mandatory information disclosure
Both signaling and screening resemble voluntary information disclosure, where the party having more information, for their own best interest, use various measures to inform the other party. However, voluntary information disclosure is not always feasible. Regulators can thus take active measures to facilitate the spread of information. For example, the Securities and Exchange Commission (SEC) initiated Regulation Fair Disclosure (RFD) so that companies must faithfully disclose material information to investors. The policy has reduced information asymmetry, reflected in the lower trading costs.
Incentives and penalties
For firms to reduce moral hazard, they can implement penalties for bad behaviour and incentives to align objectives. An example of building in an incentive is insurance companies not insuring customers for the total value; this provides an incentive to be less reckless as the customer will suffer financial liability as well.
Information gathering
Most models in traditional contract theory assume that asymmetric information is exogenously given. Yet, some authors have also studied contract-theoretic models in which asymmetric information arises endogenously because agents decide whether or not to gather information. Specifically, Crémer and Khalil (1992) and Crémer, Khalil, and Rochet (1998a) study an agent's incentives to acquire private information after a principal has offered a contract. In a laboratory experiment, Hoppe and Schmitz (2013) have provided empirical support for the theory. Several further models have been developed which study variants of this setup. For instance, when the agent has not gathered information at the outset, does it make a difference whether or not he learns the information later on, before production starts? What happens if the information can be gathered already before a contract is offered? What happens if the principal observes the agent's decision to acquire information? Finally, the theory has been applied in several contexts, such as public-private partnerships and vertical integration.
Sources
Information asymmetry within societies can be created and maintained in several ways. Firstly, media outlets, due to their ownership structure or political influences, may fail to disseminate certain viewpoints or choose to engage in propaganda campaigns. Furthermore, an educational system relying on substantial tuition fees can generate information imbalances between the poor and the affluent. Imbalances can also be fortified by specific organizational and legal measures, such as document classification procedures or non-disclosure clauses. Exclusive information networks that are operational around the world further contribute to the asymmetry. Copyright laws increase information imbalances between the poor and the affluent. Lastly, mass surveillance helps the political and industrial leaders to amass large volumes of information, which is typically not shared with the rest of society.
Market impact
Zavolokina, Schlegel, and Schwabe (2020) state that information asymmetry makes buyers and sellers distrust each other, which leads to opportunistic behaviour and may even lead to a complete breakdown of the market. Lower quality provision is another consequence, as sellers cannot earn enough to cover the cost of providing higher-quality products.
Countermeasures
Abito and Salant proposed that the consumer-welfare gains from warranties highlight the relevance of policies that directly guide consumer decisions and increase buyers' trust in high-quality sellers.
Establish a real-time information disclosure platform that uses the collected information to achieve market transparency and thereby eliminate trading concerns.
Enhance the customer experience through third-party quality checks, such as expert reviews.
Consumer protection laws ensure that product quality meets expectations and that contract terms are fair.
Ensure quality through standards and certificates that prove all technical parameters have been tested.
Application in research
Accounting and finance
A substantial portion of research in the field of accounting can be framed in terms of information asymmetry, since accounting involves the transmission of an enterprise's information from those who have it to those who need it for decision-making. Bartov and Bodnar (1996) noted that the different accounting methods used by enterprises can lead to information asymmetry. For instance, aggressively recognising revenue can leave the preparers of financial statements with a much better understanding of likely future revenue than those reading the statements. Likewise, in the finance literature, the acknowledgment of information asymmetry between organizations challenged the Modigliani–Miller theorem, which states that the valuation of a firm is unaffected by its financial structure. The challenge arises because one of the theorem's key assumptions is that investors have the same information as the corporation; without that symmetry, corporations can leverage their capital structure to get the most out of their valuation. Information asymmetry also shed light on the importance of aligning the interests of managers with those of stakeholders, since managers with significant informational power may make decisions in their own interest rather than the company's. When the level of information asymmetry and the associated monitoring cost are high, firms tend to rely less on board monitoring and more on incentive alignment. Various measures are used to align the interests of managers and stop them from abusing their informational power, such as performance-based compensation through a bonus structure. This field of study is referred to as agency theory. Furthermore, financial economists apply information asymmetry in studies of differentially informed financial market participants (insiders, stock analysts, investors, etc.) or in the cost of finance for MFIs.
Effect of blogging
The effect of blogging, both as a source of information asymmetry and as a tool to reduce it, has also been well studied. Blogging on financial websites provides bottom-up communication among investors, analysts, journalists, and academics, and financial blogs help prevent those in charge from withholding financial information from their company and the general public. Compared to traditional forms of media such as newspapers and magazines, blogging provides an easy-to-access venue for information. A 2013 study by Gregory Saxton and Ashley Anker concluded that greater participation on blogging sites by credible individuals reduces information asymmetry between corporate insiders and the public, additionally reducing the risk of insider trading.
Game theory
Game theory can be used to analyse asymmetric information. Many of the foundational ideas in game theory build on the framework of information asymmetry. In simultaneous games, each player has no prior knowledge of an opponent's move; in sequential games, players may observe all or part of the opponent's moves. One example of information asymmetry is that one player can observe the opponent's past actions while the other player cannot. Therefore, the existence and degree of information asymmetry in a game shapes its dynamics. James Fearon, in his game-theoretic study of the explanations for war, notes that war can be a consequence of information asymmetry: two countries may fail to reach a non-violent settlement because they have incentives to misrepresent the military resources they possess.
Contract theory
Contract theory provides insights into how various economic agents can enter contractual arrangements in situations of unequal information. The development of contract theory is based on the assumption that the parties possess different levels of information about the contract's subject. For instance, in a road construction contract, a civil engineer may have more information than the other parties on the various inputs required to undertake the project. Through contract theory, economic agents gain insight into how they can exploit the information available to them to enter beneficial contractual arrangements. The effect of information asymmetry among parties with competing interests has also contributed to game theory. In no game do the players have complete information about each other; most importantly, no player knows the strategies the others intend to use to realize a win. This information asymmetry, together with the competing interests, has contributed to the development of game theory, which seeks to provide insights into how parties required to compete under a set of rules can maximize their expected outcomes.
Information asymmetry occurs in situations where some parties have more information regarding an issue than others. It is considered a major cause of market failure. The contribution of information asymmetry to market failure arises from the fact that it interferes with the "invisible hand" that is expected to guide how modern markets work. For example, the stock market forms a major avenue through which publicly traded entities can raise capital. Stock markets across the world are operated in a way that ensures current and potential investors have the same level of information about the stocks or other securities listed in that market. That level of information symmetry helps to ensure similar conditions for all parties in the market, which in turn helps to ensure that the listed securities trade at fair value. However, cases of information asymmetry sometimes arise, when certain parties obtain information that is not in the public domain. This can create abnormalities in market returns, such as an abrupt surge or decline in a security.
Artificial intelligence
Tshilidzi Marwala and Evan Hurwitz in their study of the relationship between information asymmetry and artificial intelligence observed that there is a reduced level of information asymmetry between two artificial intelligent agents than between two human agents. As a consequence, when these artificial intelligent agents engage in financial markets it reduces arbitrage opportunities making markets more efficient. The study also revealed that as the number of artificial intelligent agents in the market increase, the volume of trades in the market will decrease. This is primarily because information asymmetry of the perceptions of value of goods and services is the basis of trade.
Management
Information asymmetry has been applied in a variety of ways in management research ranging from conceptualizations of information asymmetry to building resolutions to reduce it. Studies have shown that information asymmetry can be a source of competitive advantage for the firms. A 2013 study by Schmidt and Keil has revealed that the presence of private information asymmetry within firms influences normal business activities. Firms that have a more concrete understanding of their resources can use this information to gauge their advantage over competitors. In Ozeml, Reuer and Gulati's 2013 study, they found that 'different information' was an additional source of information asymmetry in venture capitalist and alliance networks; when different team members bring diverse, specialized knowledge, values and outlooks towards a common strategic decision, the lack of homogeneous information distribution among the members leads to inefficient decision making.
Firms have the ability to apply strategies that exploit their informational gap. One way they can do this is through impression management, which involves undertaking actions and releasing information to influence stakeholders' and analysts' opinions positively, exploiting information asymmetry as external parties heavily rely on the information released by firms. A second way that firms exploit information asymmetry is through decoupling. This describes the discrepancy between formal procedures and failure to implement them. An example of this is executives announcing a stock repurchase plan without any intention of carrying it out, allowing them to raise new cash flow for their own benefit at the expense of shareholders. Management research goes on to explain that agents can perpetuate information asymmetry through information concealment. This involves firms not sharing information to exploit the informational advantage over rivals. In resource-based theory, it shows firms concealing information about their competitive advantage in order to build causal ambiguity to protect their firm from imitation.
Information asymmetry problems can be addressed by management through several approaches. First is the use of incentives to encourage the disclosure and sharing of information; an example is partnering specifically with companies that disclose relatively more information. Second is precommitment, where actions are undertaken now to ensure future commitments. Third is the use of an information intermediary to gather and relay information between two parties; a common example is financial analysts, who gather information from a company's financial statements and use it to create reports and advice for potential investors and clients. Fourth is the use of monitoring and reward: monitoring allows management to confirm information that was previously uncertain, such as performance and behaviour, and can be used alongside other incentives such as rewarding performance.
Online advertising
Online advertising is a dominant form of advertising and a potential source of information asymmetry. It consists of utilities (a good) being encoded into a message received by a customer, who decodes the message and makes a purchasing decision. Firms' messages are tailored to specific goals and intentions, and can be a source of information asymmetry through interpretation or intent. The nature of the internet and the prevalence of social media have given firms opportunities to create promotional content in a less passive way than other forms of advertising. 'Noise' represents any technique used with the intent of obstructing, altering, or blocking the receiver's interpretation of the message. This can increase the information asymmetry in a transaction, as the buyer may not understand the product fully, even if they believe they fully understand the message being sent to them.
Firms communicate to the virtual marketplace through online advertising, and as such the feedback of consumers feeling manipulated or feeling the presence of information asymmetry may be indicative of the lack of transparency by a firm. Highly advertised and strongly promoted items are generally more likely to be bought by customers, even if the product is inferior to less advertised competition, introducing adverse selection. The power of the internet also changes how consumers deal with information asymmetry, as they have the means to find vast amounts of information about products with relatively little effort. While a consumer can use this power to assist their research to find a product that is not being marketed maliciously, this decision is made due to information asymmetry, not due to the customer being perfectly rational.
Some consumers are aware of the usage of strategies and techniques by firms to advertise and influence their media consumption, however do not necessarily alter their trust in the source of the information accordingly. Online advertising that appears trustworthy but can be malicious in intent can still be trusted by consumers, despite the information asymmetry, even if consumers themselves identify as critical of the medium. Social media personalities, much like other celebrities, also have influence over consumers who would otherwise consider themselves dissuaded by the advertising, providing firms another method of aggressive advertising with potential information asymmetry.
See also
Artificial scarcity
Asymmetric competition
Bounded rationality
Caveat emptor
Inequality of bargaining power
Natural borrowing limit
Perfect information
Real prices and ideal prices
Notes
References
(Chaps. 13 and 14 discuss applications of adverse selection and moral hazard models to contract theory.)
External links
"The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2001" – Official Prize announcement by the Nobel Foundation, nobelprize.org, October 2001. Accessed November 12, 2007. (Related links.)
The Economist: Information asymmetry, Secrets and agents,
Information
Law and economics
Market failure | Information asymmetry | [
"Physics"
] | 7,663 | [
"Asymmetric information",
"Symmetry",
"Asymmetry"
] |
309,910 | https://en.wikipedia.org/wiki/List%20of%20chordate%20orders | This article contains a list of all of the classes and orders that are located in the Phylum Chordata.
The subphyla Tunicata and Vertebrata are in the unranked Olfactores clade, while the subphylum Cephalochordata is not. Animals in Olfactores are characterized as having a more advanced olfactory system than animals not in it.
The only extinct classes shown are Placodermi and Acanthodii. Note that there are many other extinct chordate groups that are not shown here.
Subphylum Cephalochordata
Class Leptocardii: Lancelets
Order Amphioxiformes
Subphylum Tunicata
Class Ascidiacea: Sessile tunicates
Order Enterogona
Order Pleurogona
Order Aspiraculata
Class Thaliacea: Pelagic tunicates
Order Doliolida
Order Pyrosomida
Order Salpida: salps
Class Appendicularia: Solitary, free-swimming tunicates
Order Copelata
Subphylum Vertebrata
Infraphylum Agnatha: Jawless vertebrates
Superclass Cyclostomata: Extant jawless vertebrates
Class Myxini: Hagfish
Order Myxiniformes
Class Hyperoartia: Lampreys
Order Petromyzontiformes
Infraphylum Gnathostomata: Jawed vertebrates
Class Placodermi: Armoured fish †
Order Acanthothoraci
Order Arthrodira
Order Antiarchi
Order Brindabellaspida
Order Petalichthyida
Order Phyllolepida
Order Ptyctodontida
Order Rhenanida
Order Pseudopetalichthyida (The placement of this order is debated.)
Order Stensioellida (The placement of this monotypic order is debated.)
Class Chondrichthyes: Cartilaginous fish
Subclass Elasmobranchii
Superorder Batoidea
Order Rajiformes: rays and skates
Order Rhinopristiformes: sawfishes
Order Torpediniformes: electric rays
Order Myliobatiformes: (sting)rays
Superorder Selachimorpha (sharks)
Order Heterodontiformes: bullhead sharks
Order Orectolobiformes: carpet sharks
Order Carcharhiniformes: ground sharks
Order Lamniformes: mackerel sharks
Order Hexanchiformes: frilled and cow sharks
Order Squaliformes: dogfish sharks
Order Squatiniformes: angel sharks
Order Pristiophoriformes: saw sharks
Subclass Holocephali
Order Chimaeriformes: chimaeras
Class Acanthodii: Spiny sharks †
Order Climatiiformes
Order Ischnacanthiformes
Order Acanthodiformes
Superclass Osteichthyes: Bony fish
Class Actinopterygii: Ray-finned fish
Order Asarotiformes †
Order Discordichthyiformes †
Order Paphosisciformes †
Order Scanilepiformes †
Order Cheirolepidiformes †
Order Paramblypteriformes †
Order Rhadinichthyiformes †
Order Palaeonisciformes †
Order Tarrasiiformes †
Order Pachycormiformes †
Order Ptycholepiformes †
Order Redfieldiiformes †
Order Haplolepidiformes †
Order Aeduelliformes †
Order Platysomiformes †
Order Dorypteriformes †
Order Eurynotiformes †
Subclass Cladistii
Order Polypteriformes
Subclass Chondrostei
Order Acipenseriformes: sturgeons and paddlefishes
Subclass Neopterygii
Infraclass Holostei
Order Lepisosteiformes, the gars
Order Amiiformes, the bowfins
Infraclass Teleostei
Superorder Osteoglossomorpha
Order Osteoglossiformes, the bony-tongued fishes
Order Hiodontiformes, including the mooneye and goldeye
Order Lycopteriformes
Order Ichthyodectiformes †
Superorder Elopomorpha
Order Elopiformes, including the ladyfishes and tarpon
Order Albuliformes, the bonefishes
Order Notacanthiformes, including the halosaurs and spiny eels
Order Anguilliformes, the true eels and gulpers
Order Saccopharyngiformes, including the gulper eel
Superorder Clupeomorpha
Order Clupeiformes, including herrings and anchovies
Superorder Ostariophysi
Order Gonorynchiformes, including the milkfishes
Order Cypriniformes, including barbs, carp, danios, goldfishes, loaches, minnows, rasboras
Order Characiformes, including characins, pencilfishes, hatchetfishes, piranhas, tetras.
Order Gymnotiformes, including electric eels and knifefishes
Order Siluriformes, the catfishes
Superorder Protacanthopterygii
Order Salmoniformes, including salmon and trout
Order Esociformes the pike
Order Osmeriformes, including the smelts and galaxiids
Superorder Stenopterygii
Order Ateleopodiformes, the jellynose fish
Order Stomiiformes, including the bristlemouths and marine hatchetfishes
Superorder Cyclosquamata
Order Aulopiformes, including the Bombay duck and lancetfishes
Superorder Scopelomorpha
Order Myctophiformes, including the lanternfishes
Superorder Lampridiomorpha
Order Lampriformes, including the oarfish, opah and ribbonfishes
Superorder Polymyxiomorpha
Order Polymixiiformes, the beardfishes
Superorder Paracanthopterygii
Order Percopsiformes, including the cavefishes and trout-perches
Order Batrachoidiformes, the toadfishes
Order Lophiiformes, including the anglerfishes
Order Gadiformes, including cods
Order Ophidiiformes, including the pearlfishes
Superorder Acanthopterygii
Order Mugiliformes, the mullets
Order Atheriniformes, including silversides and rainbowfishes
Order Beloniformes, including the flyingfishes
Order Cetomimiformes, the whalefishes
Order Cyprinodontiformes, including livebearers, killifishes
Order Stephanoberyciformes, including the ridgeheads
Order Beryciformes, including the fangtooths and pineconefishes
Order Zeiformes, including the dories
Order Gobiesociformes, the clingfishes
Order Gasterosteiformes including sticklebacks, pipefishes, seahorses
Order Syngnathiformes, including the seahorses and pipefishes
Order Synbranchiformes, including the swamp eels
Order Tetraodontiformes, including the filefishes and pufferfish
Order Pleuronectiformes, the flatfishes
Order Scorpaeniformes, including scorpionfishes and the sculpins
Order Perciformes, about 40% of all fish, including anabantids, centrarchids (incl. bass and sunfish), cichlids, gobies, gouramis, mackerel, perches, scats, whiting, wrasses
Class Sarcopterygii: Lobe-finned fish
Subclass Actinistia (coelacanths)
Order Coelacanthiformes
Subclass Dipnoi (lungfish)
Order Ceratodontiformes
Superclass Tetrapoda: Four-limbed vertebrates
Class Amphibia: Amphibians
Order Urodela or Caudata (salamanders)
Order Anura (frogs and toads)
Order Gymnophiona or Apoda (caecilians)
Class Reptilia: Reptiles
Subclass Diapsida
Infraclass Archosauromorpha
Superorder Crocodylomorpha
Order Crocodilia (crocodilians)
Class Aves (birds)
Infraclass Lepidosauromorpha
Superorder Lepidosauria
Order Rhynchocephalia (tuataras)
Order Squamata (lizards, snakes)
Subclass Anapsida
Order Testudines (turtles and their kin)
Class Aves: Birds
Subclass Neornithes
Infraclass Palaeognathae
Order Apterygiformes, kiwis
Order Casuariiformes, cassowaries and emu
Order Dinornithiformes †, moas
Order Rheiformes, rheas
Order Struthioniformes, ostriches
Order Tinamiformes, tinamous
Infraclass Neognathae
Superorder Galloanserae (fowl)
Order Anseriformes, waterfowl
Order Gastornithiformes †, gastornis and mihirungs
Order Galliformes, fowl
Superorder Neoaves
Order Sphenisciformes, penguins
Order Gaviiformes, loons
Order Podicipediformes, grebes
Order Procellariiformes, albatrosses, petrels, and allies
Order Pelecaniformes, pelicans and allies
Order Ciconiiformes, storks and allies
Order Phoenicopteriformes, flamingos
Order Accipitriformes, eagles, hawks and allies (taxonomists have traditionally placed these groups in the Falconiformes)
Order Falconiformes, falcons
Order Cariamiformes, seriemas and terror birds
Order Opisthocomiformes, hoatzin (this enigmatic bird was traditionally treated as a family within either the Galliformes or Cuculiformes)
Order Gruiformes, cranes and allies
Order Charadriiformes, plovers and allies
Order Pterocliformes, sandgrouse (this enigmatic group was traditionally treated as a family in any of three different orders: Charadriiformes, Ciconiiformes, and Columbiformes)
Order Columbiformes, doves, pigeons and dodos
Order Psittaciformes, parrots and allies
Order Cuculiformes, cuckoos
Order Strigiformes, owls
Order Caprimulgiformes, nightjars and allies
Order Apodiformes, swifts
Order Coliiformes, mousebirds
Order Trogoniformes, trogons
Order Coraciiformes, kingfishers
Order Piciformes, woodpeckers and allies
Order Passeriformes, passerines
Class Mammalia: Mammals
Subclass Prototheria
Order Monotremata, monotremes (platypus and echidnas)
Subclass Theria
Infraclass Marsupialia
Order Didelphimorphia, opossums
Order Paucituberculata, rat opossums
Order Microbiotheria, monito del monte
Order Dasyuromorphia, marsupial carnivores (quolls, numbats, Tasmanian devils and thylacines)
Order Peramelemorphia, marsupial omnivores (bandicoots and bilbies)
Order Notoryctemorphia, marsupial moles
Order Diprotodontia, marsupial herbivores; kangaroos, wallabies, possums, koalas and allies
Order Polydolopimorphia
Infraclass Eutheria
Magnorder Atlantogenata
Superorder Afrotheria
Grandorder Afrosoricida
Order Afrosoricida, tenrecs and golden moles
Order Macroscelidea, elephant shrews
Order Tubulidentata, aardvark
Grandorder Paenungulata
Order Hyracoidea, hyraxes
Mirorder Tethytheria
Order Proboscidea, elephants
Order Sirenia, manatees and dugongs
Superorder Xenarthra
Order Cingulata, armadillos
Order Pilosa, sloths and anteaters
Magnorder Boreoeutheria
Superorder Laurasiatheria
Order Eulipotyphla, hedgehogs, shrews, moles
Grandorder Ferungulata
Order Artiodactyla, cetaceans (dolphins and whales) and even-toed ungulates (giraffes, camels, pigs, cattle and deer)
Clade Pegasoferae
Order Chiroptera, bats
Mirorder Zooamata
Order Perissodactyla, odd-toed ungulates; horses, rhinos, tapirs
Clade Ferae
Order Pholidota, pangolins
Order Carnivora, carnivores; cats, dogs, bears, racoons, seals, and others
Order Creodonta † (including Hyaenodontidae), with genera such as Hyaenodon, Dissopsalis, Sarkastodon, and Megistotherium
Superorder Euarchontoglires
Grandorder Euarchonta
Mirorder Sundatheria
Order Dermoptera, colugos
Order Scandentia, treeshrews
Mirorder Primatomorpha
Order Primates, lemurs, monkeys, apes and allies
Grandorder Glires
Order Rodentia, rodents (rats, squirrels, capybaras and beavers)
Order Lagomorpha, rabbits, hares and pikas
See also
References
Chordate orders
Chordate orders
Chordate
List | List of chordate orders | [
"Biology"
] | 2,883 | [
"Lists of biota",
"Lists of animals",
"Animals"
] |
309,930 | https://en.wikipedia.org/wiki/Lift%20coefficient | In fluid dynamics, the lift coefficient (CL) is a dimensionless quantity that relates the lift generated by a lifting body to the fluid density around the body, the fluid velocity and an associated reference area. A lifting body is a foil or a complete foil-bearing body such as a fixed-wing aircraft. CL is a function of the angle of the body to the flow, its Reynolds number and its Mach number. The section lift coefficient cl refers to the dynamic lift characteristics of a two-dimensional foil section, with the reference area replaced by the foil chord.
Definitions
The lift coefficient CL is defined by

CL = L / (q S) = 2L / (ρ u² S),

where L is the lift force, S is the relevant surface area and q = ½ ρ u² is the fluid dynamic pressure, in turn linked to the fluid density ρ and to the flow speed u. The choice of the reference surface should be specified since it is arbitrary. For example, for cylindric profiles (the 3D extrusion of an airfoil in the spanwise direction), the first axis generating the surface is always in the spanwise direction. In aerodynamics and thin airfoil theory, the second axis is commonly in the chordwise direction:

S = c b (chord × span),

resulting in a coefficient:

CL = 2L / (ρ u² c b).

While in marine dynamics and for thick airfoils, the second axis is sometimes taken in the thickness direction:

S = t b (thickness × span),

resulting in a different coefficient:

CL,t = 2L / (ρ u² t b).

The ratio between these two coefficients is the thickness ratio:

CL / CL,t = t / c.
The lift coefficient can be approximated using the lifting-line theory, numerically calculated or measured in a wind tunnel test of a complete aircraft configuration.
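A minimal numerical sketch of the definition above follows. The aircraft figures (mass, wing area, speed and air density at altitude) are assumed round numbers chosen only to produce a plausible cruise value, not data for any particular aircraft.

```python
# CL = L / (0.5 * rho * u^2 * S), with SI units throughout.

def lift_coefficient(lift: float, rho: float, speed: float, area: float) -> float:
    dynamic_pressure = 0.5 * rho * speed**2
    return lift / (dynamic_pressure * area)

# Assumed example: 75,000 kg aircraft in level cruise (lift equals weight),
# wing area 125 m^2, speed 230 m/s, air density 0.38 kg/m^3 at altitude.
lift = 75_000 * 9.81
print(f"CL = {lift_coefficient(lift, rho=0.38, speed=230, area=125):.3f}")  # about 0.59
```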
Section lift coefficient
Lift coefficient may also be used as a characteristic of a particular shape (or cross-section) of an airfoil. In this application it is called the section lift coefficient . It is common to show, for a particular airfoil section, the relationship between section lift coefficient and angle of attack. It is also useful to show the relationship between section lift coefficient and drag coefficient.
The section lift coefficient is based on two-dimensional flow over a wing of infinite span and non-varying cross-section, so the lift is independent of spanwise effects, and is defined in terms of L′, the lift force per unit span of the wing. The definition becomes

cl = L′ / (q c) = 2L′ / (ρ u² c),

where c is the reference length that should always be specified: in aerodynamics and airfoil theory usually the airfoil chord is chosen, while in marine dynamics and for struts usually the thickness is chosen. Note this is directly analogous to the drag coefficient since the chord can be interpreted as the "area per unit span".
For a given angle of attack, cl can be calculated approximately using thin airfoil theory, calculated numerically or determined from wind tunnel tests on a finite-length test piece, with end-plates designed to ameliorate the three-dimensional effects. Plots of cl versus angle of attack show the same general shape for all airfoils, but the particular numbers will vary. They show an almost linear increase in lift coefficient with increasing angle of attack, with a gradient known as the lift slope. For a thin airfoil of any shape the lift slope is 2π per radian, or π²/90 ≈ 0.11 per degree. At higher angles a maximum point is reached, after which the lift coefficient reduces. The angle at which maximum lift coefficient occurs is the stall angle of the airfoil, which is approximately 10 to 15 degrees on a typical airfoil.
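The thin-airfoil estimate quoted above can be sketched as follows; the 15-degree stall cut-off is an assumed typical value rather than a property of any particular airfoil, and the formula is only meaningful below stall.

```python
import math

def thin_airfoil_cl(alpha_deg: float, stall_deg: float = 15.0) -> float:
    """Pre-stall thin-airfoil estimate: cl ≈ 2*pi*alpha with alpha in radians."""
    if abs(alpha_deg) >= stall_deg:
        raise ValueError("beyond the linear (pre-stall) range of the approximation")
    return 2 * math.pi * math.radians(alpha_deg)

for alpha in (0, 2, 5, 10):
    print(f"alpha = {alpha:2d} deg -> cl ≈ {thin_airfoil_cl(alpha):.2f}")

# Slope check: 2*pi per radian is pi**2/90 ≈ 0.11 per degree.
print(f"lift slope per degree ≈ {math.pi**2 / 90:.4f}")
```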
The stall angle for a given profile also increases with the Reynolds number: at higher speeds the flow tends to stay attached to the profile for longer, delaying the stall condition. For this reason, wind tunnel testing performed at lower Reynolds numbers than the simulated real-life conditions can give conservative results, predicting stall at a lower angle of attack than occurs in reality.
Symmetric airfoils necessarily have plots of cl versus angle of attack symmetric about the cl axis, but for any airfoil with positive camber, i.e. asymmetrical, convex from above, there is still a small but positive lift coefficient with angles of attack less than zero. That is, the angle at which cl = 0 is negative. On such airfoils at zero angle of attack the pressures on the upper surface are lower than on the lower surface.
See also
Lift-to-drag ratio
Drag coefficient
Foil (fluid mechanics)
Pitching moment
Circulation control wing
Zero lift axis
Notes
References
L. J. Clancy (1975): Aerodynamics. Pitman Publishing Limited, London.
Abbott, Ira H., and Doenhoff, Albert E. von (1959): Theory of Wing Sections. Dover Publications, New York. ISBN 0-486-60586-8
Aerodynamics
Aircraft wing design
Dimensionless numbers of fluid mechanics | Lift coefficient | [
"Chemistry",
"Engineering"
] | 922 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
310,400 | https://en.wikipedia.org/wiki/Cosmic%20string | Cosmic strings are hypothetical 1-dimensional topological defects which may have formed during a symmetry-breaking phase transition in the early universe when the topology of the vacuum manifold associated to this symmetry breaking was not simply connected.
In less formal terms, they are hypothetical long, thin defects in the fabric of space that might have formed in the early universe during a process in which certain symmetries were broken. Their existence was first contemplated by the theoretical physicist Tom Kibble in the 1970s.
The formation of cosmic strings is somewhat analogous to the imperfections that form between crystal grains in solidifying liquids, or the cracks that form when water freezes into ice. The phase transitions leading to the production of cosmic strings are likely to have occurred during the earliest moments of the universe's evolution, just after cosmological inflation, and are a fairly generic prediction in both quantum field theory and string theory models of the early universe.
Theories containing cosmic strings
The prototypical example of a field theory with cosmic strings is the Abelian Higgs model. The quantum field theory and string theory cosmic strings are expected to have many properties in common, but more research is needed to determine the precise distinguishing features. The F-strings for instance are fully quantum-mechanical and do not have a classical definition, whereas the field theory cosmic strings are almost exclusively treated classically.
In superstring theory, the role of cosmic strings can be played by the fundamental strings (or F-strings) themselves that define the theory perturbatively, by D-strings which are related to the F-strings by weak-strong or so called S-duality, or higher-dimensional D-, NS- or M-branes that are partially wrapped on compact cycles associated to extra spacetime dimensions so that only one non-compact dimension remains.
Dimensions
Cosmic strings, if they exist, would be extremely thin topological defects with diameters of the same order of magnitude as that of a proton, i.e. roughly 10^−15 m, or smaller. Given that this scale is much smaller than any cosmological scale, these strings are often studied in the zero-width, or Nambu–Goto, approximation. Under this assumption, strings behave as one-dimensional objects and obey the Nambu–Goto action, which is classically equivalent to the Polyakov action that defines the bosonic sector of superstring theory.
In field theory, the string width is set by the scale of the symmetry breaking phase transition. In string theory, the string width is set (in the simplest cases) by the fundamental string scale, warp factors (associated to the spacetime curvature of an internal six-dimensional spacetime manifold) and/or the size of internal compact dimensions. (In string theory, the universe is either 10- or 11-dimensional, depending on the strength of interactions and the curvature of spacetime.)
Gravitation
A string is a geometrical deviation from Euclidean geometry in spacetime characterized by an angular deficit: a circle around the outside of a string would comprise a total angle less than 360°. From the general theory of relativity such a geometrical defect must be in tension, and would be manifested by mass. Even though cosmic strings are thought to be extremely thin, they would have immense density, and so would represent significant gravitational wave sources. A cosmic string about a kilometer in length may be more massive than the Earth.
However general relativity predicts that the gravitational potential of a straight string vanishes: there is no gravitational force on static surrounding matter. The only gravitational effect of a straight cosmic string is a relative deflection of matter (or light) passing the string on opposite sides (a purely topological effect). A closed cosmic string gravitates in a more conventional way.
During the expansion of the universe, cosmic strings would form a network of loops, and in the past it was thought that their gravity could have been responsible for the original clumping of matter into galactic superclusters. It is now calculated that their contribution to the structure formation in the universe is less than 10%.
Negative mass cosmic string
The standard model of a cosmic string is a geometrical structure with an angle deficit, which thus is in tension and hence has positive mass. In 1995, Visser et al. proposed that cosmic strings could theoretically also exist with angle excesses, and thus negative tension and hence negative mass. The stability of such exotic matter strings is problematic; however, they suggested that if a negative mass string were to be wrapped around a wormhole in the early universe, such a wormhole could be stabilized sufficiently to exist in the present day.
Super-critical cosmic string
The exterior geometry of a (straight) cosmic string can be visualized in an embedding diagram as follows: Focusing on the two-dimensional surface perpendicular to the string, its geometry is that of a cone which is obtained by cutting out a wedge of angle δ and gluing together the edges. The angular deficit δ is linearly related to the string tension (= mass per unit length), i.e. the larger the tension, the steeper the cone. Therefore, δ reaches 2π for a certain critical value of the tension, and the cone degenerates to a cylinder. (In visualizing this setup one has to think of a string with a finite thickness.) For even larger, "super-critical" values, δ exceeds 2π and the (two-dimensional) exterior geometry closes up (it becomes compact), ending in a conical singularity.
However, this static geometry is unstable in the super-critical case (unlike for sub-critical tensions): small perturbations lead to a dynamical spacetime which expands in the axial direction at a constant rate. The 2D exterior is still compact, but the conical singularity can be avoided, and the embedding picture is that of a growing cigar. For even larger tensions (exceeding the critical value by approximately a factor of 1.6), the string cannot be stabilized in the radial direction anymore.
Realistic cosmic strings are expected to have tensions around 6 orders of magnitude below the critical value, and are thus always sub-critical. However, the inflating cosmic string solutions might be relevant in the context of brane cosmology, where the string is promoted to a 3-brane (corresponding to our universe) in a six-dimensional bulk.
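The linear relation between the angular deficit and the string tension described above can be made concrete. The standard linearised general-relativity result gives a deficit angle δ = 8πGμ/c² for a string of tension (mass per unit length) μ; the precise prefactor is supplied here as background and is not quoted from the text. A minimal Python sketch:

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8          # speed of light [m/s]

def deficit_angle(mu):
    """Angular deficit (radians) of a straight cosmic string of tension mu [kg/m],
    using the standard linearised result delta = 8*pi*G*mu/c**2."""
    return 8.0 * math.pi * G * mu / c ** 2

# A GUT-scale string is often quoted with G*mu/c^2 of order 1e-6:
mu_gut = 1e-6 * c ** 2 / G                      # about 1.3e21 kg/m
print(deficit_angle(mu_gut))                    # about 2.5e-5 rad, a few arcseconds

# The cone degenerates (delta = 2*pi) at the critical tension G*mu/c^2 = 1/4:
mu_crit = 0.25 * c ** 2 / G
print(deficit_angle(mu_crit) / (2 * math.pi))   # = 1.0
```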
Observational evidence
It was once thought that the gravitational influence of cosmic strings might contribute to the large-scale clumping of matter in the universe, but all that is known today through galaxy surveys and precision measurements of the cosmic microwave background (CMB) fits an evolution out of random, Gaussian fluctuations. These precise observations therefore tend to rule out a significant role for cosmic strings, and currently it is known that the contribution of cosmic strings to the CMB cannot be more than 10%.
The violent oscillations of cosmic strings generically lead to the formation of cusps and kinks. These in turn cause parts of the string to pinch off into isolated loops. These loops have a finite lifespan and decay (primarily) via gravitational radiation. This radiation, which leads to the strongest signal from cosmic strings, may in turn be detectable in gravitational wave observatories. An important open question is to what extent the pinched-off loops backreact on, or change, the initial state of the emitting cosmic string; such backreaction effects are almost always neglected in computations, even though they are known to be important even for order-of-magnitude estimates.
Gravitational lensing of a galaxy by a straight section of a cosmic string would produce two identical, undistorted images of the galaxy. In 2003 a group led by Mikhail Sazhin reported the accidental discovery of two seemingly identical galaxies very close together in the sky, leading to speculation that a cosmic string had been found. However, observations by the Hubble Space Telescope in January 2005 showed them to be a pair of similar galaxies, not two images of the same galaxy. A cosmic string would produce a similar duplicate image of fluctuations in the cosmic microwave background, which it was thought might have been detectable by the Planck Surveyor mission. However, a 2013 analysis of data from the Planck mission failed to find any evidence of cosmic strings.
A piece of evidence supporting cosmic string theory is a phenomenon noticed in observations of the "double quasar" called Q0957+561A,B. Originally discovered by Dennis Walsh, Bob Carswell, and Ray Weymann in 1979, the double image of this quasar is caused by a galaxy positioned between it and the Earth. The gravitational lens effect of this intermediate galaxy bends the quasar's light so that it follows two paths of different lengths to Earth. The result is that we see two images of the same quasar, one arriving a short time after the other (about 417.1 days later). However, a team of astronomers at the Harvard-Smithsonian Center for Astrophysics led by Rudolph Schild studied the quasar and found that during the period between September 1994 and July 1995 the two images appeared to have no time delay; changes in the brightness of the two images occurred simultaneously on four separate occasions. Schild and his team believe that the only explanation for this observation is that a cosmic string passed between the Earth and the quasar during that time period traveling at very high speed and oscillating with a period of about 100 days.
Until 2023 the most sensitive bounds on cosmic string parameters came from the non-detection of gravitational waves by pulsar timing array data. The first detection of gravitational waves with pulsar timing array was confirmed in 2023. The earthbound Laser Interferometer Gravitational-Wave Observatory (LIGO) and especially the space-based gravitational wave detector Laser Interferometer Space Antenna (LISA) will search for gravitational waves and are likely to be sensitive enough to detect signals from cosmic strings, provided the relevant cosmic string tensions are not too small.
String theory and cosmic strings
During the early days of string theory both string theorists and cosmic string theorists believed that there was no direct connection between superstrings and cosmic strings (the names were chosen independently by analogy with ordinary string). The possibility of cosmic strings being produced in the early universe was first envisioned by quantum field theorist Tom Kibble in 1976, and this sprouted the first flurry of interest in the field.
In 1985, during the first superstring revolution, Edward Witten contemplated the possibility of fundamental superstrings having been produced in the early universe and stretched to macroscopic scales, in which case (following the nomenclature of Tom Kibble) they would then be referred to as cosmic superstrings. He concluded that, had they been produced, they would either have disintegrated into smaller strings before ever reaching macroscopic scales (in the case of Type I superstring theory), or would always appear as boundaries of domain walls whose tension would force the strings to collapse rather than grow to cosmic scales (in the context of heterotic superstring theory), or, having a characteristic energy scale close to the Planck energy, would be produced before cosmological inflation and hence be diluted away with the expansion of the universe and not be observable.
Much has changed since these early days, primarily due to the second superstring revolution. It is now known that string theory contains, in addition to the fundamental strings which define the theory perturbatively, other one-dimensional objects, such as D-strings, and higher-dimensional objects such as D-branes, NS-branes and M-branes partially wrapped on compact internal spacetime dimensions, while being spatially extended in one non-compact dimension. The possibility of large compact dimensions and large warp factors allows strings with tension much lower than the Planck scale.
Furthermore, various dualities that have been discovered point to the conclusion that actually all these apparently different types of string are just the same object as it appears in different regions of parameter space. These new developments have largely revived interest in cosmic strings, starting in the early 2000s.
In 2002, Henry Tye and collaborators predicted the production of cosmic superstrings during the last stages of brane inflation, a string theory construction of the early universe that leads to an expanding universe and cosmological inflation. It was subsequently realized by string theorist Joseph Polchinski that the expanding Universe could have stretched a "fundamental" string (the sort which superstring theory considers) until it was of intergalactic size. Such a stretched string would exhibit many of the properties of the old "cosmic" string variety, making the older calculations useful again. As theorist Tom Kibble remarks, "string theory cosmologists have discovered cosmic strings lurking everywhere in the undergrowth". Older proposals for detecting cosmic strings could now be used to investigate superstring theory.
Superstrings, D-strings or the other stringy objects mentioned above stretched to intergalactic scales would radiate gravitational waves, which could be detected using experiments like LIGO and especially the space-based gravitational wave experiment LISA. They might also cause slight irregularities in the cosmic microwave background, too subtle to have been detected yet but possibly within the realm of future observability.
Note that most of these proposals depend, however, on the appropriate cosmological fundamentals (strings, branes, etc.), and no convincing experimental verification of these has been confirmed to date. Cosmic strings nevertheless provide a window into string theory. If cosmic strings are observed, which is a real possibility for a wide range of cosmological string models, this would provide the first experimental evidence of a string theory model underlying the structure of spacetime.
Cosmic string network
There have been many attempts to detect the footprint of a cosmic string network.
Potential applications
In 1986, John G. Cramer proposed that spacecraft equipped with magnet coils could travel along cosmic strings, analogous to how a maglev train travels along a rail line.
See also
0-dimensional topological defect: magnetic monopole
2-dimensional topological defect: domain wall (e.g. of 1-dimensional topological defect: a cosmic string)
Cosmic string loop stabilised by a fermionic supercurrent: vorton
References
External links
An artistic perspective of Cosmic Strings
A simulation of cosmic string
http://www.damtp.cam.ac.uk/user/gr/public/cs_interact.html
Dr. Kip Thorne, ITP & Caltech. Spacetime Warps and the Quantum: A Glimpse of the Future. Lecture slides and audio
Cosmic strings and superstrings on arxiv.org
Large-scale structure of the cosmos
Hypothetical astronomical objects | Cosmic string | [
"Astronomy"
] | 2,988 | [
"Astronomical hypotheses",
"Hypothetical astronomical objects",
"Astronomical myths",
"Astronomical objects"
] |
310,474 | https://en.wikipedia.org/wiki/Light%20cone | In special and general relativity, a light cone (or "null cone") is the path that a flash of light, emanating from a single event (localized to a single point in space and a single moment in time) and traveling in all directions, would take through spacetime.
Details
If one imagines the light confined to a two-dimensional plane, the light from the flash spreads out in a circle after the event E occurs, and if we graph the growing circle with the vertical axis of the graph representing time, the result is a cone, known as the future light cone. The past light cone behaves like the future light cone in reverse, a circle which contracts in radius at the speed of light until it converges to a point at the exact position and time of the event E. In reality, there are three space dimensions, so the light would actually form an expanding or contracting sphere in three-dimensional (3D) space rather than a circle in 2D, and the light cone would actually be a four-dimensional version of a cone whose cross-sections form 3D spheres (analogous to a normal three-dimensional cone whose cross-sections form 2D circles), but the concept is easier to visualize with the number of spatial dimensions reduced from three to two.
This view of special relativity was first proposed by Albert Einstein's former professor Hermann Minkowski and is known as Minkowski space. The purpose was to create an invariant spacetime for all observers. To uphold causality, Minkowski restricted spacetime to non-Euclidean hyperbolic geometry.
Because signals and other causal influences cannot travel faster than light (see special relativity), the light cone plays an essential role in defining the concept of causality: for a given event E, the set of events that lie on or inside the past light cone of E would also be the set of all events that could send a signal that would have time to reach E and influence it in some way. For example, at a time ten years before E, if we consider the set of all events in the past light cone of E which occur at that time, the result would be a sphere (2D: disk) with a radius of ten light-years centered on the position where E will occur. So, any point on or inside the sphere could send a signal moving at the speed of light or slower that would have time to influence the event E, while points outside the sphere at that moment would not be able to have any causal influence on E. Likewise, the set of events that lie on or inside the future light cone of E would also be the set of events that could receive a signal sent out from the position and time of E, so the future light cone contains all the events that could potentially be causally influenced by E. Events which lie neither in the past or future light cone of E cannot influence or be influenced by E in relativity.
Mathematical construction
In special relativity, a light cone (or null cone) is the surface describing the temporal evolution of a flash of light in Minkowski spacetime. This can be visualized in 3-space if the two horizontal axes are chosen to be spatial dimensions, while the vertical axis is time.
The light cone is constructed as follows. Taking as event p a flash of light (light pulse) at time t0, all events that can be reached by this pulse from p form the future light cone of p, while those events that can send a light pulse to p form the past light cone of p.
Given an event E, the light cone classifies all events in spacetime into 5 distinct categories:
Events on the future light cone of E.
Events on the past light cone of E.
Events inside the future light cone of E are those affected by a material particle emitted at E.
Events inside the past light cone of E are those that can emit a material particle and affect what is happening at E.
All other events are in the (absolute) elsewhere of E and are those that cannot affect or be affected by E.
The above classifications hold true in any frame of reference; that is, an event judged to be in the light cone by one observer, will also be judged to be in the same light cone by all other observers, no matter their frame of reference.
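A small numerical illustration of this frame-independence (an informal sketch; the event coordinates and velocities are made up): classify an event relative to the light cone of the origin via the invariant interval s² = c²Δt² − |Δx|², and check that a Lorentz boost does not change the classification.

```python
import math

C = 1.0  # work in units where the speed of light is 1

def classify(t, x):
    """Classify event (t, x) relative to the light cone of the origin (0, 0)."""
    s2 = (C * t) ** 2 - x ** 2          # invariant interval (one space dimension)
    if abs(s2) < 1e-12:
        return "on the future light cone" if t > 0 else "on the past light cone"
    if s2 > 0:
        return "inside the future light cone" if t > 0 else "inside the past light cone"
    return "elsewhere (spacelike separated)"

def boost(t, x, v):
    """Lorentz boost with velocity v (|v| < 1 in units of c)."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (t - v * x), gamma * (x - v * t)

event = (2.0, 1.0)                       # timelike separated from the origin
print(classify(*event))                  # inside the future light cone
print(classify(*boost(*event, v=0.6)))   # same classification in the boosted frame
```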
The above refers to an event occurring at a specific location and at a specific time. To say that one event cannot affect another means that light cannot get from the location of one to the other in a given amount of time. Light from each event will ultimately make it to the former location of the other, but after those events have occurred.
As time progresses, the future light cone of a given event will eventually grow to encompass more and more locations (in other words, the 3D sphere that represents the cross-section of the 4D light cone at a particular moment in time becomes larger at later times). However, if we imagine running time backwards from a given event, the event's past light cone would likewise encompass more and more locations at earlier and earlier times. The farther locations will be at later times: for example, if we are considering the past light cone of an event which takes place on Earth today, a star 10,000 light years away would only be inside the past light cone at times 10,000 years or more in the past. The past light cone of an event on present-day Earth, at its very edges, includes very distant objects (every object in the observable universe), but only as they looked long ago, when the known universe was young.
Two events at different locations, at the same time (according to a specific frame of reference), are always outside each other's past and future light cones; light cannot travel instantaneously. Other observers might see the events happening at different times and at different locations, but one way or another, the two events will likewise be seen to be outside each other's cones.
If using a system of units where the speed of light in vacuum is defined as exactly 1, for example if space is measured in light-seconds and time is measured in seconds, then, provided the time axis is drawn orthogonally to the spatial axes, as the cone bisects the time and space axes, it will show a slope of 45°, because light travels a distance of one light-second in vacuum during one second. Since special relativity requires the speed of light to be equal in every inertial frame, all observers must arrive at the same angle of 45° for their light cones. Commonly a Minkowski diagram is used to illustrate this property of Lorentz transformations.
The elsewhere, an integral part of the light-cone picture, is the region of spacetime outside the light cone of a given event (a point in spacetime). Events that are elsewhere from each other are mutually unobservable, and cannot be causally connected.
(The 45° figure really only has meaning in space-space, as we try to understand space-time by making space-space drawings. Space-space tilt is measured by angles, and calculated with trig functions. Space-time tilt is measured by rapidity, and calculated with hyperbolic functions.)
In general relativity
In flat spacetime, the future light cone of an event is the boundary of its causal future and its past light cone is the boundary of its causal past.
In a curved spacetime, assuming spacetime is globally hyperbolic, it is still true that the future light cone of an event includes the boundary of its causal future (and similarly for the past). However gravitational lensing can cause part of the light cone to fold in on itself, in such a way that part of the cone is strictly inside the causal future (or past), and not on the boundary.
Light cones also cannot all be tilted so that they are 'parallel'; this reflects the fact that the spacetime is curved and is essentially different from Minkowski space. In vacuum regions (those points of spacetime free of matter), this inability to tilt all the light cones so that they are all parallel is reflected in the non-vanishing of the Weyl tensor.
See also
Absolute future
Absolute past
Hyperbolic partial differential equation
Hypercone
Light-cone coordinates
Lorentz transformation
Method of characteristics
Minkowski diagram
Monge cone
Wave equation
References
External links
The Einstein-Minkowski Spacetime: Introducing the Light Cone
The Paradox of Special Relativity
RSS feed of stars in one's personal light cone
Concepts in astrophysics
Light
Lorentzian manifolds
Theory of relativity | Light cone | [
"Physics"
] | 1,729 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Concepts in astrophysics",
"Electromagnetic spectrum",
"Astrophysics",
"Waves",
"Light",
"Theory of relativity"
] |
310,480 | https://en.wikipedia.org/wiki/Lindel%C3%B6f%20space | In mathematics, a Lindelöf space is a topological space in which every open cover has a countable subcover. The Lindelöf property is a weakening of the more commonly used notion of compactness, which requires the existence of a finite subcover.
A hereditarily Lindelöf space is a topological space such that every subspace of it is Lindelöf. Such a space is sometimes called strongly Lindelöf, but confusingly that terminology is sometimes used with an altogether different meaning.
The term hereditarily Lindelöf is more common and unambiguous.
Lindelöf spaces are named after the Finnish mathematician Ernst Leonard Lindelöf.
Properties of Lindelöf spaces
Every compact space, and more generally every σ-compact space, is Lindelöf. In particular, every countable space is Lindelöf.
A Lindelöf space is compact if and only if it is countably compact.
Every second-countable space is Lindelöf, but not conversely. For example, there are many compact spaces that are not second-countable.
A metric space is Lindelöf if and only if it is separable, and if and only if it is second-countable.
Every regular Lindelöf space is normal.
Every regular Lindelöf space is paracompact.
A countable union of Lindelöf subspaces of a topological space is Lindelöf.
Every closed subspace of a Lindelöf space is Lindelöf. Consequently, every Fσ set in a Lindelöf space is Lindelöf.
Arbitrary subspaces of a Lindelöf space need not be Lindelöf.
The continuous image of a Lindelöf space is Lindelöf.
The product of a Lindelöf space and a compact space is Lindelöf.
The product of a Lindelöf space and a σ-compact space is Lindelöf. This is a corollary to the previous property.
The product of two Lindelöf spaces need not be Lindelöf. For example, the Sorgenfrey line is Lindelöf, but the Sorgenfrey plane is not Lindelöf.
In a Lindelöf space, every locally finite family of nonempty subsets is at most countable.
Properties of hereditarily Lindelöf spaces
A space is hereditarily Lindelöf if and only if every open subspace of it is Lindelöf.
Hereditarily Lindelöf spaces are closed under taking countable unions, subspaces, and continuous images.
A regular Lindelöf space is hereditarily Lindelöf if and only if it is perfectly normal.
Every second-countable space is hereditarily Lindelöf.
Every countable space is hereditarily Lindelöf.
Every Suslin space is hereditarily Lindelöf.
Every Radon measure on a hereditarily Lindelöf space is moderated.
Example: the Sorgenfrey plane is not Lindelöf
The product of Lindelöf spaces is not necessarily Lindelöf. The usual example of this is the Sorgenfrey plane S, which is the product of the real line under the half-open interval topology with itself. Open sets in the Sorgenfrey plane are unions of half-open rectangles that include the south and west edges and omit the north and east edges, including the northwest, northeast, and southeast corners. The antidiagonal of S is the set of points (x, y) such that x + y = 0.
Consider the open covering of S which consists of:
(1) The set of all rectangles (−∞, x) × (−∞, y), where (x, y) is on the antidiagonal.
(2) The set of all rectangles [x, +∞) × [y, +∞), where (x, y) is on the antidiagonal.
The thing to notice here is that each point on the antidiagonal is contained in exactly one set of the covering, namely its own rectangle from item (2), so all the (uncountably many) sets of item (2) above are needed.
Another way to see that the Sorgenfrey plane is not Lindelöf is to note that the antidiagonal defines a closed and uncountable discrete subspace of it. This subspace is not Lindelöf, and so the whole space cannot be Lindelöf either (as closed subspaces of Lindelöf spaces are also Lindelöf).
Generalisation
The following definition generalises the definitions of compact and Lindelöf: a topological space is κ-compact (or κ-Lindelöf), where κ is any cardinal, if every open cover has a subcover of cardinality strictly less than κ. Compact is then ℵ₀-compact and Lindelöf is then ℵ₁-compact.
The Lindelöf degree, or Lindelöf number l(X), is the smallest cardinal κ such that every open cover of the space X has a subcover of size at most κ (a formal statement is given below). In this notation, X is Lindelöf if l(X) ≤ ℵ₀. The Lindelöf number as defined above does not distinguish between compact spaces and Lindelöf non-compact spaces. Some authors gave the name Lindelöf number to a different notion: the smallest cardinal κ such that every open cover of the space X has a subcover of size strictly less than κ. In this latter (and less used) sense the Lindelöf number is the smallest cardinal κ such that a topological space X is κ-compact. This notion is sometimes also called the degree of compactness of the space X.
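Written out, the first definition above reads as follows (a standard formulation, supplied here because the displayed formula did not survive in the text):

```latex
l(X) \;=\; \min\bigl\{\kappa \;:\; \text{every open cover of } X \text{ has a subcover of cardinality at most } \kappa\bigr\},
\qquad X \text{ is Lindel\"of} \iff l(X) \le \aleph_{0}.
```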
See also
Notes
References
Engelking, Ryszard, General Topology, Heldermann Verlag Berlin, 1989.
Willard, Stephen. General Topology, Dover Publications (2004)
Further reading
https://dantopology.wordpress.com/2012/05/03/when-is-a-lindelof-space-normal/
Compactness (mathematics)
General topology
Properties of topological spaces | Lindelöf space | [
"Mathematics"
] | 1,138 | [
"General topology",
"Properties of topological spaces",
"Space (mathematics)",
"Topological spaces",
"Topology"
] |
310,914 | https://en.wikipedia.org/wiki/Hodge%20star%20operator | In mathematics, the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form. Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge.
For example, in an oriented 3-dimensional Euclidean space, an oriented plane can be represented by the exterior product of two basis vectors, and its Hodge dual is the normal vector given by their cross product; conversely, any vector is dual to the oriented plane perpendicular to it, endowed with a suitable bivector. Generalizing this to an n-dimensional vector space, the Hodge star is a one-to-one mapping of k-vectors to (n − k)-vectors; the dimensions of these spaces are the (equal) binomial coefficients C(n, k) = C(n, n − k).
The naturalness of the star operator means it can play a role in differential geometry, when applied to the cotangent bundle of a pseudo-Riemannian manifold, and hence to differential -forms. This allows the definition of the codifferential as the Hodge adjoint of the exterior derivative, leading to the Laplace–de Rham operator. This generalizes the case of 3-dimensional Euclidean space, in which divergence of a vector field may be realized as the codifferential opposite to the gradient operator, and the Laplace operator on a function is the divergence of its gradient. An important application is the Hodge decomposition of differential forms on a closed Riemannian manifold.
Formal definition for k-vectors
Let be an -dimensional oriented vector space with a nondegenerate symmetric bilinear form , referred to here as a scalar product. (In more general contexts such as pseudo-Riemannian manifolds and Minkowski space, the bilinear form may not be positive-definite.) This induces a scalar product on -vectors for , by defining it on simple -vectors and to equal the Gram determinant
extended to through linearity.
The unit -vector is defined in terms of an oriented orthonormal basis of as:
(Note: In the general pseudo-Riemannian case, orthonormality means
for all pairs of basis vectors.)
The Hodge star operator is a linear operator on the exterior algebra of V, mapping k-vectors to (n − k)-vectors, for 0 ≤ k ≤ n. It has the following property, which defines it completely:
α ∧ (⋆β) = ⟨α, β⟩ ω for all k-vectors α, β, where ω is the unit n-vector defined above.
Dually, in the space of -forms (alternating -multilinear functions on ), the dual to is the volume form , the function whose value on is the determinant of the matrix assembled from the column vectors of in -coordinates. Applying to the above equation, we obtain the dual definition:
for all -vectors
Equivalently, taking , , and :
This means that, writing an orthonormal basis of -vectors as over all subsets of , the Hodge dual is the ()-vector corresponding to the complementary set :
where is the sign of the permutation
and is the product
. In the Riemannian case, .
Since Hodge star takes an orthonormal basis to an orthonormal basis, it is an isometry on the exterior algebra .
Geometric explanation
The Hodge star is motivated by the correspondence between a subspace of and its orthogonal subspace (with respect to the scalar product), where each space is endowed with an orientation and a numerical scaling factor. Specifically, a non-zero decomposable -vector corresponds by the Plücker embedding to the subspace with oriented basis , endowed with a scaling factor equal to the -dimensional volume of the parallelepiped spanned by this basis (equal to the Gramian, the determinant of the matrix of scalar products ). The Hodge star acting on a decomposable vector can be written as a decomposable ()-vector:
where form an oriented basis of the orthogonal space . Furthermore, the ()-volume of the -parallelepiped must equal the -volume of the -parallelepiped, and must form an oriented basis of .
A general -vector is a linear combination of decomposable -vectors, and the definition of Hodge star is extended to general -vectors by defining it as being linear.
Examples
Two dimensions
In two dimensions with the normalized Euclidean metric and orientation given by the ordering (x, y), the Hodge star on 1-forms is given by ⋆dx = dy and ⋆dy = −dx.
Three dimensions
A common example of the Hodge star operator is the case n = 3, when it can be taken as the correspondence between vectors and bivectors. Specifically, for Euclidean R3 with the basis of one-forms dx, dy, dz often used in vector calculus, one finds that
⋆dx = dy ∧ dz, ⋆dy = dz ∧ dx, ⋆dz = dx ∧ dy.
The Hodge star relates the exterior and cross product in three dimensions: Applied to three dimensions, the Hodge star provides an isomorphism between axial vectors and bivectors, so each axial vector is associated with a bivector and vice versa, that is: .
The Hodge star can also be interpreted as a form of the geometric correspondence between an axis of rotation and an infinitesimal rotation (see also: 3D rotation group#Lie algebra) around the axis, with speed equal to the length of the axis of rotation. A scalar product on a vector space gives an isomorphism identifying with its dual space, and the vector space is naturally isomorphic to the tensor product . Thus for , the star mapping takes each vector to a bivector , which corresponds to a linear operator . Specifically, is a skew-symmetric operator, which corresponds to an infinitesimal rotation: that is, the macroscopic rotations around the axis are given by the matrix exponential . With respect to the basis of , the tensor corresponds to a coordinate matrix with 1 in the row and column, etc., and the wedge is the skew-symmetric matrix , etc. That is, we may interpret the star operator as:
Under this correspondence, cross product of vectors corresponds to the commutator Lie bracket of linear operators: .
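The correspondence just described is easy to check numerically. The sketch below is an illustration under the standard Euclidean metric and orientation (not code from any reference): it maps a vector to the skew-symmetric matrix representing its Hodge-dual bivector and verifies that the cross product corresponds to the commutator.

```python
import numpy as np

def star(v):
    """Skew-symmetric matrix of the bivector *v (Euclidean R^3, standard orientation).
    Acting on a vector w it returns v x w."""
    x, y, z = v
    return np.array([[0.0, -z,  y],
                     [ z, 0.0, -x],
                     [-y,  x, 0.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])

# star(u) acts on a vector as the cross product u x (.)
assert np.allclose(star(u) @ v, np.cross(u, v))

# cross product of vectors corresponds to the commutator of the matrices
commutator = star(u) @ star(v) - star(v) @ star(u)
assert np.allclose(commutator, star(np.cross(u, v)))
print("cross product <-> commutator verified")
```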
Four dimensions
In case , the Hodge star acts as an endomorphism of the second exterior power (i.e. it maps 2-forms to 2-forms, since ). If the signature of the metric tensor is all positive, i.e. on a Riemannian manifold, then the Hodge star is an involution. If the signature is mixed, i.e., pseudo-Riemannian, then applying the operator twice will return the argument up to a sign – see below. This particular endomorphism property of 2-forms in four dimensions makes self-dual and anti-self-dual two-forms natural geometric objects to study. That is, one can describe the space of 2-forms in four dimensions with a basis that "diagonalizes" the Hodge star operator with eigenvalues (or , depending on the signature).
For concreteness, we discuss the Hodge star operator in Minkowski spacetime where with metric signature and coordinates . The volume form is oriented as . For one-forms,
while for 2-forms,
These are summarized in the index notation as
Hodge dual of three- and four-forms can be easily deduced from the fact that, in the Lorentzian signature, for odd-rank forms and for even-rank forms. An easy rule to remember for these Hodge operations is that given a form , its Hodge dual may be obtained by writing the components not involved in in an order such that . An extra minus sign will enter only if contains . (For , one puts in a minus sign only if involves an odd number of the space-associated forms , and .)
Note that the combinations
take as the eigenvalue for Hodge star operator, i.e.,
and hence deserve the name self-dual and anti-self-dual two-forms. Understanding the geometry, or kinematics, of Minkowski spacetime in self-dual and anti-self-dual sectors turns out to be insightful in both mathematical and physical perspectives, making contacts to the use of the two-spinor language in modern physics such as spinor-helicity formalism or twistor theory.
Conformal invariance
The Hodge star is conformally invariant on -forms on a -dimensional vector space , i.e. if is a metric on and , then the induced Hodge stars
are the same.
Example: Derivatives in three dimensions
The combination of the operator and the exterior derivative generates the classical operators , , and on vector fields in three-dimensional Euclidean space. This works out as follows: takes a 0-form (a function) to a 1-form, a 1-form to a 2-form, and a 2-form to a 3-form (and takes a 3-form to zero). For a 0-form , the first case written out in components gives:
The scalar product identifies 1-forms with vector fields as , etc., so that becomes .
In the second case, a vector field corresponds to the 1-form , which has exterior derivative:
Applying the Hodge star gives the 1-form:
which becomes the vector field .
In the third case, again corresponds to . Applying Hodge star, exterior derivative, and Hodge star again:
One advantage of this expression is that the identity , which is true in all cases, has as special cases two other identities: (1) , and (2) . In particular, Maxwell's equations take on a particularly simple and elegant form, when expressed in terms of the exterior derivative and the Hodge star. The expression (multiplied by an appropriate power of −1) is called the codifferential; it is defined in full generality, for any dimension, further in the article below.
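Written out with the musical isomorphisms ♭ and ♯ induced by the scalar product (notation introduced here as an assumption, since the displayed formulas did not survive in the text), the three operators read:

```latex
\operatorname{grad} f = (\mathrm{d} f)^{\sharp}, \qquad
\operatorname{curl} \mathbf{F} = \bigl(\star\,\mathrm{d}\,\mathbf{F}^{\flat}\bigr)^{\sharp}, \qquad
\operatorname{div} \mathbf{F} = \star\,\mathrm{d}\,\star\,\mathbf{F}^{\flat}.
```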
One can also obtain the Laplacian Δf = div grad f in terms of the above operations: Δf = ⋆d⋆df.
The Laplacian can also be seen as a special case of the more general Laplace–deRham operator where in three dimensions, is the codifferential for -forms. Any function is a 0-form, and and so this reduces to the ordinary Laplacian. For the 1-form above, the codifferential is and after some straightforward calculations one obtains the Laplacian acting on .
Duality
Applying the Hodge star twice leaves a k-vector unchanged up to a sign: for η in the k-th exterior power of an n-dimensional space V, one has
⋆⋆η = (−1)^(k(n−k)) s η,
where s is the parity of the signature of the scalar product on V, that is, the sign of the determinant of the matrix of the scalar product with respect to any basis. For example, if n = 4 and the signature of the scalar product is either (+ − − −) or (− + + +) then s = −1. For Riemannian manifolds (including Euclidean spaces), we always have s = 1.
The above identity implies that the inverse of ⋆ can be given as
⋆^(−1) = (−1)^(k(n−k)) s ⋆.
If n is odd then k(n − k) is even for any k, whereas if n is even then k(n − k) has the parity of k. Therefore:
⋆^(−1) = s ⋆ if n is odd, and ⋆^(−1) = (−1)^k s ⋆ if n is even,
where k is the degree of the element operated on.
On manifolds
For an n-dimensional oriented pseudo-Riemannian manifold M, we apply the construction above to each cotangent space and its exterior powers , and hence to the differential k-forms , the global sections of the bundle . The Riemannian metric induces a scalar product on at each point . We define the Hodge dual of a k-form , defining as the unique (n – k)-form satisfying
for every k-form , where is a real-valued function on , and the volume form is induced by the pseudo-Riemannian metric. Integrating this equation over , the right side becomes the (square-integrable) scalar product on k-forms, and we obtain:
More generally, if is non-orientable, one can define the Hodge star of a k-form as a (n – k)-pseudo differential form; that is, a differential form with values in the canonical line bundle.
Computation in index notation
We compute in terms of tensor index notation with respect to a (not necessarily orthonormal) basis in a tangent space and its dual basis in , having the metric matrix and its inverse matrix . The Hodge dual of a decomposable k-form is:
Here is the Levi-Civita symbol with , and we implicitly take the sum over all values of the repeated indices . The factorial accounts for double counting, and is not present if the summation indices are restricted so that . The absolute value of the determinant is necessary since it may be negative, as for tangent spaces to Lorentzian manifolds.
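To make the index formula concrete, here is a small numpy sketch. It is purely illustrative: the sign and orientation conventions follow the formula as stated above and should be treated as assumptions, and the function names are made up. It builds the Levi-Civita symbol explicitly and raises the k indices with the inverse metric.

```python
import itertools
from math import factorial

import numpy as np

def levi_civita(n):
    """Levi-Civita symbol as a rank-n array (fine for the small n used here)."""
    eps = np.zeros((n,) * n)
    for perm in itertools.permutations(range(n)):
        inversions = sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])
        eps[perm] = (-1) ** inversions
    return eps

def hodge_dual(alpha, g, k):
    """Components of the Hodge dual of an antisymmetric (0,k)-tensor `alpha`.

    Implements (*alpha)_{j...} = (1/k!) sqrt|det g| alpha^{i...} eps_{i... j...},
    raising the k indices of alpha with the inverse of the metric matrix `g`.
    """
    n = g.shape[0]
    g_inv = np.linalg.inv(g)
    raised = alpha
    for _ in range(k):
        # contract the current first (lower) index with the inverse metric;
        # the raised index is appended at the end, so the index order is preserved
        raised = np.tensordot(raised, g_inv, axes=([0], [0]))
    eps = levi_civita(n)
    dual = np.tensordot(raised, eps, axes=(list(range(k)), list(range(k))))
    return np.sqrt(abs(np.linalg.det(g))) / factorial(k) * dual

# Euclidean R^3: *(dx) = dy ^ dz and *(dy ^ dz) = dx
g3 = np.eye(3)
dx = np.array([1.0, 0.0, 0.0])           # components of the 1-form dx
dy_dz = hodge_dual(dx, g3, k=1)          # antisymmetric components of dy ^ dz
print(dy_dz[1, 2], dy_dz[2, 1])          # 1.0 -1.0
print(hodge_dual(dy_dz, g3, k=2))        # [1. 0. 0.]  (back to dx)
```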
An arbitrary differential form can be written as follows:
The factorial is again included to account for double counting when we allow non-increasing indices. We would like to define the dual of the component so that the Hodge dual of the form is given by
Using the above expression for the Hodge dual of , we find:
Although one can apply this expression to any tensor , the result is antisymmetric, since contraction with the completely anti-symmetric Levi-Civita symbol cancels all but the totally antisymmetric part of the tensor. It is thus equivalent to antisymmetrization followed by applying the Hodge star.
The unit volume form is given by:
Codifferential
The most important application of the Hodge star on manifolds is to define the codifferential on -forms. Let
where is the exterior derivative or differential, and for Riemannian manifolds. Then
while
The codifferential is not an antiderivation on the exterior algebra, in contrast to the exterior derivative.
The codifferential is the adjoint of the exterior derivative with respect to the square-integrable scalar product:
where is a -form and a -form. This property is useful as it can be used to define the codifferential even when the manifold is non-orientable (and the Hodge star operator not defined). The identity can be proved from Stokes' theorem for smooth forms:
provided has empty boundary, or or has zero boundary values. (The proper definition of the above requires specifying a topological vector space that is closed and complete on the space of smooth forms. The Sobolev space is conventionally used; it allows the convergent sequence of forms (as ) to be interchanged with the combined differential and integral operations, so that and likewise for sequences converging to .)
Since the differential satisfies , the codifferential has the corresponding property
The Laplace–deRham operator is given by
and lies at the heart of Hodge theory. It is symmetric:
and non-negative:
The Hodge star sends harmonic forms to harmonic forms. As a consequence of Hodge theory, the de Rham cohomology is naturally isomorphic to the space of harmonic -forms, and so the Hodge star induces an isomorphism of cohomology groups
which in turn gives canonical identifications via Poincaré duality of with its dual space.
In coordinates, with notation as above, the codifferential of the form may be written as
where here denotes the Christoffel symbols of .
Poincaré lemma for the codifferential
In analogy to the Poincaré lemma for the exterior derivative, one can define its version for the codifferential, which reads
If for , where is a star domain on a manifold, then there is such that .
A practical way of finding is to use cohomotopy operator , that is a local inverse of . One has to define a homotopy operator
where is the linear homotopy between its center and a point , and the (Euler) vector for is inserted into the form . We can then define cohomotopy operator as
,
where for .
The cohomotopy operator fulfills (co)homotopy invariance formula
where , and is the pullback along the constant map .
Therefore, if we want to solve the equation , applying cohomotopy invariance formula we get
where is a differential form we are looking for, and "constant of integration" vanishes unless is a top form.
Cohomotopy operator fulfills the following properties: . They make it possible to use it to define anticoexact forms on by , which together with exact forms make a direct sum decomposition
.
This direct sum is another way of saying that the cohomotopy invariance formula is a decomposition of unity, and the projector operators on the summands fulfills idempotence formulas: .
These results are extension of similar results for exterior derivative.
Citations
References
David Bleecker (1981) Gauge Theory and Variational Principles. Addison-Wesley Publishing. Chpt. 0 contains a condensed review of non-Riemannian differential geometry.
Charles W. Misner, Kip S. Thorne, John Archibald Wheeler (1970) Gravitation. W.H. Freeman. A basic review of differential geometry in the special case of four-dimensional spacetime.
Steven Rosenberg (1997) The Laplacian on a Riemannian manifold. Cambridge University Press. An introduction to the heat equation and the Atiyah–Singer theorem.
Tevian Dray (1999) The Hodge Dual Operator. A thorough overview of the definition and properties of the Hodge star operator.
Differential forms
Riemannian geometry
Duality theories
Differential operators | Hodge star operator | [
"Mathematics",
"Engineering"
] | 3,526 | [
"Mathematical analysis",
"Mathematical structures",
"Tensors",
"Differential forms",
"Category theory",
"Duality theories",
"Geometry",
"Differential operators"
] |
310,921 | https://en.wikipedia.org/wiki/Lie%20algebroid | In mathematics, a Lie algebroid is a vector bundle together with a Lie bracket on its space of sections and a vector bundle morphism , satisfying a Leibniz rule. A Lie algebroid can thus be thought of as a "many-object generalisation" of a Lie algebra.
Lie algebroids play a role in the theory of Lie groupoids similar to the one Lie algebras play in the theory of Lie groups: reducing global problems to infinitesimal ones. Indeed, any Lie groupoid gives rise to a Lie algebroid, which is the vertical bundle of the source map restricted at the units. However, unlike Lie algebras, not every Lie algebroid arises from a Lie groupoid.
Lie algebroids were introduced in 1967 by Jean Pradines.
Definition and basic concepts
A Lie algebroid is a triple consisting of
a vector bundle over a manifold
a Lie bracket on its space of sections
a morphism of vector bundles , called the anchor, where is the tangent bundle of
such that the anchor and the bracket satisfy the following Leibniz rule:
[X, fY] = f[X, Y] + (ρ(X)f) Y,
where X, Y are sections of the vector bundle and f is a smooth function on the base. Here ρ(X)f is the image of f via the derivation ρ(X), i.e. the Lie derivative of f along the vector field ρ(X). The notation fY denotes the (point-wise) product between the function f and the section Y.
One often writes when the bracket and the anchor are clear from the context; some authors denote Lie algebroids by , suggesting a "limit" of a Lie groupoids when the arrows denoting source and target become "infinitesimally close".
First properties
It follows from the definition that
for every , the kernel is a Lie algebra, called the isotropy Lie algebra at
the kernel is a (not necessarily locally trivial) bundle of Lie algebras, called the isotropy Lie algebra bundle
the image is a singular distribution which is integrable, i.e. it admits maximal immersed submanifolds , called the orbits, satisfying for every . Equivalently, orbits can be explicitly described as the sets of points which are joined by A-paths, i.e. pairs of paths in and in such that and
the anchor map descends to a map between sections which is a Lie algebra morphism, i.e.
for all .
The property that the anchor induces a Lie algebra morphism on sections was taken as an axiom in the original definition of Lie algebroid. This redundancy, despite being known from an algebraic point of view already before Pradines' definition, was noticed only much later.
Subalgebroids and ideals
A Lie subalgebroid of a Lie algebroid is a vector subbundle of the restriction such that takes values in and is a Lie subalgebra of . Clearly, admits a unique Lie algebroid structure such that is a Lie algebra morphism. With the language introduced below, the inclusion is a Lie algebroid morphism.
A Lie subalgebroid is called wide if . In analogy to the standard definition for Lie algebras, an ideal of a Lie algebroid is a wide Lie subalgebroid such that is a Lie ideal. This notion proved to be very restrictive, since is forced to be inside the isotropy bundle . For this reason, the more flexible notion of infinitesimal ideal system has been introduced.
Morphisms
A Lie algebroid morphism between two Lie algebroids and with the same base is a vector bundle morphism which is compatible with the Lie brackets, i.e. for every , and with the anchors, i.e. .
A similar notion can be formulated for morphisms with different bases, but the compatibility with the Lie brackets becomes more involved. Equivalently, one can ask that the graph of to be a subalgebroid of the direct product (introduced below).
Lie algebroids together with their morphisms form a category.
Examples
Trivial and extreme cases
Given any manifold , its tangent Lie algebroid is the tangent bundle together with the Lie bracket of vector fields and the identity of as an anchor.
Given any manifold , the zero vector bundle is a Lie algebroid with zero bracket and anchor.
Lie algebroids over a point are the same thing as Lie algebras.
More generally, any bundle of Lie algebras is a Lie algebroid with zero anchor and Lie bracket defined pointwise.
Examples from differential geometry
Given a foliation on , its foliation algebroid is the associated involutive subbundle , with brackets and anchor induced from the tangent Lie algebroid.
Given the action of a Lie algebra on a manifold , its action algebroid is the trivial vector bundle , with anchor given by the Lie algebra action and brackets uniquely determined by the bracket of on constant sections and by the Leibniz identity.
Given a principal G-bundle over a manifold , its Atiyah algebroid is the Lie algebroid fitting in the following short exact sequence:
The space of sections of the Atiyah algebroid is the Lie algebra of -invariant vector fields on , its isotropy Lie algebra bundle is isomorphic to the adjoint vector bundle , and the right splittings of the sequence above are principal connections on .
Given a vector bundle , its general linear algebroid, denoted by or , is the vector bundle whose sections are derivations of , i.e. first-order differential operators admitting a vector field such that for every . The anchor is simply the assignment and the Lie bracket is given by the commutator of differential operators.
Given a Poisson manifold, its cotangent algebroid is its cotangent vector bundle, with anchor given by the Poisson bivector and Lie bracket given by the Koszul bracket (a standard form of both is recalled below, after this list).
Given a closed 2-form, the vector bundle is a Lie algebroid with anchor the projection on the first component and a Lie bracket determined by the 2-form. Actually, the bracket above can be defined for any 2-form, but the resulting structure is a Lie algebroid if and only if the 2-form is closed.
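As announced in the cotangent algebroid example above, the anchor and bracket of the cotangent algebroid of a Poisson manifold are usually written as follows. This is a standard formulation, supplied here because the displayed formulas did not survive in the text; the notation π for the Poisson bivector and π♯ for the induced bundle map is an assumption of this note.

```latex
\rho = \pi^{\sharp} : T^{*}M \to TM, \qquad
[\alpha, \beta]_{\pi}
  = \mathcal{L}_{\pi^{\sharp}\alpha}\,\beta
  - \mathcal{L}_{\pi^{\sharp}\beta}\,\alpha
  - \mathrm{d}\bigl(\pi(\alpha, \beta)\bigr).
```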
Constructions from other Lie algebroids
Given any Lie algebroid , there is a Lie algebroid , called its tangent algebroid, obtained by considering the tangent bundle of and and the differential of the anchor.
Given any Lie algebroid , there is a Lie algebroid , called its k-jet algebroid, obtained by considering the k-jet bundle of , with Lie bracket uniquely defined by and anchor .
Given two Lie algebroids and , their direct product is the unique Lie algebroid with anchor and such that is a Lie algebra morphism.
Given a Lie algebroid and a map whose differential is transverse to the anchor map (for instance, it is enough for to be a surjective submersion), the pullback algebroid is the unique Lie algebroid , with the pullback vector bundle, and the projection on the first component, such that is a Lie algebroid morphism.
Important classes of Lie algebroids
Totally intransitive Lie algebroids
A Lie algebroid is called totally intransitive if the anchor map is zero.
Bundles of Lie algebras (hence also Lie algebras) are totally intransitive. This actually exhausts completely the list of totally intransitive Lie algebroids: indeed, if a Lie algebroid is totally intransitive, it must coincide with its isotropy Lie algebra bundle.
Transitive Lie algebroids
A Lie algebroid is called transitive if the anchor map is surjective. As a consequence:
there is a short exact sequence
a right-splitting of defines a principal bundle connection on ;
the isotropy bundle is locally trivial (as bundle of Lie algebras);
the pullback of exists for every .
The prototypical examples of transitive Lie algebroids are Atiyah algebroids. For instance:
tangent algebroids are trivially transitive (indeed, they are Atiyah algebroid of the principal -bundle )
Lie algebras are trivially transitive (indeed, they are Atiyah algebroid of the principal -bundle , for an integration of )
general linear algebroids are transitive (indeed, they are Atiyah algebroids of the frame bundle )
In analogy to Atiyah algebroids, an arbitrary transitive Lie algebroid is also called abstract Atiyah sequence, and its isotropy algebra bundle is also called adjoint bundle. However, it is important to stress that not every transitive Lie algebroid is an Atiyah algebroid. For instance:
pullbacks of transitive algebroids are transitive
cotangent algebroids associated to Poisson manifolds are transitive if and only if the Poisson structure is non-degenerate
Lie algebroids defined by closed 2-forms are transitive
These examples are very relevant in the theory of integration of Lie algebroid (see below): while any Atiyah algebroid is integrable (to a gauge groupoid), not every transitive Lie algebroid is integrable.
Regular Lie algebroids
A Lie algebroid is called regular if the anchor map is of constant rank. As a consequence
the image of defines a regular foliation on ;
the restriction of over each leaf is a transitive Lie algebroid.
For instance:
any transitive Lie algebroid is regular (the anchor has maximal rank);
any totally intransitive Lie algebroids is regular (the anchor has zero rank);
foliation algebroids are always regular;
cotangent algebroids associated to Poisson manifolds are regular if and only if the Poisson structure is regular.
Further related concepts
Actions
An action of a Lie algebroid on a manifold P along a smooth map consists of a Lie algebra morphism from the sections of the algebroid to the vector fields on P, subject to compatibility conditions with the anchor and with the map. Of course, when the base is a point, both the anchor and the map must be trivial, therefore both conditions are empty, and we recover the standard notion of an action of a Lie algebra on a manifold.
Connections
Given a Lie algebroid , an A-connection on a vector bundle consists of an -bilinear mapwhich is -linear in the first factor and satisfies the following Leibniz rule:for every , where denotes the Lie derivative with respect to the vector field .
The curvature of an A-connection is the -bilinear mapand is called flat if .
Of course, when , we recover the standard notion of connection on a vector bundle, as well as those of curvature and flatness.
Representations
A representation of a Lie algebroid is a vector bundle together with a flat A-connection . Equivalently, a representation is a Lie algebroid morphism .
The set of isomorphism classes of representations of a Lie algebroid has a natural structure of semiring, with direct sums and tensor products of vector bundles.
Examples include the following:
When , an -connection simplifies to a linear map and the flatness condition makes it into a Lie algebra morphism, therefore we recover the standard notion of representation of a Lie algebra.
When and is a representation of the Lie algebra , the trivial vector bundle is automatically a representation of
Representations of the tangent algebroid are vector bundles endowed with flat connections
Every Lie algebroid has a natural representation on the line bundle , i.e. the tensor product between the determinant line bundles of and of . One can associate a cohomology class in (see below) known as the modular class of the Lie algebroid. For the cotangent algebroid associated to a Poisson manifold one recovers the modular class of .
Note that an arbitrary Lie groupoid does not have a canonical representation on its Lie algebroid playing the role of the adjoint representation of Lie groups on their Lie algebras. However, this becomes possible if one allows the more general notion of representation up to homotopy.
Lie algebroid cohomology
Consider a Lie algebroid and a representation . Denoting by the space of A-differential forms with values in the vector bundle, one can define a differential by a Koszul-like formula (a standard form of it is recalled below). Thanks to the flatness of the connection, this differential squares to zero, the space of forms becomes a cochain complex, and its cohomology is called the Lie algebroid cohomology of the algebroid with coefficients in the representation.
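A standard form of the Koszul-like differential mentioned above is the following (supplied as an assumption, since the displayed formula did not survive in the text; here ∇ denotes the flat A-connection of the representation, the α_i are sections of the algebroid, and the hat denotes omission):

```latex
(\mathrm{d}_{A}\,\omega)(\alpha_{0},\dots,\alpha_{k})
  = \sum_{i=0}^{k} (-1)^{i}\,
      \nabla_{\alpha_{i}}\!\bigl(\omega(\alpha_{0},\dots,\widehat{\alpha_{i}},\dots,\alpha_{k})\bigr)
  + \sum_{i<j} (-1)^{i+j}\,
      \omega\bigl([\alpha_{i},\alpha_{j}],\alpha_{0},\dots,\widehat{\alpha_{i}},\dots,\widehat{\alpha_{j}},\dots,\alpha_{k}\bigr).
```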
This general definition recovers well-known cohomology theories:
The cohomology of a Lie algebroid coincides with the Chevalley-Eilenberg cohomology of as a Lie algebra.
The cohomology of a tangent Lie algebroid coincides with the de Rham cohomology of .
The cohomology of a foliation Lie algebroid coincides with the leafwise cohomology of the foliation .
The cohomology of the cotangent Lie algebroid associated to a Poisson structure coincides with the Poisson cohomology of .
Lie groupoid-Lie algebroid correspondence
The standard construction which associates a Lie algebra to a Lie group generalises to this setting: to every Lie groupoid one can canonically associate a Lie algebroid defined as follows:
the vector bundle is , where is the vertical bundle of the source fibre and is the groupoid unit map;
the sections of are identified with the right-invariant vector fields on , so that inherits a Lie bracket;
the anchor map is the differential of the target map .
Of course, a symmetric construction arises when swapping the role of the source and the target maps, and replacing right- with left-invariant vector fields; an isomorphism between the two resulting Lie algebroids will be given by the differential of the inverse map .
The flow of a section is the 1-parameter bisection , defined by , where is the flow of the corresponding right-invariant vector field . This allows one to define the analogue of the exponential map for Lie groups as .
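A sketch of these definitions in symbols (assumed notation: $\overrightarrow{\alpha}$ for the right-invariant vector field extending $\alpha \in \Gamma(A)$, $u$ for the unit map):

```latex
% Flow of a section \alpha as a 1-parameter family of (local) bisections:
\[
\phi^{\alpha}_t(x) \;=\; \Phi^{\overrightarrow{\alpha}}_t\bigl(u(x)\bigr), \qquad x \in M,
\]
% where \Phi^{\overrightarrow{\alpha}}_t denotes the flow of \overrightarrow{\alpha} on G.
% Analogue of the exponential map, valued in bisections of G:
\[
\exp(t\alpha) \;:=\; \phi^{\alpha}_t .
\]
```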
Lie functor
The mapping sending a Lie groupoid to a Lie algebroid is actually part of a categorical construction. Indeed, any Lie groupoid morphism can be differentiated to a morphism between the associated Lie algebroids.
This construction defines a functor from the category of Lie groupoids and their morphisms to the category of Lie algebroids and their morphisms, called the Lie functor.
Structures and properties induced from groupoids to algebroids
Let be a Lie groupoid and its associated Lie algebroid. Then
The isotropy algebras are the Lie algebras of the isotropy groups
The orbits of coincide with the orbits of
is transitive and is a submersion if and only if is transitive
an action of on induces an action of (called infinitesimal action), defined by
a representation of on a vector bundle induces a representation of on , defined by . Moreover, there is a morphism of semirings , which becomes an isomorphism if is source-simply connected.
there is a morphism , called the Van Est morphism, from the differentiable cohomology of with coefficients in some representation on to the cohomology of with coefficients in the induced representation on . Moreover, if the -fibres of are homologically -connected, then is an isomorphism for , and is injective for .
Examples
The Lie algebroid of a Lie group is the Lie algebra
The Lie algebroid of both the pair groupoid and the fundamental groupoid is the tangent algebroid
The Lie algebroid of the unit groupoid is the zero algebroid
The Lie algebroid of a Lie group bundle is the Lie algebra bundle
The Lie algebroid of an action groupoid is the action algebroid
The Lie algebroid of a gauge groupoid is the Atiyah algebroid
The Lie algebroid of a general linear groupoid is the general linear algebroid
The Lie algebroid of both the holonomy groupoid and the monodromy groupoid is the foliation algebroid
The Lie algebroid of a tangent groupoid is the tangent algebroid , for
The Lie algebroid of a jet groupoid is the jet algebroid , for
Detailed example 1
Let us describe the Lie algebroid associated to the pair groupoid . Since the source map is , the -fibers are of the kind , so that the vertical space is . Using the unit map , one obtains the vector bundle .
The extension of sections to right-invariant vector fields is simply and the extension of a smooth function from to a right-invariant function on is . Therefore, the bracket on is just the Lie bracket of tangent vector fields and the anchor map is just the identity.
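A sketch of the computation, under the assumed conventions $s(x,y) = y$ and $t(x,y) = x$ for the pair groupoid $M \times M$ (these conventions are an assumption made for illustration):

```latex
% s-fibre and source-vertical space at a unit (x,x):
\[
s^{-1}(y) = M \times \{y\}, \qquad T^s_{(x,x)}(M\times M) = T_xM \oplus 0,
\]
% so the underlying vector bundle is A \cong TM.
% Right-invariant extension of X \in \Gamma(TM), transported bracket, and anchor:
\[
\overrightarrow{X}_{(x,y)} = (X_x, 0), \qquad
[\,\overrightarrow{X},\overrightarrow{Y}\,] = \overrightarrow{[X,Y]}, \qquad
\rho = \mathrm{d}t\big|_A = \mathrm{id}_{TM}.
\]
```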
Detailed example 2
Consider the (action) Lie groupoid
where the target map (i.e. the right action of on ) is
The -fibres over a point are all copies of , so that is the trivial vector bundle .
Since its anchor map is given by the differential of the target map, there are two cases for the isotropy Lie algebras, corresponding to the fibers of :
This demonstrates that the isotropy over the origin is , while everywhere else it is zero.
Integration of a Lie algebroid
Lie theorems
A Lie algebroid is called integrable if it is isomorphic to for some Lie groupoid . The analogue of the classical Lie I theorem states that: if is an integrable Lie algebroid, then there exists a unique (up to isomorphism) -simply connected Lie groupoid integrating . Similarly, a morphism between integrable Lie algebroids is called integrable if it is the differential for some morphism between two integrations of and . The analogue of the classical Lie II theorem states that: if is a morphism of integrable Lie algebroids, and is -simply connected, then there exists a unique morphism of Lie groupoids integrating . In particular, by choosing as the general linear groupoid of a vector bundle , it follows that any representation of an integrable Lie algebroid integrates to a representation of its -simply connected integrating Lie groupoid.
On the other hand, there is no analogue of the classical Lie III theorem, i.e. going back from any Lie algebroid to a Lie groupoid is not always possible. Pradines claimed that such a statement holds, and the first explicit examples of non-integrable Lie algebroids, coming for instance from foliation theory, appeared only several years later. Despite several partial results, including a complete solution in the transitive case, the general obstructions for an arbitrary Lie algebroid to be integrable were discovered only in 2003 by Crainic and Fernandes. Adopting a more general approach, one can see that every Lie algebroid integrates to a stacky Lie groupoid.
Ševera-Weinstein groupoid
Given any Lie algebroid , the natural candidate for an integration is given by , where denotes the space of -paths and the relation of -homotopy between them. This is often called the Weinstein groupoid or Ševera-Weinstein groupoid.
Indeed, one can show that is an -simply connected topological groupoid, with the multiplication induced by the concatenation of paths. Moreover, if is integrable, admits a smooth structure such that it coincides with the unique -simply connected Lie groupoid integrating .
Accordingly, the only obstruction to integrability lies in the smoothness of . This approach led to the introduction of objects called monodromy groups, associated to any Lie algebroid, and to the following fundamental result: a Lie algebroid is integrable if and only if its monodromy groups are uniformly discrete. Such a statement simplifies in the transitive case: a transitive Lie algebroid is integrable if and only if its monodromy groups are discrete. The results above also show that every Lie algebroid admits an integration to a local Lie groupoid (roughly speaking, a Lie groupoid where the multiplication is defined only in a neighbourhood around the identity elements).
Integrable examples
Lie algebras are always integrable (by Lie III theorem)
Atiyah algebroids of a principal bundle are always integrable (to the gauge groupoid of that principal bundle)
Lie algebroids with injective anchor (hence foliation algebroids) are always integrable (by Frobenius theorem)
Lie algebra bundles are always integrable
Action Lie algebroids are always integrable (but the integration is not necessarily an action Lie groupoid)
Any Lie subalgebroid of an integrable Lie algebroid is integrable.
A non-integrable example
Consider the Lie algebroid associated to a closed 2-form and the group of spherical periods associated to , i.e. the image of the following group homomorphism from the second homotopy group of
Since is transitive, it is integrable if and only if it is the Atiyah algebroid of some principal bundle; a careful analysis shows that this happens if and only if the subgroup is a lattice, i.e. it is discrete. An explicit example where such a condition fails is given by taking and for the area form. Here turns out to be , which is dense in .
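A sketch of the period homomorphism and of the resulting criterion, with assumed notation ($\omega$ the closed 2-form and $A_\omega$ the associated algebroid):

```latex
% Group of spherical periods of \omega: the image of
\[
\mathrm{per} : \pi_2(M, x) \longrightarrow \mathbb{R},
\qquad
[\sigma] \longmapsto \int_{S^2} \sigma^*\omega ,
\]
% and the criterion mentioned above reads:
% A_\omega is integrable  \iff  \Lambda := \mathrm{im}(\mathrm{per}) \subset \mathbb{R} is discrete.
```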
See also
R-algebroid
Lie bialgebroid
References
Books and lecture notes
Alan Weinstein, Groupoids: unifying internal and external symmetry, AMS Notices, 43 (1996), 744-752. Also available at arXiv:math/9602220.
Kirill Mackenzie, Lie Groupoids and Lie Algebroids in Differential Geometry, Cambridge U. Press, 1987.
Kirill Mackenzie, General Theory of Lie Groupoids and Lie Algebroids, Cambridge U. Press, 2005.
Marius Crainic, Rui Loja Fernandes, Lectures on Integrability of Lie Brackets, Geometry&Topology Monographs 17 (2011) 1–107, available at arXiv:math/0611259.
Eckhard Meinrenken, Lecture notes on Lie groupoids and Lie algebroids, available at http://www.math.toronto.edu/mein/teaching/MAT1341_LieGroupoids/Groupoids.pdf.
Ieke Moerdijk, Janez Mrčun, Introduction to Foliations and Lie Groupoids, Cambridge U. Press, 2010.
Lie algebras
Differential geometry
Differential topology
Differential operators
Generalizations of the derivative
Geometry processing
Vector bundles | Lie algebroid | [
"Mathematics"
] | 4,782 | [
"Mathematical analysis",
"Topology",
"Differential operators",
"Differential topology"
] |
310,923 | https://en.wikipedia.org/wiki/Lie%20groupoid | In mathematics, a Lie groupoid is a groupoid where the set of objects and the set of morphisms are both manifolds, all the category operations (source and target, composition, identity-assigning map and inversion) are smooth, and the source and target operations
are submersions.
A Lie groupoid can thus be thought of as a "many-object generalization" of a Lie group, just as a groupoid is a many-object generalization of a group. Accordingly, while Lie groups provide a natural model for (classical) continuous symmetries, Lie groupoids are often used as model for (and arise from) generalised, point-dependent symmetries. Extending the correspondence between Lie groups and Lie algebras, Lie groupoids are the global counterparts of Lie algebroids.
Lie groupoids were introduced by Charles Ehresmann under the name differentiable groupoids.
Definition and basic concepts
A Lie groupoid consists of
two smooth manifolds and
two surjective submersions (called, respectively, source and target projections)
a map (called multiplication or composition map), where we use the notation
a map (called unit map or object inclusion map), where we use the notation
a map (called inversion), where we use the notation
such that
the composition satisfies and for every for which the composition is defined
the composition is associative, i.e. for every for which the composition is defined
works as an identity, i.e. for every and and for every
works as an inverse, i.e. and for every .
Using the language of category theory, a Lie groupoid can be more compactly defined as a groupoid (i.e. a small category where all the morphisms are invertible) such that the sets of objects and of morphisms are manifolds, the maps , , , and are smooth and and are submersions. A Lie groupoid is therefore not simply a groupoid object in the category of smooth manifolds: one has to require the additional property that and are submersions.
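As an illustration, the structure maps and axioms can be written compactly as follows (a sketch with assumed conventions: an arrow $g$ goes from $s(g)$ to $t(g)$, and the product $gh$ is defined when $s(g) = t(h)$):

```latex
% Structure maps:
\[
s, t : G \to M, \qquad
m : G^{(2)} := \{(g,h) : s(g) = t(h)\} \to G, \qquad
u : M \to G, \qquad
i : G \to G .
\]
% Axioms (writing gh := m(g,h) and g^{-1} := i(g)):
\[
s(gh) = s(h), \quad t(gh) = t(g), \quad (gh)k = g(hk),
\quad g\, u(s(g)) = g = u(t(g))\, g,
\quad g\, g^{-1} = u(t(g)), \quad g^{-1} g = u(s(g)).
\]
```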
Lie groupoids are often denoted by , where the two arrows represent the source and the target. The notation is also frequently used, especially when stressing the simplicial structure of the associated nerve.
In order to include more natural examples, the manifold is not required in general to be Hausdorff or second countable (while and all other spaces are).
Alternative definitions
The original definition by Ehresmann required and to possess a smooth structure such that only is smooth and the maps and are subimmersions (i.e. have locally constant rank). Such a definition proved to be too weak and was replaced by Pradines with the one currently used.
While some authors introduced weaker definitions which did not require and to be submersions, these properties are fundamental to develop the entire Lie theory of groupoids and algebroids.
First properties
The fact that the source and the target map of a Lie groupoid are smooth submersions has some immediate consequences:
the -fibres , the -fibres , and the set of composable morphisms are submanifolds;
the inversion map is a diffeomorphism;
the unit map is a smooth embedding;
the isotropy groups are Lie groups;
the orbits are immersed submanifolds;
the -fibre at a point is a principal -bundle over the orbit at that point.
Subobjects and morphisms
A Lie subgroupoid of a Lie groupoid is a subgroupoid (i.e. a subcategory of the category ) with the extra requirement that is an immersed submanifold. As for a subcategory, a (Lie) subgroupoid is called wide if . Any Lie groupoid has two canonical wide subgroupoids:
the unit/identity Lie subgroupoid ;
the inner subgroupoid , i.e. the bundle of isotropy groups (which however may fail to be smooth in general).
A normal Lie subgroupoid is a wide Lie subgroupoid inside such that, for every with , one has . The isotropy groups of are therefore normal subgroups of the isotropy groups of .
A Lie groupoid morphism between two Lie groupoids and is a groupoid morphism (i.e. a functor between the categories and ), where both and are smooth. The kernel of a morphism between Lie groupoids over the same base manifold is automatically a normal Lie subgroupoid.
The quotient has a natural groupoid structure such that the projection is a groupoid morphism; however, unlike quotients of Lie groups, may fail to be a Lie groupoid in general. Accordingly, the isomorphism theorems for groupoids cannot be specialised to the entire category of Lie groupoids, but only to special classes.
A Lie groupoid is called abelian if its isotropy Lie groups are abelian. For similar reasons as above, while the definition of abelianisation of a group extends to set-theoretical groupoids, in the Lie case the analogue of the quotient may not exist or be smooth.
Bisections
A bisection of a Lie groupoid is a smooth map such that and is a diffeomorphism of . In order to overcome the lack of symmetry between the source and the target, a bisection can be equivalently defined as a submanifold such that and are diffeomorphisms; the relation between the two definitions is given by .
The set of bisections forms a group, with the multiplication defined as and the inversion defined as (see the sketch below). Note that the definition is given in such a way that, if and , then and .
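A sketch of these formulas under assumed conventions (a bisection $b : M \to G$ satisfies $s \circ b = \mathrm{id}_M$ with $t \circ b$ a diffeomorphism; arrows compose as $gh$ when $s(g) = t(h)$):

```latex
% Group structure on bisections:
\[
(b_1 * b_2)(x) \;=\; b_1\bigl((t\circ b_2)(x)\bigr)\cdot b_2(x),
\qquad
b^{-1}(x) \;=\; i\Bigl(b\bigl((t\circ b)^{-1}(x)\bigr)\Bigr),
\]
% with the unit section u : M \to G as the identity element.
% One checks t\circ(b_1 * b_2) = (t\circ b_1)\circ(t\circ b_2), so b \mapsto t\circ b
% is a group homomorphism to Diff(M).
```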
The group of bisections can be given the compact-open topology, as well as an (infinite-dimensional) structure of Fréchet manifold compatible with the group structure, making it into a Fréchet-Lie group.
A local bisection is defined analogously, but the multiplication between local bisections is of course only partially defined.
Examples
Trivial and extreme cases
Lie groupoids with one object are the same thing as Lie groups.
Given any manifold , there is a Lie groupoid called the pair groupoid, with precisely one morphism from any object to any other.
The two previous examples are particular cases of the trivial groupoid , with structure maps , , , and .
Given any manifold , there is a Lie groupoid called the unit groupoid, with precisely one morphism from one object to itself, namely the identity, and no morphisms between different objects.
More generally, Lie groupoids with are the same thing as bundles of Lie groups (not necessarily locally trivial). For instance, any vector bundle is a bundle of abelian groups, so it is in particular a(n abelian) Lie groupoid.
Constructions from other Lie groupoids
Given any Lie groupoid and a surjective submersion , there is a Lie groupoid , called its pullback groupoid or induced groupoid, where contains triples such that and , and the multiplication is defined using the multiplication of . For instance, the pullback of the pair groupoid of is the pair groupoid of .
Given any two Lie groupoids and , there is a Lie groupoid , called their direct product, such that the groupoid morphisms and are surjective submersions.
Given any Lie groupoid , there is a Lie groupoid , called its tangent groupoid, obtained by considering the tangent bundle of and and the differential of the structure maps.
Given any Lie groupoid , there is a Lie groupoid , called its cotangent groupoid obtained by considering the cotangent bundle of , the dual of the Lie algebroid (see below), and suitable structure maps involving the differentials of the left and right translations.
Given any Lie groupoid , there is a Lie groupoid , called its jet groupoid, obtained by considering the k-jets of the local bisections of (with smooth structure inherited from the jet bundle of ) and setting , , , and .
Examples from differential geometry
Given a submersion , there is a Lie groupoid , called the submersion groupoid or fibred pair groupoid, whose structure maps are induced from the pair groupoid (the condition that is a submersion ensures the smoothness of ). If is a point, one recovers the pair groupoid.
Given a Lie group acting on a manifold , there is a Lie groupoid , called the action groupoid or translation groupoid, with one morphism for each triple with .
Given any vector bundle , there is a Lie groupoid , called the general linear groupoid, with morphisms between being linear isomorphisms between the fibres and . For instance, if is the trivial vector bundle of rank , then is the action groupoid.
Any principal bundle with structure group defines a Lie groupoid , where acts on the pairs componentwise, called the gauge groupoid. The multiplication is defined via compatible representatives as in the pair groupoid.
Any foliation on a manifold defines two Lie groupoids, (or ) and , called respectively the monodromy/homotopy/fundamental groupoid and the holonomy groupoid of , whose morphisms consist of the homotopy, respectively holonomy, equivalence classes of paths entirely lying in a leaf of . For instance, when is the trivial foliation with only one leaf, one recovers, respectively, the fundamental groupoid and the pair groupoid of . On the other hand, when is a simple foliation, i.e. the foliation by (connected) fibres of a submersion , its holonomy groupoid is precisely the submersion groupoid but its monodromy groupoid may even fail to be Hausdorff, due to a general criterion in terms of vanishing cycles. In general, many elementary foliations give rise to monodromy and holonomy groupoids which are not Hausdorff.
Given any pseudogroup , there is a Lie groupoid , called its germ groupoid, endowed with the sheaf topology and with structure maps analogous to those of the jet groupoid. This is another natural example of a Lie groupoid whose arrow space is neither Hausdorff nor second countable.
Important classes of Lie groupoids
Note that some of the following classes make sense already in the category of set-theoretical or topological groupoids.
Transitive groupoids
A Lie groupoid is transitive (in older literature also called connected) if it satisfies one of the following equivalent conditions:
there is only one orbit;
there is at least a morphism between any two objects;
the map (also known as the anchor of ) is surjective.
Gauge groupoids constitute the prototypical examples of transitive Lie groupoids: indeed, any transitive Lie groupoid is isomorphic to the gauge groupoid of some principal bundle, namely the -bundle , for any point . For instance:
the trivial Lie groupoid is transitive and arises from the trivial principal -bundle . As particular cases, Lie groups and pair groupoids are trivially transitive, and arise, respectively, from the principal -bundle , and from the principal -bundle ;
an action groupoid is transitive if and only if the group action is transitive, and in such case it arises from the principal bundle with structure group the isotropy group (at an arbitrary point);
the general linear groupoid of is transitive, and arises from the frame bundle ;
pullback groupoids, jet groupoids and tangent groupoids of are transitive if and only if is transitive.
As a less trivial instance of the correspondence between transitive Lie groupoids and principal bundles, consider the fundamental groupoid of a (connected) smooth manifold . This is naturally a topological groupoid, which is moreover transitive; one can see that is isomorphic to the gauge groupoid of the universal cover of . Accordingly, inherits a smooth structure which makes it into a Lie groupoid.
Submersion groupoids are an example of non-transitive Lie groupoids, whose orbits are precisely the fibres of .
A stronger notion of transitivity requires the anchor to be a surjective submersion. Such a condition is also called local triviality, because becomes locally isomorphic (as a Lie groupoid) to a trivial groupoid over any open (as a consequence of the local triviality of principal bundles).
When the space is second countable, transitivity implies local triviality. Accordingly, these two conditions are equivalent for many examples but not for all of them: for instance, if is a transitive pseudogroup, its germ groupoid is transitive but not locally trivial.
Proper groupoids
A Lie groupoid is called proper if is a proper map. As a consequence
all isotropy groups of are compact;
all orbits of are closed submanifolds;
the orbit space is Hausdorff.
For instance:
a Lie group is proper if and only if it is compact;
pair groupoids are always proper;
unit groupoids are always proper;
an action groupoid is proper if and only if the action is proper;
the fundamental groupoid is proper if and only if the fundamental groups are finite.
As seen above, properness for Lie groupoids is the "right" analogue of compactness for Lie groups. One could also consider more "natural" conditions, e.g. asking that the source map is proper (then is called s-proper), or that the entire space is compact (then is called compact), but these requirements turn out to be too strict for many examples and applications.
Étale groupoids
A Lie groupoid is called étale if it satisfies one of the following equivalent conditions:
the dimensions of and are equal;
is a local diffeomorphism;
all the -fibres are discrete
As a consequence, also the -fibres, the isotropy groups and the orbits become discrete.
For instance:
a Lie group is étale if and only if it is discrete;
pair groupoids are never étale;
unit groupoids are always étale;
an action groupoid is étale if and only if is discrete;
germ groupoids of pseudogroups are always étale.
Effective groupoids
An étale groupoid is called effective if, for any two local bisections , the condition implies . For instance:
Lie groups are effective if and only if they are trivial;
unit groupoids are always effective;
an action groupoid is effective if the -action is free and is discrete.
In general, any effective étale groupoid arise as the germ groupoid of some pseudogroup. However, a (more involved) definition of effectiveness, which does not assume the étale property, can also be given.
Source-connected groupoids
A Lie groupoid is called -connected if all its -fibres are connected. Similarly, one talks about -simply connected groupoids (when the -fibres are simply connected) or source-k-connected groupoids (when the -fibres are k-connected, i.e. the first homotopy groups are trivial).
Note that the entire space of arrows is not asked to satisfy any connectedness hypothesis. However, if is a source--connected Lie groupoid over a -connected manifold, then itself is automatically -connected.
For instanceː
Lie groups are source -connected if and only if they are -connected;
a pair groupoid is source -connected if and only if is -connected;
unit groupoids are always source -connected;
action groupoids are source -connected if and only if is -connected;
monodromy groupoids (hence also fundamental groupoids) are source simply connected;
a gauge groupoid associated to a principal bundle is source -connected if and only if the total space is.
Further related concepts
Actions and principal bundles
Recall that an action of a groupoid on a set along a function is defined via a collection of maps for each morphism between . Accordingly, an action of a Lie groupoid on a manifold along a smooth map consists of a groupoid action where the maps are smooth. Of course, for every there is an induced smooth action of the isotropy group on the fibre .
Given a Lie groupoid , a principal -bundle consists of a -space and a -invariant surjective submersion such that is a diffeomorphism. Equivalent (but more involved) definitions can be given using -valued cocycles or local trivialisations.
When is a Lie groupoid over a point, one recovers, respectively, standard Lie group actions and principal bundles.
Representations
A representation of a Lie groupoid consists of a Lie groupoid action on a vector bundle , such that the action is fibrewise linear, i.e. each bijection is a linear isomorphism. Equivalently, a representation of on can be described as a Lie groupoid morphism from to the general linear groupoid .
Of course, any fibre becomes a representation of the isotropy group . More generally, representations of transitive Lie groupoids are uniquely determined by representations of their isotropy groups, via the construction of the associated vector bundle.
Examples of Lie groupoids representations include the following:
representations of Lie groups recover standard Lie group representations
representations of pair groupoids are trivial vector bundles
representations of unit groupoids are vector bundles
representations of action groupoids are -equivariant vector bundles
representations of fundamental groupoids are vector bundles endowed with flat connections
The set of isomorphism classes of representations of a Lie groupoid has a natural structure of semiring, with direct sums and tensor products of vector bundles.
Differentiable cohomology
The notion of differentiable cohomology for Lie groups generalises naturally also to Lie groupoids: the definition relies on the simplicial structure of the nerve of , viewed as a category.
More precisely, recall that the space consists of strings of composable morphisms, i.e.
and consider the map .
A differentiable -cochain of with coefficients in some representation is a smooth section of the pullback vector bundle . One denotes by the space of such -cochains, and considers the differential , defined as
Then becomes a cochain complex and its cohomology, denoted by , is called the differentiable cohomology of with coefficients in . Note that, since the differential at degree zero is , one has always .
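A sketch of the differential in question, with assumed notation (a cochain $c$ assigns to a string of composable arrows $(g_1, \ldots, g_n)$ an element of the fibre of the representation over $t(g_1)$, and $g_1 \cdot$ denotes the representation action):

```latex
% Simplicial differential on differentiable cochains:
\[
(\delta c)(g_1,\ldots,g_{n+1})
 \;=\; g_1 \cdot c(g_2,\ldots,g_{n+1})
 \;+\; \sum_{i=1}^{n} (-1)^i\, c(g_1,\ldots,g_i g_{i+1},\ldots,g_{n+1})
 \;+\; (-1)^{n+1}\, c(g_1,\ldots,g_n).
\]
% In degree zero, (\delta c)(g) = g\cdot c(s(g)) - c(t(g)), so H^0 consists of the invariant sections.
```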
Of course, the differentiable cohomology of as a Lie groupoid coincides with the standard differentiable cohomology of as a Lie group (in particular, for discrete groups one recovers the usual group cohomology). On the other hand, for any proper Lie groupoid , one can prove that for every .
The Lie algebroid of a Lie groupoid
Any Lie groupoid has an associated Lie algebroid , obtained with a construction similar to the one which associates a Lie algebra to any Lie group (a sketch in symbols is given after the following list):
the vector bundle is the vertical bundle with respect to the source map, restricted to the elements tangent to the identities, i.e. ;
the Lie bracket is obtained by identifying with the left-invariant vector fields on , and by transporting their Lie bracket to ;
the anchor map is the differential of the target map restricted to .
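In symbols, and keeping the conventions used in this section (source-vertical bundle at the units, invariant extension of sections, anchor given by the target map), the construction can be sketched as:

```latex
% Underlying vector bundle, anchor, and bracket (notation assumed):
\[
A \;:=\; \ker(\mathrm{d}s)\big|_{u(M)} \;\subset\; TG\big|_{u(M)},
\qquad
\rho \;:=\; \mathrm{d}t\big|_{A} : A \to TM,
\]
% the bracket is transported from the invariant extensions \alpha^{\mathrm{inv}} described above:
\[
[\alpha,\beta]_A \;:=\; \bigl[\alpha^{\mathrm{inv}},\beta^{\mathrm{inv}}\bigr]\big|_{u(M)} .
\]
```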
The Lie group–Lie algebra correspondence generalises to some extent also to Lie groupoids: the first two Lie theorems (also known as the subgroups–subalgebras theorem and the homomorphisms theorem) can indeed be easily adapted to this setting.
In particular, as in standard Lie theory, for any s-connected Lie groupoid there is a unique (up to isomorphism) s-simply connected Lie groupoid with the same Lie algebroid of , and a local diffeomorphism which is a groupoid morphism. For instance,
given any connected manifold its pair groupoid is s-connected but not s-simply connected, while its fundamental groupoid is. They both have the same Lie algebroid, namely the tangent bundle , and the local diffeomorphism is given by .
given any foliation on , its holonomy groupoid is s-connected but not s-simply connected, while its monodromy groupoid is. They both have the same Lie algebroid, namely the foliation algebroid , and the local diffeomorphism is given by (since the homotopy classes are smaller than the holonomy ones).
However, there is no analogue of Lie's third theorem: while several classes of Lie algebroids are integrable, there are examples of Lie algebroids, for instance related to foliation theory, which do not admit an integrating Lie groupoid. The general obstructions to the existence of such integration depend on the topology of .
Morita equivalence
As discussed above, the standard notion of (iso)morphism of groupoids (viewed as functors between categories) restricts naturally to Lie groupoids. However, there is a more coarse notion of equivalence, called Morita equivalence, which is more flexible and useful in applications.
First, a Morita map (also known as a weak equivalence or essential equivalence) between two Lie groupoids and consists of a Lie groupoid morphism from G to H which is moreover fully faithful and essentially surjective (adapting these categorical notions to the smooth context). We say that two Lie groupoids and are Morita equivalent if and only if there exists a third Lie groupoid together with two Morita maps from G to K and from H to K.
A more explicit description of Morita equivalence (e.g. useful to check that it is an equivalence relation) requires the existence of two surjective submersions and together with a left -action and a right -action, commuting with each other and making into a principal bi-bundle.
Morita invariance
Many properties of Lie groupoids, e.g. being proper, being Hausdorff or being transitive, are Morita invariant. On the other hand, being étale is not Morita invariant.
In addition, a Morita equivalence between and preserves their transverse geometry, i.e. it induces:
a homeomorphism between the orbit spaces and ;
an isomorphism between the isotropy groups at corresponding points and ;
an isomorphism between the normal representations of the isotropy groups at corresponding points and .
Last, the differentiable cohomologies of two Morita equivalent Lie groupoids are isomorphic.
Examples
Isomorphic Lie groupoids are trivially Morita equivalent.
Two Lie groups are Morita equivalent if and only if they are isomorphic as Lie groups.
Two unit groupoids are Morita equivalent if and only if the base manifolds are diffeomorphic.
Any transitive Lie groupoid is Morita equivalent to its isotropy groups.
Given a Lie groupoid and a surjective submersion , the pullback groupoid is Morita equivalent to .
Given a free and proper Lie group action of on (therefore the quotient is a manifold), the action groupoid is Morita equivalent to the unit groupoid .
A Lie groupoid is Morita equivalent to an étale groupoid if and only if all isotropy groups of are discrete.
A concrete instance of the last example goes as follows. Let M be a smooth manifold and an open cover of . Its Čech groupoid is defined by the disjoint unions and , where . The source and target maps are defined as the embeddings and , and the multiplication is the obvious one if we read the as subsets of M (compatible points in and actually are the same in and also lie in ). The Čech groupoid is in fact the pullback groupoid, under the obvious submersion , of the unit groupoid . As such, Čech groupoids associated to different open covers of are Morita equivalent.
Smooth stacks
Investigating the structure of the orbit space of a Lie groupoid leads to the notion of a smooth stack. For instance, the orbit space is a smooth manifold if the isotropy groups are trivial (as in the example of the Čech groupoid), but it is not smooth in general. The solution is to revert the problem and to define a smooth stack as a Morita-equivalence class of Lie groupoids. The natural geometric objects living on the stack are the geometric objects on Lie groupoids invariant under Morita-equivalence: an example is the Lie groupoid cohomology.
Since the notion of smooth stack is quite general, obviously all smooth manifolds are smooth stacks. Other classes of examples include orbifolds, which are (equivalence classes of) proper étale Lie groupoids, and orbit spaces of foliations.
References
Books
Differential geometry
Lie groups
Manifolds
Symmetry | Lie groupoid | [
"Physics",
"Mathematics"
] | 5,119 | [
"Lie groups",
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Algebraic structures",
"Manifolds",
"Geometry",
"Symmetry"
] |
310,950 | https://en.wikipedia.org/wiki/Principal%20bundle | In mathematics, a principal bundle is a mathematical object that formalizes some of the essential features of the Cartesian product of a space with a group . In the same way as with the Cartesian product, a principal bundle is equipped with
An action of on , analogous to for a product space.
A projection onto . For a product space, this is just the projection onto the first factor, .
Unless it is the product space , a principal bundle lacks a preferred choice of identity cross-section; it has no preferred analog of . Likewise, there is not generally a projection onto generalizing the projection onto the second factor, that exists for the Cartesian product. They may also have a complicated topology that prevents them from being realized as a product space even if a number of arbitrary choices are made to try to define such a structure by defining it on smaller pieces of the space.
A common example of a principal bundle is the frame bundle of a vector bundle , which consists of all ordered bases of the vector space attached to each point. The group in this case is the general linear group, which acts on the right in the usual way: by changes of basis. Since there is no natural way to choose an ordered basis of a vector space, a frame bundle lacks a canonical choice of identity cross-section.
Principal bundles have important applications in topology and differential geometry and mathematical gauge theory. They have also found application in physics where they form part of the foundational framework of physical gauge theories.
Formal definition
A principal -bundle, where denotes any topological group, is a fiber bundle together with a continuous right action such that preserves the fibers of (i.e. if then for all ) and acts freely and transitively (meaning each fiber is a G-torsor) on them in such a way that for each and , the map sending to is a homeomorphism. In particular each fiber of the bundle is homeomorphic to the group itself. Frequently, one requires the base space to be Hausdorff and possibly paracompact.
Since the group action preserves the fibers of and acts transitively, it follows that the orbits of the -action are precisely these fibers and the orbit space is homeomorphic to the base space . Because the action is free and transitive, the fibers have the structure of G-torsors. A -torsor is a space that is homeomorphic to but lacks a group structure since there is no preferred choice of an identity element.
An equivalent definition of a principal -bundle is as a -bundle with fiber where the structure group acts on the fiber by left multiplication. Since right multiplication by on the fiber commutes with the action of the structure group, there exists an invariant notion of right multiplication by on . The fibers of then become right -torsors for this action.
The definitions above are for arbitrary topological spaces. One can also define principal -bundles in the category of smooth manifolds. Here is required to be a smooth map between smooth manifolds, is required to be a Lie group, and the corresponding action on should be smooth.
Examples
Trivial bundle and sections
Over an open ball , or , with induced coordinates , any principal -bundle is isomorphic to a trivial bundle and a smooth section is equivalently given by a (smooth) function since for some smooth function. For example, if , the Lie group of unitary matrices, then a section can be constructed by considering four real-valued functions and applying them to the parameterization
This same procedure remains valid when taking a parameterization of a collection of matrices defining a Lie group and considering the set of functions from a patch of the base space to and inserting them into the parameterization.
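A minimal sketch of the trivial case described above, with assumed names ($B$ the base, $G$ the structure group, $f$ a smooth map):

```latex
% Trivial principal G-bundle over B:
\[
P = B \times G, \qquad \pi(x, h) = x, \qquad (x, h)\cdot g = (x, hg),
\]
% and a global smooth section corresponds exactly to a smooth map f : B \to G via
\[
s(x) = (x, f(x)).
\]
```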
Other examples
The prototypical example of a smooth principal bundle is the frame bundle of a smooth manifold , often denoted or . Here the fiber over a point is the set of all frames (i.e. ordered bases) for the tangent space . The general linear group acts freely and transitively on these frames. These fibers can be glued together in a natural way so as to obtain a principal -bundle over .
Variations on the above example include the orthonormal frame bundle of a Riemannian manifold. Here the frames are required to be orthonormal with respect to the metric. The structure group is the orthogonal group . The example also works for bundles other than the tangent bundle; if is any vector bundle of rank over , then the bundle of frames of is a principal -bundle, sometimes denoted .
A normal (regular) covering space is a principal bundle where the structure group
acts on the fibres of via the monodromy action. In particular, the universal cover of is a principal bundle over with structure group (since the universal cover is simply connected and thus is trivial).
Let be a Lie group and let be a closed subgroup (not necessarily normal). Then is a principal -bundle over the (left) coset space . Here the action of on is just right multiplication. The fibers are the left cosets of (in this case there is a distinguished fiber, the one containing the identity, which is naturally isomorphic to ).
Consider the projection given by . This principal -bundle is the associated bundle of the Möbius strip. Besides the trivial bundle, this is the only principal -bundle over .
Projective spaces provide some more interesting examples of principal bundles. Recall that the -sphere is a two-fold covering space of real projective space . The natural action of on gives it the structure of a principal -bundle over . Likewise, is a principal -bundle over complex projective space and is a principal -bundle over quaternionic projective space . We then have a series of principal bundles for each positive :
Here denotes the unit sphere in (equipped with the Euclidean metric). For all of these examples the cases give the so-called Hopf bundles.
Basic properties
Trivializations and cross sections
One of the most important questions regarding any fiber bundle is whether or not it is trivial, i.e. isomorphic to a product bundle. For principal bundles there is a convenient characterization of triviality:
Proposition. A principal bundle is trivial if and only if it admits a global section.
The same is not true in general for other fiber bundles. For instance, vector bundles always have a zero section whether they are trivial or not and sphere bundles may admit many global sections without being trivial.
The same fact applies to local trivializations of principal bundles. Let be a principal -bundle. An open set in admits a local trivialization if and only if there exists a local section on . Given a local trivialization
one can define an associated local section
where is the identity in . Conversely, given a section one defines a trivialization by
The simple transitivity of the action on the fibers of guarantees that this map is a bijection; it is also a homeomorphism. The local trivializations defined by local sections are -equivariant in the following sense. If we write
in the form
then the map
satisfies
Equivariant trivializations therefore preserve the -torsor structure of the fibers. In terms of the associated local section the map is given by
The local version of the cross section theorem then states that the equivariant local trivializations of a principal bundle are in one-to-one correspondence with local sections.
Given an equivariant local trivialization of , we have local sections on each . On overlaps these must be related by the action of the structure group . In fact, the relationship is provided by the transition functions
By gluing the local trivializations together using these transition functions, one may reconstruct the original principal bundle. This is an example of the fiber bundle construction theorem.
For any we have
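A sketch of the relation between overlapping sections, the transition functions, and the cocycle condition, under one common index convention (the convention is an assumption, not fixed by the text above):

```latex
% On an overlap U_i \cap U_j, two local sections differ by a unique map t_{ij} : U_i \cap U_j \to G:
\[
s_j(x) \;=\; s_i(x)\cdot t_{ij}(x),
\]
% which forces, on triple overlaps U_i \cap U_j \cap U_k,
\[
t_{ik}(x) \;=\; t_{ij}(x)\, t_{jk}(x), \qquad t_{ii}(x) = e, \qquad t_{ji}(x) = t_{ij}(x)^{-1}.
\]
```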
Characterization of smooth principal bundles
If is a smooth principal -bundle then acts freely and properly on so that the orbit space is diffeomorphic to the base space . It turns out that these properties completely characterize smooth principal bundles. That is, if is a smooth manifold, a Lie group and a smooth, free, and proper right action then
is a smooth manifold,
the natural projection is a smooth submersion, and
is a smooth principal -bundle over .
Use of the notion
Reduction of the structure group
Given a subgroup H of G one may consider the bundle whose fibers are homeomorphic to the coset space . If the new bundle admits a global section, then one says that the section is a reduction of the structure group from to . The reason for this name is that the (fiberwise) inverse image of the values of this section form a subbundle of that is a principal -bundle. If is the identity, then a section of itself is a reduction of the structure group to the identity. Reductions of the structure group do not in general exist.
Many topological questions about the structure of a manifold or the structure of bundles over it that are associated to a principal -bundle may be rephrased as questions about the admissibility of the reduction of the structure group (from to ). For example:
A -dimensional real manifold admits an almost-complex structure if the frame bundle on the manifold, whose fibers are , can be reduced to the group .
An -dimensional real manifold admits a -plane field if the frame bundle can be reduced to the structure group .
A manifold is orientable if and only if its frame bundle can be reduced to the special orthogonal group, .
A manifold has spin structure if and only if its frame bundle can be further reduced from to the Spin group, which maps to as a double cover.
Also note: an -dimensional manifold admits vector fields that are linearly independent at each point if and only if its frame bundle admits a global section. In this case, the manifold is called parallelizable.
Associated vector bundles and frames
If is a principal -bundle and is a linear representation of , then one can construct a vector bundle with fibre , as the quotient of the product × by the diagonal action of . This is a special case of the associated bundle construction, and is called an associated vector bundle to . If the representation of on is faithful, so that is a subgroup of the general linear group GL(), then is a -bundle and provides a reduction of structure group of the frame bundle of from to . This is the sense in which principal bundles provide an abstract formulation of the theory of frame bundles.
Classification of principal bundles
Any topological group admits a classifying space : the quotient by the action of of some weakly contractible space, e.g., a topological space with vanishing homotopy groups. The classifying space has the property that any principal bundle over a paracompact manifold B is isomorphic to a pullback of the principal bundle . In fact, more is true, as the set of isomorphism classes of principal bundles over the base identifies with the set of homotopy classes of maps .
See also
Associated bundle
Vector bundle
G-structure
Reduction of the structure group
Gauge theory
Connection (principal bundle)
G-fibration
References
Sources
Differential geometry
Fiber bundles
Group actions (mathematics) | Principal bundle | [
"Physics"
] | 2,248 | [
"Group actions",
"Symmetry"
] |
311,001 | https://en.wikipedia.org/wiki/Green%27s%20function | In mathematics, a Green's function (or Green function) is the impulse response of an inhomogeneous linear differential operator defined on a domain with specified initial conditions or boundary conditions.
This means that if is a linear differential operator, then
the Green's function is the solution of the equation where is Dirac's delta function;
the solution of the initial-value problem is the convolution
Through the superposition principle, given a linear ordinary differential equation (ODE), one can first solve for each , and realize that, since the source is a sum of delta functions, the solution is a sum of Green's functions as well, by linearity of .
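A sketch of the defining property and of how it produces solutions, in a commonly used notation ($L$ acting in the variable $x$, mathematical sign convention $LG = \delta$):

```latex
% Defining property and the resulting solution formula:
\[
L\, G(x,s) = \delta(x-s),
\qquad
u(x) = \int G(x,s)\, f(s)\, \mathrm{d}s .
\]
% Then, assuming L may be taken under the integral sign,
\[
L\, u(x) = \int L\, G(x,s)\, f(s)\, \mathrm{d}s = \int \delta(x-s)\, f(s)\, \mathrm{d}s = f(x).
\]
```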
Green's functions are named after the British mathematician George Green, who first developed the concept in the 1820s. In the modern study of linear partial differential equations, Green's functions are studied largely from the point of view of fundamental solutions instead.
In many-body theory, the term is also used in physics, specifically in quantum field theory, aerodynamics, aeroacoustics, electrodynamics, seismology and statistical field theory, to refer to various types of correlation functions, even those that do not fit the mathematical definition. In quantum field theory, Green's functions take the roles of propagators.
Definition and uses
A Green's function, , of a linear differential operator acting on distributions over a subset of the Euclidean space at a point , is any solution of
where is the Dirac delta function. This property of a Green's function can be exploited to solve differential equations of the form
If the kernel of is non-trivial, then the Green's function is not unique. However, in practice, some combination of symmetry, boundary conditions and/or other externally imposed criteria will give a unique Green's function. Green's functions may be categorized, by the type of boundary conditions satisfied, by a Green's function number. Also, Green's functions in general are distributions, not necessarily functions of a real variable.
Green's functions are also useful tools in solving wave equations and diffusion equations. In quantum mechanics, Green's function of the Hamiltonian is a key concept with important links to the concept of density of states.
The Green's function as used in physics is usually defined with the opposite sign, instead. That is,
This definition does not significantly change any of the properties of Green's function due to the evenness of the Dirac delta function.
If the operator is translation invariant, that is, when has constant coefficients with respect to , then the Green's function can be taken to be a convolution kernel, that is,
In this case, Green's function is the same as the impulse response of linear time-invariant system theory.
Motivation
Loosely speaking, if such a function can be found for the operator , then, if we multiply the equation for the Green's function by , and then integrate with respect to , we obtain,
Because the operator is linear and acts only on the variable (and not on the variable of integration ), one may take the operator outside of the integration, yielding
This means that
is a solution to the equation
Thus, one may obtain the function through knowledge of the Green's function in and the source term on the right-hand side in . This process relies upon the linearity of the operator .
In other words, the solution of , , can be determined by the integration given in . Although is known, this integration cannot be performed unless is also known. The problem now lies in finding the Green's function that satisfies . For this reason, the Green's function is also sometimes called the fundamental solution associated to the operator .
Not every operator admits a Green's function. A Green's function can also be thought of as a right inverse of . Aside from the difficulties of finding a Green's function for a particular operator, the integral in may be quite difficult to evaluate. However the method gives a theoretically exact result.
This can be thought of as an expansion of according to a Dirac delta function basis (projecting over ) and a superposition of the solution on each projection. Such an integral equation is known as a Fredholm integral equation, the study of which constitutes Fredholm theory.
Green's functions for solving inhomogeneous boundary value problems
The primary use of Green's functions in mathematics is to solve non-homogeneous boundary value problems. In modern theoretical physics, Green's functions are also usually used as propagators in Feynman diagrams; the term Green's function is often further used for any correlation function.
Framework
Let be the Sturm–Liouville operator, a linear differential operator of the form
and let be the vector-valued boundary conditions operator
Let be a continuous function in . Further suppose that the problem
is "regular", i.e., the only solution for for all is
Theorem
There is one and only one solution that satisfies
and it is given by
where is a Green's function satisfying the following conditions:
is continuous in and .
For
For
Derivative "jump":
Symmetry:
Advanced and retarded Green's functions
Green's function is not necessarily unique since the addition of any solution of the homogeneous equation to one Green's function results in another Green's function. Therefore if the homogeneous equation has nontrivial solutions, multiple Green's functions exist. In some cases, it is possible to find one Green's function that is nonvanishing only for , which is called a retarded Green's function, and another Green's function that is nonvanishing only for , which is called an advanced Green's function. In such cases, any linear combination of the two Green's functions is also a valid Green's function. The terminology advanced and retarded is especially useful when the variable x corresponds to time. In such cases, the solution provided by the use of the retarded Green's function depends only on the past sources and is causal whereas the solution provided by the use of the advanced Green's function depends only on the future sources and is acausal. In these problems, it is often the case that the causal solution is the physically important one. The use of advanced and retarded Green's function is especially common for the analysis of solutions of the inhomogeneous electromagnetic wave equation.
Finding Green's functions
Units
While it does not uniquely fix the form the Green's function will take, performing a dimensional analysis to find the units a Green's function must have is an important sanity check on any Green's function found through other means. A quick examination of the defining equation,
shows that the units of depend not only on the units of but also on the number and units of the space of which the position vectors and are elements. This leads to the relationship:
where is defined as "the physical units of ", and is the volume element of the space (or spacetime).
For example, if and time is the only variable then:
If is the d'Alembert operator and space has 3 dimensions, then:
Eigenvalue expansions
If a differential operator admits a set of eigenvectors (i.e., a set of functions and scalars such that ) that is complete, then it is possible to construct a Green's function from these eigenvectors and eigenvalues.
"Complete" means that the set of functions satisfies the following completeness relation,
Then the following holds,
where represents complex conjugation.
Applying the operator to each side of this equation results in the completeness relation, which was assumed.
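A sketch of the resulting expansion, with assumed notation ($L\Psi_n = \lambda_n \Psi_n$, all $\lambda_n \neq 0$, dagger denoting complex conjugation):

```latex
% Completeness relation and the eigenfunction expansion of the Green's function:
\[
\sum_n \Psi_n(x)\, \Psi_n^\dagger(x') = \delta(x - x'),
\qquad
G(x, x') = \sum_n \frac{\Psi_n(x)\, \Psi_n^\dagger(x')}{\lambda_n}.
\]
% Applying L in the variable x reproduces the completeness relation:
\[
L\, G(x,x') = \sum_n \frac{\lambda_n\, \Psi_n(x)\, \Psi_n^\dagger(x')}{\lambda_n} = \delta(x-x').
\]
```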
The general study of Green's function written in the above form, and its relationship to the function spaces formed by the eigenvectors, is known as Fredholm theory.
There are several other methods for finding Green's functions, including the method of images, separation of variables, and Laplace transforms.
Combining Green's functions
If the differential operator can be factored as then the Green's function of can be constructed from the Green's functions for and
The above identity follows immediately from taking to be the representation of the right operator inverse of , analogous to how, for the invertible linear operator defined by , it is represented by its matrix elements .
A further identity follows for differential operators that are scalar polynomials of the derivative, . The fundamental theorem of algebra, combined with the fact that commutes with itself, guarantees that the polynomial can be factored, putting in the form:
where are the zeros of Taking the Fourier transform of with respect to both and gives:
The fraction can then be split into a sum using a partial fraction decomposition before Fourier transforming back to and space. This process yields identities that relate integrals of Green's functions and sums of the same. For example, if then one form for its Green's function is:
While the example presented is tractable analytically, it illustrates a process that works when the integral is not trivial (for example, when is the operator in the polynomial).
Table of Green's functions
The following table gives an overview of Green's functions of frequently appearing differential operators, where is the Heaviside step function, is a Bessel function, is a modified Bessel function of the first kind, and is a modified Bessel function of the second kind. Where time () appears in the first column, the retarded (causal) Green's function is listed.
Green's functions for the Laplacian
Green's functions for linear differential operators involving the Laplacian may be readily put to use using the second of Green's identities.
To derive Green's theorem, begin with the divergence theorem (otherwise known as Gauss's theorem),
Let and substitute into Gauss' law.
Compute and apply the product rule for the ∇ operator,
Plugging this into the divergence theorem produces Green's theorem,
Suppose that the linear differential operator is the Laplacian, ∇2, and that there is a Green's function for the Laplacian. The defining property of the Green's function still holds,
Let in Green's second identity, see Green's identities. Then,
Using this expression, it is possible to solve Laplace's equation or Poisson's equation , subject to either Neumann or Dirichlet boundary conditions. In other words, we can solve for everywhere inside a volume where either (1) the value of is specified on the bounding surface of the volume (Dirichlet boundary conditions), or (2) the normal derivative of is specified on the bounding surface (Neumann boundary conditions).
Suppose the problem is to solve for inside the region. Then the integral
reduces to simply due to the defining property of the Dirac delta function and we have
This form expresses the well-known property of harmonic functions, that if the value or normal derivative is known on a bounding surface, then the value of the function inside the volume is known everywhere.
In electrostatics, is interpreted as the electric potential, as electric charge density, and the normal derivative as the normal component of the electric field.
If the problem is to solve a Dirichlet boundary value problem, the Green's function should be chosen such that vanishes when either or is on the bounding surface. Thus only one of the two terms in the surface integral remains. If the problem is to solve a Neumann boundary value problem, it might seem logical to choose Green's function so that its normal derivative vanishes on the bounding surface. However, application of Gauss's theorem to the differential equation defining the Green's function yields
meaning the normal derivative of G(x,x′) cannot vanish on the surface, because it must integrate to 1 on the surface.
The simplest form the normal derivative can take is that of a constant, namely , where is the surface area of the surface. The surface term in the solution becomes
where is the average value of the potential on the surface. This number is not known in general, but is often unimportant, as the goal is often to obtain the electric field given by the gradient of the potential, rather than the potential itself.
With no boundary conditions, the Green's function for the Laplacian (Green's function for the three-variable Laplace equation) is
Supposing that the bounding surface goes out to infinity and plugging in this expression for the Green's function finally yields the standard expression for electric potential in terms of electric charge density as
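A sketch of the free-space formulas just described, assuming SI units and the mathematical sign convention $\nabla^2 G = \delta$:

```latex
% Free-space Green's function of the Laplacian in three dimensions:
\[
\nabla^2 G(\mathbf{x}, \mathbf{x}') = \delta^3(\mathbf{x} - \mathbf{x}'),
\qquad
G(\mathbf{x}, \mathbf{x}') = -\frac{1}{4\pi\, |\mathbf{x} - \mathbf{x}'|},
\]
% so the Poisson equation \nabla^2 \varphi = -\rho/\varepsilon_0 is solved by
\[
\varphi(\mathbf{x}) = \int \frac{\rho(\mathbf{x}')}{4\pi\varepsilon_0\, |\mathbf{x} - \mathbf{x}'|}\, \mathrm{d}^3x'.
\]
```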
Example
Find the Green function for the following problem, whose Green's function number is X11:
First step: The Green's function for the linear operator at hand is defined as the solution to
If , then the delta function gives zero, and the general solution is
For , the boundary condition at implies
if and .
For , the boundary condition at implies
The equation of is skipped for similar reasons.
To summarize the results thus far:
Second step: The next task is to determine the two remaining constants.
Ensuring continuity in the Green's function at x = s implies
One can ensure the proper discontinuity in the first derivative by integrating the defining differential equation (i.e., the one with the Dirac delta on the right-hand side) from x = s − ε to x = s + ε and taking the limit as ε goes to zero. Note that we only integrate the second derivative, as the remaining term will be continuous by construction.
The two (dis)continuity equations can be solved for and to obtain
So Green's function for this problem is:
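As an illustration of the two-step construction above, the following short sketch computes a Green's function numerically for one specific X11-type problem — the operator d²/dx² on the interval (0, 1) with homogeneous Dirichlet conditions at both ends. This choice of operator and interval is an assumption of the sketch, not necessarily the problem intended above; the idea is that inverting a finite-difference Laplacian gives columns that approximate G(x, s_j) once divided by the grid spacing.

import numpy as np

# Assumed problem (not taken from the article): d^2 G/dx^2 = delta(x - s) on (0, 1),
# with G(0, s) = G(1, s) = 0 ("X11": a type-1, Dirichlet, condition at both ends).
n = 199                          # number of interior grid points
h = 1.0 / (n + 1)                # grid spacing
x = np.linspace(h, 1.0 - h, n)   # interior nodes

# Standard second-difference approximation of d^2/dx^2 with Dirichlet boundaries.
A = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

# Each column of A^{-1}, divided by h, approximates G(x, s_j).
G = np.linalg.inv(A) / h

# For this assumed problem the closed form is G(x, s) = x (s - 1) for x <= s
# and s (x - 1) for x >= s; compare against one column of the numerical result.
j = n // 2
s = x[j]
exact = np.where(x <= s, x * (s - 1.0), s * (x - 1.0))
print(np.max(np.abs(G[:, j] - exact)))  # prints a very small error

The continuity of G at x = s and the unit jump in its first derivative, discussed in the two steps above, are exactly the properties satisfied by the closed form quoted in the comment.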
Further examples
Let n = 1 and let the subset be all of ℝ. Let L be d/dx. Then, the Heaviside step function H(x − x₀) is a Green's function of L at x₀.
Let n = 2 and let the subset be the quarter-plane {(x, y) : x, y ≥ 0} and let L be the Laplacian. Also, assume a Dirichlet boundary condition is imposed at x = 0 and a Neumann boundary condition is imposed at y = 0. Then the X10Y20 Green's function can be constructed by the method of images, with an odd reflection across the Dirichlet boundary and an even reflection across the Neumann boundary.
Let , and all three are elements of the real numbers. Then, for any function with an -th derivative that is integrable over the interval : The Green's function in the above equation, , is not unique. How is the equation modified if is added to , where satisfies for all (for example, with )? Also, compare the above equation to the form of a Taylor series centered at .
See also
Bessel potential
Discrete Green's functions – defined on graphs and grids
Impulse response – the analog of a Green's function in signal processing
Transfer function
Fundamental solution
Green's function in many-body theory
Correlation function
Propagator
Green's identities
Parametrix
Volterra integral equation
Resolvent formalism
Keldysh formalism
Spectral theory
Multiscale Green's function
Footnotes
References
Chapter 5 contains a very readable account of using Green's functions to solve boundary value problems in electrostatics.
Textbook on Green's function with worked-out steps.
External links
Introduction to the Keldysh Nonequilibrium Green Function Technique by A. P. Jauho
Green's Function Library
Tutorial on Green's functions
Boundary Element Method (for some idea on how Green's functions may be used with the boundary element method for solving potential problems numerically)
At Citizendium
MIT video lecture on Green's function
Differential equations
Generalized functions
Equations of physics
Mathematical physics
Schwartz distributions | Green's function | [
"Physics",
"Mathematics"
] | 3,122 | [
"Equations of physics",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Differential equations",
"Equations",
"Mathematical physics"
] |
28,734,047 | https://en.wikipedia.org/wiki/Animalia%20Paradoxa | Animalia Paradoxa (Latin for "contradictory animals"; cf. paradox) are the mythical, magical or otherwise suspect animals mentioned in the first five editions of Carl Linnaeus's seminal work under the header "Paradoxa". It lists fantastic creatures found in medieval bestiaries and some animals reported by explorers from abroad and explains why they are excluded from Systema Naturae. According to Swedish historian Gunnar Broberg, it was to offer a natural explanation and demystify the world of superstition. Paradoxa was dropped from Linnaeus' classification system as of the 6th edition (1748).
Paradoxa
These 10 taxa appear in the 1st to 5th editions:
Hydra: Linnaeus wrote: "Hydra: body of a snake, with two feet, seven necks and the same number of heads, lacking wings, preserved in Hamburg, similar to the description of the Hydra of the Apocalypse of St.John chapters 12 and 13. And it is provided by very many as a true species of animal, but falsely. Nature for itself and always the similar, never naturally makes multiple heads on one body. Fraud and artifice, as we ourselves saw [on it] teeth of a weasel, different from teeth of an Amphibian [or reptile], easily detected." See Carl Linnaeus#Doctorate. (Distinguish from the small real coelenterate Hydra (genus).)
Rana-Piscis: a South American frog which is significantly smaller than its tadpole stage; it was thus (incorrectly) reported to Linnaeus that the metamorphosis in this species went from 'frog to fish'. In the Paradoxa in the 1st edition of Systema Naturae, Linnaeus wrote "Frog-Fish or Frog Changing into Fish: is much against teaching. Frogs, like all Amphibia, delight in lungs and spiny bones. Spiny fish, instead of lungs, are equipped with gills. Therefore the laws of Nature will be against this change. If indeed a fish is equipped with gills, it will be separate from the Frog and Amphibia. If truly [it has] lungs, it will be a Lizard: for under all the sky it differs from Chondropterygii and Plagiuri." In the 10th edition of Systema Naturae, Linnaeus named the species Rana paradoxa, though its genus name was changed in 1830 to Pseudis.
Monoceros (unicorn): Linnaeus wrote: "Monoceros of the older [generations], body of a horse, feet of a "wild animal", horn straight, long, spirally twisted. It is a figment of painters. The Monodon of Artedi [= narwhal] has the same manner of horn, but the other parts of its body are very different."
Pelecanus: Linnaeus wrote "Pelican: The same [sources as for the previous] hand down fabulously [the story] that it inflicts a wound with its beak on its own thigh, to feed its young with the flowing blood. A sack hanging below its throat gave a handle for the story." This source writes: "Linnaeus thought [pelicans] might reflect the over-fervent imaginations of New World explorers." This claim is incorrect; pelicans are widespread in Europe and Linnaeus was merely doubting the legendary behavior.
Satyrus: Linnaeus wrote "with a tail, hairy, bearded, with a manlike body, gesticulating much, very fallacious, is a species of monkey, if ever one has been seen."
Borometz (aka Scythian Lamb): Linnaeus wrote: "Borometz or Scythian Lamb: is reckoned with plants, and is similar to a lamb; whose stalk coming out of the ground enters an umbilicus; and the same is said to be provided with blood from by chance devouring wild animals. But it is put together artificially from roots of American ferns. But naturally it is an allegorical description of an embryo of a sheep, as has all attributed data.". This source says: "Linnaeus [...] had seen a faked vegetable lamb taken from China to Sweden by a traveler."
Phoenix: Linnaeus wrote: "Species of bird, of which only one individual exists in the world, and which when decrepit [arises?] from [its] pyre made of aromatic [plants?] is said fabulously to become again young, to undergo happy former periods of life. In reality it is the date palm, see Kæmpf".
Linnaeus wrote: The Bernicla or Scottish goose & Goose-bearing Seashell: is believed by former generations to be born from rotten wood thrown away in the sea. But the Lepas places seaweed on its featherlike internal parts, and somewhat adhering, as if indeed that goose Bernicla was arising from it. Frederick Edward Hulme noted: "[The] barnacle-goose tree was a great article of faith with our ancestors in the Middle Ages."
Draco: Linnaeus wrote that it has a "snakelike body, two feet, two wings, like a bat, which is a winged lizard or a ray artificially shaped as a monster and dried." See also Jenny Haniver.
Automa Mortis: Linnaeus wrote "Death-watch: It produces the sound of a very small clock in walls, is named Pediculus pulsatorius, which perforates wood and lives in it".
The above 10 taxa and the 4 taxa following were in the 2nd (1740) edition and the 4th and 5th editions (total 14 entries):
Manticora: Linnaeus wrote merely: "face of a decrepit old man, body of a lion, tail starred with sharp points".
Antilope : Linnaeus wrote merely: "Face of a "wild animal", feet [like those] of cattle, horns like a goat's [but] saw-edged".
Lamia: Linnaeus wrote merely: "Face of a man, breasts of a virgin, body of a four-footed animal [but] scaled, forefeet of a "wild animal", hind[feet] [like those] of cattle".
Siren: Linnaeus wrote: "Art. gen. 81 Syrene Bartol: As long as it is not seen either living or dead, nor faithfully and perfectly described, it is called in doubt".
Linnaeus's reference is to Peter Artedi's writing about the Siren: "Two fins only on all the body, those on the chest. No finned tail. Head and neck and chest to the umbilicus have the human appearance. ... Our or Bartholin's Siren was found and captured in the sea near Massilia in America. From the umbilicus to the extremity of the body was unformed flesh with no sign of a tail. Two pectoral fins on the chest, with five bones or fingers, staying together, by which it swims. Its radius in the forearm is scarcely four fingers' width long. Oh that there could arise a true ichthyologist, who could examine this animal, as to whether it is a fable, or a true fish? About something which has not been seen it is preferable not to judge, than boldly to pronounce something.". Among references and quotations from other authors Artedi quoted that "some say that it is a manatee and others say completely different."
References
External links
Biological classification
Cryptozoology
European legendary creatures
Medieval European legendary creatures
Systema Naturae | Animalia Paradoxa | [
"Biology"
] | 1,553 | [
"nan"
] |
28,734,956 | https://en.wikipedia.org/wiki/Spherical%20shell | In geometry, a spherical shell is a generalization of an annulus to three dimensions. It is the region of a ball between two concentric spheres of differing radii.
Volume
The volume of a spherical shell is the difference between the enclosed volume of the outer sphere and the enclosed volume of the inner sphere: V = (4/3)π(R³ − r³),
where r is the radius of the inner sphere and R is the radius of the outer sphere.
Approximation
An approximation for the volume of a thin spherical shell is the surface area of the inner sphere multiplied by the thickness t of the shell: V ≈ 4πr²t,
which holds when t is very small compared to r (t ≪ r).
The total surface area of the spherical shell (inner plus outer surface) is 4π(r² + R²).
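As a quick check of the formulas above (using r for the inner radius, R for the outer radius, and t = R − r for the thickness, as above), a few lines of Python compare the exact volume with the thin-shell approximation:

import math

def shell_volume_exact(r, R):
    # Exact volume: difference of the two enclosed sphere volumes.
    return 4.0 / 3.0 * math.pi * (R**3 - r**3)

def shell_volume_thin(r, t):
    # Thin-shell approximation: inner surface area times thickness.
    return 4.0 * math.pi * r**2 * t

r, t = 1.0, 0.01          # thickness much smaller than the inner radius
R = r + t
print(shell_volume_exact(r, R))  # about 0.1269
print(shell_volume_thin(r, t))   # about 0.1257, close to the exact value

The two results agree to within about 1% here, and the agreement improves as t/r shrinks.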
See also
Spherical pressure vessel
Ball
Solid torus
Bubble
Sphere
References
Elementary geometry
Geometric shapes
Spherical geometry | Spherical shell | [
"Physics",
"Mathematics"
] | 146 | [
"Geometric shapes",
"Euclidean solid geometry",
"Mathematical objects",
"Elementary mathematics",
"Elementary geometry",
"Space",
"Geometric objects",
"Spacetime"
] |
1,350,181 | https://en.wikipedia.org/wiki/Petroleum%20ether | Petroleum ether is the petroleum fraction consisting of aliphatic hydrocarbons and boiling in the range 35–60 °C, and commonly used as a laboratory solvent. Despite the name, petroleum ether is not an ether; the term is used only figuratively, signifying extreme lightness and volatility.
Properties
The very lightest, most volatile liquid hydrocarbon solvents that can be bought from laboratory chemical suppliers may also be offered under the name petroleum ether. Petroleum ether consists mainly of aliphatic hydrocarbons and is usually low in aromatics. It is commonly hydrodesulfurized and may be hydrogenated to reduce the amount of aromatic and other unsaturated hydrocarbons. Petroleum ether normally bears a descriptive suffix giving the boiling range. Thus, from the leading international laboratory chemical suppliers it is possible to buy various petroleum ethers with boiling ranges such as 30–50 °C, 40–60 °C, 50–70 °C, 60–80 °C, etc. In the United States, laboratory-grade aliphatic hydrocarbon solvents with boiling ranges as high as 100–140 °C may be called petroleum ether, rather than petroleum spirit.
It is not advisable to employ a fraction with a wider boiling point range than 20 °C, because of possible loss of the more volatile portion during its use in recrystallisation, etc., and consequent different solubilising properties of the higher boiling residue.
Most of the unsaturated hydrocarbons may be removed by shaking the solvent two or three times with concentrated sulfuric acid amounting to 10% of its volume; vigorous shaking is then continued with successive portions of a concentrated solution of potassium permanganate in 10% sulfuric acid until the color of the permanganate remains unchanged. The solvent is then thoroughly washed with sodium carbonate solution and then with water, dried over anhydrous calcium chloride, and distilled. If it is required perfectly dry, it can be allowed to stand over sodium wire or calcium hydride.
Standards
Ligroin is assigned the CAS Registry Number 8032-32-4, which is also applied to many other products, particularly those with low boiling points, called petroleum spirit, petroleum ether, and petroleum benzine. "Naphtha" has the CAS Registry Number 8030-30-6, which also covers petroleum benzine and petroleum ether: that is, the lower boiling point non-aromatic hydrocarbon solvents.
DIN 51630 provides for petroleum spirit (also called spezialbenzine or petrolether) which is described as "a special boiling-point spirit (SBPS) commonly used in laboratory applications, having high volatility and low aromatics content." Its initial boiling point is above 25 °C, its final boiling point up to 80 °C.
Safety
Petroleum ethers are extremely volatile, have very low flash points, and present a significant fire hazard. Fires should be fought with foam, carbon dioxide, dry chemical or carbon tetrachloride.
The naphtha mixtures that are distilled at a lower boiling temperature have a higher volatility and, generally speaking, a higher degree of toxicity than the higher boiling fractions.
Exposure to petroleum ether occurs most commonly by either inhalation or through skin contact. Petroleum ether is metabolized by the liver with a biological half-life of 46–48 h.
Inhalation overexposure causes primarily central nervous system (CNS) effects (headaches, dizziness, nausea, fatigue, and incoordination). In general, the toxicity is more pronounced with petroleum ethers containing higher concentrations of aromatic compounds. n-Hexane is known to cause axonal damage in peripheral nerves.
Skin contact can cause allergic contact dermatitis.
Oral ingestion of hydrocarbons often is associated with symptoms of mucous membrane irritation, vomiting, and central nervous system depression. Cyanosis, tachycardia, and tachypnea may appear as a result of aspiration, with subsequent development of chemical pneumonitis. Other clinical findings include albuminuria, hematuria, hepatic enzyme derangement, and cardiac arrhythmias. Doses as low as 10 ml orally have been reported to be potentially fatal, whereas some patients have survived the ingestion of 60 ml of petroleum distillates. A history of coughing or choking in association with vomiting strongly suggests aspiration and hydrocarbon pneumonia. Hydrocarbon pneumonia is an acute hemorrhagic necrotizing disease that can develop within 24 h after the ingestion. Pneumonia may require several weeks for a complete resolution.
Intravenous administration produces fever and local tissue damage.
Petroleum-derived distillates have not been shown to be carcinogenic in humans. Petroleum ether degrades rapidly in soil and water.
References
Hydrocarbon solvents
Petroleum products | Petroleum ether | [
"Chemistry"
] | 995 | [
"Petroleum",
"Petroleum products"
] |
1,350,243 | https://en.wikipedia.org/wiki/Sneaker%20wave | A sneaker wave, also known as a sleeper wave, or in Australia as a king wave, is a disproportionately large coastal wave that can sometimes appear in a wave train without warning.
Terminology
The term "sneaker wave" is popular rather than scientific, derived from the observation that such a wave can "sneak up" on an unwary beachgoer. There is no scientific coverage of the phenomenon as a distinct sort of wave with respect to height or predictability as there is on other extreme wave events such as tsunamis or rogue waves, and little or no scientific evidence has been gathered to identify, describe, or define sneaker waves. Although the term "rogue wave" — meaning an unusually tall or steep wave in mid-ocean — is sometimes used as a synonym for "sneaker wave," one American oceanographer distinguishes "rogue waves" as occurring on the ocean and "sneaker waves" as occurring at the shore, while the National Oceanic and Atmospheric Administration loosely defines rogue waves as offshore waves that are at least twice the height of surrounding waves and sneaker waves as waves near shore that are unexpectedly and significantly larger than other waves reaching shore at the time. Scientists do not yet understand what causes sneaker waves, and their relationship to rogue waves, if any, has not been established.
In a 2018 paper, Oregon State University researchers wrote that sneaker waves form in offshore storms that transfer wind energy to the ocean surface. The resulting waves then arrive along a coastline during periods of calm weather, and the greater amount of energy they contain compared to the regular waves that preceded them causes them to travel far higher up the shore than the other waves. As of 2021, the National Weather Service in the United States viewed ocean conditions along the United States West Coast as favorable for sneaker waves when an offshore storm generates waves with a particularly long period — perhaps longer than 15 seconds — between swells, allowing the swells to build considerable force before reaching shore, where they might appear either as conventional large waves or as sneaker waves.
Characteristics
Sneaker waves appear suddenly on a coastline and without warning; generally, it is not obvious that they are larger than other waves until they break and suddenly surge up a beach. A sneaker wave can occur following a period of 10 to 20 minutes of gentle, lapping waves. Upon arriving, a sneaker wave can surge far beyond the foam line, rushing up a beach with great force. In addition to containing a large volume of rapidly surging water, a sneaker wave also tends to carry a large amount of sand and gravel with it. It can be strong enough to break over rocks and float or roll large, waterlogged logs lying on the beach weighing several hundred pounds, moving them up the beach during the landward surge and then back down toward the ocean as the wave retreats. Sneaker waves appear to be more common along steep coastlines than in areas with broader, more gently sloped beaches.
Hazards
The unpredictability of sneaker waves and their tendency to arrive suddenly after lengthy periods of gentle, lapping waves makes it easy for them to surprise unwary or inexperienced beachgoers; because they are much larger than preceding waves, sneaker waves can catch inattentive swimmers, waders, and other people on beaches and ocean jetties and wash them into the sea. The force of a sneaker wave's surge and the large volume of water rushing far up a beach is enough to suddenly submerge people thigh- or waist-deep, knock them off their feet, and drag them into the ocean or trap them against rocks. Many coastlines more prone to sneaker waves lie in colder parts of the world where beachgoers tend to wear heavier clothing; the amount of sand and gravel in a sneaker wave can quickly fill such clothing and footwear such as boots with sediment that weighs a person down as he or she is swept up a beach and then back into the sea, increasing the chances of drowning. Floating and rolling logs in a sneaker wave also pose a danger, as they can badly injure people as well as pin people down when they come to rest, and it can be difficult or impossible to move such a log before a person pinned by it drowns as later waves arrive and fill the person's lungs with water and sediment.
Geographic distribution
Sneaker waves are mainly referred to in warnings and reports of incidents for the coasts of Central and Northern California (including the San Francisco Bay Area's beaches, especially Ocean Beach, Baker Beach, and those that face the Pacific Ocean, e.g. from Big Sur to the California–Oregon border), Oregon, and Washington in the Western United States. Sneaker waves also occur on the coast of British Columbia in Western Canada, especially the province's southern coast, because they commonly occur on the west coast of Vancouver Island (including Tofino, Ucluelet, and Cape Scott Provincial Park). Sneaker waves are common on the southern coast of Iceland, and warning signs were erected at Reynisfjara and Kirkjufjara beaches, following three unrelated tourist deaths at those beaches over several years, the third of them in January 2017. In Australia, where they are known as "king waves," sneaker waves occur especially in Western Australia and Tasmania, where they can be a hazard for rock fishermen.
Along much of the United States West Coast, sneaker waves kill more people than all other weather hazards combined. In Oregon, 21 deaths were attributed to sneaker waves from 1990 through March 2021, most of the deaths occurring between October and April, although sneaker waves also occurred at other times of year.
A sneaker wave incident gained worldwide media attention when two large waves suddenly and unexpectedly struck a crowd watching the Mavericks surfing competition at Mavericks in Princeton-by-the-Sea, California, on February 13, 2010, breaking over a seawall onto a narrow beach and injuring at least 13 people. The incident was caught on film.
In March 2014, a massive wave struck Roi-Namur in Kwajalein Atoll in the Marshall Islands on an otherwise calm, sunny day, penetrating well inland, flooding parts of the island and swamping coastal roads.
On September 18, 2023, a sneaker wave smashed into a beachside restaurant at Marina Beach near Southbroom, South Africa, injuring seven people. One restaurant patron was swept out to sea but rescued by lifeguards. The wave was filmed.
Rio de Janeiro's Barra de Tijuca beach in Brazil experiences sneaker waves, known locally as ressaca waves. It also is a steep beach and a December 2023 news film shows the whole beach being cleared by a sneaker wave.
On 20 January 2024, one or more sneaker or rogue waves struck the United States Army′s Ronald Reagan Ballistic Missile Defense Test Site on Roi-Namur in Kwajalein Atoll in the Marshall Islands, breaking down the doors of a dining hall, knocking several people off their feet, moderately to severely damaging the dining hall, the Outrigger Bar and Grill, the chapel, and the Tradewinds Theater, and leaving parts of the island, including the automotive complex, underwater. The flooding of the dining hall was filmed. The wave or waves penetrated inland and probably were between tall amid a significant wave height of to .
Seventh wave
In many parts of the world, local folklore predicts that out of a certain number of waves, one will be much larger than the rest. "Every seventh wave" or "every ninth wave" are examples of such common beliefs that have wide circulation and have entered popular culture through music, literature, and art. These ideas have some scientific merit, due to the occurrence of wave groups at sea, but there is no explicit evidence for this specific phenomenon, or that these wave groups are related to sneaker waves. The saying is likely derived more from a cultural fascination with certain numbers, and it may also be designed to educate shore-dwellers about the necessity of remaining vigilant when near the ocean.
See also
Wind wave
Rogue wave
Tsunami
Megatsunami
Meteotsunami
References
External links
- Video of a sneaker wave off the Oregon coast
Water waves
Weather hazards | Sneaker wave | [
"Physics",
"Chemistry"
] | 1,670 | [
"Physical phenomena",
"Weather hazards",
"Weather",
"Water waves",
"Waves",
"Fluid dynamics"
] |
1,350,362 | https://en.wikipedia.org/wiki/Gas%20focusing | Gas focusing, also known as ionic focusing.
Rather than being dispersed, a beam of charged particles travelling in an inert gas environment sometimes becomes narrower. This is ascribed to the generation of gas ions which diffuse outwards, neutralizing the particle beam globally, and producing an intense radial electric field which applies a radially inward force to the particles in the beam.
See also
Vacuum tube
Teleforce
References
Sabchevski S. P. and Mladenov G. M. (1994), Journal of Physics D: Applied Physics, 27, 690–697
Mladenov G. and Sabchevski S., "Potential distribution and space-charge neutralization in intense electron beams – an overview", Vacuum, 2001, v. 62, N 2–3, pp. 113–122
External links
Ionic focusing
Plasma technology and applications | Gas focusing | [
"Physics"
] | 166 | [
"Plasma technology and applications",
"Plasma physics stubs",
"Plasma physics"
] |
1,350,841 | https://en.wikipedia.org/wiki/Toda%20field%20theory | In mathematics and physics, specifically the study of field theory and partial differential equations, a Toda field theory, named after Morikazu Toda, is specified by a choice of Lie algebra and a specific Lagrangian.
Formulation
Fixing the Lie algebra to have rank r, that is, the Cartan subalgebra of the algebra has dimension r, the Lagrangian can be written in terms of the following data.
The background spacetime is 2-dimensional Minkowski space, with space-like coordinate x and timelike coordinate t. Greek indices indicate spacetime coordinates.
For some choice of root basis, α_i is the ith simple root. This provides a basis for the Cartan subalgebra, allowing it to be identified with ℝ^r.
Then the field content is a collection of scalar fields φ_i, which are scalar in the sense that they transform trivially under Lorentz transformations of the underlying spacetime.
The inner product is the restriction of the Killing form to the Cartan subalgebra.
The n_i are integer constants, known as Kac labels or Dynkin labels.
The physical constants are the mass m and the coupling constant β.
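Putting the quantities just listed together, the Lagrangian of a Toda field theory is commonly written in the form below; the overall signs and the normalization of the exponent are conventions and may differ from those intended in this article.

\mathcal{L} \;=\; \frac{1}{2}\,\langle \partial_\mu \phi,\; \partial^\mu \phi \rangle \;-\; \frac{m^2}{\beta^2} \sum_{i=1}^{r} n_i\, e^{\beta\, \langle \alpha_i,\, \phi \rangle}

Here ⟨·,·⟩ is the inner product described above (the restriction of the Killing form to the Cartan subalgebra), r is the rank, and the sum runs over the simple roots; for an affine Toda field theory the sum also includes the affine root with its Kac label.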
Classification of Toda field theories
Toda field theories are classified according to their associated Lie algebra.
Toda field theories usually refer to theories with a finite Lie algebra. If the Lie algebra is an affine Lie algebra, it is called an affine Toda field theory (after the component of φ which decouples is removed). If it is hyperbolic, it is called a hyperbolic Toda field theory.
Toda field theories are integrable models and their solutions describe solitons.
Examples
Liouville field theory is associated to the A1 Cartan matrix, which corresponds to the Lie algebra sl(2) in the classification of Lie algebras by Cartan matrices. The algebra has only a single simple root.
The sinh-Gordon model is the affine Toda field theory with the generalized Cartan matrix of affine sl(2) — rows (2, −2) and (−2, 2) —
and a positive value for β after we project out a component of φ which decouples.
The sine-Gordon model is the model with the same Cartan matrix but an imaginary β. This Cartan matrix corresponds to the affine Lie algebra built on sl(2). The finite algebra has a single simple root α₁, with Coxeter label 1, but the Lagrangian is modified for the affine theory: there is also an affine root α₀, also with Coxeter label 1. One can expand φ in components, but the affine root contributes an exponential with the opposite sign in the exponent, so one component of the field decouples.
The sum of the two exponential terms is then a hyperbolic cosine. If β is purely imaginary, β = ik with k real and, without loss of generality, positive, this becomes a cosine. The Lagrangian is then
which is the sine-Gordon Lagrangian.
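For concreteness, a standard form of the sine-Gordon Lagrangian reached by this construction — with β = ik as above, and with an overall normalization and additive constant that depend on conventions, so this should be read as a sketch rather than the exact expression intended here — is

\mathcal{L} \;=\; \frac{1}{2}\,\partial_\mu \varphi\, \partial^\mu \varphi \;+\; \frac{m^2}{k^2}\left(\cos(k\varphi) - 1\right).

Making the exponent real again (real β) turns the cosine back into a hyperbolic cosine and gives the sinh-Gordon Lagrangian of the previous example.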
References
Quantum field theory
Lattice models
Lie algebras
Exactly solvable models
Integrable systems | Toda field theory | [
"Physics",
"Materials_science"
] | 556 | [
"Quantum field theory",
"Integrable systems",
"Theoretical physics",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Condensed matter physics",
"Statistical mechanics"
] |
1,351,369 | https://en.wikipedia.org/wiki/Ligand-gated%20ion%20channel | Ligand-gated ion channels (LICs, LGIC), also commonly referred to as ionotropic receptors, are a group of transmembrane ion-channel proteins which open to allow ions such as Na+, K+, Ca2+, and/or Cl− to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand), such as a neurotransmitter.
When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response.
These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately). The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many LICs are additionally modulated by allosteric ligands, by channel blockers, ions, or the membrane potential. LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors, ionotropic glutamate receptors and ATP-gated channels.
Cys-loop receptors
The cys-loop receptors are named after a characteristic loop formed by a disulfide bond between two cysteine residues in the N terminal extracellular domain.
They are part of a larger family of pentameric ligand-gated ion channels that usually lack this disulfide bond, hence the tentative name "Pro-loop receptors".
A binding site in the extracellular N-terminal ligand-binding domain gives them receptor specificity for (1) acetylcholine (AcCh), (2) serotonin, (3) glycine, (4) glutamate and (5) γ-aminobutyric acid (GABA) in vertebrates. The receptors are subdivided with respect to the type of ion that they conduct (anionic or cationic) and further into families defined by the endogenous ligand. They are usually pentameric, with each subunit containing 4 transmembrane helices constituting the transmembrane domain, and an extracellular, N-terminal, ligand-binding domain of the beta-sheet sandwich type. Some also contain an intracellular domain.
The prototypic ligand-gated ion channel is the nicotinic acetylcholine receptor. It consists of a pentamer of protein subunits (typically ααβγδ), with two binding sites for acetylcholine (one at the interface of each alpha subunit). When the acetylcholine binds it alters the receptor's configuration (twists the T2 helices which moves the leucine residues, which block the pore, out of the channel pathway) and causes the constriction in the pore of approximately 3 angstroms to widen to approximately 8 angstroms so that ions can pass through. This pore allows Na+ ions to flow down their electrochemical gradient into the cell. With a sufficient number of channels opening at once, the inward flow of positive charges carried by Na+ ions depolarizes the postsynaptic membrane sufficiently to initiate an action potential.
A bacterial homologue to an LIC has been identified, hypothesized to act nonetheless as a chemoreceptor. This prokaryotic nAChR variant is known as the GLIC receptor, after the species in which it was identified; Gloeobacter Ligand-gated Ion Channel.
Structure
Cys-loop receptors have structural elements that are well conserved, with a large extracellular domain (ECD) harboring an alpha-helix and 10 beta-strands. Following the ECD, four transmembrane segments (TMSs) are connected by intracellular and extracellular loop structures. Except for the TMS 3-4 loop, their lengths are only 7-14 residues. The TMS 3-4 loop forms the largest part of the intracellular domain (ICD) and exhibits the most variable region between all of these homologous receptors. The ICD is defined by the TMS 3-4 loop together with the TMS 1-2 loop preceding the ion channel pore. Crystallization has revealed structures for some members of the family, but to allow crystallization, the intracellular loop was usually replaced by a short linker present in prokaryotic cys-loop receptors, so the structures of these loops are not known. Nevertheless, this intracellular loop appears to function in desensitization, modulation of channel physiology by pharmacological substances, and posttranslational modifications. Motifs important for trafficking are located therein, and the ICD interacts with scaffold proteins enabling inhibitory synapse formation.
Cationic cys-loop receptors
Anionic cys-loop receptors
Ionotropic glutamate receptors
The ionotropic glutamate receptors bind the neurotransmitter glutamate. They form tetramers, with each subunit consisting of an extracellular amino terminal domain (ATD, which is involved in tetramer assembly), an extracellular ligand binding domain (LBD, which binds glutamate), and a transmembrane domain (TMD, which forms the ion channel). The transmembrane domain of each subunit contains three transmembrane helices as well as a half membrane helix with a reentrant loop. The structure of the protein starts with the ATD at the N terminus followed by the first half of the LBD, which is interrupted by helices 1, 2 and 3 of the TMD before continuing with the final half of the LBD and then finishing with helix 4 of the TMD at the C terminus. This means there are three links between the TMD and the extracellular domains. Each subunit of the tetramer has a binding site for glutamate formed by the two LBD sections forming a clamshell-like shape. Only two of these sites in the tetramer need to be occupied to open the ion channel. The pore is mainly formed by the half helix 2 in a way which resembles an inverted potassium channel.
AMPA receptor
The α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (also known as AMPA receptor, or quisqualate receptor) is a non-NMDA-type ionotropic transmembrane receptor for glutamate that mediates fast synaptic transmission in the central nervous system (CNS).
Its name is derived from its ability to be activated by the artificial glutamate analog AMPA. The receptor was first named the "quisqualate receptor" by Watkins and colleagues after a naturally occurring agonist quisqualate and was only later given the label "AMPA receptor" after the selective agonist developed by Tage Honore and colleagues at the Royal Danish School of Pharmacy in Copenhagen. AMPARs are found in many parts of the brain and are the most commonly found receptor in the nervous system. The AMPA receptor GluA2 (GluR2) tetramer was the first glutamate receptor ion channel to be crystallized. Ligands include:
Agonists: Glutamate, AMPA, 5-Fluorowillardiine, Domoic acid, Quisqualic acid, etc.
Antagonists: CNQX, Kynurenic acid, NBQX, Perampanel, Piracetam, etc.
Positive allosteric modulators: Aniracetam, Cyclothiazide, CX-516, CX-614, etc.
Negative allosteric modulators: Ethanol, Perampanel, Talampanel, GYKI-52,466, etc.
NMDA receptors
The N-methyl-D-aspartate receptor (NMDA receptor) – a type of ionotropic glutamate receptor – is a ligand-gated ion channel that is gated by the simultaneous binding of glutamate and a co-agonist (i.e., either D-serine or glycine). Studies show that the NMDA receptor is involved in regulating synaptic plasticity and memory.
The name "NMDA receptor" is derived from the ligand N-methyl-D-aspartate (NMDA), which acts as a selective agonist at these receptors. When the NMDA receptor is activated by the binding of two co-agonists, the cation channel opens, allowing Na+ and Ca2+ to flow into the cell, in turn raising the cell's electric potential. Thus, the NMDA receptor is an excitatory receptor. At resting potentials, the binding of Mg2+ or Zn2+ at their extracellular binding sites on the receptor blocks ion flux through the NMDA receptor channel. "However, when neurons are depolarized, for example, by intense activation of colocalized postsynaptic AMPA receptors, the voltage-dependent block by Mg2+ is partially relieved, allowing ion influx through activated NMDA receptors. The resulting Ca2+ influx can trigger a variety of intracellular signaling cascades, which can ultimately change neuronal function through activation of various kinases and phosphatases". Ligands include:
Primary endogenous co-agonists: glutamate and either D-serine or glycine
Other agonists : aminocyclopropanecarboxylic acid; D-cycloserine; L-aspartate; quinolinate, etc.
Partial agonists : N-methyl-D-aspartic acid (NMDA); NRX-1074; 3,5-dibromo-L-phenylalanine, etc.
Antagonists: ketamine, PCP, dextropropoxyphene, ketobemidone, tramadol, kynurenic acid (endogenous), etc.
ATP-gated channels
ATP-gated channels open in response to binding the nucleotide ATP. They form trimers with two transmembrane helices per subunit and both the C and N termini on the intracellular side.
Clinical relevance
Ligand-gated ion channels are likely to be the major site at which anaesthetic agents and ethanol have their effects, although unequivocal evidence of this is yet to be established. In particular, the GABA and NMDA receptors are affected by anaesthetic agents at concentrations similar to those used in clinical anaesthesia.
By understanding the mechanisms of these receptors and exploring the chemical, biological, and physical agents that can act on them, more and more clinical applications are being supported by preliminary experiments or approved by the FDA. Memantine is approved by the US FDA and the European Medicines Agency for the treatment of moderate-to-severe Alzheimer's disease, and has now received a limited recommendation by the UK's National Institute for Health and Care Excellence for patients who fail other treatment options. Agomelatine is a type of drug that acts on a dual melatonergic-serotonergic pathway and has shown efficacy in the treatment of anxious depression during clinical trials; studies also suggest efficacy in the treatment of atypical and melancholic depression.
See also
Action potential
Acid-sensing ion channel
Calcium-activated potassium channel
Cyclic nucleotide-gated ion channel
Voltage-dependent calcium channel
Receptor (biochemistry)
Inositol trisphosphate receptor
Metabotropic receptor
Ryanodine receptor
References
External links
Ligand-Gated Ion Channel database at European Bioinformatics Institute. Verified availability April 11, 2007.
www.esf.edu
www.genenames.org
www.guidetopharmacology.org
Cell biology
Electrophysiology
Ion channels
Ionotropic receptors
Membrane biology
Molecular neuroscience
Neurochemistry
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Ligand-gated ion channel | [
"Chemistry",
"Biology"
] | 2,672 | [
"Cell biology",
"Membrane biology",
"Ionotropic receptors",
"Signal transduction",
"Protein classification",
"Membrane proteins",
"Molecular neuroscience",
"Molecular biology",
"Biochemistry",
"Protein families",
"Neurochemistry",
"Ion channels"
] |
1,352,016 | https://en.wikipedia.org/wiki/D-ring | A D-ring is an item of hardware, usually a tie-down metal ring shaped like a capital letter 'D' used primarily as a lashing or attachment point. The term is found interchangeably spelled in different forms, such as: D ring, D-ring or dee-ring.
A D-ring may be used at the end of a leather or fabric strap, or may be secured to a surface with a metal or fabric strap; though there are D-rings with a middle body designed to be welded to steel. Ideally, a D-ring swings freely after it has been secured. D-rings may vary in composition, geometry, weight, finish and load (rated) capacity.
Though there are differences, a weld-on pivoting link is commonly called a D-ring.
To minimize obstruction when the D-ring is not in use, recessed tie-down rings are designed that accommodate the D-ring so it is flush to the surface. There are some non-recessed designs that have an adhesive base. Work load limits are specified where appropriate.
For D-rings used in the bed of a truck to secure loads, regular preventative maintenance is important to avoid costly repairs.
D-rings may be made of plastic for applications such as fixtures for straps for hiking equipment.
Common uses
Applications of D-rings include:
For light loading applications such as clothing and luggage, D-rings made of plastics such as nylon may be used, as they weigh less and are impervious to rusting.
At the end of a tow-rope or chain, to allow the creation of a bow around an item or part of an item that is being towed.
With a chain to tether a boat to a dock or tree when it is being moored.
In theatres, commonly used where a piece of scenery has to be lifted or "flown". D-rings are attached to the tops or bottoms of flats with a "drift line" and turnbuckle attached to adjust the "trim".
A D-ring on an M16 or variant type rifle is used to increase the pressure on the extractor and reduce malfunctions. This D-ring is a rubber grommet shaped like a "D" and fits over the extractor spring adding tension to it.
On breathing sets and scuba divers' buoyancy compensators
A bit ring used on the bit of a horse.
A D-ring carabiner has a section which opens and can be secured, to attach a rope for climbing or caving or to attach other items.
A part of a saddle (see saddle#D-ring)
A D-ring binder is a type of ring binder which uses D-shaped rings to accommodate larger documents or more pages.
To attach a leash or tag to a collar or pet harness, or to a waist belt, either for dog walking or for sports where the dog is intended to pull.
On a prisoner transport belt to accommodate a pair of handcuffs
As part of BDSM restraints, clothing, or furniture, to act as an attachment point for rope or chain
For hanging a framed picture, D-rings attached to a small metal plate with a hole in it are used: the D-ring is attached to the frame with a screw through the hole, and the wire to hang the picture passes through the D-ring.
Two adjacent D-rings can be used as an adjustable fastener for a strap in clothing such as overalls.
References
Hardware (mechanical) | D-ring | [
"Physics",
"Technology",
"Engineering"
] | 716 | [
"Physical systems",
"Machines",
"Hardware (mechanical)",
"Construction"
] |
1,352,090 | https://en.wikipedia.org/wiki/Electrospinning | Electrospinning is a fiber production method that uses electrical force (based on electrohydrodynamic principles) to draw charged threads of polymer solutions for producing nanofibers with diameters ranging from nanometers to micrometers. Electrospinning shares characteristics of both electrospraying and conventional solution dry spinning of fibers. The process does not require the use of coagulation chemistry or high temperatures to produce solid threads from solution. This makes the process particularly suited to the production of fibers using large and complex molecules. Electrospinning from molten precursors is also practiced; this method ensures that no solvent can be carried over into the final product.
Process
When a sufficiently high voltage is applied to a liquid droplet, the body of the liquid becomes charged, and electrostatic repulsion counteracts the surface tension and the droplet is stretched; at a critical point a stream of liquid erupts from the surface. This point of eruption is known as the Taylor cone. If the molecular cohesion of the liquid is sufficiently high, stream breakup does not occur (if it does, droplets are electrosprayed) and a charged liquid jet is formed.
As the jet dries in flight, the mode of current flow changes from ohmic to convective as the charge migrates to the surface of the fiber. The jet is then elongated by a whipping process caused by electrostatic repulsion initiated at small bends in the fiber, until it is finally deposited on the grounded collector. The elongation and thinning of the fiber resulting from this bending instability leads to the formation of uniform fibers with nanometer-scale diameters.
Parameters
Molecular weight, molecular-weight distribution and architecture (branched, linear etc.) of the polymer
Solution properties (viscosity, conductivity, and surface tension)
Electric potential, flow rate and concentration
Distance between the capillary and collection screen
Ambient parameters (temperature, humidity and air velocity in the chamber)
Motion and size of target screen (collector)
Needle gauge
Apparatus and range
The standard laboratory setup for electrospinning consists of a spinneret (typically a hypodermic syringe needle) connected to a high-voltage (5 to 50 kV) direct current power supply, a syringe pump, and a grounded collector. A polymer solution, sol-gel, particulate suspension or melt is loaded into the syringe and this liquid is extruded from the needle tip at a constant rate by a syringe pump. Alternatively, the droplet at the tip of the spinneret can be replenished by feeding from a header tank providing a constant feed pressure. This constant pressure type feed works better for lower viscosity feedstocks.
Scaling-up possibilities
Alternating current electrospinning
Needleless (also known as, nozzle-free) electrospinning
Multiplying the needles
High-throughput roller electrospinning
Wire electrospinning
Bubble electrospinning
Ball electrospinning
High speed electrospinning
Plate edge electrospinning
Bowl electrospinning
Hollow tube electrospinning
Rotary cone electrospinning
Spiral coil electrospinning
Electroblowing
Other techniques
Modification of the spinneret and/or the type of solution can allow for the creation of fibers with unique structures and properties. Electrospun fibers can adopt a porous or core–shell morphology depending on the type of materials being spun as well as the evaporation rates and miscibility for the solvents involved. For techniques which involve multiple spinning fluids, the general criteria for the creation of fibers depends upon the spinnability of the outer solution. This opens up the possibility of creating composite fibers which can function as drug delivery systems or possess the ability to self-heal upon failure.
Co-axial electrospinning
A coaxial setup uses a dual-solution feed system which allows for the injection of one solution into another at the tip of the spinneret. The sheath fluid is believed to act as a carrier which draws in the inner fluid at the Taylor Cone of the electrospinning jet. If the solutions are immiscible then a core shell structure is usually observed. Miscible solutions however can result in porosity or a fiber with distinct phases due to phase separation during solidification of the fiber. For more advanced setups, a triaxial or quadaxial (tetra-axial) spinneret can be used with multiple solutions.
Emulsion electrospinning
Emulsions can be used to create core shell or composite fibers without modification of the spinneret. However, these fibers are typically more difficult to produce compared to coaxial spinning due to the greater number of variables which must be accounted for in creating the emulsion. A water phase and an immiscible solvent phase are mixed in the presence of an emulsifying agent to form the emulsion. Any agent which stabilizes the interface between the immiscible phases can be used. Surfactants such as sodium dodecyl sulfate, Triton X-100 and nanoparticles have been used successfully. During the electrospinning process the emulsion droplets within the fluid are stretched and gradually confined leading to their coalescence. If the volume fraction of inner fluid is sufficiently high, a continuous inner core can be formed.
Electrospinning of blends is a variation of this technique which uses the fact that polymers are generally immiscible with each and can phase segregate without the use of surfactants. This method can be simplified further if a solvent which dissolves both polymers is used.
Melt electrospinning
Electrospinning of polymer melts eliminates the need for volatile solvents in solution electrospinning. Semi crystalline polymer fibers such as PE, PET and PP, which would otherwise be impossible or very difficult to create using solution spinning, can be created. The setup is very similar to that employed in conventional electrospinning and includes the use of a syringe or spinneret, a high voltage supply and the collector. The polymer melt is usually produced by heating from either resistance heating, circulating fluids, air heating or lasers.
Due to the high viscosity of polymer melts, the fiber diameters are usually slightly larger than those obtained from solution electrospinning. The fiber uniformity upon achieving stable flow rates and thermal equilibrium, tends to be very good. The whipping instability which is the predominant stage in which the fiber is stretched for spinning from solutions can be absent from the process due to the low melt conductivity and high viscosity of the melt. The most significant factors which affect the fiber size tend to be the feed rate, the molecular weight of the polymer and the diameter of the spinneret. Fiber sizes ranging from ~250 nm to several hundreds of micrometers have been created thus far with the lower sizes being achieved using low molecular weight polymers.
History
In the late 16th century William Gilbert set out to describe the behavior of magnetic and electrostatic phenomena. He observed that when a suitably electrically charged piece of amber was brought near a droplet of water it would form a cone shape and small droplets would be ejected from the tip of the cone: this is the first recorded observation of electrospraying.
In 1887 C. V. Boys described “the old, but little known experiment of electrical spinning”. Boys’ apparatus consisted of “a small dish, insulated and connected with an electrical machine”. He found that as his stock liquid reached the edge of the dish, that he could draw fibers from a number of materials including shellac, beeswax, sealing-wax, gutta-percha and collodion.
The process of electrospinning was patented by J.F. Cooley in May 1900 and February 1902 and by W.J. Morton in July 1902.
In 1914 John Zeleny, published work on the behavior of fluid droplets at the end of metal capillaries. His effort began the attempt to mathematically model the behavior of fluids under electrostatic forces.
Further developments toward commercialization were made by Anton Formhals, and described in a sequence of patents from 1934 to 1944 for the fabrication of textile yarns. Electrospinning from a melt rather than a solution was patented by C.L. Norton in 1936 using an air-blast to assist fiber formation.
In 1938 Nathalie D. Rozenblum and Igor V. Petryanov-Sokolov, working in Nikolai A. Fuchs' group at the Aerosol Laboratory of the L. Ya. Karpov Institute in the USSR, generated electrospun fibers, which they developed into filter materials known as "Petryanov filters". By 1939, this work had led to the establishment of a factory in Tver' for the manufacture of electrospun smoke filter elements for gas masks. The material, dubbed BF (Battlefield Filter) was spun from cellulose acetate in a solvent mixture of dichloroethane and ethanol. By the 1960s output of spun filtration material was claimed as 20 million m2 per annum.
Between 1964 and 1969 Sir Geoffrey Ingram Taylor produced the theoretical underpinning of electrospinning. Taylor’s work contributed to electrospinning by mathematically modeling the shape of the cone formed by the fluid droplet under the effect of an electric field; this characteristic droplet shape is now known as the Taylor cone. He further worked with J. R. Melcher to develop the "leaky dielectric model" for conducting fluids.
Simon, in a 1988 NIH SBIR grant report, showed that solution electrospinning could be used to produce nano- and submicron-scale polystyrene and polycarbonate fibrous mats specifically intended for use as in vitro cell substrates. This early application of electrospun fibrous lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon the fibers in vitro. Small changes in the surface chemistry of the fibers were also observed depending upon the polarity of the electric field during spinning.
In the early 1990s several research groups (notably that of Reneker and Rutledge who popularised the name electrospinning for the process) demonstrated that many organic polymers could be electrospun into nanofibers. Between 1996 and 2003 the interest in electrospinning underwent an explosive growth, with the number of publications and patent applications approximately doubling every year.
Since 1995 there have been further theoretical developments of the driving mechanisms of the electrospinning process. Reznik et al. described the shape of the Taylor cone and the subsequent ejection of a fluid jet. Hohman et al. investigated the relative growth rates of the numerous proposed instabilities in an electrically forced jet once in flight and endeavors to describe the most important instability to the electrospinning process, the bending (whipping) instability.
Uses
The size of an electrospun fiber can be in the nanoscale and the fibers may possess nanoscale surface texture, leading to different modes of interaction with other materials compared with macroscale materials. In addition to this, the ultra-fine fibers produced by electrospinning are expected to have two main properties: a very high surface-to-volume ratio and a relatively defect-free structure at the molecular level. The first property makes electrospun material suitable for activities requiring a high degree of physical contact, such as providing sites for chemical reactions, or the capture of small-sized particulate material by physical entanglement – filtration. The second property should allow electrospun fibers to approach the theoretical maximum strength of the spun material, opening up the possibility of making high mechanical performance composite materials.
Filtration and adsorption
The use of nanofiber webs as a filtering medium is well established. Due to the small size of the fibers, London–van der Waals forces are an important method of adhesion between the fibers and the captured materials. Polymeric nanofibers have been used in air filtration applications for more than seven decades. Because of poor bulk mechanical properties of thin nanowebs, they are laid over a filtration medium substrate. The small fiber diameters cause slip flows at fiber surfaces, causing an increase in the interception and inertial impaction efficiencies of these composite filter media. The enhanced filtration efficiency at the same pressure drop is possible with fibers having diameters less than 0.5 micrometer. Since the essential properties of protective clothing are high moisture vapor transport, increased fabric breathability, and enhanced toxic chemical resistance, electrospun nanofiber membranes are good candidates for these applications.
Given the high surface-to-volume ratio of electrospun nanofibers, they can also be used as relatively efficient adsorbents compared to micron-sized fibers. One way to achieve this is by mixing the electrospinning solution with suitable additives or by using active polymers. For example, iron oxide nanoparticles, a good arsenic adsorbent, can be trapped within poly(vinyl alcohol) electrospun nanofibers for water remediation.
Textile manufacturing
The majority of early patents for electrospinning were for textile applications, however little woven fabric was actually produced, perhaps due to difficulties in handling the barely visible fibers. However, electrospinning has the potential to produce seamless non-woven garments by integrating advanced manufacturing with fiber electrospinning. This would introduce multi-functionality (flame, chemical, environmental protection) by blending fibers into electrospinlaced (using electrospinning to combine different fibers and coatings to form three-dimensional shapes, such as clothing) layers in combination with polymer coatings.
Medical
Electrospinning can also be used for medical purposes. The electrospun scaffolds made for tissue engineering applications can be penetrated with cells to treat or replace biological targets. Nanofibrous wound dressings have excellent capability to isolate the wound from microbial infections. Other medical textile materials such as sutures are also attainable via electrospinning. Through the addition of a drug substance into the electrospinning solution or melt diverse fibrous drug delivery systems (e.g., implants, transdermal patches, oral forms) can be prepared.
Electrospun propolis nanofibrous membrane showed an antiviral effect against the SARS-CoV-2 virus, and an antibacterial effect against Staphylococcus aureus and Salmonella enterica bacteria.
Interestingly, electrospinning allows the fabrication of nanofibers with advanced architectures that can be used to promote the delivery of multiple drugs at the same time and with different kinetics.
Cosmetic
Electrospun nanomaterials have been employed to control the delivery of cosmetic agents so they can work within the skin to improve its appearance. Electrospinning is an alternative to traditional nanoemulsions and nanoliposomes.
Pharmaceutical manufacturing
The continuous manner and the effective drying effect enable the integration of electrospinning into continuous pharmaceutical manufacturing systems. The synthesized liquid drug can be quickly turned into an electrospun solid product processable for tableting and other dosage forms.
Composites
Ultra-fine electrospun fibers show clear potential for the manufacture of long fiber composite materials.
Application is limited by the difficulty of producing fiber in quantities sufficient for substantial large-scale articles in a reasonable time. For this reason, medical applications, which require relatively small amounts of fiber, are a popular area of application for electrospun fiber-reinforced materials.
Electrospinning is being investigated as a source of cost-effective, easy-to-manufacture wound dressings, medical implants, and scaffolds for the production of artificial human tissues. These scaffolds fulfill a purpose similar to that of the extracellular matrix in natural tissue. Biodegradable polymers, such as polycaprolactone and polysaccharides, are typically used for this purpose. These fibers may then be coated with collagen to promote cell attachment, although collagen has successfully been spun directly into membranes.
Transmission electron micrograph of electrospun poly(vinyl alcohol) nanofibers loaded with iron oxide nanoparticles. These nanoparticles can be used for the adsorption of water contaminants.
Catalysts
Electrospun fibers may have potential as surfaces on which to immobilize enzymes. These enzymes could be used to break down toxic chemicals in the environment, among other applications.
Mass production
Thus far, at least eight countries have companies that provide industrial-level and laboratory-scale electrospinning machines: three companies each in Italy and the Czech Republic, two each in Iran, Japan, and Spain, and one each in the Netherlands, New Zealand, and Turkey.
References
Further reading
External links
Polish Academy of Science's page on electrospinning
How to describe the electrospinning process
Hackaday, "OpenESpin Building an Electrospinning machine for everyone."
Nanofiberlabs,"Electrospinning of Nanofibers and Nanofiber Morphology"
Electrospinning
Industrial processes
Nanotechnology
Spinning | Electrospinning | [
"Materials_science",
"Engineering"
] | 3,514 | [
"Nanotechnology",
"Materials science"
] |
1,353,128 | https://en.wikipedia.org/wiki/Microprobe | A microprobe is an instrument that applies a stable and well-focused beam of charged particles (electrons or ions) to a sample.
Types
When the primary beam consists of accelerated electrons, the probe is termed an electron microprobe; when the primary beam consists of accelerated ions, the term ion microprobe is used. The term microprobe may also be applied to optical analytical techniques when the instrument is set up to analyse micro samples or micro areas of larger specimens. Such techniques include micro Raman spectroscopy, micro infrared spectroscopy and micro LIBS. All of these techniques involve modified optical microscopes to locate the area to be analysed, direct the probe beam and collect the analytical signal.
A laser microprobe is a mass spectrometer that uses ionization by a pulsed laser and subsequent mass analysis of the generated ions.
Uses
Scientists use this beam of charged particles to determine the elemental composition of solid materials (minerals, glasses, metals). The chemical composition of the target can be found from the elemental data extracted through emitted X-rays (in the case where the primary beam consists of charged electrons) or measurement of an emitted secondary beam of material sputtered from the target (in the case where the primary beam consists of charged ions).
When the ion energy is in the range of a few tens of keV (kilo-electronvolts), these microprobes are usually called focused ion beams (FIB). A FIB instrument turns a small portion of the material into a plasma; the analysis is done by the same basic techniques as those used in mass spectrometry.
When the ion energy is higher, hundreds of keV to a few MeV (mega-electronvolt) they are called nuclear microprobes. Nuclear microprobes are extremely powerful tools that utilize ion beam analysis techniques as microscopies with spot sizes in the micro-/nanometre range. These instruments are applied to solve scientific problems in a diverse range of fields, from microelectronics to biomedicine. In addition to the development of new ways to exploit these probes as analytical tools (this application area of the nuclear microprobes is called nuclear microscopy), strong progress has been made in the area of materials modification recently (most of which can be described as PBW, proton beam writing).
The nuclear microprobe's beam is usually composed of protons and alpha particles. Some of the most advanced nuclear microprobes have beam energies in excess of 2 MeV. This gives the device very high sensitivity to minute concentrations of elements, around 1 ppm, at beam sizes smaller than 1 micrometer. This elemental sensitivity exists because, when the beam interacts with a sample, the sample gives off characteristic X-rays of each element present. This type of detection is called PIXE (particle-induced X-ray emission). Other analysis techniques applied in nuclear microscopy include Rutherford backscattering (RBS), STIM, and others.
Another use for microprobes is the production of micro- and nano-sized devices, as in microelectromechanical systems and nanoelectromechanical systems. The advantage that microprobes have over other lithography processes is that a microprobe beam can be scanned or directed over any area of the sample. This scanning of the microprobe beam is akin to using a very fine-tipped pencil to draw a design on paper or in a drawing program. Traditional lithography processes use photons, which cannot be scanned, so masks are needed to selectively expose the sample to radiation. It is the radiation that causes changes in the sample, which in turn allows scientists and engineers to develop tiny devices such as microprocessors and accelerometers (like those in most car safety systems).
References
Microscopes
Measuring instruments
Spectroscopy
Microtechnology | Microprobe | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 788 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Microtechnology",
"Instrumental analysis",
"Materials science",
"Measuring instruments",
"Microscopes",
"Microscopy",
"Spectroscopy"
] |
1,984,110 | https://en.wikipedia.org/wiki/Gigantothermy | Gigantothermy (sometimes called ectothermic homeothermy or inertial homeothermy) is a phenomenon with significance in biology and paleontology, whereby large, bulky ectothermic animals are more easily able to maintain a constant, relatively high body temperature than smaller animals by virtue of their smaller surface-area-to-volume ratio. A bigger animal has proportionately less of its body close to the outside environment than a smaller animal of otherwise similar shape, and so it gains heat from, or loses heat to, the environment much more slowly.
The phenomenon is important in the biology of ectothermic megafauna, such as large turtles, and aquatic reptiles like ichthyosaurs and mosasaurs. Gigantotherms, though almost always ectothermic, generally have a body temperature similar to that of endotherms. It has been suggested that the larger dinosaurs would have been gigantothermic, rendering them virtually homeothermic.
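A crude way to see the surface-area-to-volume argument quantitatively is to model an animal as a water-like sphere and compare how the area-to-volume ratio and a simple Newtonian-cooling time constant change with body size. The Python sketch below only illustrates the scaling; the tissue and heat-transfer values are assumptions, not measured data.

```python
import math

RHO = 1000.0   # kg/m^3, assumed body density (close to water)
C_P = 3500.0   # J/(kg K), assumed specific heat of tissue
H = 10.0       # W/(m^2 K), assumed surface heat-transfer coefficient

def cooling_time_constant_hours(radius_m: float) -> float:
    """Newtonian cooling time constant rho*c_p*V/(h*A) for a sphere, in hours."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    area = 4.0 * math.pi * radius_m ** 2
    return RHO * C_P * volume / (H * area) / 3600.0   # simplifies to rho*c_p*r/(3*h)

for radius_m, label in ((0.05, "small lizard"), (0.5, "large turtle"), (2.0, "very large reptile")):
    print(f"{label:>18}: surface/volume = {3.0 / radius_m:6.1f} 1/m, "
          f"cooling time constant ~ {cooling_time_constant_hours(radius_m):6.1f} h")
```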
Disadvantages
Gigantothermy allows animals to maintain body temperature, but it is most likely detrimental to endurance and muscle power compared with endotherms, due to decreased anaerobic efficiency. Mammals' bodies have roughly four times as much surface area occupied by mitochondria as reptiles, creating larger energy demands and consequently producing more heat to use in thermoregulation. An ectotherm the same size as an endotherm would not be able to remain as active as the endotherm, as heat is modulated behaviorally rather than biochemically. More time is dedicated to basking than eating.
Advantages
Large ectotherms with the same body size as large endotherms have the advantage of a slow metabolic rate, meaning that it takes reptiles longer to digest their food. Consequently, gigantothermic ectotherms would not have to eat as often as large endotherms, which need a constant influx of food to meet their energy demands. Although lions are much smaller than crocodiles, lions must eat more often than crocodiles because of the higher metabolic output needed to maintain the lion's heat and energy. The crocodile needs only to lie in the sun to digest more quickly and synthesize ATP.
See also
Allen's rule
Bergmann's rule
Bradyaerobic
Bradymetabolism
Physiology of dinosaurs
Tachyaerobic
References
External links
Gigantothermy at Davidson
Animal size
Thermoregulation | Gigantothermy | [
"Biology"
] | 507 | [
"Thermoregulation",
"Animal size",
"Organism size",
"Homeostasis"
] |
1,984,187 | https://en.wikipedia.org/wiki/Biomineralization | Biomineralization, also written biomineralisation, is the process by which living organisms produce minerals, often resulting in hardened or stiffened mineralized tissues. It is an extremely widespread phenomenon: all six taxonomic kingdoms contain members that are able to form minerals, and over 60 different minerals have been identified in organisms. Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. These minerals often form structural features such as sea shells and the bone in mammals and birds.
Organisms have been producing mineralized skeletons for the past 550 million years. Calcium carbonates and calcium phosphates are usually crystalline, but the silica of organisms (such as sponges and diatoms) is always non-crystalline. Other examples include copper, iron, and gold deposits involving bacteria. Biologically formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity-sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin).
In terms of taxonomic distribution, the most common biominerals are the phosphate and carbonate salts of calcium that are used in conjunction with organic polymers such as collagen and chitin to give structural support to bones and shells. The structures of these biocomposite materials are highly controlled from the nanometer to the macroscopic level, resulting in complex architectures that provide multifunctional properties. Because this range of control over mineral growth is desirable for materials engineering applications, there is interest in understanding and elucidating the mechanisms of biologically-controlled biomineralization.
Types
Mineralization can be subdivided into different categories depending on the following: the organisms or processes that create chemical conditions necessary for mineral formation, the origin of the substrate at the site of mineral precipitation, and the degree of control that the substrate has on crystal morphology, composition, and growth. These subcategories include biomineralization, organomineralization, and inorganic mineralization, which can be subdivided further. However, the usage of these terms varies widely in the scientific literature because there are no standardized definitions. The following definitions are based largely on a paper written by Dupraz et al. (2009), which provided a framework for differentiating these terms.
Biomineralization
Biomineralization, biologically controlled mineralization, occurs when crystal morphology, growth, composition, and location are completely controlled by the cellular processes of a specific organism. Examples include the shells of invertebrates, such as molluscs and brachiopods. Additionally, the mineralization of collagen provides crucial compressive strength for the bones, cartilage, and teeth of vertebrates.
Organomineralization
This type of mineralization includes both biologically induced mineralization and biologically influenced mineralization.
Biologically induced mineralization occurs when the metabolic activity of microbes (e.g. bacteria) produces chemical conditions favorable for mineral formation. The substrate for mineral growth is the organic matrix, secreted by the microbial community, and affects crystal morphology and composition. Examples of this type of mineralization include calcareous or siliceous stromatolites and other microbial mats. A more specific type of biologically induced mineralization, remote calcification or remote mineralization, takes place when calcifying microbes occupy a shell-secreting organism and alter the chemical environment surrounding the area of shell formation. The result is mineral formation not strongly controlled by the cellular processes of the animal host (i.e., remote mineralization); this may lead to unusual crystal morphologies.
Biologically influenced mineralization takes place when chemical conditions surrounding the site of mineral formation are influenced by abiotic processes (e.g., evaporation or degassing). However, the organic matrix (secreted by microorganisms) is responsible for crystal morphology and composition. Examples include micro- to nanometer-scale crystals of various morphologies.
Biological mineralization can also take place as a result of fossilization. See also calcification.
Biological roles
Among animals, biominerals composed of calcium carbonate, calcium phosphate, or silica perform a variety of roles such as support, defense, and feeding.
If present on a supracellular scale, biominerals are usually deposited by a dedicated organ, which is often defined very early in embryological development. This organ will contain an organic matrix that facilitates and directs the deposition of crystals. The matrix may be collagen, as in deuterostomes, or based on chitin or other polysaccharides, as in molluscs.
In molluscs
The mollusc shell is a biogenic composite material that has been the subject of much interest in materials science because of its unusual properties and its model character for biomineralization. Molluscan shells consist of 95–99% calcium carbonate by weight, while an organic component makes up the remaining 1–5%. The resulting composite has a fracture toughness ≈3000 times greater than that of the crystals themselves. In the biomineralization of the mollusc shell, specialized proteins are responsible for directing crystal nucleation, phase, morphology, and growth dynamics, and ultimately give the shell its remarkable mechanical strength. The application of biomimetic principles elucidated from mollusc shell assembly and structure may help in fabricating new composite materials with enhanced optical, electronic, or structural properties.
The most described arrangement in mollusc shells is the nacre, known in large shells such as Pinna or the pearl oyster (Pinctada). Not only does the structure of the layers differ, but so do their mineralogy and chemical composition. Both contain organic components (proteins, sugars, and lipids), and the organic components are characteristic of the layer and of the species. The structures and arrangements of mollusc shells are diverse, but they share some features: the main part of the shell is crystalline calcium carbonate (aragonite, calcite), though some amorphous calcium carbonate occurs as well; and although they react as crystals, they never show angles and facets.
In fungi
Fungi are a diverse group of organisms that belong to the eukaryotic domain. Studies of their significant roles in geological processes, "geomycology", have shown that fungi are involved with biomineralization, biodegradation, and metal-fungal interactions.
In studying fungi's roles in biomineralization, it has been found that fungi deposit minerals with the help of an organic matrix, such as a protein, that provides a nucleation site for the growth of biominerals. Fungal growth may produce a copper-containing mineral precipitate, such as copper carbonate produced from a mixture of (NH4)2CO3 and CuCl2. The copper carbonate is produced in the presence of proteins made and secreted by the fungi. These extracellular fungal proteins influence the size and morphology of the carbonate minerals precipitated by the fungi.
In addition to precipitating carbonate minerals, fungi can also precipitate uranium-containing phosphate biominerals in the presence of organic phosphorus, which acts as a substrate for the process. The fungi produce a hyphal matrix, also known as a mycelium, that localizes and accumulates the precipitated uranium minerals. Although uranium is often deemed toxic to living organisms, certain fungi such as Aspergillus niger and Paecilomyces javanicus can tolerate it.
Though minerals can be produced by fungi, they can also be degraded, mainly by oxalic acid–producing strains of fungi. Oxalic acid production is increased in the presence of glucose for three organic acid–producing fungi: Aspergillus niger, Serpula himantioides, and Trametes versicolor. These fungi have been found to corrode apatite and galena minerals. Degradation of minerals by fungi is carried out through a process known as neogenesis. The order from most to least oxalic acid secreted by the fungi studied is Aspergillus niger, followed by Serpula himantioides, and finally Trametes versicolor.
In bacteria
It is less clear what purpose biominerals serve in bacteria. One hypothesis is that cells create them to avoid entombment by their own metabolic byproducts. Iron oxide particles may also enhance their metabolism.
Other roles
Biomineralization plays significant global roles terraforming the planet, as well as in biogeochemical cycles and as a carbon sink.
Composition
Most biominerals can be grouped by chemical composition into one of three distinct mineral classes: silicates, carbonates, or phosphates.
Silicates
Silicates (glass) are common in marine biominerals, where diatoms form frustules and radiolaria form capsules from hydrated amorphous silica (opal).
Carbonates
The major carbonate in biominerals is CaCO3. The most common polymorphs in biomineralization are calcite (e.g. foraminifera, coccolithophores) and aragonite (e.g. corals), although metastable vaterite and amorphous calcium carbonate can also be important, either structurally or as intermediate phases in biomineralization. Some biominerals include a mixture of these phases in distinct, organised structural components (e.g. bivalve shells). Carbonates are particularly prevalent in marine environments, but also present in freshwater and terrestrial organisms.
Phosphates
The most common biogenic phosphate is hydroxyapatite (HA), a calcium phosphate (Ca10(PO4)6(OH)2) and a naturally occurring form of apatite. It is a primary constituent of bone, teeth, and fish scales. Bone is made primarily of HA crystals interspersed in a collagen matrix—65 to 70% of the mass of bone is HA. Similarly, HA is 70 to 80% of the mass of dentin and enamel in teeth. In enamel, the matrix for HA is formed by amelogenins and enamelins instead of collagen. Remineralisation of tooth enamel involves the reintroduction of mineral ions into demineralised enamel. Hydroxyapatite is the main mineral component of enamel in teeth. During demineralisation, calcium and phosphorus ions are drawn out from the hydroxyapatite. The mineral ions introduced during remineralisation restore the structure of the hydroxyapatite crystals.
The clubbing appendages of the peacock mantis shrimp are made of an extremely dense form of the mineral which has a higher specific strength; this has led to its investigation for potential synthesis and engineering use. Their dactyl appendages have excellent impact resistance due to the impact region being composed of mainly crystalline hydroxyapatite, which offers significant hardness. A periodic layer underneath the impact layer composed of hydroxyapatite with lower calcium and phosphorus content (thus resulting in a much lower modulus) inhibits crack growth by forcing new cracks to change directions. This periodic layer also reduces the energy transferred across both layers due to the large difference in modulus, even reflecting some of the incident energy.
Other minerals
Beyond these main three categories, there are a number of less common types of biominerals, usually resulting from a need for specific physical properties or the organism inhabiting an unusual environment. For example, teeth that are primarily used for scraping hard substrates may be reinforced with particularly tough minerals, such as the iron minerals magnetite in chitons or goethite in limpets. Gastropod molluscs living close to hydrothermal vents reinforce their carbonate shells with the iron-sulphur minerals pyrite and greigite. Magnetotactic bacteria also employ magnetic iron minerals magnetite and greigite to produce magnetosomes to aid orientation and distribution in the sediments.
Celestine, the heaviest mineral in the ocean, consists of strontium sulfate, SrSO4. The mineral is named for the delicate blue colour of its crystals. Planktic acantharean radiolarians form celestine crystal shells. The denseness of the celestite ensures their shells function as mineral ballast, resulting in fast sedimentation to bathypelagic depths. High settling fluxes of acantharian cysts have been observed at times in the Iceland Basin and the Southern Ocean, as much as half of the total gravitational organic carbon flux.
Diversity
In nature, there is a wide array of biominerals, ranging from iron oxide to strontium sulfate, with calcareous biominerals being particularly notable. However, the most taxonomically widespread biomineral is silica (SiO2·nH2O), being present in all eukaryotic supergroups. Notwithstanding, the degree of silicification can vary even between closely related taxa, from being found in composite structures with other biominerals (e.g., limpet teeth) to forming minor structures (e.g., ciliate granules) or being a major structural constituent of the organism. The most extreme degree of silicification is evident in the diatoms, where almost all species have an obligate requirement for silicon to complete cell wall formation and cell division. Biogeochemically and ecologically, diatoms are the most important silicifiers in modern marine ecosystems, with radiolarians (polycystine and phaeodarian rhizarians), silicoflagellates (dictyochophyte and chrysophyte stramenopiles), and sponges with prominent roles as well. In contrast, the major silicifiers in terrestrial ecosystems are the land plants (embryophytes), with other silicifying groups (e.g., testate amoebae) having a minor role.
Broadly, biomineralized structures evolve and diversify when the energetic cost of biomineral production is less than the expense of producing an equivalent organic structure. The energetic costs of forming a silica structure from silicic acid are much less than forming the same volume from an organic structure (≈20-fold less than lignin or 10-fold less than polysaccharides like cellulose). Based on a structural model of biogenic silica, Lobel et al. (1996) identified by biochemical modeling a low-energy reaction pathway for nucleation and growth of silica. The combination of organic and inorganic components within biomineralized structures often results in enhanced properties compared to exclusively organic or inorganic materials. With respect to biogenic silica, this can result in the production of much stronger structures, such as siliceous diatom frustules having the highest strength per unit density of any known biological material, or sponge spicules being many times more flexible than an equivalent structure made of pure silica. As a result, biogenic silica structures are used for support, feeding, predation defense and environmental protection as a component of cyst walls. Biogenic silica also has useful optical properties for light transmission and modulation in organisms as diverse as plants, diatoms, sponges, and molluscs. There is also evidence that silicification is used as a detoxification response in snails and plants; biosilica has even been suggested to play a role as a pH buffer for the enzymatic activity of carbonic anhydrase, aiding the acquisition of inorganic carbon for photosynthesis.
There are questions which have yet to be resolved, such as why some organisms biomineralize while others do not, and why there is such a diversity of biominerals besides silica when silicon is so abundant, comprising 28% of the Earth's crust. The answer to these questions lies in the evolutionary interplay between biomineralization and geochemistry, and in the competitive interactions that have arisen from these dynamics. Fundamentally, whether an organism produces silica or not involves evolutionary trade-offs and competition between silicifiers themselves and non-silicifying organisms (both those which use other biominerals and non-mineralizing groups). Mathematical models and controlled experiments of resource competition in phytoplankton have demonstrated the rise to dominance of different algal species based on nutrient backgrounds in defined media. These have been part of fundamental studies in ecology. However, the vast diversity of organisms that thrive in a complex array of biotic and abiotic interactions in oceanic ecosystems is a challenge to such minimal models and experimental designs, whose parameterization and possible combinations, respectively, limit the interpretations that can be built on them.
Evolution
The first evidence of biomineralization dates to some , and sponge-grade organisms may have formed calcite skeletons . But in most lineages, biomineralization first occurred in the Cambrian or Ordovician periods. Organisms used whichever form of calcium carbonate was more stable in the water column at the point in time when they became biomineralized, and stuck with that form for the remainder of their biological history (but see for a more detailed analysis). The stability is dependent on the Ca/Mg ratio of seawater, which is thought to be controlled primarily by the rate of sea floor spreading, although atmospheric levels may also play a role.
Biomineralization evolved multiple times, independently, and most animal lineages first expressed biomineralized components in the Cambrian period. Many of the same processes are used in unrelated lineages, which suggests that biomineralization machinery was assembled from pre-existing "off-the-shelf" components already used for other purposes in the organism. Although the biomachinery facilitating biomineralization is complex – involving signalling transmitters, inhibitors, and transcription factors – many elements of this 'toolkit' are shared between phyla as diverse as corals, molluscs, and vertebrates. The shared components tend to perform quite fundamental tasks, such as designating that cells will be used to create the minerals, whereas genes controlling more finely tuned aspects that occur later in the biomineralization process, such as the precise alignment and structure of the crystals produced, tend to be uniquely evolved in different lineages. This suggests that Precambrian organisms were employing the same elements, albeit for a different purpose – perhaps to avoid the inadvertent precipitation of calcium carbonate from the supersaturated Proterozoic oceans. Forms of mucus that are involved in inducing mineralization in most animal lineages appear to have performed such an anticalcifatory function in the ancestral state. Further, certain proteins that would originally have been involved in maintaining calcium concentrations within cells are homologous in all animals, and appear to have been co-opted into biomineralization after the divergence of the animal lineages. The galaxins are one probable example of a gene being co-opted from a different ancestral purpose into controlling biomineralization, in this case, being 'switched' to this purpose in the Triassic scleractinian corals; the role performed appears to be functionally identical to that of the unrelated pearlin gene in molluscs. Carbonic anhydrase serves a role in mineralization broadly in the animal kingdom, including in sponges, implying an ancestral role. Far from being a rare trait that evolved a few times and remained stagnant, biomineralization pathways in fact evolved many times and are still evolving rapidly today; even within a single genus, it is possible to detect great variation within a single gene family.
The homology of biomineralization pathways is underlined by a remarkable experiment whereby the nacreous layer of a molluscan shell was implanted into a human tooth, and rather than experiencing an immune response, the molluscan nacre was incorporated into the host bone matrix. This points to the exaptation of an original biomineralization pathway. The biomineralisation capacity of brachiopods and molluscs has also been demonstrated to be homologous, building on a conserved set of genes. This indicates that biomineralisation is likely ancestral to all lophotrochozoans.
The most ancient example of biomineralization, dating back 2 billion years, is the deposition of magnetite, which is observed in some bacteria, as well as the teeth of chitons and the brains of vertebrates; it is possible that this pathway, which performed a magnetosensory role in the common ancestor of all bilaterians, was duplicated and modified in the Cambrian to form the basis for calcium-based biomineralization pathways. Iron is stored in close proximity to magnetite-coated chiton teeth, so that the teeth can be renewed as they wear. Not only is there a marked similarity between the magnetite deposition process and enamel deposition in vertebrates, but some vertebrates even have comparable iron storage facilities near their teeth.
Potential applications
Most traditional approaches to the synthesis of nanoscale materials are energy inefficient, requiring stringent conditions (e.g., high temperature, pressure, or pH), and often produce toxic byproducts. Furthermore, the quantities produced are small, and the resultant material is usually irreproducible because of the difficulties in controlling agglomeration. In contrast, materials produced by organisms have properties that usually surpass those of analogous synthetically manufactured materials with similar phase composition. Biological materials are assembled in aqueous environments under mild conditions by using macromolecules. Organic macromolecules collect and transport raw materials and assemble these substrates into short- and long-range ordered composites with consistency and uniformity.
The aim of biomimetics is to mimic the natural way of producing minerals such as apatites. Many man-made crystals require elevated temperatures and strong chemical solutions, whereas the organisms have long been able to lay down elaborate mineral structures at ambient temperatures. Often, the mineral phases are not pure but are made as composites that entail an organic part, often protein, which takes part in and controls the biomineralization. These composites are often not only as hard as the pure mineral but also tougher, as the micro-environment controls biomineralization.
Architecture
One biological system that might be of key importance in the future development of architecture is bacterial biofilm. The term biofilm refers to complex heterogeneous structures comprising different populations of microorganisms that attach and form a community on inert (e.g. rocks, glass, plastic) or organic (e.g. skin, cuticle, mucosa) surfaces.
The properties of the surface, such as charge, hydrophobicity and roughness, determine initial bacterial attachment. A common principle of all biofilms is the production of extracellular matrix (ECM) composed of different organic substances, such as extracellular proteins, exopolysaccharides and nucleic acids. While the ability to generate ECM appears to be a common feature of multicellular bacterial communities, the means by which these matrices are constructed and function are diverse.
Bacterially induced calcium carbonate precipitation can be used to produce "self-healing" concrete. Bacillus megaterium spores and suitable dried nutrients are mixed and applied to steel-reinforced concrete. When the concrete cracks, water ingress dissolves the nutrients and the bacteria germinate triggering calcium carbonate precipitation, resealing the crack and protecting the steel reinforcement from corrosion. This process can also be used to manufacture new hard materials, such as bio-cement.
However, the full potential of bacteria-driven biomineralization is yet to be realized, as it is currently used as a passive filling rather than as a smart designable material. A future challenge is to develop ways to control the timing and the location of mineral formation, as well as the physical properties of the mineral itself, by environmental input. Bacillus subtilis has already been shown to respond to its environment, by changing the production of its ECM. It uses the polymers produced by single cells during biofilm formation as a physical cue to coordinate ECM production by the bacterial community.
Uranium contaminants
Biomineralization may be used to remediate groundwater contaminated with uranium. The biomineralization of uranium primarily involves the precipitation of uranium phosphate minerals associated with the release of phosphate by microorganisms. Negatively charged ligands at the surface of the cells attract the positively charged uranyl ion (UO22+). If the concentrations of phosphate and UO22+ are sufficiently high, minerals such as autunite (Ca(UO2)2(PO4)2•10-12H2O) or polycrystalline HUO2PO4 may form thus reducing the mobility of UO22+. Compared to the direct addition of inorganic phosphate to contaminated groundwater, biomineralization has the advantage that the ligands produced by microbes will target uranium compounds more specifically rather than react actively with all aqueous metals. Stimulating bacterial phosphatase activity to liberate phosphate under controlled conditions limits the rate of bacterial hydrolysis of organophosphate and the release of phosphate to the system, thus avoiding clogging of the injection location with metal phosphate minerals. The high concentration of ligands near the cell surface also provides nucleation foci for precipitation, which leads to higher efficiency than chemical precipitation.
Biogenic mineral controversy
The geological definition of mineral normally excludes compounds that occur only in living beings. However, some minerals are often biogenic (such as calcite) or are organic compounds in the sense of chemistry (such as mellite). Moreover, living beings often synthesize inorganic minerals (such as hydroxylapatite) that also occur in rocks.
The International Mineralogical Association (IMA) is the generally recognized standard body for the definition and nomenclature of mineral species. , the IMA recognizes 5,650 official mineral species out of 5,862 proposed or traditional ones.
The IMA's decision to exclude biogenic crystalline substances is a topic of contention among geologists and mineralogists. For example, Lowenstam (1981) stated that "organisms are capable of forming a diverse array of minerals, some of which cannot be formed inorganically in the biosphere."
Skinner (2005) views all solids as potential minerals and includes biominerals in the mineral kingdom, which are created by organisms' metabolic activities. Skinner expanded the previous definition of a mineral to classify "element or compound, amorphous or crystalline, formed through biogeochemical processes," as a mineral.
Recent advances in high-resolution genetics and X-ray absorption spectroscopy are providing revelations on the biogeochemical relations between microorganisms and minerals that may shed new light on this question. For example, the IMA-commissioned "Working Group on Environmental Mineralogy and Geochemistry " deals with minerals in the hydrosphere, atmosphere, and biosphere. The group's scope includes mineral-forming microorganisms, which exist on nearly every rock, soil, and particle surface spanning the globe to depths of at least 1,600 metres below the sea floor and 70 kilometres into the stratosphere (possibly entering the mesosphere).
Biogeochemical cycles have contributed to the formation of minerals for billions of years. Microorganisms can precipitate metals from solution, contributing to the formation of ore deposits. They can also catalyze the dissolution of minerals.
Before the International Mineralogical Association's listing, over 60 biominerals had been discovered, named, and published. These minerals (a sub-set tabulated in Lowenstam (1981)) are considered minerals proper according to Skinner's (2005) definition. These biominerals are not listed in the International Mineral Association official list of mineral names, however, many of these biomineral representatives are distributed amongst the 78 mineral classes listed in the Dana classification scheme.
Skinner's (2005) definition of a mineral considers this matter by stating that a mineral can be crystalline or amorphous. Although biominerals are not the most common form of minerals, they help to define the limits of what properly constitutes a mineral. Nickel's (1995) formal definition explicitly mentioned crystallinity as a key to defining a substance as a mineral. A 2011 article defined icosahedrite, an aluminium-iron-copper alloy, as a mineral; named for its unique natural icosahedral symmetry, it is a quasicrystal. Unlike a true crystal, quasicrystals are ordered but not periodic.
List of minerals
Examples of biogenic minerals include:
Apatite in bones and teeth.
Aragonite, calcite, fluorite in vestibular systems (part of the inner ear) of vertebrates.
Aragonite and calcite in travertine and biogenic silica (siliceous sinter, opal) deposited through algal action.
Goethite found as filaments in limpet teeth.
Hydroxyapatite formed by mitochondria.
Magnetite and greigite formed by magnetotactic bacteria.
Oxalate and calcium carbonate raphides, silica bodies, strontium and barium sulfate in some plants
Pyrite and marcasite in sedimentary rocks deposited by sulfate-reducing bacteria.
Quartz formed from bacterial action on fossil fuels (gas, oil, coal).
Astrobiology
Biominerals could be important indicators of extraterrestrial life and thus could play an essential role in the search for past or present life on Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.
On 24 January 2014, NASA reported that current studies by the Curiosity and Opportunity rovers on the planet Mars will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective.
See also
Biocrystallization
Biofilm
Biointerface
Biomineralising polychaetes
Bone mineral
Microbiologically induced calcite precipitation
Mineralized tissues
Phytolith
Raphide and Druse (botany)
Susannah M. Porter history of biomineralization
Notes
References
Further reading
External links
'Data and literature on modern and fossil Biominerals': http://biomineralisation.blogspot.fr
An overview of the bacteria involved in biomineralization from the Science Creative Quarterly
Biomineralization web-book: bio-mineral.org
Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014).
Special German Research Project About the Principles of Biomineralization
Pedology
Physiology
Bioinorganic chemistry
Biological processes
Skeletal system | Biomineralization | [
"Chemistry",
"Biology"
] | 6,469 | [
"Biomineralization",
"Physiology",
"nan",
"Biochemistry",
"Bioinorganic chemistry"
] |
1,984,825 | https://en.wikipedia.org/wiki/Equivalent%20annual%20cost | In finance, the equivalent annual cost (EAC) is the cost per year of owning and operating an asset over its entire lifespan. It is calculated by dividing the negative NPV of a project by the "present value of annuity factor":
, where
where r is the annual interest rate and
t is the number of years.
Alternatively, EAC can be obtained by multiplying the NPV of the project by the "loan repayment factor".
EAC is often used as a decision-making tool in capital budgeting when comparing investment projects of unequal lifespans. However, the projects being compared must have equal risk: otherwise, EAC must not be used.
The technique was first discussed in 1923 in engineering literature, and, as a consequence, EAC appears to be a favoured technique employed by engineers, while accountants tend to prefer net present value (NPV) analysis. Such preference has been described as being a matter of professional education, as opposed to an assessment of the actual merits of either method. In the latter group, however, the Society of Management Accountants of Canada endorses EAC, having discussed it as early as 1959 in a published monograph (which was a year before the first mention of NPV in accounting textbooks).
Application
EAC can be used in the following scenarios:
Assessing alternative projects of unequal lives (where only the costs are relevant) in order to address any built-in bias favouring the longer-term investment.
Determining the optimum economic life of an asset, through charting the change in EAC that may occur due to the fluctuation of operating costs and salvage values over time.
Assessing whether leasing an asset would be more economical than purchasing it.
Assessing whether increased maintenance costs will economically change the useful life of an asset.
Calculating how much should be invested in an asset in order to achieve a desired result (i.e., purchasing a storage tank with a 20-year life, as opposed to one with a 5-year life, in order to achieve a similar EAC).
Comparing to estimated annual cost savings, in order to determine whether it makes economic sense to invest.
Estimating the cost savings required to justify the purchase of new equipment.
Determining the cost of continuing with existing equipment.
Where an asset undergoes a major overhaul, and the cost is not fully reflected in salvage values, to calculate the optimum life (i.e., lowest EAC) of holding on to the asset.
A practical example
A manager must decide on which machine to purchase, assuming an annual interest rate of 5%:
The conclusion is to invest in machine B since it has a lower EAC.
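The comparison table for the two machines is not reproduced here, so the figures in the following Python sketch (purchase prices, annual running costs, and lifetimes) are hypothetical stand-ins chosen only to show how the EAC calculation works at a 5% interest rate; with these assumed numbers, machine B again comes out with the lower EAC.

```python
def annuity_factor(rate: float, years: int) -> float:
    """Present value of an annuity of 1 per year for `years` years at `rate`."""
    return (1.0 - (1.0 + rate) ** -years) / rate

def equivalent_annual_cost(purchase: float, annual_cost: float, years: int, rate: float) -> float:
    """EAC: purchase price plus PV of running costs, spread evenly over the asset's life."""
    factor = annuity_factor(rate, years)
    npv_of_costs = purchase + annual_cost * factor
    return npv_of_costs / factor

RATE = 0.05  # annual interest rate from the example

machines = {  # all figures below are illustrative assumptions, not the article's data
    "Machine A": dict(purchase=50_000, annual_cost=13_000, years=3),
    "Machine B": dict(purchase=150_000, annual_cost=7_500, years=8),
}

for name, m in machines.items():
    eac = equivalent_annual_cost(m["purchase"], m["annual_cost"], m["years"], RATE)
    print(f"{name}: EAC ~ {eac:,.0f} per year")
```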
Canadian context with capital cost allowance
Such analysis can also be carried out on an after-tax basis, and extensive work has been undertaken in Canada for investment appraisal of assets subject to its capital cost allowance regime for computing depreciation for income tax purposes. It is subject to a three-part calculation:
Determination of the after-tax NPV of the investment
Calculation of the after-tax NPV of the operating cost stream
Applying a sinking fund amortization factor to the after-tax amount of any salvage value.
In mathematical notation, for assets subject to the general half-year rule of CCA calculation, this is expressed as:
where:
= Capital recovery (amortization) factor
= Sinking fund amortization factor
= Investment
= Estimated salvage value
= Operating expense stream
= CCA rate per year for tax purposes
= rate of taxation
= number of years
= cost of capital, rate of interest, or minimum rate of return (whichever is most relevant)
and where
See also
Capital budgeting
Depreciation
Net present value
References
Further reading
External links
Equivalent Annual Cost - EAC – Calculator
Management accounting
Corporate finance
Capital budgeting
Engineering economics | Equivalent annual cost | [
"Engineering"
] | 770 | [
"Engineering economics"
] |
1,984,868 | https://en.wikipedia.org/wiki/Adiabatic%20shear%20band | In physics, mechanics and engineering, an adiabatic shear band is one of the many mechanisms of failure that occur in metals and other materials that are deformed at a high rate in processes such as metal forming, machining and ballistic impact. Adiabatic shear bands are usually very narrow bands, typically 5-500 μm wide and consisting of highly sheared material. Adiabatic is a thermodynamic term meaning an absence of heat transfer – the heat produced is retained in the zone where it is created. (The opposite extreme, where all heat that is produced is conducted away, is isothermal.)
Deformation
It is necessary to cover some basics of plastic deformation to understand the link between the heat produced and the plastic work done. If we carry out a compression test on a cylindrical specimen to, say, 50% of its original height, the stress in the work material will usually increase significantly with reduction. This is called 'work hardening'. During work hardening, distortion of the grain structure and the generation and glide of dislocations occur in the microstructure. The remainder of the plastic work done – which can be as much as 90% of the total – is dissipated as heat.
If the plastic deformation is carried out under dynamic conditions, such as in drop forging, the deformation becomes more localized as the forging hammer speed is increased. This also means that the deformed material becomes hotter the higher the speed of the drop hammer. As metals become warmer, their resistance to further plastic deformation decreases. From this we can see that there is a cascade effect: as more plastic deformation is absorbed by the metal, more heat is produced, making it easier for the metal to deform further. This is a catastrophic effect which almost inevitably leads to failure.
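The feedback described above can be made concrete with a rough adiabatic heating estimate: if a fraction β of the plastic work σ·ε stays in the deforming zone as heat, the temperature rise is ΔT ≈ β·σ·ε/(ρ·c). The Python sketch below uses assumed, steel-like values and a constant flow stress purely for illustration (in reality the flow stress falls as the band heats up).

```python
BETA = 0.9             # assumed fraction of plastic work converted to heat
FLOW_STRESS = 800e6    # Pa, assumed constant flow stress (steel-like)
DENSITY = 7850.0       # kg/m^3
SPECIFIC_HEAT = 480.0  # J/(kg K)

def adiabatic_temperature_rise(plastic_strain: float) -> float:
    """Temperature rise (K) if no heat is conducted away from the deforming zone."""
    return BETA * FLOW_STRESS * plastic_strain / (DENSITY * SPECIFIC_HEAT)

for strain in (0.5, 1.0, 5.0, 10.0):  # shear bands can reach very large local strains
    print(f"plastic strain = {strain:5.1f}  ->  dT ~ {adiabatic_temperature_rise(strain):6.0f} K")
```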
History
The first person to carry out any reported experimental programme to investigate the heat produced as a result of plastic deformation was Henri Tresca. In June 1878, Tresca forged a bar of platinum (as well as bars of many other metals); at the moment of forging, the metal had just cooled below red heat. The subsequent blow of the steam hammer, which left a depression in the bar and lengthened it, also reheated it along two lines in the form of a letter X. So great was this reheating that the metal along these lines was fully restored to red heat for some seconds. Tresca carried out many forging experiments on different metals and estimated the amount of plastic work converted into heat from a large number of experiments; it was always above 70%.
Tungsten Heavy Alloys
Tungsten heavy alloys (WHAs) possess high density, strength and toughness, making them good candidates for kinetic energy penetrator applications. When compared with depleted uranium, another material often used for kinetic penetrators, WHAs exhibit much less adiabatic shear band formation. During ballistic impact, the formation of shear bands produces a “self-sharpening” effect, aiding penetration by minimizing the surface area at the leading edge of the projectile. The average width of the shear band, then, should also be minimized to improve performance.
It has been proposed that formation of adiabatic shear bands in WHAs could be promoted by the presence of stress concentrations. When different specimen geometries were tested, cylindrical specimens without geometric stress concentrations were the least prone to shear band formation. Shear bands tend to form at these initiation points and travel along the directions of greatest shear stress. Several WHA processing methods have been investigated to increase the propensity for shear band formation. Leveraging hot-hydrostatic extrusion and/or hot torsion has been shown to elongate the tungsten grains in the microstructure. When subjected to high strain rate deformation parallel to the direction of the grain elongation, adiabatic shear bands readily form and propagate along the Ni-Fe matrix phase. The flow stress of the matrix is much lower than that of tungsten, so texturing of the microstructure provides an easier path for propagation of shear bands.
In 2019, a novel WHA was fabricated, replacing the traditional Ni-Fe matrix phase with a high entropy alloy matrix. This system was found to easily induce adiabatic shear band formation. The matrix phase includes nanoprecipitates that increase the matrix hardness. It is posited that these precipitates dissolve upon temperature rise, leading to softening of the matrix along the shear zone, thereby reducing the barrier for shear band propagation.
References
Solid mechanics
Deformation (mechanics) | Adiabatic shear band | [
"Physics",
"Materials_science",
"Engineering"
] | 938 | [
"Solid mechanics",
"Deformation (mechanics)",
"Classical mechanics stubs",
"Classical mechanics",
"Materials science",
"Mechanics"
] |
1,984,978 | https://en.wikipedia.org/wiki/Acetogen | An acetogen is a microorganism that generates acetate (CH3COO−) as an end product of anaerobic respiration or fermentation. However, this term is usually employed in a narrower sense only to those bacteria and archaea that perform anaerobic respiration and carbon fixation simultaneously through the reductive acetyl coenzyme A (acetyl-CoA) pathway (also known as the Wood-Ljungdahl pathway). These genuine acetogens are also known as "homoacetogens" and they can produce acetyl-CoA (and from that, in most cases, acetate as the end product) from two molecules of carbon dioxide (CO2) and four molecules of molecular hydrogen (H2). This process is known as acetogenesis, and is different from acetate fermentation, although both occur in the absence of molecular oxygen (O2) and produce acetate. Although previously thought that only bacteria are acetogens, some archaea can be considered to be acetogens.
Acetogens are found in a variety of habitats, generally those that are anaerobic (lack oxygen). Acetogens can use a variety of compounds as sources of energy and carbon; the best studied form of acetogenic metabolism involves the use of carbon dioxide as a carbon source and hydrogen as an energy source. Carbon dioxide reduction is carried out by the key enzyme acetyl-CoA synthase. Together with methane-forming archaea, acetogens constitute the last limbs in the anaerobic food web that leads to the production of methane from polymers in the absence of oxygen. Acetogens may represent ancestors of the first bioenergetically active cells in evolution.
Metabolic roles
Acetogens have diverse metabolic roles, which help them thrive in different environments. One of their metabolic products is acetate, which is an important nutrient for the host and its resident microbial community, as seen most clearly in termite guts. Acetogens also serve as "hydrogen sinks" in the termite gastrointestinal tract. A build-up of hydrogen gas inhibits biodegradation, and acetogens in the anaerobic environment consume this hydrogen, reacting it with carbon dioxide to make acetate and thereby favoring the biodegradative capacity of the host. Acetogens can use a variety of substrates when a competitor, such as a methanogen, makes hydrogen gas a limiting substrate. Acetogens can convert alcohols, lactate and fatty acids, substrates usually restricted to syntrophs, instead of just carbon dioxide and hydrogen. This enables them to take on important roles in the food chain, such as that of primary fermenters. Acetogens can also work with methanogens, as exemplified by the conversion of carbohydrates by a coculture of Methanosarcina barkeri and Acetobacterium woodii, in which the methanogen takes up acetate to the benefit of the acetogen. Sometimes the interspecies transfer of hydrogen gas between A. woodii and an H2-consuming methanogen results in hydrogen gas being released from the acetogen instead of going toward acetogenesis by the Wood–Ljungdahl pathway. Acetogens are also one of the contributors to the corrosion of steel. Acetobacterium woodii utilizes hydrogen gas and CO2 to make acetate, which is used as a carbon source by many sulfate-reducing bacteria growing on hydrogen gas and sulfate.
References
Anaerobic digestion
Bacteria | Acetogen | [
"Chemistry",
"Engineering",
"Biology"
] | 732 | [
"Prokaryotes",
"Anaerobic digestion",
"Bacteria",
"Environmental engineering",
"Water technology",
"Microorganisms"
] |
1,985,376 | https://en.wikipedia.org/wiki/Methyl%20methacrylate | Methyl methacrylate (MMA) is an organic compound with the formula . This colorless liquid, the methyl ester of methacrylic acid (MAA), is a monomer produced on a large scale for the production of poly(methyl methacrylate) (PMMA).
History
MMA was discovered by Bernhard Tollens and his student W. A. Caspary in 1873, who noticed and described its tendency to change into a clear, hard, transparent substance, especially in sunlight. Studies on acrylic esters developed slowly until Staudinger's theory of macromolecules and his research into the nature of polyacrylates allowed control over polymerization. The company Rohm and Haas, founded by the German chemist Otto Röhm, who investigated the topic for three decades, was finally able to start its industrial production in 1931.
Production and properties
Given the scale of production, many methods have been developed starting from diverse two- to four-carbon precursors. Two principal routes appear to be commonly practiced.
Cyanohydrin route
The principal route begins with the condensation of acetone and hydrogen cyanide:
(CH3)2CO + HCN → (CH3)2C(OH)CN
Sulfuric acid then hydrolyzes acetone cyanohydrin (ACH) to a sulfate ester-adduct, which is cracked to the ester:
Methanolysis gives ammonium bisulfate and MMA:
Laboratory scale procedures are available for some of these steps.
This technology affords more than 3 billion kilograms per year, and the economics have been optimized. Nevertheless, the ACH route coproduces substantial amounts of ammonium sulfate: roughly 1.1 kg/(kg MMA). The ammonium sulfate can be converted to diammonium sulfate, which is a common fertilizer. Also it can be combusted to give sulfuric acid.
Methyl propionate routes
The first stage involves carboalkoxylation of ethylene to produce methyl propionate (MeP):
C2H4 + CO + CH3OH → CH3CH2CO2CH3
The MeP synthesis is conducted in a continuous-stirred tank reactor at moderate temperature and pressure using proprietary agitation and gas-liquid mixing arrangement.
In a second set of reactions, MeP is condensed with formaldehyde in a single heterogeneous reaction step to form MMA:
CH3CH2CO2CH3 + CH2O → CH2=C(CH3)CO2CH3 + H2O
The reaction of MeP and formaldehyde takes place over a fixed bed of catalyst. This catalyst, caesium oxide on silica, achieves good selectivity to MMA from MeP. The formation of a small amount of heavy, relatively involatile compounds poisons the catalyst. The coke is easily removed and catalyst activity and selectivity restored by controlled, in-situ regeneration. The reactor product stream is separated in a primary distillation so that a crude MMA product stream, free from water, MeP and formaldehyde, is produced. Unreacted MeP and water are recycled via the formaldehyde dehydration process. MMA (>99.9%) is purified by vacuum distillations. The separated streams are returned to the process; there being only a small heavy ester purge stream, which is disposed of in a thermal oxidizer with heat recovered for use in the process.
In 2008, Lucite International commissioned an Alpha MMA plant on Jurong Island in Singapore. This process plant was cheaper to build and run than conventional systems, produces virtually no waste and the feedstocks can even be made from biomass.
Other routes to MMA
Via propionaldehyde
Ethylene is first hydroformylated to give propanal, which is then condensed with formaldehyde to produce methacrolein. The condensation is catalyzed by a secondary amine. Air oxidation of methacrolein to methacrylic acid completes the synthesis of the acid:
C2H4 + CO + H2 → CH3CH2CHO
CH3CH2CHO + CH2O → CH2=C(CH3)CHO + H2O
CH2=C(CH3)CHO + 1/2 O2 → CH2=C(CH3)CO2H
From isobutyric acid
As developed by Atochem and Röhm, isobutyric acid is produced by hydrocarboxylation of propene, using HF as a catalyst:
CH3CH=CH2 + CO + H2O → (CH3)2CHCO2H
Oxidative dehydrogenation of the isobutyric acid yields methacrylic acid. Metal oxides catalyse this process:
(CH3)2CHCO2H + 1/2 O2 → CH2=C(CH3)CO2H + H2O
Methyl acetylene (propyne) process
Using Reppe chemistry, methyl acetylene is converted to MMA. As developed by Shell, this process produces MMA in a one-step reaction with 99% yield, using a catalyst derived from palladium acetate, phosphine ligands, and Brønsted acids:
CH3C≡CH + CO + CH3OH → CH2=C(CH3)CO2CH3
Isobutylene routes
The reactions of the direct oxidation method consist of a two-step oxidation of isobutylene or tert-butyl alcohol (TBA) with air to produce methacrylic acid, followed by esterification with methanol to produce MMA.
A process using isobutylene as a raw material has been commercialized by Escambia Co. Isobutylene is oxidized to provide α-hydroxy isobutyric acid. The conversion uses and nitric acid at 5–10 °C in the liquid phase. After esterification and dehydration MMA is obtained. Challenges with this route, aside from yield, involve the handling of large amounts of nitric acid and . This method was discontinued in 1965 after an explosion at an operation plant.
Methacrylonitrile (MAN) process
MAN can be produced by ammoxidation from isobutylene:
This step is analogous to the industrial route to acrylonitrile, a related commodity chemical. MAN can be hydrated by sulfuric acid to methacrylamide:
Mitsubishi Gas Chemicals proposed a route in which MAN is hydrated to methacrylamide without using sulfuric acid and then esterified with methyl formate to obtain MMA.
Esterification of methacrolein
Asahi Chemical developed a process based on direct oxidative esterification of methacrolein, which does not produce by-products such as ammonium bisulfate. The raw material is tert-butanol, as in the direct oxidation method. In the first step, methacrolein is produced by gas-phase catalytic oxidation, in the same way as in the direct oxidation process. In the second step, the methacrolein is simultaneously oxidized and esterified in liquid methanol to give MMA directly.
Uses
The principal application, consuming approximately 75% of the MMA, is the manufacture of the acrylic plastic poly(methyl methacrylate) (PMMA). Methyl methacrylate is also used for the production of the co-polymer methyl methacrylate-butadiene-styrene (MBS), used as a modifier for PVC. Another application is as bone cement in total hip and total knee replacements. Used by orthopedic surgeons as the "grout" that fixes the implant into bone, it greatly reduces post-operative pain from the insertions but has a finite lifespan: typically methyl methacrylate bone cement lasts about 20 years before revision surgery is required. Cemented implants are therefore usually reserved for elderly patients who need a more immediate, shorter-term replacement; in younger patients, cementless implants are used because their lifespan is considerably longer. MMA cement is also used for fracture repair with internal fixation in small exotic animal species.
MMA is a raw material for the manufacture of other methacrylates. These derivatives include ethyl methacrylate (EMA), butyl methacrylate (BMA) and 2-ethyl hexyl methacrylate (2-EHMA). Methacrylic acid (MAA) is used as a chemical intermediate as well as in the manufacture of coating polymers, construction chemicals and textile applications.
Wood can be impregnated with MMA and polymerized in situ to produce stabilized wood.
Environmental issues and health hazards
In terms of the acute toxicity of methyl methacrylate, the LD50 is 7–10 g/kg (oral, rat). It is an irritant to the eyes and can cause redness and pain. Irritation of the skin, eye, and nasal cavity has been observed in rodents and rabbits exposed to relatively high concentrations of methyl methacrylate. Methyl methacrylate is a mild skin irritant in humans and has the potential to induce skin sensitization in susceptible individuals.
See also
Acrylate
Methacrylates
PMMA
References
External links
Chemical data on Chemicalland
US Environmental Protection Agency, 1994 data
Intox Cheminfo data
Methacrylate Producers Association (MPA)
National Pollutant Inventory – Methyl methacrylate fact sheet
CDC – NIOSH Pocket Guide to Chemical Hazards
Methacrylate esters
Methyl esters
Hazardous air pollutants
Monomers
Commodity chemicals | Methyl methacrylate | [
"Chemistry",
"Materials_science"
] | 1,797 | [
"Monomers",
"Commodity chemicals",
"Polymer chemistry",
"Products of chemical industry"
] |
1,986,263 | https://en.wikipedia.org/wiki/Symbolic%20dynamics | In mathematics, symbolic dynamics is the study of dynamical systems defined on a discrete space consisting of infinite sequences of abstract symbols. The evolution of the dynamical system is defined as a simple shift of the sequence.
Because of their explicit, discrete nature, such systems are often relatively easy to characterize and understand. They form a key tool for studying topological or smooth dynamical systems, because in many important cases it is possible to reduce the dynamics of a more general dynamical system to a symbolic system. To do so, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another.
History
The idea goes back to Jacques Hadamard's 1898 paper on the geodesics on surfaces of negative curvature. It was applied by Marston Morse in 1921 to the construction of a nonperiodic recurrent geodesic. Related work was done by Emil Artin in 1924 (for the system now called Artin billiard), Pekka Myrberg, Paul Koebe, Jakob Nielsen, G. A. Hedlund.
The first formal treatment was developed by Morse and Hedlund in their 1938 paper. George Birkhoff, Norman Levinson and the pair Mary Cartwright and J. E. Littlewood have applied similar methods to qualitative analysis of nonautonomous second order differential equations.
Claude Shannon used symbolic sequences and shifts of finite type in his 1948 paper A mathematical theory of communication that gave birth to information theory.
During the late 1960s the method of symbolic dynamics was extended to hyperbolic toral automorphisms by Roy Adler and Benjamin Weiss, and to Anosov diffeomorphisms by Yakov Sinai, who used the symbolic model to construct Gibbs measures. In the early 1970s the theory was extended to Anosov flows by Marina Ratner, and to Axiom A diffeomorphisms and flows by Rufus Bowen.
A spectacular application of the methods of symbolic dynamics is Sharkovskii's theorem about periodic orbits of a continuous map of an interval into itself (1964).
Examples
Consider the set of two-sided infinite sequences on two symbols, 0 and 1. A typical element in this set looks like: (..., 0, 1, 0, 0, 1, 0, 1, ... )
There will be exactly two fixed points under the shift map: the sequence of all zeroes, and the sequence of all ones. A periodic sequence will have a periodic orbit. For instance, the sequence (..., 0, 1, 0, 1, 0, 1, 0, 1, ...) will have period two.
More complex concepts such as heteroclinic orbits and homoclinic orbits also have simple descriptions in this system. For example, any sequence that has only a finite number of ones will have a homoclinic orbit, tending to the sequence of all zeros in forward and backward iterations.
Itinerary
The itinerary of a point with respect to a partition is the sequence of symbols recording which element of the partition the point visits under successive iterations. It describes the dynamics of the point.
Applications
Symbolic dynamics originated as a method to study general dynamical systems; now its techniques and ideas have found significant applications in data storage and transmission, linear algebra, the motions of the planets and many other areas. The distinct feature in symbolic dynamics is that time is measured in discrete intervals. So at each time interval the system is in a particular state. Each state is associated with a symbol and the evolution of the system is described by an infinite sequence of symbols—represented effectively as strings. If the system states are not inherently discrete, then the state vector must be discretized, so as to get a coarse-grained description of the system.
See also
Measure-preserving dynamical system
Combinatorics and dynamical systems
Shift space
Shift of finite type
Complex dynamics
Arithmetic dynamics
References
Further reading
Bruce Kitchens, Symbolic dynamics. One-sided, two-sided and countable state Markov shifts. Universitext, Springer-Verlag, Berlin, 1998. x+252 pp.
G. A. Hedlund, Endomorphisms and automorphisms of the shift dynamical system. Math. Systems Theory, Vol. 3, No. 4 (1969) 320–375.
External links
ChaosBook.org Chapter "Transition graphs"
A simulation of the three-bumper billiard system and its symbolic dynamics, from Chaos V: Duhem's Bull
Dynamical systems
Combinatorics on words | Symbolic dynamics | [
"Physics",
"Mathematics"
] | 938 | [
"Symbolic dynamics",
"Combinatorics",
"Mechanics",
"Combinatorics on words",
"Dynamical systems"
] |
1,987,207 | https://en.wikipedia.org/wiki/Black%20hole%20electron | In physics, there is a speculative hypothesis that if there were a black hole with the same mass, charge and angular momentum as an electron, it would share other properties of the electron. Most notably, Brandon Carter showed in 1968 that the magnetic moment of such an object would match that of an electron. This is interesting because calculations ignoring special relativity and treating the electron as a small rotating sphere of charge give a magnetic moment roughly half the experimental value (see Gyromagnetic ratio).
However, Carter's calculations also show that a would-be black hole with these parameters would be "super-extremal". Thus, unlike a true black hole, this object would display a naked singularity, meaning a singularity in spacetime not hidden behind an event horizon. It would also give rise to closed timelike curves.
Standard quantum electrodynamics (QED), currently the most comprehensive theory of particles, treats the electron as a point particle. There is no evidence for or against the electron being a black hole (or naked singularity). Furthermore, since the electron is quantum-mechanical in nature, any description purely in terms of general relativity is incomplete until a better model is developed, based on an understanding of the quantum nature of black holes and of the gravitational behaviour of quantum particles. Hence, the idea of a black hole electron remains strictly hypothetical.
Details
An article published in 1938 by Albert Einstein, Leopold Infeld, and Banesh Hoffmann showed that if elementary particles are treated as singularities in spacetime, it is unnecessary to postulate geodesic motion as part of general relativity. The electron may be treated as such a singularity.
If one ignores the electron's angular momentum and charge as well as the effects of quantum mechanics, one can treat the electron as a black hole and attempt to compute its radius. The Schwarzschild radius r_s of a mass m is the radius of the event horizon for a non-rotating uncharged black hole of that mass. It is given by
r_s = 2Gm/c^2,
where G is the Newtonian constant of gravitation and c is the speed of light. For the electron,
m = 9.109 × 10^−31 kg,
so
r_s = 1.353 × 10^−57 m.
Thus, if we ignore the electric charge and angular momentum of the electron and apply general relativity on this very small length scale without taking quantum theory into account, a black hole of the electron's mass would have this radius.
In reality, physicists expect quantum-gravity effects to become significant even at much larger length scales, comparable to the Planck length, so the above purely classical calculation cannot be trusted. Furthermore, even classically, electric charge and angular momentum affect the properties of a black hole. To take them into account while still ignoring quantum effects, one should use the Kerr–Newman metric. If we do, we find that the angular momentum and charge of the electron are too large for a black hole of the electron's mass: a Kerr–Newman object with such a large angular momentum and charge would instead be "super-extremal", displaying a naked singularity, meaning a singularity not shielded by an event horizon.
To see that this is so, it suffices to consider the electron's charge and neglect its angular momentum. In the Reissner–Nordström metric, which describes electrically charged but non-rotating black holes, there is a characteristic length r_q, defined by
r_q^2 = q^2 G / (4π ε0 c^4),
where q is the electron's charge and ε0 is the vacuum permittivity. For an electron with q = −e = −1.602 × 10^−19 C, this gives a value
r_q = 1.38 × 10^−36 m.
Since this (vastly) exceeds the Schwarzschild radius, the Reissner–Nordström metric has a naked singularity.
If we include the effects of the electron's rotation using the Kerr–Newman metric, there is still a naked singularity, which is now a ring singularity, and spacetime also has closed timelike curves. The size of this ring singularity is on the order of
a = J/(mc),
where as before m is the electron's mass, c is the speed of light, and J = ħ/2 is the spin angular momentum of the electron. This gives
a = 1.93 × 10^−13 m,
which is much larger than the length scale associated with the electron's charge. As noted by Carter, this length is on the order of the electron's Compton wavelength. Unlike the Compton wavelength, it is not quantum-mechanical in nature.
More recently, Alexander Burinskii has pursued the idea of treating the electron as a Kerr–Newman naked singularity.
See also
Quantum gravity
Abraham–Lorentz force
Black hole thermodynamics
Entropic force
Hawking radiation
List of quantum gravity researchers
Entropic elasticity of an ideal chain
Gravitation
Induced gravity
Geon (physics)
Micro black hole
Geometrodynamics
References
Further reading
Popular literature
(See chapter 13)
(See chapter 10)
Black holes
Quantum gravity
Hypothetical elementary particles | Black hole electron | [
"Physics",
"Astronomy"
] | 968 | [
"Physical phenomena",
"Black holes",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Quantum gravity",
"Density",
"Hypothetical elementary particles",
"Stellar phenomena",
"Astronomical objects",
"Physics beyond the Standard Model"
] |
1,987,293 | https://en.wikipedia.org/wiki/Contact%20angle | The contact angle (symbol ) is the angle between a liquid surface and a solid surface where they meet. More specifically, it is the angle between the surface tangent on the liquid–vapor interface and the tangent on the solid–liquid interface at their intersection.
It quantifies the wettability of a solid surface by a liquid via the Young equation.
A given system of solid, liquid, and vapor at a given temperature and pressure has a unique equilibrium contact angle. However, in practice a dynamic phenomenon of contact angle hysteresis is often observed, ranging from the advancing (maximal) contact angle to the receding (minimal) contact angle. The equilibrium contact angle lies within those values and can be calculated from them. The equilibrium contact angle reflects the relative strength of the molecular interactions among the liquid, solid, and vapour.
The contact angle depends upon the medium above the free surface of the liquid, and the nature of the liquid and solid in contact. It is independent of the inclination of solid to the liquid surface. It changes with surface tension and hence with the temperature and purity of the liquid.
Thermodynamics
The theoretical description of contact angle arises from the consideration of a thermodynamic equilibrium between the three phases: the liquid phase (L), the solid phase (S), and the gas or vapor phase (G) (which could be a mixture of ambient atmosphere and an equilibrium concentration of the liquid vapor). (The "gaseous" phase could be replaced by another immiscible liquid phase.) If the solid–vapor interfacial energy is denoted by γ_SG, the solid–liquid interfacial energy by γ_SL, and the liquid–vapor interfacial energy (i.e. the surface tension) by γ_LG, then the equilibrium contact angle θ_C is determined from these quantities by the Young equation:
γ_SG = γ_SL + γ_LG cos θ_C
The contact angle can also be related to the work of adhesion via the Young–Dupré equation:
γ_LG (1 + cos θ_C) = ΔW_SLG,
where ΔW_SLG is the solid–liquid adhesion energy per unit area in the medium G.
Modified Young’s equation
The earliest study on the relationship between contact angle and surface tensions for sessile droplets on flat surfaces was reported by Thomas Young in 1805. A century later Gibbs proposed a modification to Young's equation to account for the volumetric dependence of the contact angle. Gibbs postulated the existence of a line tension, which acts at the three-phase boundary and accounts for the excess energy at the confluence of the solid-liquid-gas phase interface. It is given as:
γ_SG = γ_SL + γ_LG cos θ + Λ/a,
where Λ is the line tension in newtons and a is the droplet contact-line radius in meters. Although experimental data validates an affine relationship between the cosine of the contact angle and the inverse line radius, it does not account for the correct sign of Λ and overestimates its value by several orders of magnitude.
Contact angle prediction while accounting for line tension and Laplace pressure
With improvements in measuring techniques such as atomic force microscopy, confocal microscopy, and scanning electron microscope, researchers were able to produce and image droplets at ever smaller scales. With the reduction in droplet size came new experimental observations of wetting. These observations confirmed that the modified Young's equation does not hold at the micro-nano scales. Jasper proposed that including a term in the variation of the free energy may be the key to solving the contact angle problem at such small scales. Given that the variation in free energy is zero at equilibrium:
The variation in the pressure at the free liquid-vapor boundary is due to Laplace pressure, which is proportional to the mean curvature. Solving the above equation for both convex and concave surfaces yields:
where
This equation relates the contact angle, a geometric property of a sessile droplet to the bulk thermodynamics, the energy at the three phase contact boundary, and the mean curvature of the droplet. For the special case of a sessile droplet on a flat surface ():
In the above equation, the first two terms are the modified Young's equation, while the third term is due to the Laplace pressure. This nonlinear equation correctly predicts the sign and magnitude of , the flattening of the contact angle at very small scales, and contact angle hysteresis.
Contact angle hysteresis
A given substrate-liquid-vapor combination yields a continuous range of contact angle values in practice. The maximum contact angle is referred to as the advancing contact angle and the minimum contact angle is referred to as the receding contact angle. The advancing and receding contact angles are measured from dynamic experiments where droplets or liquid bridges are in movement. In contrast, the equilibrium contact angle described by the Young-Laplace equation is measured from a static state. Static measurements yield values in-between the advancing and receding contact angle depending on deposition parameters (e.g. velocity, angle, and drop size) and drop history (e.g. evaporation from time of deposition). Contact angle hysteresis is defined as H = θ_A − θ_R, although the term is also used to describe the expression cos θ_R − cos θ_A. The static, advancing, or receding contact angle can be used in place of the equilibrium contact angle depending on the application. The overall effect can be seen as closely analogous to static friction, i.e., a minimal amount of work per unit distance is required to move the contact line.
The advancing contact angle can be described as a measure of the liquid-solid cohesion while the receding contact angle is a measure of liquid-solid adhesion. The advancing and receding contact angles can be measured directly using different methods and can also be calculated from other wetting measurements such as force tensiometry (aka the Wilhelmy plate method).
Advancing and receding contact angles can be measured directly from the same measurement if drops are moved linearly on a surface. For example, a drop of liquid will adopt a given contact angle when static, but when the surface is tilted the drop will initially deform so that the contact area between the drop and surface remains constant. The "downhill" side of the drop will adopt a higher contact angle while the "uphill" side of the drop will adopt a lower contact angle. As the tilt angle increases the contact angles will continue to change but the contact area between the drop and surface will remain constant. At a given surface tilt angle, the advancing and receding contact angles will be met and the drop will move on the surface. In practice, the measurement can be influenced by shear forces and momentum if the tilt velocity is high. The measurement method can also be challenging in practice for systems with high (>30 degrees) or low (<10 degrees) contact angle hysteresis.
Advancing and receding contact angle measurements can be carried out by adding and removing liquid from a drop deposited on a surface. If a sufficiently small volume of liquid is added to a drop, the contact line will still be pinned, and the contact angle will increase. Similarly, if a small amount of liquid is removed from a drop, the contact angle will decrease.
Young's equation assumes a homogeneous surface and does not account for surface texture or outside forces such as gravity. Real surfaces are not atomically smooth or chemically homogeneous, so a drop will exhibit contact angle hysteresis. The equilibrium contact angle (θ_c) can be calculated from the advancing angle θ_A and the receding angle θ_R, as was shown theoretically by Tadmor and confirmed experimentally by Chibowski, as
where
On a surface that is rough or contaminated, there will also be contact angle hysteresis, but now the local equilibrium contact angle (the Young equation is now only locally valid) may vary from place to place on the surface. According to the Young–Dupré equation, this means that the adhesion energy varies locally – thus, the liquid has to overcome local energy barriers in order to wet the surface. One consequence of these barriers is contact angle hysteresis: the extent of wetting, and therefore the observed contact angle (averaged along the contact line), depends on whether the liquid is advancing or receding on the surface.
Because liquid advances over previously dry surface but recedes from previously wet surface, contact angle hysteresis can also arise if the solid has been altered due to its previous contact with the liquid (e.g., by a chemical reaction, or absorption). Such alterations, if slow, can also produce measurably time-dependent contact angles.
Effect of roughness to contact angles
Surface roughness has a strong effect on the contact angle and wettability of a surface. The effect of roughness depends on if the droplet will wet the surface grooves or if air pockets will be left between the droplet and the surface.
If the surface is wetted homogeneously, the droplet is in Wenzel state. In Wenzel state, adding surface roughness will enhance the wettability caused by the chemistry of the surface. The Wenzel correlation can be written as
cos θ_m = r cos θ_Y, where θ_m is the measured contact angle, θ_Y is the Young contact angle and r is the roughness ratio. The roughness ratio is defined as the ratio between the actual and projected solid surface area.
If the surface is wetted heterogeneously, the droplet is in Cassie-Baxter state. The most stable contact angle can be connected to the Young contact angle. The contact angles calculated from the Wenzel and Cassie-Baxter equations have been found to be good approximations of the most stable contact angles with real surfaces.
Dynamic contact angles
For liquid moving quickly over a surface, the contact angle can be altered from its value at rest. The advancing contact angle will increase with speed, and the receding contact angle will decrease. The discrepancies between static and dynamic contact angles are closely proportional to the capillary number, denoted Ca.
Contact angle curvature
On the basis of interfacial energies, the profile of a surface droplet or a liquid bridge between two surfaces can be described by the Young–Laplace equation. This equation is applicable for three-dimensional axisymmetric conditions and is highly non-linear. This is due to the mean curvature term which includes products of first- and second-order derivatives of the drop shape function :
Solving this elliptic partial differential equation that governs the shape of a three-dimensional drop, in conjunction with appropriate boundary conditions, is complicated, and an alternate energy minimization approach to this is generally adopted. The shapes of three-dimensional sessile and pendant drops have been successfully predicted using this energy minimisation method.
Typical contact angles
Contact angles are extremely sensitive to contamination; values reproducible to better than a few degrees are generally only obtained under laboratory conditions with purified liquids and very clean solid surfaces. If the liquid molecules are strongly attracted to the solid molecules then the liquid drop will completely spread out on the solid surface, corresponding to a contact angle of 0°. This is often the case for water on bare metallic or ceramic surfaces, although the presence of an oxide layer or contaminants on the solid surface can significantly increase the contact angle. Generally, if the water contact angle is smaller than 90°, the solid surface is considered hydrophilic and if the water contact angle is larger than 90°, the solid surface is considered hydrophobic. Many polymers exhibit hydrophobic surfaces. Highly hydrophobic surfaces made of low surface energy (e.g. fluorinated) materials may have water contact angles as high as ≈ 120°. Some materials with highly rough surfaces may have a water contact angle even greater than 150°, due to the presence of air pockets under the liquid drop. These are called superhydrophobic surfaces.
If the contact angle is measured through the gas instead of through the liquid, then it should be replaced by 180° minus their given value. Contact angles are equally applicable to the interface of two liquids, though they are more commonly measured in solid products such as non-stick pans and waterproof fabrics.
Control of contact angles
Control of the wetting contact angle can often be achieved through the deposition or incorporation of various organic and inorganic molecules onto the surface. This is often achieved through the use of specialty silane chemicals which can form a SAM (self-assembled monolayer). With the proper selection of organic molecules with varying molecular structures and amounts of hydrocarbon and/or perfluorinated terminations, the contact angle of the surface can be tuned. The deposition of these specialty silanes can be achieved in the gas phase, through the use of specialized vacuum ovens, or by a liquid-phase process. Molecules that bind more perfluorinated terminations to the surface lower the surface energy, resulting in a higher water contact angle.
Measuring methods
The static sessile drop method
The sessile drop contact angle is measured by a contact angle goniometer using an optical subsystem to capture the profile of a pure liquid on a solid substrate. The angle formed between the liquid–solid interface and the liquid–vapor interface is the contact angle. Older systems used a microscope optical system with a back light. Current-generation systems employ high resolution cameras and software to capture and analyze the contact angle. Angles measured in such a way are often quite close to advancing contact angles. Equilibrium contact angles can be obtained through the application of well defined vibrations.
The pendant drop method
Measuring contact angles for pendant drops is much more complicated than for sessile drops due to the inherent unstable nature of inverted drops. This complexity is further amplified when one attempts to incline the surface. Experimental apparatus to measure pendant drop contact angles on inclined substrates has been developed recently. This method allows for the deposition of multiple microdrops on the underside of a textured substrate, which can be imaged using a high resolution CCD camera. An automated system allows for tilting the substrate and analysing the images for the calculation of advancing and receding contact angles.
The dynamic sessile drop method
The dynamic sessile drop is similar to the static sessile drop but requires the drop to be modified. A common type of dynamic sessile drop study determines the largest contact angle possible without increasing its solid–liquid interfacial area by adding volume dynamically. This maximum angle is the advancing angle. Volume is removed to produce the smallest possible angle, the receding angle. The difference between the advancing and receding angle is the contact angle hysteresis.
Dynamic Wilhelmy method
The dynamic Wilhelmy method is a method for calculating average advancing and receding contact angles on solids of uniform geometry. Both sides of the solid must have the same properties. Wetting force on the solid is measured as the solid is immersed in or withdrawn from a liquid of known surface tension. Also in that case it is possible to measure the equilibrium contact angle by applying a very controlled vibration. That methodology, called VIECA, can be implemented in a quite simple way on every Wilhelmy balance.
Single-fiber Wilhelmy method
Dynamic Wilhelmy method applied to single fibers to measure advancing and receding contact angles.
Single-fiber meniscus method
An optical variation of the single-fiber Wilhelmy method. Instead of measuring with a balance, the shape of the meniscus on the fiber is directly imaged using a high resolution camera. Automated meniscus shape fitting can then directly measure the static, advancing or receding contact angle on the fiber.
Washburn's equation capillary rise method
In the case of porous materials, many issues have been raised about both the physical meaning of the calculated pore diameter and the real possibility of using this equation to calculate the contact angle of the solid, even though this method is often offered as well established by much of the available software. The change of weight as a function of time is measured.
See also
Goniometer
Meniscus (liquid)
Porosimetry
Sessile drop technique
Surface tension
Wetting
References
Further reading
Pierre-Gilles de Gennes, Françoise Brochard-Wyart, David Quéré, Capillarity and Wetting Phenomena: Drops, Bubbles, Pearls, Waves, Springer (2004)
Jacob Israelachvili, Intermolecular and Surface Forces, Academic Press (1985–2004)
D.W. Van Krevelen, Properties of Polymers, 2nd revised edition, Elsevier Scientific Publishing Company, Amsterdam-Oxford-New York (1976)
Clegg, Carl Contact Angle Made Easy, ramé-hart (2013),
Angle
Condensed matter physics
Fluid mechanics
Surface science
Hysteresis | Contact angle | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,330 | [
"Geometric measurement",
"Scalar physical quantities",
"Physical phenomena",
"Physical quantities",
"Phases of matter",
"Materials science",
"Surface science",
"Fluid mechanics",
"Civil engineering",
"Condensed matter physics",
"Wikipedia categories named after physical quantities",
"Angle",
... |
1,987,765 | https://en.wikipedia.org/wiki/Abutment | An abutment is the substructure at the ends of a bridge span or dam supporting its superstructure. Single-span bridges have abutments at each end that provide vertical and lateral support for the span, as well as acting as retaining walls to resist lateral movement of the earthen fill of the bridge approach. Multi-span bridges require piers to support ends of spans unsupported by abutments. Dam abutments are generally the sides of a valley or gorge, but may be artificial in order to support arch dams such as Kurobe Dam in Japan.
The civil engineering term may also refer to the structure supporting one side of an arch, or masonry used to resist the lateral forces of a vault. The impost or abacus of a column in classical architecture may also serve as an abutment to an arch.
The word derives from the verb "abut", meaning to "touch by means of a mutual border".
Use
An abutment may be used to transfer loads from a superstructure to its foundation, to resist or transfer self weight, lateral loads (such as the earth pressure) and wind loads, to support one end of an approach slab, or to balance vertical and horizontal forces in an arch bridge.
Types
Types of abutments include:
Gravity abutment, resists horizontal earth pressure with its own dead weight
U abutment, U-shaped gravity abutment
Cantilever abutment, cantilever retaining wall designed for large vertical loads
Full height abutment, cantilever abutment that extends from the underpass grade line to the grade line of the overpass roadway
Stub abutment, short abutments at the top of an embankment or slope, usually supported on piles
Semi-stub abutment, size between full height and stub abutment
Counterfort abutment, similar to counterfort retaining walls
Spill-through abutment, vertical buttresses with open spaces between them
MSE systems, "Reinforced Earth" system: modular units with metallic reinforcement
Pile bent abutment, similar to spill-through abutment
References
External links
Ohio Department of Transportation
Earthen Dam
Bridges
Civil engineering
Dams
Foundations (buildings and structures)
Bridge components | Abutment | [
"Technology",
"Engineering"
] | 439 | [
"Structural engineering",
"Foundations (buildings and structures)",
"Construction",
"Civil engineering",
"Bridge components",
"Components",
"Bridges"
] |
1,988,114 | https://en.wikipedia.org/wiki/XLispStat | XLispStat is a statistical scientific package based on the XLISP language.
Many free statistical software like ARC (nonlinear curve fitting problems) and ViSta are based on this package.
It includes a variety of statistical functions and methods, including routines for nonlinear curve fitting. Many add-on packages have been developed to extend XLispStat, including packages for contingency tables and regression analysis.
XLispStat has seen usage in many fields, including astronomy, GIS, speech acoustics, econometrics, and epidemiology.
XLispStat was historically influential in the field of statistical visualization.
Its author, Luke Tierney, wrote a 1990 book on it.
XLispStat dates to the late 1980s/early 1990s and probably saw its greatest popularity in the early-to-mid 1990s, with greatly declining usage since. In the 1990s it was in very widespread use in statistical education, but has since been mostly replaced by R. There is a paper explaining why UCLA's Department of Statistics abandoned it in 1998, and their reasons for doing so likely hold true for many of its other former users.
Source code to XLispStat is available under a permissive license (with terms similar to the BSD license).
See also
R (programming language)
References
External links
Lisp-Stat and XLisp-Stat documentation (historical)
XLispStat archive and related resources
Statistical software
Statistical programming languages
Lisp programming language family | XLispStat | [
"Mathematics"
] | 290 | [
"Statistical software",
"Mathematical software"
] |
1,988,157 | https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay%20filter | A Savitzky–Golay filter is a digital filter that can be applied to a set of digital data points for the purpose of smoothing the data, that is, to increase the precision of the data without distorting the signal tendency. This is achieved, in a process known as convolution, by fitting successive sub-sets of adjacent data points with a low-degree polynomial by the method of linear least squares. When the data points are equally spaced, an analytical solution to the least-squares equations can be found, in the form of a single set of "convolution coefficients" that can be applied to all data sub-sets, to give estimates of the smoothed signal, (or derivatives of the smoothed signal) at the central point of each sub-set. The method, based on established mathematical procedures, was popularized by Abraham Savitzky and Marcel J. E. Golay, who published tables of convolution coefficients for various polynomials and sub-set sizes in 1964. Some errors in the tables have been corrected. The method has been extended for the treatment of 2- and 3-dimensional data.
Savitzky and Golay's paper is one of the most widely cited papers in the journal Analytical Chemistry and is classed by that journal as one of its "10 seminal papers" saying "it can be argued that the dawn of the computer-controlled analytical instrument can be traced to this article".
Applications
The data consist of a set of points (x_j, y_j), j = 1, ..., n, where x_j is an independent variable and y_j is an observed value. They are treated with a set of m convolution coefficients, C_i, according to the expression
Y_j = Σ_{i = −(m−1)/2}^{(m−1)/2} C_i y_{j+i},   (m + 1)/2 ≤ j ≤ n − (m − 1)/2
Selected convolution coefficients are shown in the tables below. For example, for smoothing by a 5-point quadratic polynomial, m = 5 and the smoothed data point, Y_j, is given by
Y_j = (1/35)(−3 y_{j−2} + 12 y_{j−1} + 17 y_j + 12 y_{j+1} − 3 y_{j+2}),
where C_{−2} = −3/35, C_{−1} = 12/35, etc. There are numerous applications of smoothing, such as avoiding the propagation of noise through an algorithm chain, or sometimes simply to make the data appear to be less noisy than it really is.
The following are applications of numerical differentiation of data. Note: when calculating the nth derivative, an additional scaling factor of n!/h^n may be applied to all calculated data points to obtain absolute values (see the expressions for the derivatives, below, for details).
Location of maxima and minima in experimental data curves. This was the application that first motivated Savitzky. The first derivative of a function is zero at a maximum or minimum. The diagram shows data points belonging to a synthetic Lorentzian curve, with added noise (blue diamonds). Data are plotted on a scale of half width, relative to the peak maximum at zero. The smoothed curve (red line) and 1st derivative (green) were calculated with 7-point cubic Savitzky–Golay filters. Linear interpolation of the first derivative values at positions either side of the zero-crossing gives the position of the peak maximum. 3rd derivatives can also be used for this purpose.
Location of an end-point in a titration curve. An end-point is an inflection point where the second derivative of the function is zero. The titration curve for malonic acid illustrates the power of the method. The first end-point at 4 ml is barely visible, but the second derivative allows its value to be easily determined by linear interpolation to find the zero crossing.
Baseline flattening. In analytical chemistry it is sometimes necessary to measure the height of an absorption band against a curved baseline. Because the curvature of the baseline is much less than the curvature of the absorption band, the second derivative effectively flattens the baseline. Three measures of the derivative height, which is proportional to the absorption band height, are the "peak-to-valley" distances h1 and h2, and the height from baseline, h3.
Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate (negative) peaks. The diagram illustrates how this may be used also for chemical analysis, using measurement of "peak-to-valley" distances. In this case the valleys are a property of the 2nd derivative of a Lorentzian. (x-axis position is relative to the position of the peak maximum on a scale of half width at half height).
Resolution enhancement with 4th derivative (positive peaks). The minima are a property of the 4th derivative of a Lorentzian.
Moving average
The "moving average filter" is a trivial example of a Savitzky–Golay filter that is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles.
Each subset of the data set is fit with a straight horizontal line as opposed to a higher order polynomial. An unweighted moving average filter is the simplest convolution filter.
The moving average is often used for a quick technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series.
It was not included in some tables of Savitzky–Golay convolution coefficients as all the coefficient values are identical, with the value 1/m.
Derivation of convolution coefficients
When the data points are equally spaced, an analytical solution to the least-squares equations can be found. This solution forms the basis of the convolution method of numerical smoothing and differentiation. Suppose that the data consists of a set of n points (xj, yj) (j = 1, ..., n), where xj is an independent variable and yj is a datum value. A polynomial will be fitted by linear least squares to a set of m (an odd number) adjacent data points, each separated by an interval h. Firstly, a change of variable is made
z = (x − x̄)/h,
where x̄ is the value of the central point. z takes the values (1 − m)/2, ..., 0, ..., (m − 1)/2 (e.g. m = 5 → z = −2, −1, 0, 1, 2). The polynomial of degree k is defined as
Y = a_0 + a_1 z + a_2 z^2 + ... + a_k z^k.
The coefficients a_0, a_1, etc. are obtained by solving the normal equations (bold a represents a vector, bold J represents a matrix):
a = (J^T J)^{−1} J^T y,
where J is a Vandermonde matrix, that is, the i-th row of J has the values 1, z_i, z_i^2, ..., z_i^k.
For example, for a cubic polynomial fitted to 5 points, z= −2, −1, 0, 1, 2 the normal equations are solved as follows.
Now, the normal equations can be factored into two separate sets of equations, by rearranging rows and columns, with
Expressions for the inverse of each of these matrices can be obtained using Cramer's rule
The normal equations become
and
Multiplying out and removing common factors,
The coefficients of y in these expressions are known as convolution coefficients. They are elements of the matrix
In general, the matrix of convolution coefficients is given by C = (J^T J)^{−1} J^T.
In matrix notation this example is written as
Tables of convolution coefficients, calculated in the same way for m up to 25, were published for the Savitzky–Golay smoothing filter in 1964. The value at the central point, z = 0, is obtained from a single set of coefficients: a_0 for smoothing, a_1 for the 1st derivative, etc. The numerical derivatives are obtained by differentiating Y, which means that the derivatives are calculated for the smoothed data curve. For a cubic polynomial
Y = a_0 + a_1 z + a_2 z^2 + a_3 z^3,
dY/dx = (1/h)(a_1 + 2 a_2 z + 3 a_3 z^2), so that dY/dx = a_1/h at the central point (z = 0),
d^2Y/dx^2 = (1/h^2)(2 a_2 + 6 a_3 z), so that d^2Y/dx^2 = 2 a_2/h^2 at z = 0,
d^3Y/dx^3 = 6 a_3/h^3.
In general, polynomials of degree (0 and 1), (2 and 3), (4 and 5) etc. give the same coefficients for smoothing and even derivatives. Polynomials of degree (1 and 2), (3 and 4) etc. give the same coefficients for odd derivatives.
Algebraic expressions
It is not always necessary to use the Savitzky–Golay tables. The summations in the matrix J^T J can be evaluated in closed form,
so that algebraic formulae can be derived for the convolution coefficients. Functions that are suitable for use with a curve that has an inflection point are:
Smoothing, polynomial degree 2,3: C_{0i} = 3(3m^2 − 7 − 20i^2) / (4m(m^2 − 4)), for −(m − 1)/2 ≤ i ≤ (m − 1)/2 (the range of values for i also applies to the expressions below)
1st derivative: polynomial degree 3,4
2nd derivative: polynomial degree 2,3
3rd derivative: polynomial degree 3,4
Simpler expressions that can be used with curves that don't have an inflection point are:
Smoothing, polynomial degree 0,1 (moving average): C_{0i} = 1/m
1st derivative, polynomial degree 1,2: C_{1i} = 12i / (m(m^2 − 1))
Higher derivatives can be obtained. For example, a fourth derivative can be obtained by performing two passes of a second derivative function.
Use of orthogonal polynomials
An alternative to fitting m data points by a simple polynomial in the subsidiary variable, z, is to use orthogonal polynomials:
Y = b_0 P_0(z) + b_1 P_1(z) + ... + b_k P_k(z),
where P_0, ..., P_k is a set of mutually orthogonal polynomials of degree 0, ..., k. Full details on how to obtain expressions for the orthogonal polynomials and the relationship between the coefficients b and a are given by Guest. Expressions for the convolution coefficients are easily obtained because the normal equations matrix, J^T J, is a diagonal matrix, as the product of any two distinct orthogonal polynomials is zero by virtue of their mutual orthogonality. Therefore, each non-zero element of its inverse is simply the reciprocal of the corresponding element in the normal equation matrix. The calculation is further simplified by using recursion to build orthogonal Gram polynomials. The whole calculation can be coded in a few lines of PASCAL, a computer language well-adapted for calculations involving recursion.
Treatment of first and last points
Savitzky–Golay filters are most commonly used to obtain the smoothed or derivative value at the central point, z = 0, using a single set of convolution coefficients. (m − 1)/2 points at the start and end of the series cannot be calculated using this process. Various strategies can be employed to avoid this inconvenience.
The data could be artificially extended by adding, in reverse order, copies of the first (m − 1)/2 points at the beginning and copies of the last (m − 1)/2 points at the end. For instance, with m = 5, two points are added at the start and end of the data y1, ..., yn.
y3,y2,y1, ... ,yn, yn−1, yn−2.
Looking again at the fitting polynomial, it is obvious that data can be calculated for all values of z by using all sets of convolution coefficients for a single polynomial, a0 .. ak.
For a cubic polynomial
Convolution coefficients for the missing first and last points can also be easily obtained. This is also equivalent to fitting the first (m + 1)/2 points with the same polynomial, and similarly for the last points.
Weighting the data
It is implicit in the above treatment that the data points are all given equal weight. Technically, the objective function
being minimized in the least-squares process has unit weights, w_i = 1. When weights are not all the same the normal equations become
a = (J^T W J)^{−1} J^T W y.
If the same set of diagonal weights is used for all data subsets, W = diag(w_1, w_2, ..., w_m), an analytical solution to the normal equations can be written down, for example with a quadratic polynomial. An explicit expression for the inverse of the normal equations matrix can be obtained using Cramer's rule. A set of convolution coefficients may then be derived as
C = (J^T W J)^{−1} J^T W.
Alternatively the coefficients, C, could be calculated in a spreadsheet, employing a built-in matrix inversion routine to obtain the inverse of the normal equations matrix. This set of coefficients, once calculated and stored, can be used with all calculations in which the same weighting scheme applies. A different set of coefficients is needed for each different weighting scheme.
It has been shown that the Savitzky–Golay filter can be improved by introducing weights that decrease at the ends of the fitting interval.
Two-dimensional convolution coefficients
Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.
Such a grid is referred as a kernel, and the data points that constitute the kernel are referred as nodes. The trick is to transform the rectangular kernel into a single row by a simple ordering of the indices of the nodes. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of the values at the m × n kernel nodes. The following example, for a bivariate polynomial of total degree 3, m = 7, and n = 5, illustrates the process, which parallels the process for the one dimensional case, above.
The rectangular kernel of 35 data values,
{| border="1" style="border-collapse:collapse;" class="wikitable"
|-
! ||−3||−2||−1||0||1
!2
!3
|-
!−2
|d1||d2||d3||d4||d5
|d6
|d7
|-
!−1
|d8||d9||d10||d11||d12
|d13
|d14
|-
!0
|d15||d16||d17||d18||d19
|d20
|d21
|-
!1
|d22||d23||d24||d25||d26
|d27
|d28
|-
!2
|d29||d30||d31||d32||d33
|d34
|d35
|}
becomes a vector when the rows are placed one after another.
d = (d1 ... d35)T
The Jacobian has 10 columns, one for each of the parameters a00 – a03, and 35 rows, one for each pair of v and w values. Each row has the form
(1, v, w, v^2, vw, w^2, v^3, v^2w, vw^2, w^3), one monomial for each of the ten polynomial coefficients.
The convolution coefficients are calculated as
C = (J^T J)^{−1} J^T.
The first row of C contains 35 convolution coefficients, which can be multiplied with the 35 data values, respectively, to obtain the polynomial coefficient , which is the smoothed value at the central node of the kernel (i.e. at the 18th node of the above table). Similarly, other rows of C can be multiplied with the 35 values to obtain other polynomial coefficients, which, in turn, can be used to obtain smoothed values and different smoothed partial derivatives at different nodes.
Nikitas and Pappa-Louisi showed that, depending on the format of the polynomial used, the quality of smoothing may vary significantly. They recommend using a polynomial of the form
Y = Σ_{i=0}^{p} Σ_{j=0}^{q} a_{ij} v^i w^j,
with separate maximum degrees p and q in the two directions,
because such polynomials can achieve good smoothing both in the central and in the near-boundary regions of a kernel, and therefore they can be confidently used in smoothing both at the internal and at the near-boundary data points of a sampled domain. In order to avoid ill-conditioning when solving the least-squares problem, p < m and q < n. For software that calculates the two-dimensional coefficients and for a database of such C's, see the section on multi-dimensional convolution coefficients, below.
Multi-dimensional convolution coefficients
The idea of two-dimensional convolution coefficients can be extended to the higher spatial dimensions as well, in a straightforward manner, by arranging multidimensional distribution of the kernel nodes in a single row. Following the aforementioned finding by Nikitas and Pappa-Louisi in two-dimensional cases, usage of the following form of the polynomial is recommended in multidimensional cases:
where D is the dimension of the space, 's are the polynomial coefficients, and u's are the coordinates in the different spatial directions. Algebraic expressions for partial derivatives of any order, be it mixed or otherwise, can be easily derived from the above expression. Note that C depends on the manner in which the kernel nodes are arranged in a row and on the manner in which the different terms of the expanded form of the above polynomial is arranged, when preparing the Jacobian.
Accurate computation of C in multidimensional cases becomes challenging, as precision of standard floating point numbers available in computer programming languages no longer remain sufficient. The insufficient precision causes the floating point truncation errors to become comparable to the magnitudes of some C elements, which, in turn, severely degrades its accuracy and renders it useless. Chandra Shekhar has brought forth two open source software, Advanced Convolution Coefficient Calculator (ACCC) and Precise Convolution Coefficient Calculator (PCCC), which handle these accuracy issues adequately. ACCC performs the computation by using floating point numbers, in an iterative manner. The precision of the floating-point numbers is gradually increased in each iteration, by using GNU MPFR. Once the obtained C's in two consecutive iterations start having same significant digits until a pre-specified distance, the convergence is assumed to have reached. If the distance is sufficiently large, the computation yields a highly accurate C. PCCC employs rational number calculations, by using GNU Multiple Precision Arithmetic Library, and yields a fully accurate C, in the rational number format. In the end, these rational numbers are converted into floating point numbers, until a pre-specified number of significant digits.
A database of C's that are calculated by using ACCC, for symmetric kernels and both symmetric and asymmetric polynomials, on unity-spaced kernel nodes, in the 1, 2, 3, and 4 dimensional spaces, is made available. Chandra Shekhar has also laid out a mathematical framework that describes usage of C calculated on unity-spaced kernel nodes to perform filtering and partial differentiations (of various orders) on non-uniformly spaced kernel nodes, allowing usage of C provided in the aforementioned database. Although this method yields approximate results only, they are acceptable in most engineering applications, provided that non-uniformity of the kernel nodes is weak.
Some properties of convolution
The sum of convolution coefficients for smoothing is equal to one. The sum of coefficients for odd derivatives is zero.
The sum of squared convolution coefficients for smoothing is equal to the value of the central coefficient.
Smoothing of a function leaves the area under the function unchanged.
Convolution of a symmetric function with even-derivative coefficients conserves the centre of symmetry.
Properties of derivative filters.
Signal distortion and noise reduction
It is inevitable that the signal will be distorted in the convolution process. From property 3 above, when data which has a peak is smoothed the peak height will be reduced and the half-width will be increased. Both the extent of the distortion and S/N (signal-to-noise ratio) improvement:
decrease as the degree of the polynomial increases
increase as the width, m of the convolution function increases
For example, if the noise in all data points is uncorrelated and has a constant standard deviation, σ, the standard deviation of the noise will be decreased by convolution with an m-point smoothing function to
polynomial degree 0 or 1 (moving average): σ/√m
polynomial degree 2 or 3: σ √(3(3m^2 − 7) / (4m(m^2 − 4))).
These functions are shown in the plot at the right. For example, with a 9-point linear function (moving average) two thirds of the noise is removed and with a 9-point quadratic/cubic smoothing function only about half the noise is removed. Most of the noise remaining is low-frequency noise (see Frequency characteristics of convolution filters, below).
Although the moving average function gives better noise reduction it is unsuitable for smoothing data which has curvature over m points. A quadratic filter function is unsuitable for getting a derivative of a data curve with an inflection point because a quadratic polynomial does not have one. The optimal choice of polynomial order and number of convolution coefficients will be a compromise between noise reduction and distortion.
Multipass filters
One way to mitigate distortion and improve noise removal is to use a filter of smaller width and perform more than one convolution with it. For two passes of the same filter this is equivalent to one pass of a filter obtained by convolution of the original filter with itself. For example, 2 passes of the filter with coefficients (1/3, 1/3, 1/3) is equivalent to 1 pass of the filter with coefficients
(1/9, 2/9, 3/9, 2/9, 1/9).
The disadvantage of multipassing is that the equivalent filter width for n passes of an m-point function is n(m − 1) + 1, so multipassing is subject to greater end-effects. Nevertheless, multipassing has been used to great advantage. For instance, some 40–80 passes on data with a signal-to-noise ratio of only 5 gave useful results. The noise reduction formulae given above do not apply because correlation between calculated data points increases with each pass.
Frequency characteristics of convolution filters
Convolution maps to multiplication in the Fourier co-domain. The discrete Fourier transform of a symmetric convolution filter is a real-valued function, which can be represented as
FT(θ) = Σ_{j = −(m−1)/2}^{(m−1)/2} C_j cos(jθ).
θ runs from 0 to 180 degrees, after which the function merely repeats itself. The plot for a 9-point quadratic/cubic smoothing function is typical. At very low angle, the plot is almost flat, meaning that low-frequency components of the data will be virtually unchanged by the smoothing operation. As the angle increases the value decreases so that higher frequency components are more and more attenuated. This shows that the convolution filter can be described as a low-pass filter: the noise that is removed is primarily high-frequency noise and low-frequency noise passes through the filter. Some high-frequency noise components are attenuated more than others, as shown by undulations in the Fourier transform at large angles. This can give rise to small oscillations in the smoothed data and phase reversal, i.e., high-frequency oscillations in the data get inverted by Savitzky–Golay filtering.
Convolution and correlation
Convolution affects the correlation between errors in the data. The effect of convolution can be expressed as a linear transformation.
By the law of error propagation, the variance-covariance matrix of the data, A, will be transformed into B according to
B = C A C^T.
To see how this applies in practice, consider the effect of a 3-point moving average on the first three calculated points, , assuming that the data points have equal variance and that there is no correlation between them. A will be an identity matrix multiplied by a constant, σ2, the variance at each point.
In this case the correlation coefficients,
between calculated points i and j will be
In general, the calculated values are correlated even when the observed values are not correlated. The correlation extends over calculated points at a time.
Multipass filters
To illustrate the effect of multipassing on the noise and correlation of a set of data, consider the effects of a second pass of a 3-point moving average filter. For the second pass
After two passes, the standard deviation of the central point has decreased to about 0.48σ, compared to 0.58σ for one pass. The noise reduction is a little less than would be obtained with one pass of a 5-point moving average which, under the same conditions, would result in the smoothed points having the smaller standard deviation of 0.45σ.
Correlation now extends over a span of 4 sequential points with correlation coefficients
The advantage obtained by performing two passes with the narrower smoothing function is that it introduces less distortion into the calculated data.
Comparison with other filters and alternatives
Compared with other smoothing filters, e.g. convolution with a Gaussian or multi-pass moving-average filtering, Savitzky–Golay filters have an initially flatter response and sharper cutoff in the frequency domain, especially for high orders of the fit polynomial (see frequency characteristics). For data with limited signal bandwidth, this means that Savitzky–Golay filtering can provide better signal-to-noise ratio than many other filters; e.g., peak heights of spectra are better preserved than for other filters with similar noise suppression. Disadvantages of the Savitzky–Golay filters are comparably poor suppression of some high frequencies (poor stopband suppression) and artifacts when using polynomial fits for the first and last points.
Alternative smoothing methods that share the advantages of Savitzky–Golay filters and mitigate at least some of their disadvantages are Savitzky–Golay filters with properly chosen alternative fitting weights, Whittaker–Henderson smoothing and Hodrick–Prescott filter (equivalent methods closely related to smoothing splines), and convolution with a windowed sinc function.
Implementations in programming languages
MATLAB
sgolayfilt from Signal Processing Toolbox. Available since before version R2006b.
Python
flatten from module Lightkurve. Lightkurve is the official library for analysis of Kepler & TESS Telescope data.
scipy.signal.savgol_filter from module SciPy. SciPy is a robust library widely used for scientific computing in the academic community.
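A brief usage sketch of this SciPy routine (the synthetic data and parameter choices below are arbitrary illustrations, not recommendations) is:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(x) + 0.2 * rng.standard_normal(x.size)

smoothed = savgol_filter(noisy, window_length=9, polyorder=3)          # smoothed signal
derivative = savgol_filter(noisy, 9, 3, deriv=1, delta=x[1] - x[0])    # smoothed first derivative
```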
See also
Kernel smoother – Different terminology for many of the same processes, used in statistics
Local regression — the LOESS and LOWESS methods
Numerical differentiation – Application to differentiation of functions
Smoothing spline
Stencil (numerical analysis) – Application to the solution of differential equations
Hodrick–Prescott filter
Kalman filter
Appendix
Tables of selected convolution coefficients
Consider a set of data points (x_j, y_j). The Savitzky–Golay tables refer to the case where the step x_{j+1} − x_j is constant, h. Examples of the use of the so-called convolution coefficients, with a cubic polynomial and a window size, m, of 5 points, are as follows.
Smoothing: Y_j = (1/35)(−3y_{j−2} + 12y_{j−1} + 17y_j + 12y_{j+1} − 3y_{j+2});
1st derivative: Y′_j = (1/12h)(y_{j−2} − 8y_{j−1} + 8y_{j+1} − y_{j+2});
2nd derivative: Y″_j = (1/7h²)(2y_{j−2} − y_{j−1} − 2y_j − y_{j+1} + 2y_{j+2}).
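These 5-point cubic coefficients can be re-derived from an ordinary least-squares fit; the following Python/NumPy sketch (an illustration, independent of the published tables) reproduces them:

```python
import numpy as np

z = np.arange(-2, 3)                      # window positions -2..2 (unit step h = 1)
V = np.vander(z, 4, increasing=True)      # design matrix with columns 1, z, z^2, z^3
# Least squares: a = (V^T V)^-1 V^T y; each row of C maps the window values onto one
# polynomial coefficient, so the rows are the convolution coefficients.
C = np.linalg.pinv(V)

print(np.round(C[0] * 35))     # a0  -> smoothing coefficients:      -3 12 17 12 -3 (over 35)
print(np.round(C[1] * 12))     # a1  -> 1st-derivative coefficients:  1 -8  0  8 -1 (over 12h)
print(np.round(2 * C[2] * 7))  # 2a2 -> 2nd-derivative coefficients:  2 -1 -2 -1  2 (over 7h^2)
```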
Selected values of the convolution coefficients for polynomials of degree 1, 2, 3, 4 and 5 are given in the following tables. The values were calculated using the PASCAL code provided in Gorry.
Notes
References
External links
Advanced Convolution Coefficient Calculator (ACCC) for multidimensional least-squares filters
Savitzky–Golay filter in Fundamentals of Statistics
A wider range of coefficients for a range of data set sizes, orders of fit, and offsets from the centre point
Filter theory
Signal estimation | Savitzky–Golay filter | [
"Engineering"
] | 5,448 | [
"Telecommunications engineering",
"Filter theory"
] |
27,103,948 | https://en.wikipedia.org/wiki/Acute%20beryllium%20poisoning | Acute beryllium poisoning is acute chemical pneumonitis resulting from the toxic effect of beryllium in its elemental form or in various chemical compounds, and is distinct from berylliosis (also called chronic beryllium disease). After occupational safety procedures were put into place following the realization that the metal caused berylliosis around 1950, acute beryllium poisoning became extremely rare.
Signs and symptoms
Generally associated with exposure to beryllium levels at or above 100 μg/m3, it produces severe cough, sore nose and throat, weight loss, labored breathing, anorexia, and increased fatigue.
In addition to beryllium's toxicity when inhaled, when brought into contact with skin at relatively low doses, beryllium can cause local irritation and contact dermatitis, and contact with skin that has been scraped or cut may cause rashes or ulcers. Beryllium dust or powder can irritate the eyes.
Risk factors
Acute beryllium poisoning is an occupational disease. Relevant occupations are those where beryllium is mined, processed or converted into metal alloys, or where machining of metals containing beryllium or recycling of scrap alloys occurs.
Metallographic preparation equipment and laboratory work surfaces must be damp-wiped occasionally to inhibit the buildup of particles. Cutting, grinding, and polishing procedures that generate dust or fumes must be carried out within adequately ventilated enclosures fitted with particulate filters.
Diagnosis
Management
Therapy is supportive and includes removal from further beryllium exposure. For very severe cases mechanical ventilation may be required.
Prognosis
The signs and symptoms of acute beryllium pneumonitis usually resolve over several weeks to months, but may be fatal in 10% of cases, and about 15–20% of cases may progress to chronic beryllium disease.
Acute beryllium poisoning approximately doubles the risk of lung cancer. The mechanism by which beryllium is carcinogenic is unclear, but may be due to ionic beryllium binding to nucleic acids; it is not mutagenic.
Pathophysiology
History
Acute beryllium disease was first reported in Europe in 1933 and in the United States in 1943.
References
External links
Beryllium
Biology and pharmacology of chemical elements
Element toxicology
Toxic effects of metals | Acute beryllium poisoning | [
"Chemistry",
"Biology"
] | 469 | [
"Pharmacology",
"Element toxicology",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"Biochemistry"
] |
27,106,033 | https://en.wikipedia.org/wiki/IBM%20and%20World%20War%20II | Both the United States and Nazi Germany used IBM punched-card technology for some parts of their operations and record keeping.
By country
Germany
In Germany, during World War II, IBM engaged in business practices which have been the source of controversy. Much attention focuses on the role of IBM's German subsidiary, known as Deutsche Hollerith Maschinen Gesellschaft, or Dehomag. Topics in this regard include:
documenting operations by Dehomag which allowed the Nazis to better organize their war effort, in particular the Holocaust and use of Nazi concentration camps;
comparing these efforts to operations by other IBM subsidiaries which aided other nations' war efforts;
and ultimately, assessing the degree to which IBM should be held culpable for atrocities which were made possible by its actions.
examining whether the selection methods they developed and used had the purpose of selecting and killing civilians.
United States
In the United States, IBM was, at the request of the government, the subcontractor of the punched card project for the internment camps of Japanese Americans.
IBM equipment was used for cryptography by the US Army and Navy organisations Arlington Hall and OP-20-G, and by similar Allied organisations using Hollerith punched cards (the Central Bureau and the Far East Combined Bureau).
The company developed and built the Automatic Sequence Controlled Calculator which was used to perform computations for the Manhattan project.
Criticism of IBM's actions during World War II
A 2001 book by Edwin Black, entitled IBM and the Holocaust, reached the conclusion that IBM's commercial activities in Germany during World War II make it morally complicit in the Holocaust. An updated 2002 paperback edition of the book included new evidence of the connection between IBM's United States headquarters, which controlled a Polish subsidiary, and the Nazis. Oliver Burkeman wrote for The Guardian, "The paperback provides the first evidence that the company's dealings with the Nazis were controlled from its New York headquarters throughout the second world war."
In February 2001, an Alien Tort Claims Act claim was filed in U.S. federal court on behalf of concentration camp survivors against IBM. The suit accused IBM of providing the punched card technology that facilitated the Holocaust and of covering up the German IBM subsidiary Dehomag's activities. In April 2001, the lawsuit was dropped after lawyers feared the suit would slow down payments from a German Holocaust fund for Holocaust survivors who had suffered under Nazi persecution. IBM's German division had paid $3 million into the fund, while making it clear it was not admitting liability.
In 2004, the human rights organization Gypsy International Recognition and Compensation Action (GIRCA) filed suit against IBM in Switzerland. The case was dismissed in 2006, as the statute of limitations had expired.
Responses to critics
In an "IBM Statement on Nazi-era Book and Lawsuit", IBM responded in February 2001 that:
Richard Bernstein, writing for The New York Times Book Review in 2001, pointed out that "many American companies did what I.B.M. did. ... What then makes I.B.M. different?" He states that Black's case in his book IBM and the Holocaust "is long and heavily documented, and yet he does not demonstrate that I.B.M. bears some unique or decisive responsibility for the evil that was done." IBM quoted this claim in a March 2002 "Addendum to IBM Statement on Nazi-era Book and Lawsuit," after the publication of Black's revised paperback edition.
See also
IBM and the Holocaust
German re-armament
List of International subsidiaries of IBM
Never Again pledge
References
World War II
Intelligence of World War II
Science and technology during World War II
Companies involved in the Holocaust | IBM and World War II | [
"Technology"
] | 732 | [
"Science and technology during World War II",
"Science and technology by war"
] |
27,108,670 | https://en.wikipedia.org/wiki/Structure-based%20assignment | Structure-Based Assignment (SBA) is a technique to accelerate resonance assignment, a key bottleneck of NMR (nuclear magnetic resonance) structural biology. In SBA, a homologous (similar) protein is used as a template for the target protein; the template provides prior structural information about the target and leads to faster resonance assignment. By analogy, in X-ray crystallography the molecular replacement technique allows solution of the crystallographic phase problem when a homologous structural model is known, thereby facilitating rapid structure determination. SBA algorithms include CAP, an RNA assignment algorithm that performs an exhaustive search over all permutations; MARS, a program for robust automatic backbone assignment; and Nuclear Vector Replacement (NVR), a molecular-replacement-like approach to SBA using resonances and sparse Nuclear Overhauser Effect (NOE) data.
References
Nuclear magnetic resonance | Structure-based assignment | [
"Physics",
"Chemistry"
] | 189 | [
"Nuclear chemistry stubs",
"Nuclear magnetic resonance",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
27,108,706 | https://en.wikipedia.org/wiki/Gene%20therapy%20for%20color%20blindness | Gene therapy for color blindness is an experimental gene therapy of the human retina aiming to grant typical trichromatic color vision to individuals with congenital color blindness by introducing typical alleles for opsin genes. Animal testing for gene therapy began in 2007 with a 2009 breakthrough in squirrel monkeys suggesting an imminent gene therapy in humans. While progress in gene therapy for red-green color blindness has slowed since then, successful human trials are currently underway for achromatopsia, a different form of color vision deficiency. Congenital color vision deficiency affects over 200 million people worldwide, highlighting the significant demand for effective gene therapies targeting this condition.
The retina of the human eye contains photoreceptive cells called cones that allow color vision. A normal trichromat possesses three different types of cones to distinguish different colors within the visible spectrum. The three types of cones are designated L, M, and S cones, each containing an opsin sensitive to a different portion of the visible spectrum. More specifically, the L cone absorbs around 560 nm, the M cone absorbs near 530 nm, and the S cone absorbs near 420 nm. These cones transduce the absorbed light into electrical information to be relayed through other cells along the phototransduction pathway, before reaching the visual cortex in the brain.
The signals from the 3 cones are compared to each other to generate 3 opponent process channels. The channels are perceived as balances between red-green, blue-yellow and black-white.
Color vision deficiency
Color vision deficiency (CVD) is the deviation of an individual's color vision from typical human trichromatic vision. Relevant to gene therapy, CVD can be classified in 2 groups.
Dichromacy
Dichromats have partial color vision. The most common form of dichromacy is red-green colorblindness. Dichromacy usually arises when one of the three opsin genes is deleted or otherwise fully nonfunctional. The effects and diagnosis depend on the missing opsin. Protanopes (very common) have no L-opsin, Deuteranopes (very common) have no M-opsin, and Tritanopes (rare) have no S-opsin. Accordingly, a missing cone means one of the opponent channels is inactive: red-green for protanopes/deuteranopes and blue-yellow for tritanopes. They therefore perceive a much reduced color space. Although dichromacy poses few critical problems in daily life, a lack of access to many occupations (where color vision may be safety-critical) is a large disadvantage.
Anomalous Trichromats are not missing an opsin gene, but rather have a mutated (or chimeric) gene. They have trichromatic vision, but with a smaller color gamut than typical color vision. Regarding gene therapy, they are equivalent to dichromats.
Blue Cone Monochromats are missing both the L- and M-opsin and therefore have no color vision. They are treated as a subset of dichromacy since a combination of gene therapies for protanopia and deuteranopia would be used.
Achromatopsia
Individuals with congenital achromatopsia tend to have typical opsin genes, but have a mutation in another gene downstream in the phototransduction pathway (e.g. GNAT2 protein) that prevents their cones (and therefore photopic vision) from functioning. Achromats rely solely on their scotopic vision. The severity of achromatopsia is much higher than dichromacy, not only in the lack of color vision, but also in co-occurring symptoms photophobia, nystagmus and poor visual acuity.
Retinal gene therapy
Gene therapies aim to inject functional copies of missing or mutated genes into affected individuals by the use of viral vectors. Using a replication-defective recombinant adeno-associated virus (rAAV) as a vector, the cDNA of the affected gene can be delivered to the cones at the back of the retina typically via subretinal injection. Intravitreal injections are much less invasive, but not yet as effective as subretinal injections. Upon gaining the gene, the cone begins to express the new photopigment. The effect is ideally permanent.
Research
The first retinal gene therapy to be approved by the FDA was Voretigene neparvovec in 2017, which treats Leber's congenital amaurosis, a genetic disorder that can lead to blindness. These treatments also use subretinal injections of AAV vector and are therefore foundational to research in gene therapy for color blindness.
Human L-cone photopigment has been introduced into mice. Since the mice possess only S cones and M cones, they are dichromats. M-opsin was replaced with a cDNA of L-opsin in the X chromosome of some mice. By breeding these "knock-in" transgenic mice, researchers generated heterozygous females with both an M cone and an L cone. These mice had an improved range of color vision and gained trichromacy, as tested by electroretinogram and behavioral tests. However, this approach is more difficult to apply in the form of gene therapy.
Recombinant AAV vector was used to introduce the green fluorescent protein (GFP) gene into the cones of gerbils. The genetic insert was designed to only be expressed in S or M cones, and the expression of GFP in vivo was observed over time. Gene expression could stabilize if a sufficiently high dose of the viral vector is given.
In 2009, adult dichromatic squirrel monkeys were converted into trichromats using gene therapy. New world monkeys are polymorphic in their M-opsin, such that females can be trichromatic, but all males are dichromatic. Recombinant AAV vector was used to deliver a human L-opsin gene subretinally. A subset of the monkey's M-cones gained the L-opsin genes and began co-expressing the new and old photopigments. Electroretinograms demonstrated that the cones were expressing the new opsin and after 20 weeks a pseudoisochromatic color vision test demonstrated that the treated monkeys had indeed developed functional trichromatic vision.
Gene therapy was used to restore some of the sight of mice with achromatopsia. The results were positive for 80% of the mice treated.
In 2010, gene therapy for a form of achromatopsia was performed in dogs. Cone function and day vision have been restored for at least 33 months in two young dogs with achromatopsia. However, this therapy was less efficient for older dogs.
In 2022, 4 young human ACHM2 and ACHM3 achromats were shown to have neurological responses (as measured with fMRI) to photopic vision that matched patterns generated by their scotopic vision after gene therapy. This inferred a photopic cone-driven system that was at least marginally functional. The methodology did not investigate novel color vision, though one respondent claimed to more easily interpret traffic lights. This may be considered the first case of a cure for colorblindness in humans.
In July 2023, a study found positive but limited improvements on congenital CNGA3 achromatopsia.
Challenges
While the benefits of gene therapy to achromats typically outweigh the current risks, there are several challenges before large acceptance of gene therapy in dichromats can occur.
Safety
The procedure – namely the subretinal injection – is quite invasive, requiring several incisions and punctures in the eyeball. This poses a significant risk of infection and other complications. Subretinal injections methods promise to become less invasive with their application in other retinal gene therapies. They could also be replaced by intravitreal injections, which are significantly less invasive and can in theory be performed by a family doctor, but are less effective.
The permanence of these therapies is also in question. Mancuso et al. reported that the treated squirrel monkeys maintained 2 years of color vision after the treatment. However, if repeat injections are needed, there is also the concern of the body developing an immune reaction to the virus. If a body develops sensitivity to the viral vector, the success of the therapy could be jeopardized and/or the body may respond unfavorably. An editorial by J. Bennett points to Mancuso et al.'s use of an "unspecified postinjection corticosteroid therapy". Bennett suggests that the monkeys may have experienced inflammation due to the injection. However, the AAV commonly used for this study is non-pathogenic, and the body is less likely to develop an immune reaction.
Neuroplasticity
According to research by David H. Hubel and Torsten Wiesel, suturing shut one eye of monkeys at an early age resulted in an irreversible loss of vision in that eye, even after the suture was removed. The study concluded that the neural circuitry for vision is wired during a "critical period" in childhood, after which the visual circuitry can no longer be rewired to process new sensory input. Contrary to this finding, Mancuso et al.’s success in conferring trichromacy to adult squirrel monkeys suggests that it is possible to adapt the preexisting circuit to allow greater acuity in color vision. The researchers concluded that integrating the stimulus from the new photopigment as an adult was not analogous to vision loss following visual deprivation.
It is yet unknown how the animals that gain a new photopigment are perceiving the new color. While the article by Mancuso et al. states that the monkey has indeed gained trichromacy and gained the ability to discriminate between red and green, they claim no knowledge of how the animal internally perceives the sensation.
Ethics
As a way to introduce new genetic information to change a person's phenotype, a gene therapy for color blindness is open to the same ethical questions and criticisms as gene therapy in general. These include issues around the governance of the therapy, whether treatment should be available only to those who can afford it, and whether the availability of treatment creates a stigma for those with color blindness. Given the large number of people with color blindness, there is also the question of whether color blindness is a disorder. Furthermore, even if gene therapy succeeds in converting incomplete colorblind individuals to trichromats, the degree of satisfaction among the subjects is unknown. It is uncertain how the quality of life will improve (or worsen) after the therapy.
The gene therapy for converting dichromats to trichromats could hypothetically also be used to "upgrade" typical trichromats to tetrachromats by introducing a new opsin gene. This raises the ethical questions associated with designer babies that contain genes not naturally available in the human gene pool. In 2022, the lab of Jay Neitz engineered a novel opsin sensitive to wavelengths between those of the typical human S- (420 nm) and M- (530 nm) opsins, with peak sensitivity at 493 nm. This allowed the opsin to be clearly distinguished in ERGs, but it could also be used to create tetrachromacy.
Cost and accessibility: The high costs of gene therapy development and treatment could limit access to those who need it, especially in low-resource settings.
Technical complexity: Precisely delivering functional opsin genes to the retina without causing immune reactions or unintended side effects remains a significant hurdle.
Long-term efficacy: Ensuring the lasting effectiveness of gene therapy, as retinal cells have limited regeneration potential, is a key concern.
See also
Color vision
Gene therapy
Achromatopsia
Color blindness
Gene therapy of the human retina
Stem cell therapy for macular degeneration
References
2007 introductions
Genetic engineering
Gene therapy
Color blindness | Gene therapy for color blindness | [
"Chemistry",
"Engineering",
"Biology"
] | 2,487 | [
"Biological engineering",
"Gene therapy",
"Genetic engineering",
"Molecular biology"
] |
27,110,987 | https://en.wikipedia.org/wiki/Gene%20therapy%20of%20the%20human%20retina | Retinal gene therapy holds a promise in treating different forms of non-inherited and inherited blindness.
In 2008, three independent research groups reported that patients with the rare genetic retinal disease Leber's congenital amaurosis had been successfully treated using gene therapy with adeno-associated virus (AAV). In all three studies, an AAV vector was used to deliver a functional copy of the RPE65 gene, which restored vision in children suffering from LCA. These results were widely seen as a success in the gene therapy field, and have generated excitement and momentum for AAV-mediated applications in retinal disease.
In retinal gene therapy, the most widely used vectors for ocular gene delivery are based on adeno-associated virus. The great advantage in using adeno-associated virus for the gene therapy is that it poses minimal immune responses and mediates long-term transgene expression in a variety of retinal cell types. For example, tight junctions that form the blood-retina barrier, separate subretinal space from the blood supply, providing protection from microbes and decreasing most immune-mediated damages.
There is still much that is unknown about retinal dystrophies, and more detailed characterization is needed. To address this issue, registries have been created to group and characterize rare diseases. Registries help to localize and measure the phenotypes of these conditions, provide easy follow-up, and serve as a source of information for the scientific community. Registry designs vary from region to region; however, localization and characterization of the phenotype are the gold standard.
Examples of Registries are:
RetMxMap (ARVO 2009) – a Mexican and Latin American registry created in 2009 by Dr. Adda Lízbeth Villanueva Avilés, a clinician-scientist who maps genes for inherited retinal dystrophies in Mexico and other Latin American countries.
Clinical trials
Leber's congenital amaurosis
Preclinical studies in mouse models of Leber's congenital amaurosis (LCA) were published in 1996 and a study in dogs published in 2001. In 2008, three groups reported results of clinical trials using adeno-associated virus for LCA. In these studies, an AAV vector encoding the RPE65 gene was delivered via a "subretinal injection", where a small amount of fluid is injected underneath the retina in a short surgical procedure. Development continued, and in December 2017 the FDA approved Voretigene neparvovec (Luxturna), an adeno-associated virus vector-based gene therapy for children and adults with biallelic RPE65 gene mutations responsible for retinal dystrophy, including Leber congenital amaurosis. People must have viable retinal cells as a prerequisite for the intraocular administration of the drug.
Age-related macular degeneration
Following the successful clinical trials in LCA, researchers have been developing similar treatments using adeno-associated virus for age-related macular degeneration (AMD). To date, efforts have focused on long-term delivery of VEGF inhibitors to treat the wet form of macular degeneration. Whereas wet AMD is currently treated using frequent injections of recombinant protein into the eyeball, the goal of these treatments is long-term disease management following a single administration. One such study is being conducted at the Lions Eye Institute in Australia in collaboration with Avalanche Biotechnologies, a US-based biotechnology start-up. Another early-stage study is sponsored by Genzyme Corporation.
Ixo-vec for Wet AMD
Ixoberogene soroparvovec (Ixo-vec) is an investigational intravitreal gene therapy treatment targeting wet age-related macular degeneration (AMD) that aims to reduce the treatment burden by decreasing the frequency of anti-VEGF injections. Delivered as a single intravitreal injection, Ixo-vec enables sustained release of aflibercept, an anti-VEGF protein that helps control abnormal blood vessel growth and fluid leakage, which are key in AMD progression. Results from the OPTIC and LUNA trials demonstrate Ixo-vec’s effectiveness in significantly reducing the need for regular injections over extended periods. Patients in these trials experienced a reduction in injection frequency by as much as 90%, with many remaining injection-free for extended periods. Visual acuity remained stable, and anatomical outcomes, like reductions in central subfield thickness (CST), were achieved. Mild intraocular inflammation was the most common side effect, with steroid prophylaxis proving effective in managing this issue. This treatment approach, if proven in further studies, could offer AMD patients a more convenient, long-lasting alternative to frequent anti-VEGF injections, enhancing quality of life and treatment adherence.
Choroideremia
In October 2011, the first clinical trial was announced for the treatment of choroideremia. Dr. Robert MacLaren of the University of Oxford, who led the trial, co-developed the treatment with Dr. Miguel Seabra of Imperial College London. This Phase 1/2 trial used subretinal AAV to restore the REP gene in affected patients.
Initial results of the trial, reported in January 2014, were promising, as all six patients had better vision.
Color blindness
Recent research has shown that AAV can successfully restore color vision to treat color blindness in adult monkeys. Although this treatment has not yet entered clinical trials for humans, this work was considered a breakthrough for the ability to target cone photoreceptors.
Mechanism
Physiological components in retinal gene therapy
The vertebrate neural retina composed of several layers and distinct cell types (see anatomy of the human retina). A number of these cell types are implicated in retinal diseases, including retinal ganglion cells, which degenerate in glaucoma, the rod and cone photoreceptors, which are responsive to light and degenerate in retinitis pigmentosa, macular degeneration, and other retinal diseases, and the retinal pigment epithelium (RPE), which supports the photoreceptors and is also implicated in retinitis pigmentosa and macular degeneration.
In retinal gene therapy, AAV is capable of "transducing" these various cell types by entering the cells and expressing the therapeutic DNA sequence. Since the cells of the retina are non-dividing, AAV continues to persist and provide expression of the therapeutic DNA sequence over a long time period that can last several years.
AAV tropism and routes of administration
AAV is capable of transducing multiple cell types within the retina. AAV serotype 2, the most well-studied type of AAV, is commonly administered in one of two routes: intravitreal or subretinal. Using the intravitreal route, AAV is injected in the vitreous humor of the eye. Using the subretinal route, AAV is injected underneath the retina, taking advantage of the potential space between the photoreceptors and RPE layer, in a short surgical procedure. Although this is more invasive than the intravitreal route, the fluid is absorbed by the RPE and the retina flattens in less than 14 hours without complications. Intravitreal AAV targets retinal ganglion cells and a few Muller glial cells. Subretinal AAV efficiently targets photoreceptors and RPE cells.
The reason that different routes of administration lead to different cell types being transfected (i.e., different tropism) is that the inner limiting membrane (ILM) and the various retinal layers act as physical barriers for the delivery of drugs and vectors to the deeper retinal layers. Thus, overall, subretinal AAV is 5–10 times more efficient than delivery using the intravitreal route.
Tropism modification and novel AAV vectors
One important factor in gene delivery is developing altered cell tropisms to narrow or broaden rAAV-mediated gene delivery and to increase its efficiency in tissues. Specific properties like capsid conformation, cell targeting strategies can determine which cell types are affected and also the efficiency of the gene transfer process. Different kinds of modification can be undertaken. For example, modification by chemical, immunological or genetic changes that enables the AAV2 capsid to interact with specific cell surface molecules.
Initial studies with AAV in the retina have utilized AAV serotype 2. Researchers are now beginning to develop new variants of AAV, based on naturally-occurring AAV serotypes and engineered AAV variants.
Several naturally-occurring serotypes of AAV have been isolated that can transduce retinal cells. Following intravitreal injection, only AAV serotypes 2 and 8 were capable of transducing retinal ganglion cells. Occasional Muller cells were transduced by AAV serotypes 2, 8, and 9. Following subretinal injection, serotypes 2, 5, 7, and 8 efficiently transduced photoreceptors, and serotypes 1, 2, 5, 7, 8, and 9 efficiently transduce RPE cells.
One example of an engineered variant has recently been described that efficiently transduces Muller glia following intravitreal injection, and has been used to rescue an animal model of aggressive, autosomal-dominant retinitis pigmentosa.
AAV and immune privilege in the retina
Importantly, the retina is immune-privileged, and thus does not experience a significant inflammation or immune-response when AAV is injected. Immune response to gene therapy vectors is what has caused previous attempts at gene therapy to fail, and is considered a key advantage of gene therapy in the eye. Re-administration has been successful in large animals, indicating that no long-lasting immune response is mounted.
Recent data indicates that the subretinal route may be subject to a greater degree of immune privilege compared to the intravitreal route.
Promoter sequence
Expression in various retinal cell types can be determined by the promoter sequence. In order to restrict expression to a specific cell type, a tissue-specific or cell-type specific promoter can be used.
For example, in rats the promoter of the murine rhodopsin gene was used to drive expression in AAV2; the GFP reporter product was found only in rat photoreceptors, not in any other retinal cell type or in the adjacent RPE, after subretinal injection. On the other hand, the ubiquitously expressed immediate-early cytomegalovirus (CMV) enhancer-promoter drives expression in a wide variety of transfected cell types. Other ubiquitous promoters, such as the CBA promoter, a fusion of the chicken β-actin promoter and the CMV immediate-early enhancer, allow stable GFP reporter expression in both RPE and photoreceptor cells after subretinal injections.
Modulation of expression
Sometimes modulation of transgene expression may be necessary since strong constitutive expression of a therapeutic gene in retinal tissues could be deleterious for long-term retinal function. Different methods have been utilized for expression modulation. One way is to use an exogenously regulatable promoter system in AAV vectors. For example, the tetracycline-inducible expression system uses a silencer/transactivator AAV2 vector and a separate inducible doxycycline-responsive coinjection. When induction occurs with oral doxycycline, this system shows tight regulation of gene expression in both photoreceptor and RPE cells.
Examples and animal models
Targeting RPE
One study using the Royal College of Surgeons (RCS) rat model shows that a recessive mutation in a receptor tyrosine kinase gene, mertk, results in a premature stop codon and impaired phagocytosis function by RPE cells. This mutation causes the accumulation of outer segment debris in the subretinal space, which causes photoreceptor cell death. The model organism with this disease received a subretinal injection of AAV serotype 2 carrying a mouse Mertk cDNA under the control of either the CMV or RPE65 promoters. This treatment was found to prolong photoreceptor cell survival for several months; the number of photoreceptors was 2.5-fold higher in AAV-Mertk-treated eyes compared with controls 9 weeks after injection, and a decreased amount of debris was found in the subretinal space.
The protein RPE65 is used in the retinoid cycle, where the all-trans-retinol within the rod outer segment is isomerized to its 11-cis form and oxidized to 11-cis retinal before it goes back to the photoreceptor and joins with an opsin molecule to form functional rhodopsin. In an animal knockout model (RPE65-/-), gene transfer experiments show that early intraocular delivery of a human RPE65 vector on embryonic day 14 yields efficient transduction of the retinal pigment epithelium in the RPE65-/- knockout mice and rescues visual function. This shows that successful gene therapy can be attributed to early intraocular delivery to the diseased animal.
Targeting of photoreceptors
Juvenile retinoschisis is a disease that affects the nerve tissue in the eye. This disease is an X-linked recessive degenerative disease of the central macula region, and it is caused by a mutation in the RS1 gene encoding the protein retinoschisin. Retinoschisin is produced in the photoreceptor and bipolar cells and is critical in maintaining the synaptic integrity of the retina.
Specifically, the AAV5 vector containing the wild-type human RS1 cDNA driven by a mouse opsin promoter showed long-term retinal functional and structural recovery. The retinal structural reliability also improved greatly after the treatment, characterized by an increase in the outer nuclear layer thickness.
Retinitis pigmentosa
Retinitis pigmentosa is an inherited disease which leads to progressive night blindness and loss of peripheral vision as a result of photoreceptor cell death. Most people who suffer from RP are born with rod cells that are either dead or dysfunctional, so they are effectively blind at nighttime, since these are the cells responsible for vision in low levels of light. What follows often is the death of cone cells, responsible for color vision and acuity, at light levels present during the day. Loss of cones leads to full blindness as early as five years old, but may not onset until many years later. There have been multiple hypotheses about how the lack of rod cells can lead to the death of cone cells. Pinpointing a mechanism for RP is difficult because there are more than 39 genetic loci and genes correlated with this disease. In an effort to find the cause of RP, there have been different gene therapy techniques applied to address each of the hypotheses.
Different types of inheritance can contribute to this disease: autosomal recessive, autosomal dominant, X-linked, etc. The main function of rhodopsin is initiating the phototransduction cascade. The opsin proteins are made in the photoreceptor inner segments, then transported to the outer segment, and eventually phagocytized by the RPE cells. When mutations occur in rhodopsin, the directional protein movement is affected because the mutations can affect protein folding, stability, and intracellular trafficking. One approach is introducing AAV-delivered ribozymes designed to target and destroy a mutant mRNA.
The way this system operates was shown in an animal model that has a mutant rhodopsin gene. The injected AAV-ribozymes were optimized in vitro and used to cleave the mutant mRNA transcript of P23H (where most mutations occur) in vivo.
Another mutation in the rhodopsin structural protein, specifically peripherin 2, a membrane glycoprotein involved in the formation of the photoreceptor outer segment disk, can lead to recessive RP and macular degeneration in humans. In a mouse experiment, AAV2 carrying a wild-type peripherin 2 gene driven by a rhodopsin promoter was delivered to the mice by subretinal injection. The result showed improvement in photoreceptor structure and function, which was detected by ERG (electroretinogram). Peripherin 2 was detected at the outer segment layer of the retina 2 weeks after injection, and therapeutic effects were noted as soon as 3 weeks after injection. A well-defined outer segment containing both peripherin 2 and rhodopsin was present 9 months after injection.
Since apoptosis can be the cause of photoreceptor death in most retinal dystrophies, survival factors and antiapoptotic reagents may be an alternative treatment when the causative mutation is unknown and gene replacement therapy is not possible. Some scientists have experimented with treating this issue by injecting substitute trophic factors into the eye. One group of researchers injected the rod-derived cone viability factor (RdCVF) protein (encoded by the Nxnl1 (Txnl6) gene) into the eyes of rat models of the most commonly occurring dominant RP mutation. This treatment demonstrated success in promoting the survival of cone activity, but the treatment served even more significantly to prevent progression of the disease by increasing the actual function of the cones. Experiments were also carried out to study whether supplying AAV2 vectors with cDNA for glial cell line-derived neurotrophic factor (GDNF) can have an anti-apoptosis effect on the rod cells.
In one animal model, the opsin transgene encodes a truncated protein lacking the last 15 amino acids of the C terminus, which alters rhodopsin transport to the outer segment and leads to retinal degeneration. When the AAV2-CBA-GDNF vector is administered to the subretinal space, photoreceptors stabilize and the number of rod photoreceptors increases, as seen in the improved function on ERG analysis. Successful experiments in animals have also been carried out using ciliary neurotrophic factor (CNTF), and CNTF is currently being used as a treatment in human clinical trials.
AAV-based treatment for retinal neovascular diseases
Ocular neovascularization (NV) is the abnormal formation of new capillaries from already existing blood vessels in the eye, and it is a characteristic of ocular diseases such as diabetic retinopathy (DR), retinopathy of prematurity (ROP) and (wet-form) age-related macular degeneration (AMD). One of the main players in these diseases is VEGF (vascular endothelial growth factor), which is known to induce vessel leakage and is also known to be angiogenic. In normal tissues VEGF stimulates endothelial cell proliferation in a dose-dependent manner, but such activity is lost with other angiogenic factors.
Many angiostatic factors have been shown to counteract the effect of increasing local VEGF. The naturally occurring form of soluble Flt-1 has been shown to reverse neovascularization in rats, mice, and monkeys.
Pigment epithelium-derived factor (PEDF) also acts as an inhibitor of angiogenesis. The secretion of PEDF is noticeably decreased under hypoxic conditions, allowing the endothelial mitogenic activity of VEGF to dominate, which suggests that the loss of PEDF plays a central role in the development of ischemia-driven NV. One clinical finding shows that levels of PEDF in the aqueous humor of humans decrease with increasing age, indicating that the reduction may lead to the development of AMD. In an animal model, an AAV with human PEDF cDNA under the control of the CMV promoter prevented choroidal and retinal NV.
The finding suggests that the AAV-mediated expression of angiostatic factors can be implemented to treat NV. This approach could be useful as an alternative to frequent injections of recombinant protein into the eye. In addition, PEDF and sFlt-1 may be able to diffuse through sclera tissue, allowing for the potential to be relatively independent of the intraocular site of administration.
See also
Retina
Gene therapy
Retinitis pigmentosa
Macular degeneration
Gene therapy for color blindness
References
Medical genetics
Molecular biology
Gene delivery
Gene therapy | Gene therapy of the human retina | [
"Chemistry",
"Engineering",
"Biology"
] | 4,374 | [
"Genetics techniques",
"Genetic engineering",
"Gene therapy",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry",
"Gene delivery"
] |
24,186,720 | https://en.wikipedia.org/wiki/LCF%20notation | In the mathematical field of graph theory, LCF notation or LCF code is a notation devised by Joshua Lederberg, and extended by H. S. M. Coxeter and Robert Frucht, for the representation of cubic graphs that contain a Hamiltonian cycle. The cycle itself includes two out of the three adjacencies for each vertex, and the LCF notation specifies how far along the cycle each vertex's third neighbor is. A single graph may have multiple different representations in LCF notation.
Description
In a Hamiltonian graph, the vertices can be arranged in a cycle, which accounts for two edges per vertex. The third edge from each vertex can then be described by how many positions clockwise (positive) or counter-clockwise (negative) it leads. The basic form of the LCF notation is just the sequence of these numbers of positions, starting from an arbitrarily chosen vertex and written in square brackets.
The numbers between the brackets are interpreted modulo N, where N is the number of vertices. Entries congruent modulo N to 0, 1, or N − 1 do not appear in this sequence of numbers, because they would correspond either to a loop or multiple adjacency, neither of which are permitted in simple graphs.
Often the pattern repeats, and the number of repetitions can be indicated by a superscript in the notation. For example, the Nauru graph, shown on the right, has four repetitions of the same six offsets, and can be represented by the LCF notation [5, −9, 7, −7, 9, −5]4. A single graph may have multiple different LCF notations, depending on the choices of Hamiltonian cycle and starting vertex.
Applications
LCF notation is useful in publishing concise descriptions of Hamiltonian cubic graphs, such as the examples below. In addition, some software packages for manipulating graphs include utilities for creating a graph from its LCF notation.
If a graph is represented by LCF notation, it is straightforward to test whether the graph is bipartite: this is true if and only if all of the offsets in the LCF notation are odd.
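As an illustrative sketch in plain Python (the helper names lcf_graph and bipartite_by_parity are hypothetical, chosen here for clarity), a graph can be expanded from its LCF code and the parity test applied as follows:

```python
def lcf_graph(offsets, repeats):
    # Expand an LCF code into (vertex count, edge list): the Hamiltonian cycle
    # plus each vertex's third edge given by its offset along the cycle.
    n = len(offsets) * repeats
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}     # the Hamiltonian cycle
    for i in range(n):
        k = offsets[i % len(offsets)]
        edges.add(frozenset((i, (i + k) % n)))                  # each vertex's third edge
    return n, sorted(tuple(sorted(e)) for e in edges)

def bipartite_by_parity(offsets):
    # True iff every offset is odd -- the simple LCF test for bipartiteness described above.
    return all(k % 2 == 1 for k in offsets)

# The Nauru graph, [5, -9, 7, -7, 9, -5]^4: 24 vertices, 36 edges, cubic and bipartite.
n, edges = lcf_graph([5, -9, 7, -7, 9, -5], 4)
print(n, len(edges), bipartite_by_parity([5, -9, 7, -7, 9, -5]))    # 24 36 True
```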
Examples
Extended LCF notation
A more complex extended version of LCF notation was provided by Coxeter, Frucht, and Powers in later work. In particular, they introduced an "anti-palindromic" notation: if the second half of the numbers between the square brackets was the reverse of the first half, but with all the signs changed, then it was replaced by a semicolon and a dash. The Nauru graph satisfies this condition with [5, −9, 7, −7, 9, −5]4, and so can be written [5, −9, 7; −]4 in the extended notation.
References
External links
"Cubic Hamiltonian Graphs from LCF Notation" – JavaScript interactive application, built with D3js library
Graph description languages
Hamiltonian paths and cycles | LCF notation | [
"Mathematics"
] | 610 | [
"Graph description languages",
"Mathematical relations",
"Graph theory"
] |
24,187,079 | https://en.wikipedia.org/wiki/C17H13ClN4 | The molecular formula C17H13ClN4 (molar mass: 308.76 g/mol, exact mass: 308.0829 u) may refer to:
Alprazolam
4'-Chlorodeschloroalprazolam
Liarozole
Molecular formulas | C17H13ClN4 | [
"Physics",
"Chemistry"
] | 76 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,188,246 | https://en.wikipedia.org/wiki/Glossary%20of%20gastropod%20terms | The following is a glossary of common English language and scientific terms used in the description of gastropods.
Abapical – away from the apex of a shell toward the base
Acephalous – Headless.
Acinose – Full of small bulgings; resembling the kernel in a nut.
Aculeate – Very sharply pointed, as the teeth on the radula of some snails.
Acuminate – gradually tapering to a point, as the spire of some shells.
Acute – Sharp or pointed, as the spire of a shell, or the lip of a shell.
Adapical – toward the apex of a shell (<--> abapical)
Admedian – Next to the central object, as the lateral teeth on the lingual membrane.
Adpressed – with overlapping whorls or with a suture tightly pressed to the previous whorl (preferred to the term appressed)
Afferent – To bring in; when relating to a vessel or duct, indicating that it brings in its contents.
Amoeboid – Shaped like an amoeba, a small animalcule.
Amorphous – Without distinct form.
Amphibious – Inhabiting both land and water.
Amphidetic – With the ligament on both sides of the umbones.
Anal canal – Tubular of gutter-like opening in the shell of a gastropod through which excrements are expelled (see also: siphonal canal)
Analogue – A likeness between two objects when otherwise they are totally different, as the wing of a bird and the wing of a butterfly.
Anastomosing – Coming together.
Annular – Made up of rings.
Anterior – The front or fore end.
Aquatic – Inhabiting the water.
Arborescent – Branching like a tree.
Arched – Bowed or bent in a curve.
Arcti-spiral – Tightly coiled, as some spiral shells.
Asphyxiating – Causing suspended animation; apparent death.
Assimilation – Act of converting one substance into another, as the changing of food-stuffs into living bodies.
Asymmetrical – Not symmetrical.
Atrium
Atrophied – Wasted away.
Attenuate – Long and slender, as in some shells.
Auditory – Connected with the hearing.
Auricled – Eared, or with ear-like appendages.
Basal – The bottom or lower part.
Biangulate – With two angles.
Bicuspid or bicuspidate – Having two cusps.
Bifid – Having two arms or prongs.
Bifurcated – Having two branches.
Bilateral – With two sides.
Bilobed – With two lobes.
Blood sinus
Bulbous – Swollen.
Calcareous – Composed of carbonate of lime.
Callosity – A hardened and raised bunch, as the callus on the columella of some shells.
Callus – A deposit of shelly matter.
Campanulate – Formed like a bell.
Canaliculate – Resembling a canal, as the deep sutures in some shells.
Cancellated – Formed of cross-bars, as the longitudinal and spiral lines which cross in some shells.
Cardiac pouch – Containing the heart and placed near the umbones of the shell.
Carinate – Keeled. With keel.
Cartilaginous – Like cartilage.
Caudal – Tail-like, or with a tail-like appendage.
Cellular – Made up of cells.
Cerebral – Pertaining to the brain.
Channeled – Grooved or formed like a channel.
Chitinous – Formed of chitin, as the radulas of gastropods.
Ciliary – By means of cilia.
Ciliated – Having cilia.
Cilium (plural cilia) – A lash; used to designate the hairs on the mantle, gills, etc.
Clavate – Club-shaped.
Coarctate – Pressed together, narrowed.
Concave – Excavated, hollowed out.
Conchiolin
Conic – Shaped like a cone.
Connective – A part connecting two other parts, as a muscle connecting two parts of the body, or a nerve connecting two ganglia.
Constricted – Narrowed.
Contractile – Capable of being contracted or drawn in, as the tentacle of a snail.
Convex – Bulged out, as the whorls of some snails.
Convoluted – Rolled together.
Cordate – Heart-shaped.
Corneous – Horn-like, as the opercula of some gastropods.
Corrugated – Roughened by wrinkles.
Costate – Having rib-like ridges.
Crenulate – Wrinkled on the edges.
Crescentic – Like a crescent.
Cylindrical – Like a cylinder.
Decollated – Cut off, as the apex of some shells.
Decussated – With spiral and longitudinal lines intersecting, as the sculpture of some shells.
Deflexed – Bent downward, as the last whorl in some snails.
Dentate – With points or nodules resembling teeth, as the aperture of some snails.
Denticulate – Finely dentate.
Depressed – Flattened, as the spire in some snails.
Dextral – Right-handed.
Digitiform – Finger-like.
Dilated – Expanded in all directions, as the aperture of a shell.
Dimorphism – With two forms or conditions.
Dioecious – Having the sexes in two individuals, one male and one female.
Distal – The farthest part from an object.
Discoidal – Shaped like a flat disk.
Diverticulum – A pouch or hole, as the pouch containing the radula, or that containing the dart in helices.
Dormant – In a state of torpor or sleep.
Dorsal – The back. In gastropods the opposite to the aperture.
Ectocone – The outer cusp on the teeth of the radula.
Edentulous – Without teeth or folds, as the aperture in some gastropods.
Efferent – Carrying out.
Elliptical – With an oval form.
Elongated – Drawn out, as the spire of a shell.
Emarginate – Bluntly notched.
Encysted – Enclosed in a cyst.
Entocone – The inner cusp on the teeth of the radula.
Entire – With even, unbroken edges, as the aperture of some shells.
Epiphallus – A portion of the vas deferens which becomes modified into a tube-like organ and is continued beyond the apex of the penis; it frequently bears a blind duct, or flagellum.
Epithelium – All tissues bounding a free surface.
Equidistant – Equally spaced, as the spiral lines on some snail shells.
Equilibrating – Balancing equally.
Eroded – Worn away, as the epidermis on some shells.
Erosive – Capable of erosion.
Excavated – Hollowed out, as the columella of some snails.
Excurrent – Referring to the siphon which carries out the waste matter of the body.
Exoskeleton – The outer skeleton; all shells are exoskeletons.
Exserted – Brought out.
Expanded – Spread out, as the lip of some shells.
Falcate – Scythe-shaped.
Fasciculus – A little bundle.
Flagellate – Animals with a flagellum or lash.
Flexuous – Formed in a series of curves or turnings, as the columella in some shells.
Flocculent – Clinging together in bunches.
Fluviatile – Living in running streams.
Fusiform – Thick in the middle and tapering at each end.
Gelatinous – Like jelly, as the eggs of some mollusks.
Gibbous – Very much rounded, as the whorls in some snails.
Glandular – Like a gland.
Globose – Rounded.
Granulated – Covered with little grains.
Gravid – A female mollusk with ovaries distended with young.
Gregarious – Living in colonies.
Gular – Relating to the windpipe or palate. In mollusks, referring to the innermost part of the aperture.
Habitat – Locality of a species.
Haemolymph – Molluscan blood.
Heliciform – In form like Helix.
Hemispherical – Half a sphere.
Herbivorous – Subsisting upon vegetable food.
Hermaphrodite – Having the sexes united in the same individual.
Hibernation – The act of hibernating or going to sleep for the winter months.
Hirsute – Covered with hairs, as some snails.
Hispid – Same as hirsute.
Homologous – Having the same position or value, as the wing of a bird and of a bat.
Hyaline – Glassy.
Imperforate – Not perforated or umbilicated.
Impressed – Marked by a furrow, as the impressed spiral lines on some gastropod shells.
Incrassate – Thickened.
Incurved – Leaned or bent over, as the apex in some snails.
Indented – Notched.
Inflected – Turned in, as the teeth of some snails.
Inhalent – Same as incurrent.
Inoperculate – Without an operculum.
Intercostate – Between the ribs or ridges.
Invaginate – One part bending into another, as the tentacles of some land snails.
Invertible – Capable of being inverted, or drawn in, as the eye-peduncles of a land snail.
Juvenile
Keeled – With a more or less sharp projection at the periphery.
Lamellated – Covered with scales.
Lamelliform – Having the form of scales.
Laminated – Consisting of plates or scales laid over each other.
Lanceolate – Gradually tapering to a point.
Lateral – Pertaining to the side.
Latticed – (See decussated.)
Lobulate – Composed of lobes.
Longitudinal – The length of a shell.
Lunate – Shaped like a half moon, as the aperture in some shells.
Malleated – Appearing as though hammered.
Manducatory – Relating to the apparatus for masticating food. In snails, the jaws and radula.
Median – Middle, as the middle tooth on the radula.
Mesocone – The middle cusp on the teeth of the radula.
Monoecious – Having the sexes united in the same individual.
Multifid – Made up of many lobes or projections, as the cusps on some radulae.
Multispiral – Consisting of many whorls, as some fresh-water snails.
Nacreous – Pearly or iridescent.
Nepionic – The second stage of the embryonic shell, as the glochidium.
Notched – Nicked or indented, as the anterior canal of some gastropods.
Nucleus – The first part or beginning, as the apex in a gastropod shell.
Nucleated – Having a nucleus.
Obconic – In the form of a reversed cone.
Oblique – Slanting, as the aperture of some shells when not parallel to the longitudinal axis.
Obovate – Reversed ovate, as some shells when the diameter is greater near the upper than at the lower part.
Obtuse – Dull or blunt, as the apex of some gastropods.
Olfactory – Pertaining to the smell.
Olivaceous – Colored like an olive.
Organism – An organized being, or living object made up of organs.
Ovate – Egg-shaped.
Ovately conic – Shaped like an egg, but with a somewhat conic apex, as some gastropods.
Oviparous – Bringing forth young in an egg which is hatched after it is laid.
Ovisac – A pouch in which the eggs or embryos are contained.
Ovoviviparous – In this case the young are formed in an egg but are hatched inside the parent.
Pallial lung
Papillose – Covered with many little bulgings or pimples.
Parallel – Having the same relative distance in all parts, as when the spiral lines in univalve shells are the same distance apart all the way around.
Patelliform – Shaped like a flattened-out cone, as an Ancylus.
Patulous – Open and spreading, as the aperture in some gastropods.
Paucispiral – Only slightly spiral, as some opercula.
Pectinate – Like the teeth of a comb, as the gills of some mollusks.
Pedal – Pertaining to the foot.
Pedunculated – Supported on a stem or stalk, as the eyes of land snails.
Pellucid – Transparent or clear, as the shells of some snails; e. g. Vitrea.
Penultimate – The whorl before the last in gastropod shells.
Pericardium – The chamber containing the heart.
Periostracum – The epidermal covering of some shells.
Pervious – Very narrowly open, as the umbilicus in some snails.
Phytophagous – Vegetable-feeding.
Pilose – Covered with hairs.
Pinnate – Branched like a feather, as the gills of some mollusks.
Plaited – Folded.
Planispiral shell
Planorboid – Flat and orb-like, as some snails.
Pleurae – Relating to the side of a body.
Plexus – A network of vessels, as the form of the lungs in snails.
Plicated – Made up of folds.
Plumose – Resembling plumes.
Polygonal – Having many angles.
Porcellanous – Like porcelain.
Prismatic – Like a prism.
Prodissoconch – The embryonic shell.
Protoconch – The embryonic shell.
Protract – To push out.
Protractor pedis – The foot protractor muscle.
Protrusile – Capable of being pushed out.
Proximal – The nearest end of an object.
Pulsation – A throb, as the throbbing of the heart.
Pupiform – Like a pupa; one of the stages in the development of an insect.
Pustulate – Covered with pustules or little pimples.
Pustulose – Same as pustulate.
Pyramidal – Having the form of a pyramid.
Pyriform – Shaped like a pear.
Reflected – Bent backward, as the lip in some snails.
Reflexed – Same as Reflected.
Renal – Relating to the kidneys.
Reticulated – Resembling a network, as when the longitudinal and spiral lines cross in a snail.
Retractile – Capable of being drawn in, as the eye peduncles in land snails.
Retractor pedis – Foot retractor muscle.
Revolving lines – Spiral lines on a snail shell which run parallel with the sutures.
Rhombic – Having four sides, the angles being oblique.
Rhomboid – Four-sided, but two of the sides being longer than the others.
Rimate – Provided with a very small hole or crack, as some snails in which the umbilicus is very narrowly open.
Roundly lunate – Rounder than lunate (which see).
Rostriform – In the form of a rostrum.
Rudimentary – Not fully formed; imperfect.
Rugose – Rough or wrinkled, as parts of some shells.
Sacculated – Somewhat like a sac, or composed of sac-like parts.
Scalar – Resembling a ladder.
Secreted – Produced or deposited from the blood or glands, as the shell material in mollusks.
Semicircular – Half round or circular, as the aperture in some snails.
Semidentate – Half toothed, as the parietal wall in some land snails.
Semielliptic – Half elliptical.
Semiglobose – Half, or not quite globose.
Semilunate – Half lunate.
Semioval – Half, or not quite oval.
Serrated – Notched, like the teeth on a saw.
Serriform – In the form of series.
Sessile – Attached without a stem, as the eyes in some water snails.
Shouldered – Ridged, as the whorls in some snails.
Sigmoid – Shaped like the letter S.
Siliceous – Made up of silex.
Sinistral – Having the aperture on the left side.
Sinusigerid – Having a diagonally cancellate structure.
Sinuous – Curved in and out, as the edge of some bivalves and the lips of some snails.
Siphonal canal – A semi-tubular extension of the aperture of the shell through which the siphon is extended when the animal is active.
Spatulate – In the form of a spatula, a flat-bladed instrument used by druggists in pulverizing drugs.
Spherical – Shaped like a sphere.
Spiral – Wound about a central cavity, as the whorls of snails.
Striated – Marked by lines or striae.
Subangulated – Moderately angled.
Subcarinated – Moderately carinated.
Subcentral – Not quite in the center.
Subcircular – Not quite circular.
Subconical – Moderately conical.
Subcylindrical – Moderately cylindrical.
Subequal – Not quite equal.
Subexcavated – A little excavated.
Subfusiform – Moderately fusiform.
Subglobose – Moderately globose.
Subglobular – Moderately globular.
Subhyaline – Moderately glassy.
Subimperforate – Not much perforated.
Suboblong – Moderately oblong.
Subobsolete – Almost disappearing.
Subovate – Nearly ovate.
Subparallel – Almost parallel.
Subperforated – Almost perforated.
Subquadrate – Almost four-sided.
Subreflected – Moderately turned back.
Subrotund – Moderately round.
Subspiral – Moderately spiral.
Subtriangulate – Moderately or almost triangular.
Subtrigonal – Moderately three-angled.
Subtruncate – Moderately cut off.
Subumbilicated – Moderately umbilicated.
Sulcated – Grooved.
Sulcus – A longitudinal furrow.
Superanal – Above the anus.
Supra-peripheral – Above the periphery.
Symmetrical – Alike on both sides or uniform in all parts.
Terrestrial – Living on the land.
Testaceous – Composed of shelly matter.
Torsion – A twisting around.
Tortuous – Twisted or winding.
Torpid – Half unconscious or asleep, as a snail during hibernation.
Translucent – Not quite transparent; light is seen through the thin edges of the object.
Transparent – Objects may be seen through the substance.
Transverse – Referring to the form of a shell when it is wider than high.
Tricuspidate – Having three cusps.
Trifid – Having three branches.
Trigonal – Having three angles.
Trilobate – Having three lobes.
Tripartite – Divided into three parts, as the foot of some snails.
Truncate – Having the end cut off squarely.
Tuberculate – Covered with tubercles or rounded knobs.
Turbinate – Having the form of a top.
Turriculated – Having the form of a tower.
Turreted – Having the form of a tower.
Umbilicated – Having an opening in the base of the shell.
Undulated – Having undulations or waves.
Univalve – Having the shell composed of a single piece, as a snail.
Varicose – Swollen or enlarged.
Vascular – Containing or made up of blood vessels.
Vermiform – Formed like a worm.
Ventral – The lower border or side.
Ventricose – Swollen or inflated on the ventral side.
Vibratile – Moving from side to side.
Vitreous – Resembling glass, as some snails.
See also
outline of gastropods
Glossary of biology
Glossary of scientific names
Glossary of scientific naming
References
This article includes public domain text from Baker, The Mollusca of the Chicago area, 1898–1902.
Gastropods
Gastropod terms
Wikipedia glossaries using unordered lists | Glossary of gastropod terms | [
"Biology"
] | 4,079 | [
"Glossaries of zoology",
"Glossaries of biology"
] |
24,188,897 | https://en.wikipedia.org/wiki/Rouging | Rouging is a form of corrosion found in stainless steel. It can result from iron contamination of the stainless steel surface, for example when non-stainless steel is welded on for support columns or other temporary purposes and later welded off, leaving a low-chromium area.
There are three classes of rouging: Class I, Class II, and Class III.
Class I – Rouge that originates elsewhere in the system and is deposited onto the stainless steel surface; the Cr/Fe ratio of the metal surface beneath such deposits usually remains unaltered.
Class II – Iron particles originating in situ on unpassivated or improperly passivated stainless steel surfaces; their formation alters the Cr/Fe ratio of the metal surface.
Class III – Iron oxide (or scale) which forms on surfaces in high temperature steam systems. The Cr/Fe ratio of the protective film is usually altered.
References
Corrosion | Rouging | [
"Chemistry",
"Materials_science",
"Engineering"
] | 171 | [
"Mechanical engineering stubs",
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Mechanical engineering",
"Electrochemistry stubs",
"Materials degradation",
"Physical chemistry stubs",
"Chemical process stubs"
] |
18,934,464 | https://en.wikipedia.org/wiki/Embrace%2C%20extend%2C%20and%20extinguish | "Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy for entering product categories involving widely used open standards, extending those standards with proprietary capabilities, and using the differences to strongly disadvantage its competitors.
Origin
The strategy and phrase "embrace and extend" were first described outside Microsoft in a 1996 article in The New York Times titled "Tomorrow, the World Wide Web! Microsoft, the PC King, Wants to Reign Over the Internet", in which writer John Markoff said, "Rather than merely embrace and extend the Internet, the company's critics now fear, Microsoft intends to engulf it." The phrase "embrace and extend" also appears in a facetious motivational song by an anonymous Microsoft employee, and in an interview of Steve Ballmer by The New York Times.
A variant of the phrase, "embrace, extend then innovate", is used in J Allard's 1994 memo "Windows: The Next Killer Application on the Internet" to Paul Maritz and other executives at Microsoft. The memo starts with a background on the Internet in general, and then proposes a strategy on how to turn Windows into the next "killer app" for the Internet:
The addition "extinguish" was introduced in the United States v. Microsoft Corp. antitrust trial when then vice president of Intel, Steven McGeady, used the phrase to explain Maritz's statement in a 1995 meeting with Intel that described Microsoft's strategy to "kill HTML by extending it".
Strategy
The strategy's three phases are:
Embrace: Development of software substantially compatible with an Open Standard.
Extend: Addition of features not supported by the Open Standard, creating interoperability problems.
Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors who are unable to support the new extensions.
Microsoft claims the original strategy is not anti-competitive, but rather an exercise to implement features it believes customers want.
Examples by Microsoft
Browser incompatibilities:
The plaintiffs in an antitrust case claimed Microsoft had added support for ActiveX controls in the Internet Explorer Web browser to break compatibility with Netscape Navigator, which used components based on Java and Netscape's own plugin system.
On CSS, data:, etc.: A decade after the original Netscape-related antitrust suit, the Web browser company Opera Software filed an antitrust complaint against Microsoft with the European Union, saying it "calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious 'Embrace, Extend and Extinguish' strategy".
Office documents: In a memo to the Office product group in 1998, Bill Gates stated: "One thing we have got to change in our strategy – allowing Office documents to be rendered very well by other people's browsers is one of the most destructive things we could do to the company. We have to stop putting any effort into this and make sure that Office documents very well depends on PROPRIETARY IE capabilities. Anything else is suicide for our platform. This is a case where Office has to avoid doing something to destroy Windows."
Breaking Java's portability: The antitrust case's plaintiffs also accused Microsoft of using an "embrace and extend" strategy with regard to the Java platform, which was designed explicitly with the goal of developing programs that could run on any operating system, be it Windows, Mac, or Linux. They claimed that, by omitting the Java Native Interface (JNI) from its implementation and providing J/Direct for a similar purpose, Microsoft deliberately tied Windows Java programs to its platform, making them unusable on Linux and Mac systems. According to an internal communication, Microsoft sought to downplay Java's cross-platform capability and make it "just the latest, best way to write Windows applications". Microsoft paid Sun Microsystems US$20 million in January 2001 to settle the resulting legal implications of their breach of contract.
More Java issues: Sun sued Microsoft over Java again in 2002 and Microsoft agreed to settle out of court for US$2 billion.
Instant messaging: In 2001, CNET described an instance concerning Microsoft's instant messaging program. "Embrace" AOL's IM protocol, the de facto standard of the 1990s and early 2000s. "Extend" the standard with proprietary Microsoft addons which added new features, but broke compatibility with AOL's software. Gain dominance, since Microsoft had 95% OS share and their MSN Messenger was provided for free. Finally, "extinguish" and lock out AOL's IM software, since AOL was unable to use the modified MS-patented protocol.
Email protocols: Microsoft supported the POP3, IMAP, and SMTP email protocols in their Microsoft Outlook email client. At the same time, they developed their own email protocol, MAPI, which has since been documented but is largely unused by third parties. Microsoft has announced that it would end support for the less secure basic authentication (which lacks support for multi-factor authentication) for access to Exchange Online APIs for Office 365 customers; this disables most use of IMAP or POP3 and requires significant upgrades in applications to the more secure OAuth2-based authentication in order to continue using those protocols. Some customers have responded by simply shutting off the older protocols.
Web browsers
Netscape
During the browser wars, Netscape implemented the "font" tag, among other HTML extensions, without seeking review from a standards body. With the rise of Internet Explorer, the two companies became locked in a dead heat to out-implement each other with non-standards-compliant features. In 2004, to prevent a repeat of the "browser wars", and the resulting morass of conflicting standards, the browser vendors Apple Inc. (Safari), Mozilla Foundation (Firefox), and Opera Software (Opera browser) formed the Web Hypertext Application Technology Working Group (WHATWG) to create open standards to complement those of the World Wide Web Consortium. Microsoft refused to join, citing the group's lack of a patent policy as the reason.
Google Chrome
With its dominance in the web browser market, Google has been accused of using Google Chrome and Blink development to push new web standards that are proposed in-house by Google and subsequently implemented by its services first and foremost. These have led to performance disadvantages and compatibility issues with competing browsers, and in some cases, developers intentionally refusing to test their websites on any other browser than Chrome. Tom Warren of The Verge went as far as comparing Chrome to Internet Explorer 6, the default browser of Windows XP that was often targeted by competitors due to its similar ubiquity in the early 2000s.
See also
32-bit vs 64-bit
AARD code
Criticism of Microsoft
Halloween documents
Microsoft and open source
Network effect
Path dependence
Vendor lock-in
Enshittification
Planned obsolescence
References
External links
Report on Microsoft documents relating to Office and IE Embrace, extend and extinguish
Microsoft criticisms and controversies
Interoperability
Marketing techniques
Spheres of influence
Standards | Embrace, extend, and extinguish | [
"Engineering"
] | 1,492 | [
"Telecommunications engineering",
"Interoperability"
] |
18,934,536 | https://en.wikipedia.org/wiki/ADSL | Asymmetric digital subscriber line (ADSL) is a type of digital subscriber line (DSL) technology, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. ADSL differs from the less common symmetric digital subscriber line (SDSL). In ADSL, bandwidth and bit rate are said to be asymmetric, meaning greater toward the customer premises (downstream) than the reverse (upstream). Providers usually market ADSL as an Internet access service primarily for downloading content from the Internet, but not for serving content accessed by others.
Overview
ADSL works by using spectrum above the band used by voice telephone calls. With a DSL filter, often called splitter, the frequency bands are isolated, permitting a single telephone line to be used for both ADSL service and telephone calls at the same time. ADSL is generally only installed for short distances from the telephone exchange (the last mile), typically less than about 4 km, but longer loops have been served where the originally laid wire gauge allows for further distribution.
At the telephone exchange, the line generally terminates at a digital subscriber line access multiplexer (DSLAM) where another frequency splitter separates the voice band signal for the conventional phone network. Data carried by the ADSL are typically routed over the telephone company's data network and eventually reach a conventional Internet Protocol network.
There are both technical and marketing reasons why ADSL is in many places the most common type offered to home users. On the technical side, there is likely to be more crosstalk from other circuits at the DSLAM end (where the wires from many local loops are close to each other) than at the customer premises. Thus the upload signal is weakest at the noisiest part of the local loop, while the download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense to have the DSLAM transmit at a higher bit rate than does the modem on the customer end. Since the typical home user in fact does prefer a higher download speed, the telephone companies chose to make a virtue out of necessity, hence ADSL.
The marketing reasons for an asymmetric connection are that, firstly, most users of internet traffic will require less data to be uploaded than downloaded. For example, in normal web browsing, a user will visit a number of web sites and will need to download the data that comprises the web pages from the site, images, text, sound files etc. but they will only upload a small amount of data, as the only uploaded data is that used for the purpose of verifying the receipt of the downloaded data (in very common TCP connections) or any data inputted by the user into forms etc. This provides a justification for internet service providers to offer a more expensive service aimed at commercial users who host websites, and who therefore need a service which allows for as much data to be uploaded as downloaded. File sharing applications are an obvious exception to this situation. Secondly internet service providers, seeking to avoid overloading of their backbone connections, have traditionally tried to limit uses such as file sharing which generate a lot of uploads.
Operation
Currently, most ADSL communication is full-duplex. Full-duplex ADSL communication is usually achieved on a wire pair by either frequency-division duplex (FDD), echo-cancelling duplex (ECD), or time-division duplex (TDD). FDD uses two separate frequency bands, referred to as the upstream and downstream bands. The upstream band is used for communication from the end user to the telephone central office. The downstream band is used for communicating from the central office to the end user.
With commonly deployed ADSL over POTS (Annex A), the band from 26.075 kHz to 137.825 kHz is used for upstream communication, while 138–1104 kHz is used for downstream communication. Under the usual discrete multitone modulation (DMT) scheme, each of these is further divided into smaller frequency channels of 4.3125 kHz. These frequency channels are sometimes termed bins. During initial training to optimize transmission quality and speed, the ADSL modem tests each of the bins to determine the signal-to-noise ratio at each bin's frequency. Distance from the telephone exchange, cable characteristics, interference from AM radio stations, and local interference and electrical noise at the modem's location can adversely affect the signal-to-noise ratio at particular frequencies. Bins for frequencies exhibiting a reduced signal-to-noise ratio will be used at a lower throughput rate or not at all; this reduces the maximum link capacity but allows the modem to maintain an adequate connection. The DSL modem will make a plan on how to exploit each of the bins, sometimes termed "bits per bin" allocation. Those bins that have a good signal-to-noise ratio (SNR) will be chosen to transmit signals chosen from a greater number of possible encoded values (this range of possibilities equating to more bits of data sent) in each main clock cycle. The number of possibilities must not be so large that the receiver might incorrectly decode which one was intended in the presence of noise. Noisy bins may only be required to carry as few as two bits, a choice from only one of four possible patterns, or only one bit per bin in the case of ADSL2+, and very noisy bins are not used at all. If the pattern of noise versus frequencies heard in the bins changes, the DSL modem can alter the bits-per-bin allocations, in a process called "bitswap", where bins that have become noisier are only required to carry fewer bits and other channels will be chosen to be given a higher burden.
The data transfer capacity the DSL modem therefore reports is determined by the total of the bits-per-bin allocations of all the bins combined. Higher signal-to-noise ratios and more bins being in use gives a higher total link capacity, while lower signal-to-noise ratios or fewer bins being used gives a low link capacity. The total maximum capacity derived from summing the bits-per-bin is reported by DSL modems and is sometimes termed sync rate. This will always be rather misleading: the true maximum link capacity for user data transfer rate will be significantly lower because extra data are transmitted that are termed protocol overhead, reduced figures for PPPoA connections of around 84–87 percent, at most, being common. In addition, some ISPs will have traffic policies that limit maximum transfer rates further in the networks beyond the exchange, and traffic congestion on the Internet, heavy loading on servers and slowness or inefficiency in customers' computers may all contribute to reductions below the maximum attainable. When a wireless access point is used, low or unstable wireless signal quality can also cause reduction or fluctuation of actual speed.
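The link between per-bin signal-to-noise ratio, the bits-per-bin allocation, and the reported sync rate can be illustrated with a simplified bit-loading calculation. The sketch below is illustrative only: the SNR gap, margin, bit cap, and per-bin SNR profile are assumed values, not the allocation algorithm of any particular modem, and the 224-bin downstream count simply follows from the band and bin spacing quoted above.

```python
import math

BIN_SPACING_HZ = 4312.5      # DMT sub-carrier spacing used by ADSL
SYMBOL_RATE = 4000           # DMT symbols per second (4 kHz)
MAX_BITS_PER_BIN = 15        # practical cap on constellation size
SNR_GAP_DB = 9.8             # assumed implementation gap (illustrative)

def bits_per_bin(snr_db, margin_db=6.0):
    """Simplified bit allocation for one bin given its measured SNR."""
    effective = snr_db - SNR_GAP_DB - margin_db
    if effective <= 0:
        return 0
    bits = int(math.log2(1 + 10 ** (effective / 10)))
    return min(bits, MAX_BITS_PER_BIN)

def sync_rate(snr_profile_db, margin_db=6.0):
    """Sum of bits per bin times the symbol rate gives the raw sync rate (bit/s)."""
    total_bits = sum(bits_per_bin(s, margin_db) for s in snr_profile_db)
    return total_bits * SYMBOL_RATE

# Hypothetical downstream profile: SNR falls off with frequency on a long loop.
# (1104 kHz - 138 kHz) / 4.3125 kHz = 224 downstream bins.
profile = [45 - 0.15 * i for i in range(224)]

raw = sync_rate(profile)
print(f"raw sync rate  ~ {raw/1e6:.2f} Mbit/s")
print(f"usable payload ~ {raw*0.85/1e6:.2f} Mbit/s after ~15% protocol overhead")
# A larger SNR margin trades speed for stability:
print(f"with 12 dB margin ~ {sync_rate(profile, 12.0)/1e6:.2f} Mbit/s")
```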
In fixed-rate mode, the sync rate is predefined by the operator and the DSL modem chooses a bits-per-bin allocation that yields an approximately equal error rate in each bin. In variable-rate mode, the bits-per-bin are chosen to maximize the sync rate, subject to a tolerable error risk. These choices can either be conservative, where the modem chooses to allocate fewer bits per bin than it possibly could, a choice that makes for a slower connection, or less conservative in which more bits per bin are chosen in which case there is a greater risk case of error should future signal-to-noise ratios deteriorate to the point where the bits-per-bin allocations chosen are too high to cope with the greater noise present. This conservatism, involving a choice of using fewer bits per bin as a safeguard against future noise increases, is reported as the signal-to-noise ratio margin or SNR margin.
The telephone exchange can indicate a suggested SNR margin to the customer's DSL modem when it initially connects, and the modem may make its bits-per-bin allocation plan accordingly. A high SNR margin will mean a reduced maximum throughput, but greater reliability and stability of the connection. A low SNR margin will mean high speeds, provided the noise level does not increase too much; otherwise, the connection will have to be dropped and renegotiated (resynced). ADSL2+ can better accommodate such circumstances, offering a feature termed seamless rate adaptation (SRA), which can accommodate changes in total link capacity with less disruption to communications.
Vendors may support the usage of higher frequencies as a proprietary extension to the standard. However, this requires matching vendor-supplied equipment on both ends of the line, and will likely result in crosstalk problems that affect other lines in the same bundle.
There is a direct relationship between the number of channels available and the throughput capacity of the ADSL connection. The exact data capacity per channel depends on the modulation method used.
ADSL initially existed in two versions (similar to VDSL), namely CAP and DMT. CAP was the de facto standard for ADSL deployments up until 1996, deployed in 90 percent of ADSL installations at the time. However, DMT was chosen for the first ITU-T ADSL standards, G.992.1 and G.992.2 (also called G.dmt and G.lite respectively). Therefore, all modern installations of ADSL are based on the DMT modulation scheme.
Interleaving and fastpath
ISPs (though rarely end users, except in Australia, where it is the default) have the option to use interleaving of packets to counter the effects of burst noise on the telephone line. An interleaved line has a depth, usually 8 to 64, which describes how many Reed–Solomon codewords are accumulated before they are sent. As they can all be sent together, their forward error correction codes can be made more resilient. Interleaving adds latency as all the packets have to first be gathered (or replaced by empty packets) and they, of course, all take time to transmit. 8 frame interleaving adds 5 ms round-trip time, while 64 deep interleaving adds 25 ms. Other possible depths are 16 and 32.
"Fastpath" connections have an interleaving depth of 1, that is one packet is sent at a time. This has a low latency, usually around 10 ms (interleaving adds to it, this is not greater than interleaved) but it is extremely prone to errors, as any burst of noise can take out the entire packet and so require it all to be retransmitted. Such a burst on a large interleaved packet only blanks part of the packet, it can be recovered from error correction information in the rest of the packet. A "fastpath" connection will result in extremely high latency on a poor line, as each packet will take many retries.
Installation problems
ADSL deployment on an existing plain old telephone service (POTS) telephone line presents some problems because the DSL is within a frequency band that might interact unfavorably with existing equipment connected to the line. It is therefore necessary to install appropriate frequency filters at the customer's premises to avoid interference between the DSL, voice services, and any other connections to the line (for example intruder alarms). This is desirable for the voice service and essential for a reliable ADSL connection.
In the early days of DSL, installation required a technician to visit the premises. A splitter or microfilter was installed near the demarcation point, from which a dedicated data line was installed. This way, the DSL signal is separated as close as possible to the central office and is not attenuated inside the customer's premises. However, this procedure was costly, and also caused problems with customers complaining about having to wait for the technician to perform the installation. So, many DSL providers started offering a "self-install" option, in which the provider provided equipment and instructions to the customer. Instead of separating the DSL signal at the demarcation point, the DSL signal is filtered at each telephone outlet by use of a low-pass filter for voice and a high-pass filter for data, usually enclosed in what is known as a microfilter. This microfilter can be plugged by an end user into any telephone jack: it does not require any rewiring at the customer's premises.
Commonly, microfilters are only low-pass filters, so beyond them only low frequencies (voice signals) can pass. In the data section, a microfilter is not used because digital devices that are intended to extract data from the DSL signal will, themselves, filter out low frequencies. Voice telephone devices will pick up the entire spectrum so high frequencies, including the ADSL signal, will be "heard" as noise in telephone terminals, and will affect and often degrade the service in fax, dataphones and modems. From the point of view of DSL devices, any acceptance of their signal by POTS devices mean that there is a degradation of the DSL signal to the devices, and this is the central reason why these filters are required.
A side effect of the move to the self-install model is that the DSL signal can be degraded, especially if more than 5 voiceband (that is, POTS telephone-like) devices are connected to the line. Once a line has had DSL enabled, the DSL signal is present on all telephone wiring in the building, causing attenuation and echo. A way to circumvent this is to go back to the original model, and install one filter upstream from all telephone jacks in the building, except for the jack to which the DSL modem will be connected. Since this requires wiring changes by the customer, and may not work on some household telephone wiring, it is rarely done. It is usually much easier to install filters at each telephone jack that is in use.
DSL signals may be degraded by older telephone lines, surge protectors, poorly designed microfilters, repetitive electrical impulse noise, and by long telephone extension cords. Telephone extension cords are typically made with small-gauge, multi-strand copper conductors which do not maintain a noise-reducing pair twist. Such cable is more susceptible to electromagnetic interference and has more attenuation than solid twisted-pair copper wires typically wired to telephone jacks. These effects are especially significant where the customer's phone line is more than 4 km from the DSLAM in the telephone exchange, which causes the signal levels to be lower relative to any local noise and attenuation. This will have the effect of reducing speeds or causing connection failures. FTTx usually has no surge problems, for example, caused by lightning.
Transport protocols
ADSL defines three "Transmission protocol-specific transmission convergence (TPS-TC)" layers:
Synchronous Transport Module (STM), which allows the transmission of frames of the Synchronous Digital Hierarchy (SDH)
Asynchronous Transfer Mode (ATM)
Packet Transfer Mode (starting with ADSL2, see below)
In home installation, the prevalent transport protocol is ATM. On top of ATM, there are multiple possibilities of additional layers of protocols (two of them are abbreviated in a simplified manner as "PPPoA" or "PPPoE"), with TCP/IP providing the connection to the Internet.
ADSL standards
See also
ADSL loop extender can be used to expand the reach and rate of ADSL services.
Attenuation distortion
Digital subscriber line access multiplexer
Flat rate
List of interface bit rates
Rate-Adaptive Digital Subscriber Line (RADSL)
Single-pair high-speed digital subscriber line (SHDSL)
Symmetric digital subscriber line (SDSL)
VDSL (Very high-speed digital subscriber line)
References
External links
Digital
Digital subscriber line
ITU-T recommendations
Internet terminology
Telecommunications-related introductions in 1998
Telecommunication protocols
sv:Digital Subscriber Line#ADSL | ADSL | [
"Physics",
"Technology"
] | 3,334 | [
"Computing terminology",
"Internet terminology",
"Symmetry",
"Asymmetry"
] |
18,935,256 | https://en.wikipedia.org/wiki/Electrothermal%20instability |
The electrothermal instability (also known as ionization instability, non-equilibrium instability or Velikhov instability in the literature) is a magnetohydrodynamic (MHD) instability appearing in magnetized non-thermal plasmas used in MHD converters. It was first theoretically discovered in 1962 and experimentally measured in an MHD generator in 1963 by Evgeny Velikhov.
Physical explanation and characteristics
This instability is a turbulence of the electron gas in a non-equilibrium plasma (i.e. where the electron temperature Te is much higher than the overall gas temperature Tg). It arises when a sufficiently powerful magnetic field is applied in such a plasma, reaching a critical Hall parameter βcr.
Locally, the number of electrons and their temperature (electron density and thermal velocity) fluctuate, as do the electric current and the electric field.
The Velikhov instability is a kind of ionization wave system, almost frozen in the two temperature gas. The reader can observe such a stationary wave phenomenon simply by applying a transverse magnetic field with a permanent magnet to the low-pressure control gauge (Geissler tube) provided on vacuum pumps. In this little gas-discharge bulb a high voltage electric potential is applied between two electrodes which generates an electric glow discharge (pinkish for air) when the pressure has become low enough. When the transverse magnetic field is applied on the bulb, some oblique grooves appear in the plasma, typical of the electrothermal instability.
The electrothermal instability occurs extremely quickly, in a few microseconds. The plasma becomes non-homogeneous, transformed into alternating layers of high and low free-electron density. Visually the plasma appears stratified, as a "pile of plates".
Hall effect in plasmas
The Hall effect in ionized gases has nothing to do with the Hall effect in solids (where the Hall parameter is always much less than unity). In a plasma, the Hall parameter can take any value.
The Hall parameter β in a plasma is the ratio between the electron gyrofrequency Ωe and the electron-heavy particles collision frequency ν:
β = Ωe / ν = e B / (me ν)
where
e is the electron charge (1.6 × 10−19 coulomb)
B is the magnetic field (in teslas)
me is the electron mass (0.9 × 10−30 kg)
The Hall parameter value increases with the magnetic field strength.
Physically, when the Hall parameter is low, the trajectories of electrons between two encounters with heavy particles (neutral or ion) are almost linear. But if the Hall parameter is high, the electron movements are highly curved. The current density vector J is no longer collinear with the electric field vector E. The two vectors J and E make the Hall angle θ, which also gives the Hall parameter:
β = tan θ
Plasma conductivity and magnetic fields
In a non-equilibrium ionized gas with high Hall parameter, Ohm's law,
J = σ E,
where σ is the electrical conductivity (in siemens per metre), is a matrix relation, because the electrical conductivity σ is a matrix:
σ = σS / (1 + β²) × [[1, −β], [β, 1]]
σS is the scalar electrical conductivity:
σS = ne e² / (me ν)
where ne is the electron density (number of electrons per cubic meter).
The current density J has two components:
Jx = σS (Ex − β Ey) / (1 + β²)   and   Jy = σS (β Ex + Ey) / (1 + β²)
Therefore, the Hall effect makes electrons "crabwalk".
When the magnetic field B is high, the Hall parameter β is also high, and 1 + β² ≈ β².
Thus both conductivities σS / (1 + β²) ≈ σS / β² and σS β / (1 + β²) ≈ σS / β
become weak; therefore the electric current cannot flow in these areas. This explains why the electron current density is weak where the magnetic field is the strongest.
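A small numerical sketch makes the collapse of the effective conductivity at high Hall parameter concrete. The electron density, collision frequency, and field values below are arbitrary illustrative choices, not data from any particular MHD experiment; the expressions used are the scalar Drude conductivity and tensor components given above.

```python
E_CHARGE = 1.6e-19      # C
E_MASS   = 9.1e-31      # kg

def hall_parameter(b_field, collision_freq):
    """beta = electron gyrofrequency / electron-heavy-particle collision frequency."""
    return E_CHARGE * b_field / (E_MASS * collision_freq)

def scalar_conductivity(n_e, collision_freq):
    """Scalar conductivity sigma_S = n_e e^2 / (m_e nu), in S/m."""
    return n_e * E_CHARGE**2 / (E_MASS * collision_freq)

n_e = 1e20          # electrons per m^3 (illustrative)
nu  = 1e10          # collisions per second (illustrative)
sigma_s = scalar_conductivity(n_e, nu)

for b in (0.1, 0.5, 1.0, 2.0, 4.0):
    beta = hall_parameter(b, nu)
    sigma_direct = sigma_s / (1 + beta**2)        # component along E
    sigma_hall = sigma_s * beta / (1 + beta**2)   # transverse (Hall) component
    print(f"B={b:4.1f} T  beta={beta:6.1f}  "
          f"sigma_direct={sigma_direct:9.3e}  sigma_hall={sigma_hall:9.3e} S/m")
# As B (and therefore beta) grows, both effective conductivities fall, so current
# avoids the regions where the magnetic field is strongest.
```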
Critical Hall parameter
The electrothermal instability occurs in a plasma at a (Te > Tg) regime when the Hall parameter is higher than a critical value βcr.
We have
β = μ B,
where μ is the electron mobility (in m²/(V·s))
and
where Ei is the ionization energy (in electron volts) and k the Boltzmann constant.
The growth rate of the instability is
And the critical Hall parameter is
The critical Hall parameter βcr greatly varies according to the degree of ionization α :
where ni is the ion density and nn the neutral density (in particles per cubic metre).
The electron-ion collision frequency νei is much greater than the electron-neutral collision frequency νen.
Therefore, with a weak energy degree of ionization α, the electron-ion collision frequency νei can equal the electron-neutral collision frequency νen.
For a weakly ionized gas (non-Coulombian plasma, when νei < νen ):
For a fully ionized gas (Coulombian plasma, when νei > νen ):
NB: The term "fully ionized gas", introduced by Lyman Spitzer, does not mean the degree of ionization is unity, but only that the plasma is Coulomb-collision dominated, which can correspond to a degree of ionization as low as 0.01%.
Technical problems and solutions
A two-temperature gas, globally cool but with hot electrons (Te >> Tg) is a key feature for practical MHD converters, because it allows the gas to reach sufficient electrical conductivity while protecting materials from thermal ablation. This idea was first introduced for MHD generators in the early 1960s by Jack L. Kerrebrock and Alexander E. Sheindlin.
But the unexpectedly large and rapid drop of current density due to the electrothermal instability ruined many MHD projects worldwide, while previous calculations had envisaged energy conversion efficiencies over 60% with these devices. Although some studies were made of the instability by various researchers, no real solution was found at that time. This prevented further development of non-equilibrium MHD generators and caused most of the countries involved to cancel their MHD power plant programs and to retire completely from this field of research in the early 1970s, because this technical problem was then considered an impassable stumbling block.
Nevertheless, experimental studies of the growth rate of the electrothermal instability and of the critical conditions showed that a stability region still exists for high electron temperatures. The stability is gained by a quick transition to "fully ionized" conditions (fast enough to overtake the growth rate of the electrothermal instability), where the Hall parameter decreases, because of the rising collision frequency, below its critical value, which is then about 2. Stable operation with several megawatts of power output was experimentally achieved from 1967 onward with high electron temperature. But this electrothermal control cannot provide an adequate decrease of Tg over long durations (to avoid thermal ablation), so such a solution is not practical for industrial energy conversion.
Another idea to control the instability is to increase the non-thermal ionisation rate by using a laser which would act like a guidance system for streamers between electrodes, increasing the electron density and the conductivity, therefore lowering the Hall parameter to below its critical value along these paths. But this concept has never been tested experimentally.
In the 1970s and more recently, some researchers tried to master the instability with oscillating fields. Oscillations of the electric field or of an additional RF electromagnetic field locally modify the Hall parameter.
Finally, a solution has been found in the early 1980s to completely remove the electrothermal instability within MHD converters, by means of non-homogeneous magnetic fields. A strong magnetic field implies a high Hall parameter, and therefore a low electrical conductivity in the medium. So the idea is to create some "paths" linking one electrode to the other, where the magnetic field is locally attenuated. Then the electric current tends to flow in these low B-field paths as thin plasma cords or streamers, where the electron density and temperature increase. The plasma becomes locally Coulombian, and the local Hall parameter value falls, while its critical threshold rises. Experiments where streamers do not present any inhomogeneity have been obtained with this method. This effect, strongly nonlinear, was unexpected but led to a very effective system for streamer guidance.
But this last working solution was discovered too late, 10 years after all the international effort about MHD power generation had been abandoned in most nations. Vladimir S. Golubev, coworker of Evgeny Velikhov, who met Jean-Pierre Petit in 1983 at the 9th MHD International conference in Moscow, made the following comment to the inventor of the magnetic stabilization method:
However, this electrothermal stabilization by magnetic confinement, although found too late for the development of MHD power plants, might be of interest for future applications of MHD to aerodynamics (magnetoplasma-aerodynamics for hypersonic flight).
See also
Magnetohydrodynamics
MHD generator
Evgeny Velikhov
External links
M. Mitchner, C.H. Kruger Jr., Two-temperature ionization instability: Chapter 4 (MHD) – Section 10, pp. 230–241. From the plasma physics course book Partially Ionized Gases, John Wiley & Sons, 1973 (reprint 1992), Mechanical Engineering Department, Stanford University, CA, USA.
References
Plasma instabilities | Electrothermal instability | [
"Physics"
] | 1,838 | [
"Plasma phenomena",
"Physical phenomena",
"Plasma instabilities"
] |
25,590,312 | https://en.wikipedia.org/wiki/Gravitomagnetic%20time%20delay | According to general relativity, a massive spinning body endowed with angular momentum S will alter the space-time fabric around it in such a way that several effects on moving test particles and propagating electromagnetic waves occur.
In particular, the direction of motion with respect to the sense of rotation of the central body is relevant, because co- and counter-propagating waves carry a "gravitomagnetic" time delay ΔtGM which could, in principle, be measured if S is known.
Conversely, if the validity of general relativity is assumed, it is possible to use ΔtGM to measure S. Such an effect must not be confused with the much larger Shapiro time delay ΔtGE induced by the "gravitoelectric" Schwarzschild-like component of the gravitational field of a planet of mass M considered non-rotating. Unlike the small ΔtGM, the Shapiro time delay has been accurately measured in several radar-ranging experiments with Solar System interplanetary spacecraft.
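For scale, the gravitoelectric (Shapiro) delay that the text contrasts with the much smaller gravitomagnetic delay can be estimated from the standard weak-field logarithmic formula for a round-trip radar signal grazing the Sun. The orbital radii below are rounded illustrative values, and the sketch is not tied to any specific experiment.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
C = 2.998e8            # m/s
M_SUN = 1.989e30       # kg

def shapiro_round_trip(r1, r2, b):
    """Extra round-trip radar delay (s) for a ray passing the Sun at impact parameter b."""
    return (4 * G * M_SUN / C**3) * math.log(4 * r1 * r2 / b**2)

r_earth = 1.50e11      # m, Earth-Sun distance
r_mars  = 2.28e11      # m, Mars-Sun distance (near superior conjunction)
b_sun   = 6.96e8       # m, solar radius (grazing ray)

delay = shapiro_round_trip(r_earth, r_mars, b_sun)
print(f"Shapiro round-trip delay ~ {delay*1e6:.0f} microseconds")
# ~ 250 microseconds: easily measurable by interplanetary radar ranging, whereas
# the spin-dependent gravitomagnetic delay is orders of magnitude smaller.
```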
See also
Introduction to general relativity
Gravitomagnetic clock effect
References
General relativity
Spacetime | Gravitomagnetic time delay | [
"Physics",
"Mathematics"
] | 222 | [
"Vector spaces",
"Space (mathematics)",
"General relativity",
"Relativity stubs",
"Theory of relativity",
"Spacetime"
] |
25,590,565 | https://en.wikipedia.org/wiki/Sharp-SAT | In computer science, the Sharp Satisfiability Problem (sometimes called Sharp-SAT, #SAT or model counting) is the problem of counting the number of interpretations that satisfy a given Boolean formula, introduced by Valiant in 1979. In other words, it asks in how many ways the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. For example, the formula a ∨ ¬b is satisfiable by three distinct boolean value assignments of the variables, namely, for any of the assignments (a = TRUE, b = FALSE), (a = FALSE, b = FALSE), and (a = TRUE, b = TRUE), we have a ∨ ¬b = TRUE.
#SAT is different from the Boolean satisfiability problem (SAT), which asks whether there exists a solution of a Boolean formula. Instead, #SAT asks for the number of all solutions to a Boolean formula. #SAT is harder than SAT in the sense that, once the total number of solutions to a Boolean formula is known, SAT can be decided in constant time. However, the converse is not true, because knowing that a Boolean formula has a solution does not help us to count all the solutions, as there are an exponential number of possibilities.
#SAT is a well-known example of the class of counting problems known as #P-complete (read as "sharp P complete"). In other words, every instance of a problem in the complexity class #P can be reduced to an instance of the #SAT problem. This is an important result because many difficult counting problems arise in enumerative combinatorics, statistical physics, network reliability, and artificial intelligence without any known formula. If such a problem is shown to be hard, then it provides a complexity-theoretic explanation for the lack of nice-looking formulas.
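A brute-force model counter makes the definition concrete: enumerate all 2^n assignments and count those that satisfy the formula. This is only feasible for tiny instances, which is exactly the point of the hardness results below; the CNF encoding used here (a list of clauses, each a list of signed variable indices) is just one common convention.

```python
from itertools import product

def count_models(clauses, n_vars):
    """Count assignments over variables 1..n_vars satisfying a CNF formula.

    Each clause is a list of non-zero integers: v means variable v is true,
    -v means it is negated (the DIMACS-style convention).
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# The two-variable formula a OR NOT b, with a = variable 1 and b = variable 2:
print(count_models([[1, -2]], 2))            # -> 3
# (x1 OR x2) AND (NOT x1 OR x2):
print(count_models([[1, 2], [-1, 2]], 2))    # -> 2
```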
#P-Completeness
#SAT is #P-complete. To prove this, first note that #SAT is obviously in #P.
Next, we prove that #SAT is #P-hard. Take any problem #A in #P. We know that A can be solved using a nondeterministic Turing machine M. On the other hand, from the proof of the Cook–Levin theorem, we know that we can reduce M to a Boolean formula F. Now, each satisfying assignment of F corresponds to a unique accepting path of M, and vice versa. Moreover, each accepting path taken by M represents a solution to A. In other words, there is a bijection between the satisfying assignments of F and the solutions to A. So, the reduction used in the proof of the Cook–Levin theorem is parsimonious. This implies that #SAT is #P-hard.
Intractable special cases
Counting solutions is intractable (#P-complete) in many special cases for which satisfiability is tractable (in P), as well as when satisfiability is intractable (NP-complete). This includes the following.
#3SAT
This is the counting version of 3SAT. One can show that any formula in SAT can be rewritten as a formula in 3-CNF form preserving the number of satisfying assignments. Hence, #SAT and #3SAT are counting equivalent and #3SAT is #P-complete as well.
#2SAT
Even though 2SAT (deciding whether a 2CNF formula has a solution) is polynomial, counting the number of solutions is #P-complete.
The #P-completeness holds already in the monotone case, i.e., when there are no negations (#MONOTONE-2-CNF).
It is known that, assuming that NP is different from RP, #MONOTONE-2-CNF also cannot be approximated by a fully polynomial-time randomized approximation scheme (FPRAS), even assuming that each variable occurs in at most 6 clauses, but that a fully polynomial-time approximation scheme (FPTAS) exists when each variable occurs in at most 5 clauses: this follows from analogous results on the problem ♯IS of counting the number of independent sets in graphs.
#Horn-SAT
Similarly, even though Horn-satisfiability is polynomial, counting the number of solutions is #P-complete. This result follows from a general dichotomy characterizing which SAT-like problems are #P-complete.
Planar #3SAT
This is the counting version of Planar 3SAT. The hardness reduction from 3SAT to Planar 3SAT given by Lichtenstein is parsimonious. This implies that Planar #3SAT is #P-complete.
Planar Monotone Rectilinear #3SAT
This is the counting version of Planar Monotone Rectilinear 3SAT. The NP-hardness reduction given by de Berg & Khosravi is parsimonious. Therefore, this problem is #P-complete as well.
#DNF
For disjunctive normal form (DNF) formulas, counting the solutions is also #P-complete, even when all clauses have size 2 and there are no negations: this is because, by De Morgan's laws, counting the number of solutions of a DNF amounts to counting the number of solutions of the negation of a conjunctive normal form (CNF) formula. Intractability even holds in the case known as #PP2DNF, where the variables are partitioned into two sets, with each clause containing one variable from each set.
By contrast, it is possible to tractably approximate the number of solutions of a disjunctive normal form formula using the Karp-Luby algorithm, which is an FPRAS for this problem.
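The Karp–Luby estimator mentioned above can be sketched in a few lines. This is a simplified Monte Carlo version of the standard coverage argument: sample a clause with probability proportional to its number of satisfying assignments, then a uniform satisfying assignment of that clause, and count the sample only when that clause is the first one the assignment satisfies. The clause encoding and the number of samples are illustrative choices; a production FPRAS would pick the sample count from the desired error bounds.

```python
import random

def karp_luby_dnf_count(clauses, n_vars, samples=100_000):
    """Estimate the number of assignments satisfying a DNF formula.

    Each clause is a dict {var: bool} of required literal values; clauses
    are assumed internally consistent. Returns an unbiased estimate of
    the size of the union of the clause solution sets.
    """
    sizes = [2 ** (n_vars - len(c)) for c in clauses]   # |S_i| for each clause
    total = sum(sizes)
    hits = 0
    for _ in range(samples):
        # Pick clause i with probability |S_i| / total.
        i = random.choices(range(len(clauses)), weights=sizes, k=1)[0]
        # Sample a uniform satisfying assignment of clause i.
        assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        assignment.update(clauses[i])
        # Count the sample only if i is the first clause this assignment satisfies.
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == val for v, val in c.items()))
        hits += (first == i)
    return total * hits / samples

# (x1 AND x2) OR (NOT x1 AND x3) over 3 variables: the exact count is 4.
clauses = [{1: True, 2: True}, {1: False, 3: True}]
print(round(karp_luby_dnf_count(clauses, 3)))
```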
Tractable special cases
Affine constraint satisfaction problems
The variant of SAT corresponding to affine relations in the sense of Schaefer's dichotomy theorem, i.e., where clauses amount to equations modulo 2 with the XOR operator, is the only SAT variant for which the #SAT problem can be solved in polynomial time.
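For the affine (XOR) case, counting is easy because the clauses form a linear system over GF(2): after Gaussian elimination, a consistent system with rank r over n variables has exactly 2^(n − r) solutions. The sketch below illustrates this; the bit-row encoding is just one convenient representation.

```python
def count_xor_sat(equations, n_vars):
    """Count solutions of a system of XOR clauses (linear equations over GF(2)).

    Each equation is (variables, parity): the XOR of the listed variables
    must equal parity (0 or 1). Returns the number of satisfying assignments.
    """
    # Encode each equation as an integer bitmask plus its right-hand side.
    rows = [(sum(1 << (v - 1) for v in vars_), rhs) for vars_, rhs in equations]
    rank = 0
    for col in range(n_vars):
        # Find a row with a 1 in this column, at or below the current rank.
        pivot = next((i for i in range(rank, len(rows)) if rows[i][0] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][0] >> col & 1:
                rows[i] = (rows[i][0] ^ rows[rank][0], rows[i][1] ^ rows[rank][1])
        rank += 1
    if any(mask == 0 and rhs == 1 for mask, rhs in rows):
        return 0                      # inconsistent system, no solutions
    return 2 ** (n_vars - rank)

# x1 XOR x2 = 1 and x2 XOR x3 = 0 over three variables: 2 solutions.
print(count_xor_sat([([1, 2], 1), ([2, 3], 0)], 3))   # -> 2
```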
Bounded treewidth
If the instances to SAT are restricted using graph parameters, the #SAT problem can become tractable. For instance, #SAT on SAT instances whose treewidth is bounded by a constant can be performed in polynomial time. Here, the treewidth can be the primal treewidth, dual treewidth, or incidence treewidth of the hypergraph associated to the SAT formula, whose vertices are the variables and where each clause is represented as a hyperedge.
Restricted circuit and diagram classes
Model counting is tractable (solvable in polynomial time) for (ordered) BDDs and for some circuit formalisms studied in knowledge compilation, such as d-DNNFs.
Generalizations
Weighted model counting (WMC) generalizes #SAT by computing a linear combination of the models instead of just counting the models. In the literal-weighted variant of WMC, each literal l gets assigned a weight w(l), such that
WMC(F) = Σ_{m ⊨ F} Π_{l : m ⊨ l} w(l),
the sum over all models m of F of the product of the weights of the literals satisfied by m.
WMC is used for probabilistic inference, as probabilistic queries over discrete random variables, such as in Bayesian networks, can be reduced to WMC.
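A brute-force evaluation of literal-weighted WMC clarifies the definition: sum, over all satisfying assignments, the product of the weights of the literals made true. The formula and weights below are arbitrary illustrative values; the probabilistic-inference use case corresponds to choosing weights from the network's probabilities.

```python
from itertools import product

def weighted_model_count(clauses, weights, n_vars):
    """Literal-weighted WMC by enumeration.

    `clauses` is a CNF as lists of signed integers; `weights[(v, value)]`
    is the weight of the literal v (value=True) or NOT v (value=False).
    """
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            prod = 1.0
            for v, value in assignment.items():
                prod *= weights[(v, value)]
            total += prod
    return total

# Formula x1 OR x2, with weights acting like independent probabilities:
weights = {(1, True): 0.3, (1, False): 0.7, (2, True): 0.6, (2, False): 0.4}
print(weighted_model_count([[1, 2]], weights, 2))   # ~ 0.72 = P(x1 or x2)
```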
Algebraic model counting further generalizes #SAT and WMC over arbitrary commutative semirings.
References
Computational problems
Satisfiability problems
Combinatorics | Sharp-SAT | [
"Mathematics"
] | 1,503 | [
"Discrete mathematics",
"Automated theorem proving",
"Computational problems",
"Combinatorics",
"Mathematical problems",
"Satisfiability problems"
] |
25,591,082 | https://en.wikipedia.org/wiki/Nano%20%28journal%29 | Nano is an international peer-reviewed scientific journal published by World Scientific, covering recent developments and discussions in the field of nanoscience and technology. Topics covered include nanomaterials, characterization tools, fabrication methods, numerical simulation, and theory.
Established in 2006, the journal started as bimonthly, switched to 8 issues per year in 2014, and to monthly in 2016. The 2020 Impact Factor of the journal was 1.556.
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index Expanded, ISI Alerting Services, Materials Science Citation Index, Current Contents/Physical, Chemical & Earth Sciences, and Inspec.
See also
External links
Academic journals established in 2006
Materials science journals
World Scientific academic journals
English-language journals
Bimonthly journals
Nanotechnology journals | Nano (journal) | [
"Materials_science",
"Engineering"
] | 164 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Nanotechnology journals",
"Materials science",
"Nanotechnology stubs",
"Nanotechnology"
] |
25,591,159 | https://en.wikipedia.org/wiki/Service%20Interoperability%20in%20Ethernet%20Passive%20Optical%20Networks | The Service Interoperability in Ethernet Passive Optical Networks (SIEPON) working group proposed the IEEE 1904.1 standard for managing telecommunications networks.
Description
Ethernet passive optical network (EPON) is a technology for fiber-to-the-x (FTTx) access networks, with millions of subscriber lines.
In response to rapid growth, the SIEPON project was formed in 2009 to develop system-level specifications, targeting "plug-and-play" interoperability of the transport, service, and control planes in a multi-vendor environment.
The project was organized to build upon the IEEE 802.3ah (1G-EPON) and IEEE 802.3av (10G-EPON) physical layer and data link layer standards and create a system-level and network-level standard, allowing interoperability of the transport, service, and control planes in a multi-vendor environment.
The "P" prefix is used while the standard is being proposed, and then dropped when ratified.
A draft standard was announced in September 2011.
The Industry Standards and Technology Organization announced a conformity assessment program in February 2012.
The first official standard in the series, IEEE Standard 1904.1-2013, was published in September 2013.
References
Broadband
Network architecture
Fiber-optic communications | Service Interoperability in Ethernet Passive Optical Networks | [
"Engineering"
] | 254 | [
"Network architecture",
"Computer networks engineering"
] |
25,595,552 | https://en.wikipedia.org/wiki/Agricultural%20microbiology | Agricultural microbiology is a branch of microbiology dealing with plant-associated microbes and plant and animal diseases. It also deals with the microbiology of soil fertility, such as microbial degradation of organic matter and soil nutrient transformations. The primary goal of agricultural microbiology is to comprehensively explore the interactions between beneficial microorganisms like bacteria and fungi with crops.
Soil microorganisms
Importance of soil microorganisms
Involved in nutrient transformation process
Decomposition of resistant components of plant and animal tissue
Role in microbial antagonism
Microorganisms as biofertilizers
Biofertilizers are seen as promising, sustainable alternatives to harmful chemical fertilizers due to their ability to increase yield and soil fertility through enhancing crop immunity and development. When applied to the soil, plant, or seed these biofertilizers colonize the rhizosphere or interior of the plant root. Once the microbial community is established, these microorganisms can help to solubilize and break down essential nutrients in the environment which would otherwise be unavailable or difficult for the crop to incorporate into biomass.
Nitrogen
Nitrogen is an essential element needed for the creation of biomass and is usually seen as a limiting nutrient in agricultural systems. Though abundant in the atmosphere, the atmospheric form of nitrogen cannot be utilized by plants and must be transformed into a form that can be taken up directly by the plants; this problem is solved by biological nitrogen fixers. Nitrogen fixing bacteria, also known as diazotrophs, can be broken down into three groups: free-living (ex. Azotobacter, Anabaena, and Clostridium) , symbiotic (ex. Rhizobium and Trichodesmium) and associative symbiotic (ex. Azospirillum). These organisms have the ability to fix atmospheric nitrogen to bioavailable forms that can be taken up by plants and incorporated into biomass. An important nitrogen fixing symbiosis is that between Rhizobium and leguminous plants. Rhizobium have been shown to contribute upwards of 300 kg N/ha/year in different leguminous plants, and their application to agricultural crops has been shown to increase crop height, seed germination, and nitrogen content within the plant. The use of nitrogen fixing bacteria in agriculture could help reduce the reliance on man-made nitrogen fertilizers that are synthesized via the Haber-Bosch process.
Phosphorus
Phosphorus can be made available to plants via solubilization or mobilization by bacteria or fungi. Under most soil conditions, phosphorus is the least mobile nutrient in the environment and therefore must be converted to solubilized forms in order to be available for plant uptake. Phosphate solubilization is the process by which organic acids are secreted into the environment; this lowers the pH and dissolves phosphate bonds, leaving the phosphate solubilized. Phosphate-solubilizing bacteria (PSB) (ex. Bacillus subtilis and Bacillus circulans) are responsible for upwards of 50% of microbial phosphate solubilization. In addition to the solubilized phosphate, PSB can also provide trace elements such as iron and zinc which further enhance plant growth. Fungi (ex. Aspergillus awamori and Penicillium spp.) also perform this process; however, their contribution is less than 1% of all activity. A 2019 study showed that when crops were inoculated with Aspergillus niger, there was a significant increase in fruit size and yield compared with non-inoculated crops; when the crop was co-inoculated with A. niger and the nitrogen-fixing bacterium Azotobacter, the crop performance was better than with inoculation using only one of the biofertilizers or with no inoculation at all. Phosphorus mobilization is the process of transferring phosphorus to the root from the soil; this process is carried out via mycorrhiza (ex. Arbuscular mycorrhiza). Arbuscular mycorrhiza mobilize phosphate by penetrating and increasing the surface area of the roots, which helps to mobilize phosphorus into the plant. Phosphate solubilizing and mobilizing microorganisms can contribute upwards of 30–50 kg P2O5/ha which, in turn, has the potential to increase crop yield by 10–20%.
Example
DAP
UREA
SUPER PHOSPHATE
Microbiology in Sustainable Agriculture
Effective Microorganisms
Effective microorganisms (EM) are essential to the development of sustainable agriculture and consist of a diverse, mixed culture of microorganisms that occur naturally in the environment. Biopreparations containing effective microorganisms play a crucial role across various sectors, such as environmental protection, food production, and medicine. Furthermore, this application of effective microorganism biotechnology spans a range of agricultural areas, including soil rejuvenation, crop cultivation, livestock farming, and food preservation. These biopreparations prove particularly beneficial for land and field preparation. Effective microorganisms can be applied to crops during the growing season or directly to the soil during preparation, enhancing soil health and promoting plant growth. The broad utility of effective microorganisms stems from their high enzymatic specificity, allowing them to thrive in various conditions. Moreover, effective microorganism technology is now utilized in more than 140 countries worldwide, with Brazil being the leading adopter. The widespread usage of effective microorganisms demonstrates their potential to enhance the agricultural industry and environmentally sustainable farming.
Effective Microorganisms in Sustainable Agriculture
Conventional farming methods use chemical fertilizers, pesticides, and herbicides to safeguard crops from pests and diseases. However, these chemical agents have adverse environmental impacts, contributing to environmental pollution. The use of agricultural chemicals has been linked to the decline of plant and animal species, as well as harm to soil biodiversity, including bacterial and fungal communities. Chemical plant protection products can alter agricultural soils by affecting their physical properties such as texture, permeability, and porosity. Additionally, these products disrupt the nutrient cycles of phosphorus and nitrogen and reduce the diversity of the soil microbiome. Given the challenges posed by a growing global population and the need for more and higher-quality food, the future of agriculture lies in using effective microorganisms to boost yields. This approach offers a sustainable alternative to traditional chemical methods, fostering environmental health and agricultural resilience.
Successful crop production hinges on the health of the soil, which is influenced by a network of biological, chemical, and physical processes driven by microorganisms. Effective microorganisms enhance the soil's beneficial microbial community, paving the way for sustainable agriculture. These microorganisms consist of naturally occurring microbes, such as photosynthesizing bacteria, lactic acid bacteria, yeasts, and fermenting fungi, which can be applied to increase soil microbial diversity. The application of effective microorganisms improves soil structure and fertility while significantly boosting biological diversity. They can inhibit the proliferation of soil-borne pathogens, assist in nitrogen fixation, and enhance plant nutrient uptake. Effective microorganisms also accelerate the decomposition of organic waste, which promotes composting and, therefore, increases the availability of valuable minerals and bolsters the activities of indigenous microbes. By dominating the soil's microbial environment, effective microorganisms encourage other beneficial microbes to thrive and outcompete smaller groups of pathogenic or opportunistic microbes. This natural balancing act leads to stronger, more resilient plants and higher crop yields, positioning effective microorganisms as a key player in the future of sustainable agriculture.
Factors Affecting Microorganisms in Agricultural Systems
Human Impacts
Organic farming methods, which aim to sustain ecosystem health by limiting external inputs such as synthetic fertilizers and focusing on natural inputs, can increase the number of microbes in a system and their ability to utilize carbon- and nitrogen-based molecules. Crop rotation is another way of maintaining ecosystem function in agricultural soils: increasing the number of crops used in a rotation has been shown to increase microbial diversity and the number of microbial species present. Relatedly, increases in microbial diversity have been shown to benefit the health of plants and soils.
Other agricultural practices have negative impacts on microorganisms in agricultural systems. Tillage, a common farming practice, can immediately decrease the carbon and nitrogen held in microbial biomass. Conversely, no-till practices have been shown to increase soil health, microbial growth, and microbial community functionality. The negative impacts of tillage depend on its intensity, however, and the microbial community has been shown to recover over time. Pesticides are commonly used to limit the effects of insects on crop growth and health, but they in turn affect soil microbes, for example by altering the composition of the soil microbial community for months after exposure. Fungicides, one type of pesticide, have also been shown to harm microbes not targeted by the chemical and to change the community of microbes living in association with host plant roots.
Environmental Impacts
Climatic changes take many forms, and they can also impact the microbial members of agricultural soils. Increasing temperatures have been shown to limit plant root growth and thereby reduce the ability of arbuscular mycorrhizal fungi (AMF) to grow in association with these roots. Changing temperatures can also increase the abundance of plant pathogens and their capacity to harm agricultural ecosystems. Increased levels of carbon dioxide (CO2) modify the interactions between plants and pathogens and can change which plant pathogens are present and how they negatively impact plants. However, there is currently little information for predicting how elevated CO2 levels will change plant-pathogen interactions across the many different plant-pathogen relationships.
There is currently a push to understand the role microbes in soils, including agricultural soils, play in limiting the negative impacts of climate change. For example, soil microbes are able to convert methane into carbon dioxide, thereby modulating greenhouse gas emissions. They also have a wide range of effects on agriculture including conversion of carbon dioxide into usable forms of carbon for plants, releasing chemicals to increase the ability of plants to uptake and store water, and protecting plants from drought.
See also
Veterinary medicine
References
Further reading
Agriculture
Microbiology | Agricultural microbiology | [
"Chemistry",
"Biology"
] | 2,229 | [
"Microbiology",
"Microscopy"
] |
25,597,490 | https://en.wikipedia.org/wiki/Solar%20neutrino%20problem | The solar neutrino problem concerned a large discrepancy between the flux of solar neutrinos as predicted from the Sun's luminosity and as measured directly. The discrepancy was first observed in the mid-1960s and was resolved around 2002.
The flux of neutrinos at Earth is several tens of billions per square centimetre per second, mostly from the Sun's core. They are nevertheless difficult to detect because they interact very weakly with matter, typically traversing the whole Earth without interacting. Of the three types (flavors) of neutrinos known in the Standard Model of particle physics, the Sun produces only electron neutrinos. When neutrino detectors became sensitive enough to measure the flow of electron neutrinos from the Sun, the number detected was much lower than predicted. In various experiments, the number deficit was between one half and two thirds.
Particle physicists knew that a mechanism, discussed in 1957 by Bruno Pontecorvo, could explain the deficit in electron neutrinos. However, they hesitated to accept it for various reasons, including the fact that it required a modification of the accepted Standard Model. They first looked to the solar model for adjustment, but this was ruled out. Today it is accepted that the neutrinos produced in the Sun are not massless particles as predicted by the Standard Model but rather mixed quantum states made up of defined-mass eigenstates in different (complex) proportions. That allows a neutrino produced as a pure electron neutrino to change during propagation into a mixture of electron, muon and tau neutrinos, with a reduced probability of being detected by a detector sensitive to only electron neutrinos.
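This flavor-change mechanism can be illustrated with the standard two-flavor vacuum oscillation formula, P(νe→νe) = 1 − sin²(2θ)·sin²(1.27·Δm²[eV²]·L[km]/E[GeV]). The short script below is only an illustrative sketch: the function name and mixing parameters are placeholder values rather than fitted results, and, as noted later in the article, matter effects (the Mikheyev–Smirnov–Wolfenstein effect) rather than vacuum oscillation dominate for the high-energy solar neutrinos.

    import math

    def survival_probability(L_km, E_GeV, sin2_2theta, delta_m2_eV2):
        """Two-flavor vacuum survival probability P(nu_e -> nu_e).

        Uses P = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
        with L in km, E in GeV and dm^2 in eV^2.
        """
        phase = 1.27 * delta_m2_eV2 * L_km / E_GeV
        return 1.0 - sin2_2theta * math.sin(phase) ** 2

    # Illustrative placeholder parameters (not values from the experiments discussed here).
    sin2_2theta = 0.85     # assumed mixing amplitude
    delta_m2 = 7.5e-5      # assumed mass-squared splitting in eV^2

    # Averaged over the spread of Sun-Earth distances and energies, the oscillatory term
    # tends to 1/2, so the mean survival probability approaches 1 - sin2_2theta / 2.
    print(survival_probability(1.5e8, 0.001, sin2_2theta, delta_m2))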
Several neutrino detectors aiming at different flavors, energies, and traveled distances contributed to our present knowledge of neutrinos. In 2002 and 2015, a total of four researchers related to some of these detectors were awarded the Nobel Prize in Physics.
Background
The Sun performs nuclear fusion via the proton–proton chain reaction, which converts four protons into alpha particles, neutrinos, positrons, and energy. This energy is released in the form of electromagnetic radiation, as gamma rays, as well as in the form of the kinetic energy of both the charged particles and the neutrinos. The neutrinos travel from the Sun's core to Earth without any appreciable absorption by the Sun's outer layers.
In the late 1960s, Ray Davis and John N. Bahcall's Homestake Experiment was the first to measure the flux of neutrinos from the Sun and detect a deficit. The experiment used a chlorine-based detector. Many subsequent radiochemical and water Cherenkov detectors confirmed the deficit, including the Kamioka Observatory and Sudbury Neutrino Observatory.
The expected number of solar neutrinos was computed using the standard solar model, which Bahcall had helped establish. The model gives a detailed account of the Sun's internal operation.
In 2002, Ray Davis and Masatoshi Koshiba won part of the Nobel Prize in Physics for experimental work which found the number of solar neutrinos to be around a third of the number predicted by the standard solar model.
In recognition of the firm evidence provided by the 1998 and 2001 experiments "for neutrino oscillation", Takaaki Kajita from the Super-Kamiokande Observatory and Arthur McDonald from the Sudbury Neutrino Observatory (SNO) were awarded the 2015 Nobel Prize for Physics. The Nobel Committee for Physics, however, erred in mentioning neutrino oscillations in regard to the SNO experiment: for the high-energy solar neutrinos observed in that experiment, it is not neutrino oscillation but the Mikheyev–Smirnov–Wolfenstein effect that is responsible for the flavor conversion. Bruno Pontecorvo was not included in these Nobel prizes since he died in 1993.
Proposed solutions
Early attempts to explain the discrepancy proposed that the models of the Sun were wrong, i.e. the temperature and pressure in the interior of the Sun were substantially different from what was believed. For example, since neutrinos measure the amount of current nuclear fusion, it was suggested that the nuclear processes in the core of the Sun might have temporarily shut down. Since it takes thousands of years for heat energy to move from the core to the surface of the Sun, this would not immediately be apparent.
Advances in helioseismology observations made it possible to infer the interior temperatures of the Sun; these results agreed with the well established standard solar model. Detailed observations of the neutrino spectrum from more advanced neutrino observatories produced results which no adjustment of the solar model could accommodate: while the overall lower neutrino flux (which the Homestake experiment results found) required a reduction in the solar core temperature, details in the energy spectrum of the neutrinos required a higher core temperature. This happens because different nuclear reactions, whose rates have different dependence upon the temperature, produce neutrinos with different energy. Any adjustment to the solar model worsened at least one aspect of the discrepancies.
Resolution
The solar neutrino problem was resolved with an improved understanding of the properties of neutrinos. According to the Standard Model of particle physics, there are three flavors of neutrinos: electron neutrinos, muon neutrinos, and tau neutrinos. Electron neutrinos are the ones produced in the Sun and the ones detected by the above-mentioned experiments, in particular the chlorine-detector Homestake Mine experiment.
Through the 1970s, it was widely believed that neutrinos were massless and their flavors were invariant. However, in 1968 Pontecorvo proposed that if neutrinos had mass, then they could change from one flavor to another. Thus, the "missing" solar neutrinos could be electron neutrinos which changed into other flavors along the way to Earth, rendering them invisible to the detectors in the Homestake Mine and contemporary neutrino observatories.
The supernova 1987A indicated that neutrinos might have mass because of the difference in time of arrival of the neutrinos detected at Kamiokande and IMB. However, because very few neutrino events were detected, it was difficult to draw any conclusions with certainty. If Kamiokande and IMB had high-precision timers to measure the travel time of the neutrino burst through the Earth, they could have more definitively established whether or not neutrinos had mass. If neutrinos were massless, they would travel at the speed of light; if they had mass, they would travel at velocities slightly less than that of light. Since the detectors were not intended for supernova neutrino detection, this could not be done.
Strong evidence for neutrino oscillation came in 1998 from the Super-Kamiokande collaboration in Japan. It produced observations consistent with muon neutrinos (produced in the upper atmosphere by cosmic rays) changing into tau neutrinos within the Earth: Fewer atmospheric neutrinos were detected coming through the Earth than coming directly from above the detector. These observations only concerned muon neutrinos. No tau neutrinos were observed at Super-Kamiokande. However, the result made it more plausible that the deficit of electron-flavor neutrinos observed in the (relatively low-energy) Homestake experiment also has to do with neutrino mass.
One year later, the Sudbury Neutrino Observatory (SNO) started collecting data. That experiment aimed at the boron-8 (⁸B) solar neutrinos, which at around 10 MeV are not much affected by oscillation in either the Sun or the Earth. A large deficit is nevertheless expected due to the Mikheyev–Smirnov–Wolfenstein effect, as had been calculated by Alexei Smirnov in 1985. SNO's unique design employing a large quantity of heavy water as the detection medium was proposed by Herb Chen, also in 1985. SNO measured both the flux of electron neutrinos specifically and the flux of all neutrino flavors collectively, and hence the fraction of electron neutrinos. After extensive statistical analysis, the SNO collaboration determined that fraction to be about 34%, in perfect agreement with prediction. The total number of detected ⁸B neutrinos also agrees with the then-rough predictions from the solar model.
References
External links
Solar neutrino data
Solving the Mystery of the Missing Neutrinos
Raymond Davis Jr.'s logbook
Nova – The Ghost Particle
The Solar Neutrino Problem by John N. Bahcall
The Solar Neutrino Problem, by L. Stockman
A set of photos of different Neutrino detectors
John Bahcall's web site
Neutrino problem
Particle physics
Neutrinos | Solar neutrino problem | [
"Physics"
] | 1,854 | [
"Particle physics"
] |
25,598,253 | https://en.wikipedia.org/wiki/Grey%20atmosphere | The grey atmosphere (or gray) is a useful set of approximations made for radiative transfer applications in studies of stellar atmospheres (atmospheres of stars) based on the simplified notion that the absorption coefficient of matter within a star's atmosphere is constant—that is, unchanging—for all frequencies of the star's incident radiation.
Application
The grey atmosphere approximation is the primary method astronomers use to determine the temperature and basic radiative properties of astronomical objects, including planets with atmospheres, the Sun, other stars, and interstellar clouds of gas and dust. Although the simplified model of grey atmosphere approximation demonstrates good correlation to observations, it deviates from observational results because real atmospheres are not grey, e.g. radiation absorption is frequency-dependent.
Approximations
The primary approximation is based on the assumption that the absorption coefficient, typically represented by α, has no dependence on frequency in the frequency range of interest, i.e. αν = α for all ν.
Typically a number of other assumptions are made simultaneously:
The atmosphere has a plane-parallel atmosphere geometry.
The atmosphere is in a thermal radiative equilibrium.
This set of assumptions leads directly to the mean intensity and source function being directly equivalent to a blackbody Planck function of the temperature at that optical depth.
The Eddington approximation (see next section) may also be used optionally, to solve for the source function. This greatly simplifies the model without greatly distorting results.
Derivation of source function using the Eddington Approximation
Deriving various quantities from the grey atmosphere model involves solving an integro-differential equation, an exact solution of which is complex. Therefore, this derivation takes advantage of a simplification known as the Eddington Approximation. Starting with an application of a plane-parallel model, we can imagine an atmospheric model built up of plane-parallel layers stacked on top of each other, where properties such as temperature are constant within a plane. This means that such parameters are functions of the physical depth z, where the direction of positive z points towards the upper layers of the atmosphere. From this it is easy to see that the path element along a ray at angle θ to the vertical is given by ds = dz/cos θ = dz/μ.
We now define the optical depth τ by dτ = −α dz,
where α is the absorption coefficient associated with the various constituents of the atmosphere. We now turn to the radiation transfer equation, dI/ds = j − αI,
where I is the total specific intensity and j is the emission coefficient. After substituting for ds and dividing by −α we have μ dI/dτ = I − S,
where S = j/α is the so-called total source function, defined as the ratio between the emission and absorption coefficients. This differential equation can be solved by multiplying both sides by the integrating factor e^(−τ/μ), re-writing the left-hand side as a total derivative of I e^(−τ/μ), and then integrating the whole equation with respect to τ. For an outward ray (μ > 0) this gives the solution I(τ, μ) = (1/μ) ∫τ→∞ S(τ′) e^(−(τ′−τ)/μ) dτ′,
where we have used the limits τ to ∞ as we are integrating outward from some depth within the atmosphere; therefore I e^(−τ′/μ) → 0 as τ′ → ∞. Even though we have neglected the frequency-dependence of parameters such as S, we know that it is a function of optical depth; therefore, in order to integrate this we need a method for deriving the source function. We now define some important parameters, the energy density, total flux and radiation pressure, as follows
We also define the average specific intensity (averaged over all angles) as
We see immediately that by dividing the radiative transfer equation by 2 and integrating over , we have
Furthermore, by multiplying the same equation by and integrating w.r.t. , we have
By substituting the average specific intensity J into the definition of energy density, we also have the following relationship
Now, it is important to note that total flux must remain constant through the atmosphere therefore
This condition is known as radiative equilibrium. Taking advantage of the constancy of total flux, we now integrate to obtain
where is a constant of integration. We know from thermodynamics that for an isotropic gas the following relationship holds
where we have substituted the relationship between energy density and average specific intensity derived earlier. Although this may be true for lower depths within the stellar atmosphere, near the surface it almost certainly isn't. However, the Eddington Approximation assumes this to hold at all levels within the atmosphere. Substituting this in the previous equation for pressure gives
and under the condition of radiative equilibrium
This means we have solved the source function except for a constant of integration. Substituting this result into the solution to the radiation transfer equation and integrating gives
Here we have set the lower limit of to zero, which is the value of optical depth at the surface of the atmosphere. This would represent radiation coming out of, say, the surface of the Sun. Finally, substituting this into the definition of total flux and integrating gives
Therefore, the constant of integration is determined, and the source function is found to be a linear function of the optical depth, as summarized below.
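A compact summary of the relations used in this derivation is given here in a common textbook notation (τ the optical depth, μ = cos θ, J, H, K the angular moments of the intensity, F the constant total flux); the symbols and normalizations are conventional and may differ from the article's own notation:

    % Conventional grey-atmosphere relations under the Eddington approximation (sketch).
    \begin{align}
      d\tau &= -\alpha\, dz, \qquad \mu\,\frac{dI}{d\tau} = I - S, \\
      J &= \tfrac{1}{2}\int_{-1}^{1} I\, d\mu, \qquad
      H = \tfrac{1}{2}\int_{-1}^{1} I \mu\, d\mu, \qquad
      K = \tfrac{1}{2}\int_{-1}^{1} I \mu^{2}\, d\mu, \\
      K &= \tfrac{1}{3} J \quad\text{(Eddington approximation)}, \qquad
      S = J \quad\text{(radiative equilibrium)}, \\
      S(\tau) &= \frac{3F}{4\pi}\left(\tau + \tfrac{2}{3}\right), \qquad
      F = 4\pi H = \sigma T_{\mathrm{eff}}^{4}.
    \end{align}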
Temperature solution
Integrating the first and second moments of the radiative transfer equation, applying the above relation and the Two-Stream Limit approximation leads to information about each of the higher moments in . The first moment of the mean intensity, is constant regardless of optical depth:
The second moment of the mean intensity, is then given by:
Note that the Eddington approximation is a direct consequence of these assumptions.
Defining an effective temperature for the Eddington flux and applying the Stefan–Boltzmann law yields the relation between the externally observed effective temperature and the internal blackbody temperature of the medium.
The results of the grey atmosphere solution: the observed effective temperature is a good measure of the true temperature at an optical depth of about 2/3, and the temperature at the top of the atmosphere is Teff/2^(1/4) ≈ 0.84 Teff.
This approximation makes the source function linear in optical depth.
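The temperature structure implied by this solution follows by equating the source function to the Planck function B = σT⁴/π and defining the effective temperature through F = σTeff⁴; the relation below is the conventional textbook form of the result stated above:

    % Grey-atmosphere temperature profile under the Eddington approximation (conventional form).
    \begin{equation}
      T^{4}(\tau) = \tfrac{3}{4}\, T_{\mathrm{eff}}^{4}\left(\tau + \tfrac{2}{3}\right),
      \qquad T\!\left(\tau = \tfrac{2}{3}\right) = T_{\mathrm{eff}},
      \qquad T(0) = \frac{T_{\mathrm{eff}}}{2^{1/4}}.
    \end{equation}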
References
Observational astronomy
Astrophysics | Grey atmosphere | [
"Physics",
"Astronomy"
] | 1,100 | [
"Astronomical sub-disciplines",
"Observational astronomy",
"Astrophysics"
] |
5,039,189 | https://en.wikipedia.org/wiki/First%20moment%20of%20area | The first moment of area is based on the mathematical construct moments in metric spaces. It is a measure of the spatial distribution of a shape in relation to an axis.
The first moment of area of a shape, about a certain axis, equals the sum over all the infinitesimal parts of the shape of the area of that part times its distance from the axis [Σad].
First moment of area is commonly used to determine the centroid of an area.
Definition
Consider an area A of any shape, divided into n very small elemental areas dAi. Let xi and yi be the distances (coordinates) of each elemental area measured from a given x-y axis. The first moments of area about the x and y axes are then respectively given by:
Sx = Σ yi dAi = ∫ y dA
and
Sy = Σ xi dAi = ∫ x dA
The SI unit for first moment of area is a cubic metre (m³). In the American Engineering and Gravitational systems the unit is a cubic foot (ft³) or more commonly in³.
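As noted above, the first moments are commonly used to locate the centroid: dividing each first moment by the total area gives the corresponding centroid coordinate. The sketch below illustrates this for a shape approximated as a set of small elements; the function name and the element data are made-up illustrative values.

    def first_moments_and_centroid(elements):
        """Compute first moments of area and the centroid of a composite shape.

        `elements` is a list of (x_i, y_i, dA_i) tuples, where (x_i, y_i) is the
        centroid of each small elemental area dA_i measured from the chosen x-y axes.
        Returns (S_x, S_y, x_bar, y_bar), with S_x = sum(y_i * dA_i) the moment about
        the x axis and S_y = sum(x_i * dA_i) the moment about the y axis.
        """
        area = sum(dA for _, _, dA in elements)
        S_x = sum(y * dA for _, y, dA in elements)   # first moment about the x axis
        S_y = sum(x * dA for x, _, dA in elements)   # first moment about the y axis
        return S_x, S_y, S_y / area, S_x / area      # centroid: x_bar = S_y/A, y_bar = S_x/A

    # Illustrative example: an L-shaped section built from two rectangles (units: mm, mm^2).
    elements = [
        (5.0, 25.0, 10.0 * 50.0),   # vertical leg: 10 mm x 50 mm, centroid at (5, 25)
        (20.0, 5.0, 20.0 * 10.0),   # horizontal leg: 20 mm x 10 mm, centroid at (20, 5)
    ]
    S_x, S_y, x_bar, y_bar = first_moments_and_centroid(elements)
    print(f"S_x = {S_x:.0f} mm^3, S_y = {S_y:.0f} mm^3, centroid = ({x_bar:.2f}, {y_bar:.2f}) mm")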
The static or statical moment of area, usually denoted by the symbol Q, is a property of a shape that is used to predict its resistance to shear stress. By definition:
Qj,x = ∫ y dA, integrated over the area "j",
where
Qj,x – the first moment of area "j" about the neutral x axis of the entire body (not the neutral axis of the area "j");
dA – an elemental area of area "j";
y – the perpendicular distance to the centroid of element dA from the neutral axis x.
Shear stress in a semi-monocoque structure
The equation for shear flow in a particular web section of the cross-section of a semi-monocoque structure is q = Vy Sx / Ix, where
q – the shear flow through a particular web section of the cross-section
Vy – the shear force perpendicular to the neutral axis x through the entire cross-section
Sx – the first moment of area about the neutral axis x for a particular web section of the cross-section
Ix – the second moment of area about the neutral axis x for the entire cross-section
Shear stress may now be calculated using the following equation: τ = q / t, where
τ – the shear stress through a particular web section of the cross-section
q – the shear flow through a particular web section of the cross-section
t – the thickness of a particular web section of the cross-section at the point being measured
See also
Second moment of area
Polar moment of inertia
Section modulus
References
Solid mechanics
Moment (physics) | First moment of area | [
"Physics",
"Mathematics"
] | 496 | [
"Solid mechanics",
"Physical quantities",
"Quantity",
"Mechanics",
"Moment (physics)"
] |
5,039,858 | https://en.wikipedia.org/wiki/Shear%20flow | In fluid dynamics, shear flow is the flow induced by a force in a fluid. In solid mechanics, shear flow is the shear stress over a distance in a thin-walled structure.
In solid mechanics
For thin-walled profiles, such as that through a beam or semi-monocoque structure, the shear stress distribution through the thickness can be neglected. Furthermore, there is no shear stress in the direction normal to the wall, only parallel. In these instances, it can be useful to express internal shear stress as shear flow, which is found as the shear stress multiplied by the thickness of the section. An equivalent definition for shear flow is the shear force V per unit length of the perimeter around a thin-walled section. Shear flow has the dimensions of force per unit of length. This corresponds to units of newtons per meter in the SI system and pound-force per foot in the US.
Origin
When a transverse force is applied to a beam, the result is variation in bending normal stresses along the length of the beam. This variation causes a horizontal shear stress within the beam that varies with distance from the neutral axis in the beam. The concept of complementary shear then dictates that a shear stress also exists across the cross section of the beam, in the direction of the original transverse force. As described above, in thin-walled structures, the variation along the thickness of the member can be neglected, so the shear stress across the cross section of a beam that is composed of thin-walled elements can be examined as shear flow, or the shear stress multiplied by the thickness of the element.
Applications
The concept of shear flow is particularly useful when analyzing semi-monocoque structures, which can be idealized using the skin-stringer model. In this model, the longitudinal members, or stringers, carry only axial stress, while the skin or web resists the externally applied torsion and shear force. In this case, since the skin is a thin-walled structure, the internal shear stresses in the skin can be represented as shear flow. In design, the shear flow is sometimes known before the skin thickness is determined, in which case the skin thickness can simply be sized according to allowable shear stress.
Shear center
For a given structure, the shear center is the point in space at which shear force could be applied without causing torsional deformation (e.g. twisting) of the cross-section of the structure. The shear center is an imaginary point that does not vary with the magnitude of the shear force, only with the cross-section of the structure. The shear center lies along an axis of symmetry, if one exists, and can be found using the following method:
Apply an arbitrary resultant shear force
Calculate the shear flows from this shear force
Choose a reference point o an arbitrary distance e from the point of application of the load
Calculate the moment about o using both shear flows and the resultant shear force, and equate the two expressions. Solve for e
The distance e and the axis of symmetry give the coordinate for the shear center, independent of the shear force magnitude.
Calculating shear flow
By definition, shear flow through a cross section of thickness t is calculated using q = τ t, where τ is the shear stress. Thus the equation for shear flow at a particular depth in a particular cross-section of a thin-walled structure that is symmetric across its width is q = Vy Qx / Ix (a worked numerical example follows the list below),
where
q, the shear flow
Vy, the shear force perpendicular to the neutral axis x at the cross-section of interest
Qx, the first moment of area (aka statical moment) about the neutral axis x for the cross section of the structure above the depth in question
Ix, the second moment of area (aka moment of inertia) about the neutral axis x for the structure (a function only of the shape of the structure)
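The relation above can be illustrated for the classic case of a solid rectangular cross-section, for which Q and I have simple closed forms. This is a minimal sketch only: the function name, section dimensions and load are made-up illustrative values, not results for any particular structure.

    def shear_flow_rectangular(V, b, h, y):
        """Shear flow q and shear stress tau at height y (from the neutral axis)
        in a solid rectangular section of width b and depth h under shear force V.

        Uses q = V * Q / I and tau = q / b, with
        Q(y) = b/2 * (h^2/4 - y^2)  (first moment of the area above y),
        I    = b * h^3 / 12         (second moment of area of the full section).
        """
        Q = (b / 2.0) * (h ** 2 / 4.0 - y ** 2)
        I = b * h ** 3 / 12.0
        q = V * Q / I
        return q, q / b

    # Illustrative numbers: V = 10 kN, 50 mm x 200 mm section, evaluated at the neutral axis.
    q, tau = shear_flow_rectangular(V=10e3, b=0.05, h=0.2, y=0.0)
    print(f"q = {q:.1f} N/m, tau = {tau/1e6:.3f} MPa")  # tau at the neutral axis = 1.5*V/(b*h)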
In fluid mechanics
Unlike in solid mechanics, where shear flow is the shear force per unit length, in fluid mechanics shear flow (or shearing flow) refers to adjacent layers of fluid moving parallel to each other with different speeds. Viscous fluids resist this shearing motion. For a Newtonian fluid, the stress exerted by the fluid in resistance to the shear is proportional to the strain rate or shear rate.
A simple example of a shear flow is Couette flow, in which a fluid is trapped between two large parallel plates, and one plate is moved with some relative velocity to the other. Here, the strain rate is simply the relative velocity divided by the distance between the plates.
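For the Couette flow just described, the shear rate and the resulting Newtonian shear stress follow directly from the plate speed and gap. The snippet below is a minimal illustration; the function name and numerical values are assumed for the example.

    def couette_shear(mu, U, h):
        """Return (shear_rate, shear_stress) for plane Couette flow of a Newtonian fluid.

        mu : dynamic viscosity in Pa*s
        U  : relative speed of the moving plate in m/s
        h  : gap between the plates in m
        The shear rate is U/h and the stress is tau = mu * U/h.
        """
        shear_rate = U / h
        return shear_rate, mu * shear_rate

    # Water-like viscosity (~1 mPa*s), plate moving at 0.5 m/s over a 1 mm gap.
    rate, tau = couette_shear(mu=1.0e-3, U=0.5, h=1.0e-3)
    print(f"shear rate = {rate:.0f} 1/s, shear stress = {tau:.3f} Pa")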
Shear flows in fluids tend to be unstable at high Reynolds numbers, when fluid viscosity is not strong enough to dampen out perturbations to the flow. For example, when two layers of fluid shear against each other with relative velocity, the Kelvin–Helmholtz instability may occur.
Notes
References
Riley, W. F. F., Sturges, L. D. and Morris, D. H. Mechanics of Materials. J. Wiley & Sons, New York, 1998 (5th Ed.), 720 pp.
Weisshaar, T. A. Aerospace Structures: An Introduction to Fundamental Problems. T.A. Weisshaar, West Lafayette, 2009, 140pp.
Aerospace Mechanics and Materials. TU Delft OpenCourseWare. 11/22/16. <https://ocw.tudelft.nl/courses/aerospace-mechanics-of-materials/>
External links
Horizontal shearing stress
Shear flow
Solid mechanics
Fluid dynamics | Shear flow | [
"Physics",
"Chemistry",
"Engineering"
] | 1,098 | [
"Solid mechanics",
"Chemical engineering",
"Mechanics",
"Piping",
"Fluid dynamics"
] |
5,041,589 | https://en.wikipedia.org/wiki/Autocollimator | An autocollimator is an optical instrument for non-contact measurement of angles. They are typically used to align components and measure deflections in optical or mechanical systems. An autocollimator works by projecting an image onto a target mirror and measuring the deflection of the returned image against a scale, either visually or by means of an electronic detector. A visual autocollimator can measure angles as small as 1 arcsecond (4.85 microradians), while an electronic autocollimator can have up to 100 times more resolution.
Visual autocollimators are often used for aligning laser rod ends and checking the face parallelism of optical windows and wedges. Electronic and digital autocollimators are used as angle measurement standards, for monitoring angular movement over long periods of time and for checking angular position repeatability in mechanical systems. Servo autocollimators are specialized compact forms of electronic autocollimators that are used in high-speed servo-feedback loops for stable-platform applications. An electronic autocollimator is typically calibrated to read the actual mirror angle.
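The basic autocollimation relation behind these instruments is that a mirror tilt θ deviates the returned beam by 2θ, so for small angles the image shifts in the focal plane by approximately d ≈ 2fθ; measuring d and knowing the focal length f therefore gives the tilt. The helper below is an illustrative sketch of that conversion, not the calibration of any particular instrument, and its name and example values are assumptions.

    import math

    def mirror_tilt_arcsec(displacement_um, focal_length_mm):
        """Convert a measured image displacement to the mirror tilt angle.

        For a small tilt theta, the reflected beam deviates by 2*theta, so the image
        in the focal plane moves by d ~= 2 * f * theta.  Hence theta = d / (2 * f).
        Returns the tilt in arcseconds.
        """
        theta_rad = (displacement_um * 1e-6) / (2.0 * focal_length_mm * 1e-3)
        return math.degrees(theta_rad) * 3600.0

    # Illustrative numbers: a 1 micrometre image shift with a 300 mm focal length
    # corresponds to a tilt of roughly 0.34 arcseconds.
    print(f"{mirror_tilt_arcsec(1.0, 300.0):.2f} arcsec")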
Electronic autocollimator
The electronic autocollimator is a high precision angle measurement instrument capable of measuring angular deviations with accuracy down to fractions of an arcsecond, by electronic means only, with no optical eye-piece.
Measuring with an electronic autocollimator is fast, easy, accurate, and will frequently be the most cost effective procedure. Used extensively in workshops, tool rooms, inspection departments and quality control laboratories worldwide, these highly sensitive instruments will measure extremely small angular displacements, squareness, twist and parallelism.
Laser analyzing autocollimator
A newer technology improves the autocollimation instrument so that it can directly measure incoming laser beams. This capability opens the way to inter-alignment between optics, mirrors and lasers.
This fusion of the century-old autocollimation technique with recent laser technology offers a very versatile instrument capable of measuring the inter-alignment of multiple lines of sight, the alignment of a laser with respect to a mechanical datum, the alignment of a laser cavity, the parallelism of multiple rollers in roll-to-roll machinery, and the laser divergence angle and its spatial stability, among many other inter-alignment applications.
Total station autocollimator
The concept of autocollimation as an optical instrument was conceived about a century ago for non-contact measurement of angles. Hybrid technology fulfills a need that novel photonics applications have recently created for the alignment and measurement of optics and lasers. Implementing motorized focusing offers an additional measurement dimension: by focusing on the area to be examined, alignment and deviations from alignment can be measured on the scale of microns. This is relevant in the adjustment phase as well as in the final testing and examination phases of integrated systems. Recent progress, aimed at serving the photonics AR/VR industry, involves developments in inter-alignment, the fusion of several wavelengths (including NIR) into one system, and measurement of multi-laser arrays such as VCSELs with respect to other optical sensors, improving angular optical measurements to a resolution of 0.01 arcseconds.
Typical applications
An electronic autocollimator can be used in the measurement of straightness of machine components (such as guide ways) or the straightness of lines of motion of machine components. Flatness measurement of granite surface plates, for example, can be performed by measuring straightness of multiple lines along the flat surface, then summing the deviations in line angle over the surface. Recent advancements in applications allow angular orientation measurement of wafers. This could also be done without obstructing lines of sight to the wafer's surface itself. It is applicable in wafer measuring machines and wafer processing machines. Other applications include:
Aircraft assembly jigs
Satellite testing
Steam and gas turbines
Marine propulsion machinery
Printing presses
Air compressors
Cranes
Diesel engines
Nuclear reactors
Coal conveyors
Shipbuilding and repair
Rolling mills
Rod and wire mills
Extruder barrels
Optical measurement applications:
Retroreflector measurement
Roof prism measurement
Optical assembly procedures
Alignment of beam delivery systems
Alignment of laser cavity
Testing perpendicularity of laser rods in respect to its axis
Real time measurement of angular stability of mirror elements.
See also
Autocollimation
Collimator
References
Optical instruments
Optical metrology
Measuring instruments | Autocollimator | [
"Technology",
"Engineering"
] | 871 | [
"Measuring instruments"
] |
5,042,021 | https://en.wikipedia.org/wiki/Cofiring | Co-firing (also referred to as complementary firing or co-combustion) is the combustion of two different fuels in the same combustion system. Fuels can be solid fuels, liquid fuels or gaseous, and its source either fossil or renewable. Therefore, use of heavy fuel oil assisting coal power stations may technically be considered co-firing. The term co-firing was popularized in the 1980s and then referred specifically to the use of waste solid residues (paper, plastic, solvents, tars, etc.) or biomass in coal power stations that were designed only for the combustion of coal.
Combustion
Incineration | Cofiring | [
"Chemistry",
"Engineering"
] | 124 | [
"Combustion engineering",
"Incineration",
"Combustion",
"Chemical reaction stubs",
"Chemical process stubs"
] |
5,047,113 | https://en.wikipedia.org/wiki/Plasma%20Physics%20Laboratory%20%28Saskatchewan%29 | The Plasma Physics Laboratory at the University of Saskatchewan was established in 1959 by H. M. Skarsgard. Early work centered on research with a Betatron.
Facilities
STOR-1M
STOR-1M, built in 1983, was Canada's first tokamak. In 1987 it was the site of the world's first demonstration of alternating-current operation in a tokamak.
STOR-M
STOR-M stands for Saskatchewan Torus-Modified. STOR-M is a tokamak located at the University of Saskatchewan. STOR-M is a small tokamak (major radius = 46 cm, minor radius = 12.5 cm) designed for studying plasma heating, anomalous transport and developing novel tokamak operation modes and advanced diagnostics. STOR-M is capable of a 30–40 millisecond plasma discharge with a toroidal magnetic field of between 0.5 and 1 tesla and a plasma current of between 20 and 50 kiloamperes. STOR-M has also demonstrated improved confinement induced by a turbulent heating pulse, electrode biasing and compact torus injection.
References
External links
Fusion power
Nuclear research institutes
Research institutes in Canada
University of Saskatchewan
Plasma physics facilities
Tokamaks | Plasma Physics Laboratory (Saskatchewan) | [
"Physics",
"Chemistry",
"Engineering"
] | 251 | [
"Nuclear research institutes",
"Nuclear organizations",
"Plasma physics",
"Fusion power",
"Plasma physics stubs",
"Plasma physics facilities",
"Nuclear fusion"
] |
5,047,118 | https://en.wikipedia.org/wiki/Range%20%28particle%20radiation%29 | In passing through matter, charged particles ionize and thus lose energy in many steps, until their energy is (almost) zero. The distance to this point is called the range of the particle. The range depends on the type of particle, on its initial energy and on the material through which it passes.
For example, if the ionising particle passing through the material is a positive ion like an alpha particle or proton, it will collide with atomic electrons in the material via Coulombic interaction. Since the mass of the proton or alpha particle is much greater than that of the electron, there will be no significant deviation from the radiation's incident path and very little kinetic energy will be lost in each collision. As such, it will take many successive collisions for such heavy ionising radiation to come to a halt within the stopping medium or material. Maximum energy loss will take place in a head-on collision with an electron.
Since large angle scattering is rare for positive ions, a range may be well defined for that radiation, depending on its energy and charge, as well as the ionisation energy of the stopping medium. Since the nature of such interactions is statistical, the number of collisions required to bring a radiation particle to rest within the medium will vary slightly with each particle (i.e., some may travel further and undergo fewer collisions than others). Hence, there will be a small variation in the range, known as straggling.
The energy loss per unit distance (and hence, the density of ionization), or stopping power also depends on the type and energy of the particle and on the material. Usually, the energy loss per unit distance increases while the particle slows down. The curve describing this fact is called the Bragg curve. Shortly before the end, the energy loss passes through a maximum, the Bragg Peak, and then drops to zero (see the figures in Bragg Peak and in stopping power). This fact is of great practical importance for radiation therapy.
The range of alpha particles in ambient air amounts to only several centimeters; this type of radiation can therefore be stopped by a sheet of paper. Although beta particles scatter much more than alpha particles, a range can still be defined; it frequently amounts to several hundred centimeters of air.
The mean range can be calculated by integrating the inverse stopping power over energy.
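In the continuous-slowing-down approximation, this integral takes the form R(E₀) = ∫₀^E₀ dE / S(E), where S(E) = −dE/dx is the stopping power. The snippet below integrates this numerically for a toy power-law stopping power; the functional form, function name and constants are illustrative placeholders, not data for any real particle or material.

    def csda_range(E0, stopping_power, steps=10000):
        """Mean range in the continuous-slowing-down approximation:
        R(E0) = integral from 0 to E0 of dE / S(E), evaluated with the midpoint rule.
        """
        dE = E0 / steps
        return sum(dE / stopping_power((i + 0.5) * dE) for i in range(steps))

    # Toy stopping power S(E) ~ k / E (roughly Bethe-like at non-relativistic energies);
    # k is a made-up constant, so the answer is illustrative only.
    k = 100.0                      # MeV^2 / cm, assumed
    toy_S = lambda E: k / E        # MeV / cm
    print(f"toy range at 5 MeV: {csda_range(5.0, toy_S):.3f} cm")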
Scaling
The range of a heavy charged particle is approximately proportional to the mass of the particle and the inverse of the density of the medium, and is a function of the initial velocity of the particle.
See also
Stopping power (particle radiation)
Attenuation length
Radiation length
Further reading
Particle physics
Radiation
"Physics",
"Chemistry"
] | 541 | [
"Transport phenomena",
"Physical phenomena",
"Waves",
"Radiation",
"Particle physics",
"Particle physics stubs"
] |
6,630,855 | https://en.wikipedia.org/wiki/Transfersome | Transfersome is a proprietary drug delivery technology, an artificial vesicle designed to exhibit the characteristics of a cell vesicle suitable for controlled and potentially targeted drug delivery. Some evidence has shown efficacy for its use for drug delivery without causing skin irritation, potentially being used to treat skin cancer. Transfersome is made by the German company IDEA AG.
References
Cell biology
Nanomedicine
Drug delivery devices
Dosage forms | Transfersome | [
"Chemistry",
"Materials_science",
"Biology"
] | 84 | [
"Pharmacology",
"Cell biology",
"Drug delivery devices",
"Nanomedicine",
"Nanotechnology"
] |
6,632,365 | https://en.wikipedia.org/wiki/Engine%20cart | An engine cart is an engine support on rollers used at an engine test stand. For example, the combustion engine is mounted on this mobile support for holding the engine in an accurate position during the test.
Compared to a fixed support, the engine cart is used for preparing the combustion engine outside the test stand in a separate rigging area.
The cart is moved manually from the rigging area to the test room.
Engines
Engine technology
Automotive tools | Engine cart | [
"Physics",
"Technology"
] | 90 | [
"Physical systems",
"Machines",
"Engine technology",
"Engines"
] |
6,633,988 | https://en.wikipedia.org/wiki/Coker%20unit | A coker or coker unit is an oil refinery processing unit that converts the residual oil from the vacuum distillation column into low molecular weight hydrocarbon gases, naphtha, light and heavy gas oils, and petroleum coke. The process thermally cracks the long chain hydrocarbon molecules in the residual oil feed into shorter chain molecules leaving behind the excess carbon in the form of petroleum coke.
This petroleum coke can either be fuel grade (high in sulphur and metals) or anode grade (low in sulphur and metals). The raw coke from the coker is often referred to as green coke. In this context, "green" means unprocessed. The further processing of green coke by calcining in a rotary kiln removes residual volatile hydrocarbons from the coke. The calcined petroleum coke can be further processed in an anode baking oven in order to produce anode coke of the desired shape and physical properties. The anodes are mainly used in the aluminium and steel industry.
Types
There are three types of cokers used in oil refineries: delayed coker, fluid coker and flexicoker. The one that is by far the most commonly used is the delayed coker.
A schematic flow diagram depicts a typical delayed coker.
See also
Delayed coker
Shukhov cracking process
Burton process
Petroleum coke
References
External links
Detailed description of cokers and related topics
Quality specifications for petroleum cokes
Oil refineries
Chemical equipment
Petroleum production | Coker unit | [
"Chemistry",
"Engineering"
] | 304 | [
"Chemical equipment",
"Oil refineries",
"Petroleum",
"Oil refining",
"nan"
] |
21,227,565 | https://en.wikipedia.org/wiki/Solid%20state%20ionics | Solid-state ionics is the study of ionic-electronic mixed conductor and fully ionic conductors (solid electrolytes) and their uses. Some materials that fall into this category include inorganic crystalline and polycrystalline solids, ceramics, glasses, polymers, and composites. Solid-state ionic devices, such as solid oxide fuel cells, can be much more reliable and long-lasting, especially under harsh conditions, than comparable devices with fluid electrolytes.
The field of solid-state ionics was first developed in Europe, starting with the work of Michael Faraday on solid electrolytes Ag2S and PbF2 in 1834. Fundamental contributions were later made by Walther Nernst, who derived the Nernst equation and detected ionic conduction in heterovalently doped zirconia, which he applied in his Nernst lamp. Another major step forward was the characterization of silver iodide in 1914. Around 1930, the concept of point defects was established by Yakov Frenkel, Walter Schottky and Carl Wagner, including the development of point-defect thermodynamics by Schottky and Wagner; this helped explain ionic and electronic transport in ionic crystals, ion-conducting glasses, polymer electrolytes and nanocomposites. In the late 20th and early 21st centuries, solid-state ionics focused on the synthesis and characterization of novel solid electrolytes and their applications in solid state battery systems, fuel cells and sensors.
The term solid state ionics was coined in 1967 by Takehiko Takahashi, but did not become widely used until the 1980s, with the emergence of the journal Solid State Ionics. The first international conference on this topic was held in 1972 in Belgirate, Italy, under the name "Fast Ion Transport in Solids, Solid State Batteries and Devices".
History
Foundations
In the early 1830s, Michael Faraday laid the foundations of electrochemistry and solid-state ionics by discovering the motion of ions in liquid and solid electrolytes. Earlier, around 1800, Alessandro Volta used a liquid electrolyte in his voltaic pile, the first electrochemical battery, but failed to realize that ions are involved in the process. Meanwhile, in his work on decomposition of solutions by electric current, Faraday used not only the ideas of ion, cation, anion, electrode, anode, cathode, electrolyte and electrolysis, but even the present-day terms for them. Faraday associated electric current in an electrolyte with the motion of ions, and discovered that ions can exchange their charges with an electrode while they were transformed into elements by electrolysis. He quantified those processes by two laws of electrolysis. The first law (1832) stated that the mass of a product at the electrode, Δm, increases linearly with the amount of charge passed through the electrolyte, Δq. The second law (1833) established the proportionality between Δm and the “electrochemical equivalent” and defined the Faraday constant F as F = (Δq/Δm)(M/z), where M is the molar mass and z is the charge of the ion.
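Faraday's two laws combine into Δm = (Δq/F)·(M/z), the mass transformed at an electrode by a charge Δq for an ion of molar mass M and charge z. A minimal numerical illustration (the function name and the silver example are assumed for illustration):

    F = 96485.0  # Faraday constant, C/mol

    def electrolysis_mass(charge_C, molar_mass_g, z):
        """Mass (in grams) of product formed at an electrode by Faraday's laws:
        delta_m = (delta_q / F) * (M / z)."""
        return (charge_C / F) * (molar_mass_g / z)

    # Passing 1 A for 1 hour (3600 C) through a silver electrolyte (M = 107.87 g/mol, z = 1)
    # deposits roughly 4 g of silver.
    print(f"{electrolysis_mass(3600.0, 107.87, 1):.2f} g")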
In 1834, Faraday discovered ionic conductivity in heated solid electrolytes Ag2S and PbF2. In PbF2, the conductivity increase upon heating was not sudden, but spread over a hundred degrees Celsius. Such behavior, called Faraday transition, is observed in the cation conductors Na2S and Li4SiO4 and anion conductors PbF2, CaF2, SrF2, SrCl2 and LaF3.
Later in 1891, Johann Wilhelm Hittorf reported on the ion transport numbers in electrochemical cells, and in the early 20th century those numbers were determined for solid electrolytes.
First theories and applications
The voltaic pile stimulated a series of improved batteries, such as the Daniell cell, fuel cell and lead acid battery. Their operation was largely understood in the late 1800s from the theories by Wilhelm Ostwald and Walther Nernst. In 1894 Ostwald explained the energy conversion in a fuel cell and stressed that its efficiency was not limited by thermodynamics. Ostwald, together with Jacobus Henricus van 't Hoff, and Svante Arrhenius, was a founding father of electrochemistry and chemical ionic theory, and received a Nobel prize in chemistry in 1909.
His work was continued by Walther Nernst, who derived the Nernst equation and described ionic conduction in heterovalently doped zirconia, which he used in his Nernst lamp. Nernst was inspired by the dissociation theory of Arrhenius published in 1887, which relied on ions in solution. In 1889 he realized the similarity between electrochemical and chemical equilibria, and formulated his equation that correctly predicted the output voltage of various electrochemical cells based on liquid electrolytes from the thermodynamic properties of their components.
Besides his theoretical work, in 1897 Nernst patented the first lamp that used a solid electrolyte. Unlike the existing carbon-filament lamps, the Nernst lamp could operate in air and was twice as efficient, as its emission spectrum was closer to that of daylight. AEG, a lighting company in Berlin, bought Nernst’s patent for one million German gold marks, which was a fortune at the time, and used 800 Nernst lamps to illuminate their booth at the world’s fair Exposition Universelle (1900).
Ionic conductivity in silver halides
Among several solid electrolytes described in the 19th and early 20th century, α-AgI, the high-temperature crystalline form of silver iodide, is widely regarded as the most important one. Its electrical conduction was characterized by Carl Tubandt and E. Lorenz in 1914. Their comparative study of AgI, AgCl and AgBr demonstrated that α-AgI is thermally stable and highly conductive between 147 and 555 °C; the conductivity weakly increased with temperature in this range and then dropped upon melting. This behavior was fully reversible and excluded non-equilibrium effects. Tubandt and Lorenz described other materials with a similar behavior, such as α-CuI, α-CuBr, β-CuBr, and high-temperature phases of Ag2S, Ag2Se and Ag2Te. They associated the conductivity with cations in silver and cuprous halides and with ions and electrons in silver chalcogenides.
Point defects in ionic crystals
In 1926, Yakov Frenkel suggested that in an ionic crystal like AgI, in thermodynamic equilibrium, a small fraction of the cations, α, are displaced from their regular lattice sites into interstitial positions. He related α to the Gibbs energy for the formation of one mole of Frenkel pairs, ΔG, as α = exp(−ΔG/2RT), where T is temperature and R is the gas constant; for a typical value of ΔG = 100 kJ/mol, α is on the order of 10⁻⁷ at 100 °C and 10⁻⁴ at 400 °C. This idea naturally explained the presence of an appreciable fraction of mobile ions in otherwise defect-free ionic crystals, and thus the ionic conductivity in them.
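The Frenkel-pair fraction quoted above follows directly from α = exp(−ΔG/2RT); the short calculation below, with an assumed helper-function name, reproduces the order-of-magnitude values for ΔG = 100 kJ/mol.

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def frenkel_fraction(delta_G_J, T_K):
        """Equilibrium fraction of cations on interstitial sites,
        alpha = exp(-delta_G / (2*R*T)), for a Frenkel-pair formation energy delta_G."""
        return math.exp(-delta_G_J / (2.0 * R * T_K))

    for T_C in (100.0, 400.0):
        alpha = frenkel_fraction(100e3, T_C + 273.15)
        print(f"T = {T_C:.0f} C  ->  alpha ~ {alpha:.1e}")
    # Prints roughly 1e-7 at 100 C and 1e-4 at 400 C.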
Frenkel’s idea was expanded by Carl Wagner and Walter Schottky in their 1929 theory, which described the equilibrium thermodynamics of point defects in ionic crystals. In particular, Wagner and Schottky related the deviations from stoichiometry in those crystals with the chemical potentials of the crystal components, and explained the phenomenon of mixed electronic and ionic conduction.
Wagner and Schottky considered four extreme cases of point-defect disorder in a stoichiometric binary ionic crystal of type AB:
Pairs of interstitial cations A+ and lattice vacancies (Frenkel defects)
Pairs of interstitial anions B− and lattice vacancies (anti-Frenkel defects)
Pairs of interstitial cations A+ and interstitial anions B− with no vacancies
Pairs of A and B-type lattice vacancies with no interstitials (Schottky disorder).
Type-3 disorder does not occur in practice, and type 2 is observed only in rare cases when anions are smaller than cations, while both types 1 and 4 are common and show the same exp(-ΔG/2RT) temperature dependence.
Later in 1933, Wagner suggested that in metal oxides an excess of metal would result in extra electrons, while a deficit of metal would produce electron holes, i.e., that atomic non-stoichiometry would result in a mixed ionic-electronic conduction.
Other types of disorder
Ionic glasses
The studies of crystalline ionic conductors where excess ions were provided by point defects continued through the 1950s, and the specific mechanism of conduction was established for each compound depending on its ionic structure. The emergence of glassy and polymeric electrolytes in the late 1970s provided new ionic conduction mechanisms. A relatively wide range of conductivities was attained in glasses, wherein mobile ions were dynamically decoupled from the matrix. It was found that the conductivity could be increased by doping a glass with certain salts, or by using a glass mixture. The conductivity values could be as high as 0.03 S/cm at room temperature, with activation energies as low as 20 kJ/mol. Compared to crystals, glasses have isotropic properties, continuously tunable composition and good workability; they lack the detrimental grain boundaries and can be molded into any shape, but understanding their ionic transport was complicated by the lack of long-range order.
Historically, evidence for ionic conductivity was provided back in the 1880s, when German scientists noticed that a well-calibrated thermometer made of Thuringian glass would show −0.5 °C instead of 0 °C when placed in ice shortly after immersion in boiling water, and recover only after several months. In 1883, they reduced this effect tenfold by replacing a mixture of sodium and potassium in the glass with either sodium or potassium alone. This finding helped Otto Schott develop the first accurate lithium-based thermometer. More systematic studies on ionic conductivity in glass appeared in 1884, but received broad attention only a century later. Several universal laws have been empirically formulated for ionic glasses and extended to other ionic conductors, such as the frequency dependence of electrical conductivity σ(ν) − σ(0) ∝ ν^p, where the exponent p depends on the material, but not on temperature, at least below ~100 K. This behavior is a fingerprint of activated hopping conduction among nearby sites.
Polymer electrolytes
In 1975, Peter V. Wright, a polymer chemist from Sheffield (UK), produced the first polymer electrolyte, which contained sodium and potassium salts in a polyethylene oxide (PEO) matrix. Later another type of polymer electrolytes, polyelectrolyte, was put forward, where ions moved through an electrically charged, rather than neutral, polymer matrix. Polymer electrolytes showed lower conductivities than glasses, but they were cheaper, much more flexible and could be easier machined and shaped into various forms. While ionic glasses are typically operated below, polymer conductors are typically heated above their glass transition temperatures. Consequently, both the electric field and mechanical deformation decay on a similar time scale in polymers, but not in glasses.
Between 1983 and 2001 it was believed that the amorphous fraction was responsible for ionic conductivity, i.e., that (nearly) complete structural disorder was essential for fast ionic transport in polymers. However, a number of crystalline polymers described in 2001 and later show ionic conductivity as high as 0.01 S/cm at 30 °C and an activation energy of only 0.24 eV.
Nanostructures
In the 1970s–80s, it was realized that nanosized systems may affect ionic conductivity, opening the new field of nanoionics. In 1973, it was reported that the ionic conductivity of lithium iodide (LiI) crystals could be increased 50-fold by adding a fine powder of "insulating" material (alumina). This effect was reproduced in the 1980s in Ag- and Tl-halides doped with alumina nanoparticles. Similarly, the addition of insulating nanoparticles helped increase the conductivity of ionic polymers. These unexpected results were explained by charge separation at the matrix-nanoparticle interface, which provided additional conductive channels to the matrix; the small size of the filler particles was required to increase the area of this interface. Similar charge-separation effects were observed for grain boundaries in crystalline ionic conductors.
Applications
By 1971, solid-state cells and batteries based on rubidium silver iodide (RbAg4I5) have been designed and tested in a wide range of temperatures and discharge currents. Despite the relatively high conductivity of RbAg4I5, they have never been commercialized due to a low overall energy content per unit weight (ca. 5 W·h/kg).
On the contrary, LiI, which has a conductivity of only about 10⁻⁷ S/cm at room temperature, found wide-scale application in batteries for artificial pacemakers. The first such device, based on undoped LiI, was implanted into a human in March 1972 in Ferrara, Italy. Later models used as electrolyte a film of LiI doped with alumina nanoparticles to increase its conductivity. The LiI was formed in an in situ chemical reaction between the Li anode and the iodine-poly(2-vinylpyridine) cathode, and therefore self-healed from erosion and cracks during operation.
Sodium-sulfur cells, based on ceramic β-Al2O3 electrolyte sandwiched between molten-sodium anode and molten-sulfur cathode showed high energy densities and were considered for car batteries in the 1990s, but disregarded due to the brittleness of alumina, which resulted in cracks and critical failure due to reaction between molten sodium and sulfur. Replacement of β-Al2O3 with NASICON did not save this application because it did not solve the cracking problem, and because NASICON reacted with the molten sodium.
Yttria-stabilized zirconia is used as a solid electrolyte in oxygen sensors in cars, generating voltage that depends on the ratio of oxygen and exhaust gas and providing electronic feedback to the fuel injector. Such sensors are also installed at many metallurgical and glass-making factories. Similar sensors of CO2, chlorine and other gases based on solid silver halide electrolytes have been proposed in the 1980s–1990s.
Since the mid-1980s, a Li-based solid electrolyte has been used to separate the electrochromic film (typically WO3) and the ion-storing film (typically LiCoO2) in smart glass, a window whose transparency is controlled by external voltage.
Solid-state ionic conductors are essential components of lithium-ion batteries, proton exchange membrane fuel cells (PEMFCs), supercapacitors, a novel class of electrochemical energy storage devices, and solid oxide fuel cells, devices that produce electricity from oxidizing a fuel. Nafion, a flexible fluoropolymer-copolymer discovered in the late 1960s, is widely used as a polymer electrolyte in PEMFCs.
See also
Solid-state battery
References
Electrochemistry | Solid state ionics | [
"Chemistry"
] | 3,235 | [
"Electrochemistry"
] |
21,227,807 | https://en.wikipedia.org/wiki/DNA%20demethylation | For molecular biology in mammals, DNA demethylation causes replacement of 5-methylcytosine (5mC) in a DNA sequence by cytosine (C) (see figure of 5mC and C). DNA demethylation can occur by an active process at the site of a 5mC in a DNA sequence or, in replicating cells, by preventing addition of methyl groups to DNA so that the replicated DNA will largely have cytosine in the DNA sequence (5mC will be diluted out).
Methylated cytosine is frequently present in the linear DNA sequence where a cytosine is followed by a guanine in a 5' → 3' direction (a CpG site). In mammals, DNA methyltransferases (which add methyl groups to DNA bases) exhibit a strong sequence preference for cytosines at CpG sites. There appear to be more than 20 million CpG dinucleotides in the human genome (see genomic distribution). In mammals, on average, 70% to 80% of CpG cytosines are methylated, though the level of methylation varies with different tissues. Methylated cytosines often occur in groups or CpG islands within the promoter regions of genes, where such methylation may reduce or silence gene expression (see gene expression). Methylated cytosines in the gene body, however, are positively correlated with expression.
Almost 100% DNA demethylation occurs by a combination of passive dilution and active enzymatic removal during the reprogramming that occurs in early embryogenesis and in gametogenesis. Another large demethylation, of about 3% of all genes, can occur by active demethylation in neurons during formation of a strong memory. After surgery, demethylations are found in peripheral blood mononuclear cells at sites annotated to immune system genes. Demethylations also occur during the formation of cancers. During global DNA hypomethylation of tumor genomes, there is a minor to moderate reduction of the number of methylated cytosines (5mC) amounting to a loss of about 5% to 20% on average of the 5mC bases.
Embryonic development
Early embryonic development
The mouse sperm genome is 80–90% methylated at its CpG sites in DNA, amounting to about 20 million methylated sites. After fertilization, the paternal chromosome is almost completely demethylated in six hours by an active process, before DNA replication (blue line in Figure).
Demethylation of the maternal genome occurs by a different process. In the mature oocyte, about 40% of its CpG sites in DNA are methylated. While somatic cells of mammals have three main DNA methyltransferases (which add methyl groups to cytosines at CpG sites), DNMT1, DNMT3A, and DNMT3B, in the pre-implantation embryo up to the blastocyst stage (see Figure), the only methyltransferase present is an isoform of DNMT1 designated DNMT1o. DNMT1o has an alternative oocyte-specific promoter and first exon (exon 1o) located 5' of the somatic and spermatocyte promoters. As reviewed by Howell et al., DNMT1o is sequestered in the cytoplasm of mature oocytes and in 2-cell and 4-cell embryos, but at the 8-cell stage is only present in the nucleus. At the 16 cell stage (the morula) DNMT1o is again found only in the cytoplasm. It appears that demethylation of the maternal chromosomes largely takes place by blockage of the methylating enzyme DNMT1o from entering the nucleus except briefly at the 8 cell stage. The maternal-origin DNA thus undergoes passive demethylation by dilution of the methylated maternal DNA during replication (red line in Figure). The morula (at the 16 cell stage), has only a small amount of DNA methylation (black line in Figure).
DNMT3b begins to be expressed in the blastocyst. Methylation begins to increase at 3.5 days after fertilization in the blastocyst, and a large wave of methylation then occurs on days 4.5 to 5.5 in the epiblast, going from 12% to 62% methylation, and reaching maximum level after implantation in the uterus. By day seven after fertilization, the newly formed primordial germ cells (PGC) in the implanted embryo segregate from the remaining somatic cells. At this point the PGCs have about the same level of methylation as the somatic cells.
Gametogenesis
The newly formed primordial germ cells (PGC) in the implanted embryo devolve from the somatic cells. At this point the PGCs have high levels of methylation. These cells migrate from the epiblast toward the gonadal ridge. As reviewed by Messerschmidt et al., the majority of PGCs are arrested in the G2 phase of the cell cycle, while they migrate toward the hindgut during embryo days 7.5 to 8.5. Then demethylation of the PGCs takes place in two waves. At day 9.5 the primordial germ cells begin to rapidly replicate going from about 200 PGCs at embryo day 9.5 to about 10,000 PGCs at day 12.5. During days 9.5 to 12.5 DNMT3a and DNMT3b are repressed and DNMT1 is present in the nucleus at a high level. But DNMT1 is unable to methylate cytosines during days 9.5 to 12.5 because the UHRF1 gene (also known as NP95) is repressed and UHRF1 is an essential protein needed to recruit DNMT1 to replication foci where maintenance DNA methylation takes place. This is a passive, dilution form of demethylation.
In addition, from embryo day 9.5 to 13.5 there is an active form of demethylation. As indicated below in "Molecular stages of active reprogramming," two enzymes are central to active demethylation. These are a ten-eleven translocation methylcytosine dioxygenase (TET) and thymine-DNA glycosylase (TDG). One particular TET enzyme, TET1, and TDG are present at high levels from embryo day 9.5 to 13.5, and are employed in active demethylation during gametogenesis. PGC genomes display the lowest levels of DNA methylation of any cells in the entire life cycle of the mouse at embryonic day 13.5.
Learning and Memory
Learning and memory have levels of permanence, differing from other mental processes such as thought, language, and consciousness, which are temporary in nature. Learning and memory can be either accumulated slowly (multiplication tables) or rapidly (touching a hot stove), but once attained, can be recalled into conscious use for a long time. Rats subjected to one instance of contextual fear conditioning create an especially strong long-term memory. At 24 hours after training, 9.17% of the genes in the genomes of rat hippocampus neurons were found to be differentially methylated. This included more than 2,000 differentially methylated genes at 24 hours after training, with over 500 genes being demethylated. Similar results to that in the rat hippocampus were also obtained in mice with contextual fear conditioning.
The hippocampus region of the brain is where contextual fear memories are first stored (see figure of the brain, this section), but this storage is transient and does not remain in the hippocampus. In rats contextual fear conditioning is abolished when the hippocampus is subjected to hippocampectomy just one day after conditioning, but rats retain a considerable amount of contextual fear when hippocampectomy is delayed by four weeks. In mice, examined at 4 weeks after conditioning, the hippocampus methylations and demethylations were reversed (the hippocampus is needed to form memories but memories are not stored there) while substantial differential CpG methylation and demethylation occurred in cortical neurons during memory maintenance. There were 1,223 differentially methylated genes in the anterior cingulate cortex of mice four weeks after contextual fear conditioning. Thus, while there were many methylations in the hippocampus shortly after memory was formed, all these hippocampus methylations were demethylated as soon as four weeks later.
Demethylation in Cancer
The human genome contains about 28 million CpG sites, and roughly 60% of the CpG sites are methylated at the 5 position of the cytosine. During formation of a cancer there is an average reduction of the number of methylated cytosines of about 5% to 20%, or about 840,000 to 3.4 million demethylations of CpG sites.
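The quoted range follows from simple arithmetic; the snippet below is only a back-of-the-envelope check using the approximate figures given above.

```python
# Back-of-the-envelope check of the demethylation figures quoted above.
cpg_sites = 28_000_000         # approximate CpG sites in the human genome
methylated = 0.60 * cpg_sites  # ~60% of CpG sites are methylated

low, high = 0.05 * methylated, 0.20 * methylated
print(f"methylated CpGs: {methylated:,.0f}")
print(f"5%-20% loss: {low:,.0f} to {high:,.0f} demethylation events")
# -> roughly 840,000 to 3.4 million CpG sites demethylated per tumor genome
```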
DNMT1 methylates CpGs on hemi-methylated DNA during DNA replication. Thus, when a DNA strand has a methylated CpG, and the newly replicated strand during semi-conservative replication lacks a methyl group on the complementary CpG, DNMT1 is normally recruited to the hemimethylated site and adds a methyl group to cytosine in the newly synthesized CpG. However, recruitment of DNMT1 to hemimethylated CpG sites during DNA replication depends on the UHRF1 protein. If UHRF1 does not bind to a hemimethylated CpG site, then DNMT1 is not recruited and cannot methylate the newly synthesized CpG site. The arginine methyltransferase PRMT6 regulates DNA methylation by methylating the arginine at position 2 of histone 3 (H3R2me2a). (See Protein methylation#Arginine.) In the presence of H3R2me2a UHRF1 can not bind to a hemimethylated CpG site, and then DNMT1 is not recruited to the site, and the site remains hemimethylated. Upon further rounds of replication the methylated CpG is passively diluted out. PRMT6 is frequently overexpressed in many types of cancer cells. The overexpression of PRMT6 may be a source of DNA demethylation in cancer.
Molecular stages of active reprogramming
Three molecular stages are required for actively, enzymatically reprogramming the DNA methylome. Stage 1: Recruitment. The enzymes needed for reprogramming are recruited to genome sites that require demethylation or methylation. Stage 2: Implementation. The initial enzymatic reactions take place. In the case of methylation, this is a short step that results in the methylation of cytosine to 5-methylcytosine. Stage 3: Base excision DNA repair. The intermediate products of demethylation are processed by specific enzymes of the base excision DNA repair pathway that finally restore cytosine in the DNA sequence.
Stage 2 of active demethylation
Demethylation of 5-methylcytosine to generate 5-hydroxymethylcytosine (5hmC) very often initially involves oxidation of 5mC (see Figure in this section) by ten-eleven translocation methylcytosine dioxygenases (TET enzymes). The molecular steps of this initial demethylation are shown in detail in TET enzymes. In successive steps (see Figure) TET enzymes further hydroxylate 5hmC to generate 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). Thymine-DNA glycosylase (TDG) recognizes the intermediate bases 5fC and 5caC and cleaves the glycosidic bond, resulting in an apyrimidinic site (AP site). This is followed by base excision repair (stage 3). In an alternative oxidative deamination pathway, 5hmC can be oxidatively deaminated by APOBEC (AID/APOBEC) deaminases to form 5-hydroxymethyluracil (5hmU). Also, 5mC can be converted to thymine (Thy). 5hmU can be cleaved by TDG, MBD4, NEIL1 or SMUG1. AP sites and T:G mismatches are then repaired by base excision repair (BER) enzymes to yield cytosine (Cyt). The TET family of dioxygenases is employed in the most frequent type of demethylation reactions.
TET family
TET dioxygenase isoforms include at least two isoforms of TET1, one of TET2 and three isoforms of TET3. The full-length canonical TET1 isoform appears virtually restricted to early embryos, embryonic stem cells and primordial germ cells (PGCs). The dominant TET1 isoform in most somatic tissues, at least in the mouse, arises from alternative promoter usage which gives rise to a short transcript and a truncated protein designated TET1s. The isoforms of TET3 are the full length form TET3FL, a short form splice variant TET3s, and a form that occurs in oocytes and neurons designated TET3o. TET3o is created by alternative promoter use and contains an additional first N-terminal exon coding for 11 amino acids. TET3o only occurs in oocytes and neurons and is not expressed in embryonic stem cells or in any other cell type or adult mouse tissue tested. Whereas TET1 expression can barely be detected in oocytes and zygotes, and TET2 is only moderately expressed, the TET3 variant TET3o shows extremely high levels of expression in oocytes and zygotes, but is nearly absent at the 2-cell stage. It is possible that TET3o, high in neurons, oocytes and zygotes at the one cell stage, is the major TET enzyme utilized when very large scale rapid demethylations occur in these cells.
Stage 1 of demethylation - recruitment of TET to DNA
The TET enzymes do not specifically bind to 5-methylcytosine except when recruited. Without recruitment or targeting, TET1 predominantly binds to high CG promoters and CpG islands (CGIs) genome-wide by its CXXC domain that can recognize un-methylated CGIs. TET2 does not have an affinity for 5-methylcytosine in DNA. The CXXC domain of the full-length TET3, which is the predominant form expressed in neurons, binds most strongly to CpGs where the C was converted to 5-carboxycytosine (5caC). However, it also binds to un-methylated CpGs.
For a TET enzyme to initiate demethylation it must first be recruited to a methylated CpG site in DNA. Two of the proteins shown to recruit a TET enzyme to a methylated cytosine in DNA are OGG1 (see figure Initiation of DNA demethylation at a CpG site) and EGR1.
OGG1
Oxoguanine glycosylase (OGG1) catalyses the first step in base excision repair of the oxidatively damaged base 8-OHdG. OGG1 finds 8-OHdG very rapidly, sliding along linear DNA at about 1,000 base pairs in 0.1 seconds, and OGG1 proteins bind to oxidatively damaged DNA with a half-maximum time of about 6 seconds. When OGG1 finds 8-OHdG it changes conformation and complexes with 8-OHdG in its binding pocket. However, OGG1 does not immediately act to remove the 8-OHdG: half-maximum removal of 8-OHdG takes about 30 minutes in HeLa cells in vitro, or about 11 minutes in the livers of irradiated mice. DNA oxidation by reactive oxygen species preferentially occurs at a guanine in a methylated CpG site, because of a lowered ionization potential of guanine bases adjacent to 5-methylcytosine. TET1 binds to (is recruited to) the OGG1 bound to 8-OHdG (see figure). This likely allows TET1 to demethylate an adjacent methylated cytosine. When human mammary epithelial cells (MCF-10A) were treated with H2O2, 8-OHdG increased in DNA by 3.5-fold and this caused about 80% demethylation of the 5-methylcytosines in the MCF-10A genome.
EGR1
The gene early growth response protein 1 (EGR1) is an immediate early gene (IEG). EGR1 can rapidly be induced by neuronal activity. The defining characteristic of IEGs is the rapid and transient up-regulation—within minutes—of their mRNA levels independent of protein synthesis. In adulthood, EGR1 is expressed widely throughout the brain, maintaining baseline expression levels in several key areas of the brain including the medial prefrontal cortex, striatum, hippocampus and amygdala. This expression is linked to control of cognition, emotional response, social behavior and sensitivity to reward. EGR1 binds to DNA at sites with the motifs 5′-GCGTGGGCG-3′ and 5'-GCGGGGGCGG-3′ and these motifs occur primarily in promoter regions of genes. The short isoform TET1s is expressed in the brain. EGR1 and TET1s form a complex mediated by the C-terminal regions of both proteins, independently of association with DNA. EGR1 recruits TET1s to genomic regions flanking EGR1 binding sites. In the presence of EGR1, TET1s is capable of locus-specific demethylation and activation of the expression of downstream genes regulated by EGR1.
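As an illustration of how such binding motifs can be located computationally, the sketch below scans a DNA string for the two EGR1 motifs quoted above. It is a generic string search rather than a published tool, and the example sequence is invented.

```python
import re

# EGR1 binding motifs quoted in the text (5' -> 3').
EGR1_MOTIFS = ["GCGTGGGCG", "GCGGGGGCGG"]

def find_egr1_sites(promoter_seq: str):
    """Return (motif, start_position) pairs for EGR1 motifs in a sequence."""
    seq = promoter_seq.upper()
    hits = []
    for motif in EGR1_MOTIFS:
        for m in re.finditer(motif, seq):
            hits.append((motif, m.start()))
    return sorted(hits, key=lambda h: h[1])

# Invented example promoter fragment containing one copy of each motif.
example = "TTAGCGTGGGCGATTCCCGGGAAGCGGGGGCGGTTAA"
for motif, pos in find_egr1_sites(example):
    print(f"{motif} found at position {pos}")
```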
DNA demethylation intermediate 5hmC
As indicated in the Figure above, captioned "Demethylation of 5-methylcytosine," the first step in active demethylation is a TET oxidation of 5-methylcytosine (5mC) to 5-hydroxymethylcytosine (5hmC). The demethylation process, in some tissues and at some genome locations, may stop at that point. As reviewed by Uribe-Lewis et al., in addition to being an intermediate in active DNA demethylation, 5hmC is often a stable DNA modification. Within the genome, 5hmC is located at transcriptionally active genes, regulatory elements and chromatin associated complexes. In particular, 5hmC is dynamically changed and positively correlated with active gene transcription during cell lineage specification, and high levels of 5hmC are found in embryonic stem cells and in the central nervous system. In humans, defective 5-hydroxymethylating activity is associated with a phenotype of lymphoproliferation, immunodeficiency and autoimmunity.
Stage 3 base excision repair
The third stage of DNA demethylation is removal of the intermediate products of demethylation generated by a TET enzyme by base excision repair. As indicated above in Stage 2, after 5mC is first oxidized by a TET to form 5hmC, further oxidation of 5hmC by TET yields 5fC and oxidation of 5fC by TET yields 5caC. Both 5fC and 5caC are recognized by a DNA glycosylase, TDG, a base excision repair enzyme, as an abnormal base. As shown in the Figure in this section, TDG removes the abnormal base (e.g. 5fC) while leaving the sugar-phosphate backbone intact, creating an apurinic/apyrimidinic site, commonly referred to as an AP site. In this Figure, the 8-OHdG is left in the DNA, since it may have been present when OGG1 attracted TET1 to the CpG site with a methylated cytosine. After an AP site is formed, AP endonuclease creates a nick in the phosphodiester backbone of the AP site that was formed when the TDG DNA glycosylase removed the 5fC or 5caC. The human AP endonuclease incises DNA 5′ to the AP site by a hydrolytic mechanism, leaving a 3′-hydroxyl and a 5′-deoxyribose phosphate (5' dRP) residue. This is followed by either short patch or long patch repair. In short patch repair, 5′ dRP lyase trims the 5′ dRP end to form a phosphorylated 5′ end. This is followed by DNA polymerase β (pol β) adding a single cytosine to pair with the pre-existing guanine in the complementary strand and then DNA ligase to seal the cut strand. In long patch repair, DNA synthesis is thought to be mediated by polymerase δ and polymerase ε performing displacement synthesis to form a flap. Pol β can also perform long-patch displacement synthesis. Long-patch synthesis typically inserts 2–10 new nucleotides. Then flap endonuclease removes the flap, and this is followed by DNA ligase to seal the strand. At this point there has been a complete replacement of the 5-methylcytosine by cytosine (demethylation) in the DNA sequence.
Demethylation after exercise
Physical exercise has well established beneficial effects on learning and memory (see Neurobiological effects of physical exercise). BDNF is a particularly important regulator of learning and memory. As reviewed by Fernandes et al., in rats, exercise enhances the hippocampus expression of the gene Bdnf, which has an essential role in memory formation. Enhanced expression of Bdnf occurs through demethylation of its CpG island promoter at exon IV and this demethylation depends on steps illustrated in the two figures.
Demethylation after exposure to traffic related air pollution
In a panel of healthy adults, negative associations were found between total DNA methylation and exposure to traffic related air pollution. DNA methylation levels were associated both with recent and chronic exposure to Black Carbon as well as benzene.
Peripheral sensory neuron regeneration
After injury, neurons in the adult peripheral nervous system can switch from a dormant state with little axonal growth to robust axon regeneration. DNA demethylation in mature mammalian neurons removes barriers to axonal regeneration. This demethylation, in regenerating mouse peripheral neurons, depends upon TET3 to generate 5-hydroxymethylcytosine (5hmC) in DNA. 5hmC was altered in a large set of regeneration-associated genes (RAGs), including well-known RAGs such as Atf3, Bdnf, and Smad1, that regulate the axon growth potential of neurons.
See also
DNA methylation
DNA demethylation
References
Molecular biology
DNA
Methylation
Epigenetics | DNA demethylation | [
"Chemistry",
"Biology"
] | 4,951 | [
"Biochemistry",
"Methylation",
"Molecular biology"
] |
882,160 | https://en.wikipedia.org/wiki/Accordion%20effect | In physics, the accordion effect (also known as the slinky effect, concertina effect, elastic band effect, and string instability) occurs when fluctuations in the motion of a traveling body cause disruptions in the flow of elements following it. This can happen in road traffic, foot marching, bicycle and motor racing, and, in general, to processes in a pipeline. These are examples of nonlinear processes. The accordion effect generally decreases the throughput of the system in which it occurs.
In traffic
The accordion effect in road traffic refers to the typical decelerations and accelerations of a vehicle when the vehicle in front decelerates and accelerates. These fluctuations in speed propagate backwards and typically grow larger further down the line, resulting in reduced throughput of road traffic. For this reason, the Norwegian Public Roads Administration recommends that each driver try to follow the accelerations of the vehicle in front closely, keeping a steady gap that is neither too small nor too large. Gaps that are too small, combined with sudden braking, can lead to rear-end collisions.
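A toy car-following simulation makes the backward growth of the disturbance concrete. The model below is purely illustrative; its parameters and update rule are arbitrary choices, not drawn from the cited recommendations. Each driver relaxes toward a speed proportional to the gap ahead, and a brief braking manoeuvre by the lead car can produce progressively deeper speed dips further down the line (string instability).

```python
# Toy car-following model illustrating the accordion effect.
# Illustrative assumptions: each driver adjusts speed toward a value
# proportional to the gap ahead, with identical reaction gain; the lead
# car briefly brakes and then resumes its original speed.

N_CARS, STEPS, DT = 10, 400, 0.1
GAIN, DESIRED_GAP_TIME = 0.8, 1.5   # 1/s, s

pos = [-(i * 30.0) for i in range(N_CARS)]   # cars 30 m apart
vel = [20.0] * N_CARS                        # all start at 20 m/s
min_speed = [20.0] * N_CARS

for step in range(STEPS):
    t = step * DT
    lead_target = 10.0 if 5.0 <= t < 10.0 else 20.0   # lead car brakes briefly
    acc = [(lead_target - vel[0]) * GAIN]
    for i in range(1, N_CARS):
        gap = pos[i - 1] - pos[i]
        target = min(25.0, gap / DESIRED_GAP_TIME)     # slower when the gap shrinks
        acc.append((target - vel[i]) * GAIN)
    for i in range(N_CARS):
        vel[i] = max(0.0, vel[i] + acc[i] * DT)
        pos[i] += vel[i] * DT
        min_speed[i] = min(min_speed[i], vel[i])

for i, v in enumerate(min_speed):
    print(f"car {i}: minimum speed {v:.1f} m/s")
# With these parameter choices the dip in speed tends to deepen toward the
# back of the queue, i.e. the disturbance grows as it propagates backwards.
```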
In motorsports
In the 2020 Tuscan Grand Prix, an accordion effect after the restart under the safety car caused five of the last cars in the field to crash. Data analysis of the crash showed that each consecutive driver accelerated faster and faster, and also that each consecutive driver braked later and later.
See also
Bus bunching
Wavelength
Doppler effect
Traffic wave
References
External links
Accordion Effect - Tabroot
Waves | Accordion effect | [
"Physics",
"Chemistry"
] | 294 | [
"Physical phenomena",
"Waves",
"Motion (physics)",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
882,410 | https://en.wikipedia.org/wiki/Untouchable%20number | In mathematics, an untouchable number is a positive integer that cannot be expressed as the sum of all the proper divisors of any positive integer. That is, these numbers are not in the image of the aliquot sum function. Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable.
Examples
The number 4 is not untouchable, as it is equal to the sum of the proper divisors of 9: 1 + 3 = 4.
The number 5 is untouchable, as it is not the sum of the proper divisors of any positive integer: 5 = 1 + 4 is the only way to write 5 as the sum of distinct positive integers including 1, but if 4 divides a number, 2 does also, so 1 + 4 cannot be the sum of all of any number's proper divisors (since the list of factors would have to contain both 4 and 2).
The number 6 is not untouchable, as it is equal to the sum of the proper divisors of 6 itself: 1 + 2 + 3 = 6.
The first few untouchable numbers are
2, 5, 52, 88, 96, 120, 124, 146, 162, 188, 206, 210, 216, 238, 246, 248, 262, 268, 276, 288, 290, 292, 304, 306, 322, 324, 326, 336, 342, 372, 406, 408, 426, 430, 448, 472, 474, 498, ... .
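The beginning of this sequence can be recovered by brute force: compute the aliquot sum of every integer up to a bound and list the small numbers that are never hit. The sketch below is one such check; the search bound suffices for targets up to 500 because a composite n has a proper divisor of at least √n, so its aliquot sum is at least √n, while primes only contribute the value 1.

```python
# Brute-force search for untouchable numbers: positive integers that are
# never the aliquot sum (sum of proper divisors) of any positive integer.

LIMIT = 500              # report untouchable numbers up to this value
SEARCH_BOUND = 250_000   # enough: composite n has a proper divisor >= sqrt(n),
                         # so s(n) <= LIMIT forces n <= LIMIT**2

# Sieve the aliquot sums s(n) for all n up to SEARCH_BOUND.
s = [0] * (SEARCH_BOUND + 1)
for d in range(1, SEARCH_BOUND // 2 + 1):
    for m in range(2 * d, SEARCH_BOUND + 1, d):
        s[m] += d

touched = set(s[1:])     # every value that occurs as an aliquot sum
untouchable = [k for k in range(2, LIMIT + 1) if k not in touched]
print(untouchable)
# Expected: [2, 5, 52, 88, 96, 120, 124, 146, 162, 188, ...] as listed above.
```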
Properties
The number 5 is believed to be the only odd untouchable number, but this has not been proven. It would follow from a slightly stronger version of the Goldbach conjecture, since the sum of the proper divisors of pq (with p, q distinct primes) is 1 + p + q. Thus, if a number n can be written as a sum of two distinct primes, then n + 1 is not an untouchable number. It is expected that every even number larger than 6 is a sum of two distinct primes, so probably no odd number larger than 7 is an untouchable number; moreover 1, 3 and 7 are the aliquot sums of 2, 4 and 8 respectively, so only 5 can be an odd untouchable number. Thus it appears that besides 2 and 5, all untouchable numbers are composite numbers (since except 2, all even numbers are composite). No perfect number is untouchable, since, at the very least, it can be expressed as the sum of its own proper divisors. Similarly, none of the amicable numbers or sociable numbers are untouchable. Also, none of the Mersenne numbers are untouchable, since Mn = 2^n − 1 is equal to the sum of the proper divisors of 2^n.
No untouchable number is one more than a prime number, since if p is prime, then the sum of the proper divisors of p^2 is p + 1. Also, no untouchable number is three more than a prime number, except 5, since if p is an odd prime then the sum of the proper divisors of 2p is p + 3.
Infinitude
There are infinitely many untouchable numbers, a fact that was proven by Paul Erdős. According to Chen & Zhao, their natural density is at least d > 0.06.
See also
Aliquot sequence
Nontotient
Noncototient
Weird number
References
Richard K. Guy, Unsolved Problems in Number Theory (3rd ed), Springer Verlag, 2004 ; section B10.
External links
Arithmetic dynamics
Divisor function
Integer sequences | Untouchable number | [
"Mathematics"
] | 766 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Arithmetic dynamics",
"Combinatorics",
"Numbers",
"Number theory",
"Dynamical systems"
] |
882,427 | https://en.wikipedia.org/wiki/Categorial%20grammar | Categorial grammar is a family of formalisms in natural language syntax that share the central assumption that syntactic constituents combine as functions and arguments. Categorial grammar posits a close relationship between the syntax and semantic composition, since it typically treats syntactic categories as corresponding to semantic types. Categorial grammars were developed in the 1930s by Kazimierz Ajdukiewicz and in the 1950s by Yehoshua Bar-Hillel and Joachim Lambek. It saw a surge of interest in the 1970s following the work of Richard Montague, whose Montague grammar assumed a similar view of syntax. It continues to be a major paradigm, particularly within formal semantics.
Basics
A categorial grammar consists of two parts: a lexicon, which assigns a set of types (also called categories) to each basic symbol, and some type inference rules, which determine how the type of a string of symbols follows from the types of the constituent symbols. It has the advantage that the type inference rules can be fixed once and for all, so that the specification of a particular language grammar is entirely determined by the lexicon.
A categorial grammar shares some features with the simply typed lambda calculus.
Whereas the lambda calculus has only one function type A → B, a categorial grammar typically has two function types, one type that is applied on the left, and one on the right. For example, a simple categorial grammar might have two function types B/A and A\B. The first, B/A, is the type of a phrase that results in a phrase of type B when followed (on the right) by a phrase of type A. The second, A\B, is the type of a phrase that results in a phrase of type B when preceded (on the left) by a phrase of type A.
The notation is based upon algebra. A fraction when multiplied by (i.e. concatenated with) its denominator yields its numerator. As concatenation is not commutative, it makes a difference whether the denominator occurs to the left or right. The concatenation must be on the same side as the denominator for it to cancel out.
The first and simplest kind of categorial grammar is called a basic categorial grammar, or sometimes an AB-grammar (after Ajdukiewicz and Bar-Hillel).
Given a set of primitive types Prim, let Tp(Prim) be the set of types constructed from primitive types. In the basic case, this is the least set such that Prim ⊆ Tp(Prim) and if X, Y ∈ Tp(Prim) then (X/Y), (X\Y) ∈ Tp(Prim).
Think of these as purely formal expressions freely generated from the primitive types; any semantics will be added later. Some authors assume a fixed infinite set of primitive types used by all grammars, but by making the primitive types part of the grammar, the whole construction is kept finite.
A basic categorial grammar is a tuple (Σ, Prim, S, ◁) where Σ is a finite set of symbols, Prim is a finite set of primitive types, and S ∈ Tp(Prim).
The relation ◁ ⊆ Σ × Tp(Prim) is the lexicon, which relates types to symbols.
Since the lexicon is finite, it can be specified by listing a set of pairs like word ◁ TYPE.
Such a grammar for English might have three basic types (N, NP and S), assigning count nouns the type N, complete noun phrases the type NP, and sentences the type S.
Then an adjective could have the type N/N, because if it is followed by a noun then the whole phrase is a noun.
Similarly, a determiner has the type NP/N, because it forms a complete noun phrase when followed by a noun.
Intransitive verbs have the type NP\S, and transitive verbs the type (NP\S)/NP.
Then a string of words is a sentence if it has overall type S.
For example, take the string "the bad boy made that mess". Now "the" and "that" are determiners, "boy" and "mess" are nouns, "bad" is an adjective, and "made" is a transitive verb, so the lexicon is
{the ◁ NP/N,
that ◁ NP/N,
boy ◁ N,
mess ◁ N,
bad ◁ N/N,
made ◁ (NP\S)/NP}.
and the sequence of types in the string is
NP/N, N/N, N, (NP\S)/NP, NP/N, N;
now find functions and appropriate arguments and reduce them according to the two inference rules
X/Y, Y → X
and
Y, Y\X → X:
NP/N, N/N, N, (NP\S)/NP, NP/N, N
→ NP/N, N, (NP\S)/NP, NP/N, N (bad boy : N)
→ NP, (NP\S)/NP, NP/N, N (the bad boy : NP)
→ NP, (NP\S)/NP, NP (that mess : NP)
→ NP, NP\S (made that mess : NP\S)
→ S
The fact that the result is S means that the string is a sentence, while the sequence of reductions shows that it can be parsed as ((the (bad boy)) (made (that mess))).
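The reduction procedure just illustrated is easy to mechanize. The following sketch is an illustrative implementation, not a standard library: it encodes the lexicon above with nested tuples and applies the two application rules bottom-up with a CYK-style chart, confirming that the example string has type S.

```python
# Minimal AB (basic categorial) grammar parser using a CYK-style chart.
# Types are encoded as nested tuples: ('/', B, A) is B/A (looks right for A),
# ('\\', A, B) is A\B (looks left for A); plain strings name primitive types.

LEXICON = {
    "the":  [("/", "NP", "N")],
    "that": [("/", "NP", "N")],
    "boy":  ["N"],
    "mess": ["N"],
    "bad":  [("/", "N", "N")],
    "made": [("/", ("\\", "NP", "S"), "NP")],   # (NP\S)/NP
}

def combine(left, right):
    """All types derivable by one application of the two AB rules."""
    results = []
    if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
        results.append(left[1])          # B/A  A  =>  B
    if isinstance(right, tuple) and right[0] == "\\" and right[1] == left:
        results.append(right[2])         # A  A\B  =>  B
    return results

def parses(words, goal="S"):
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICON[w])
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            for k in range(i + 1, i + span):
                for lt in chart[i][k]:
                    for rt in chart[k][i + span]:
                        chart[i][i + span].update(combine(lt, rt))
    return goal in chart[0][n]

print(parses("the bad boy made that mess".split()))   # True
```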
Categorial grammars of this form (having only function application rules) are equivalent in generative capacity to context-free grammars and are thus often considered inadequate for theories of natural language syntax. Unlike CFGs, categorial grammars are lexicalized, meaning that only a small number of (mostly language-independent) rules are employed, and all other syntactic phenomena derive from the lexical entries of specific words.
Another appealing aspect of categorial grammars is that it is often easy to assign them a compositional semantics, by first assigning interpretation types to all the basic categories, and then associating all the derived categories with appropriate function types. The interpretation of any constituent is then simply the value of a function at an argument. With some modifications to handle intensionality and quantification, this approach can be used to cover a wide variety of semantic phenomena.
Lambek calculus
A Lambek grammar is an elaboration of this idea that has a
concatenation operator for types, and several other inference rules.
Mati Pentus has shown that these still have the generative capacity of
context-free grammars.
For the Lambek calculus, there is a type concatenation operator ⋆, so that Prim ⊆ Tp(Prim) and if X, Y ∈ Tp(Prim) then (X/Y), (X\Y), (X⋆Y) ∈ Tp(Prim).
The Lambek calculus consists of several deduction rules, which specify
how type inclusion assertions can be derived. In the following
rules, upper case roman letters stand for types, upper case Greek
letters stand for sequences of types. A sequent of the form X ← Γ can be read: a string is of type X if it consists of the concatenation of strings of each of the types in Γ. If a type is interpreted as a set of strings, then the ← may be interpreted as ⊇, that is, "includes as a subset".
A horizontal line means that the inclusion above the line
implies the one below the line.
The process is begun by the Axiom rule, which has no antecedents and
just says that any type includes itself.
The Cut rule says that inclusions can be composed.
The other rules come in pairs, one pair for each type construction
operator, each pair consisting of one rule for the operator in the
target, one in the source, of the arrow.
The name of a rule consists of the operator and an arrow, with the
operator on the side of the arrow on which it occurs in the conclusion.
{| class="wikitable"
|-
!Target
!Source
|-
| (/ ←): from Y ← Γ X, infer Y/X ← Γ
| (← /): from Z ← Γ Y Δ and X ← Θ, infer Z ← Γ (Y/X) Θ Δ
|-
| (\ ←): from Y ← X Γ, infer X\Y ← Γ
| (← \): from Z ← Γ Y Δ and X ← Θ, infer Z ← Γ Θ (X\Y) Δ
|-
| (⋆ ←): from X ← Γ and Y ← Δ, infer X⋆Y ← Γ Δ
| (← ⋆): from Z ← Γ X Y Δ, infer Z ← Γ (X⋆Y) Δ
|}
For an example, here is a derivation of "type raising", which says that Y/(X\Y) ← X, i.e. any phrase of type X also has the raised type Y/(X\Y). The names of rules and the substitutions used are to the right.
Relation to context-free grammars
Recall that a context-free grammar is a 4-tuple G = (V, Σ, P, S) where
V is a finite set of non-terminals or variables.
Σ is a finite set of terminal symbols.
P is a finite set of production rules, that is, a finite relation P ⊆ V × (V ∪ Σ)*.
S ∈ V is the start variable.
From the point of view of categorial grammars, a context-free grammar can be seen as a calculus with a set of special purpose axioms for
each language, but with no type construction operators and no inference rules except Cut.
Specifically, given a context-free grammar as above, define a categorial grammar (Σ, Prim, S, ◁) where Prim = V ∪ Σ, and S ∈ Prim. Let there be an axiom x ← x for every symbol x ∈ V ∪ Σ, an axiom X ← α for every production rule X ::= α in P, a lexicon entry s ◁ s for every terminal symbol s ∈ Σ, and Cut for the only rule.
This categorial grammar generates the same language as the given CFG.
Of course, this is not a basic categorial grammar, since it has special axioms that depend upon the language; i.e. it is not lexicalized.
Also, it makes no use at all of non-primitive types.
To show that any context-free language can be generated by a basic categorial grammar, recall that
any context-free language can be generated by a context-free grammar in Greibach normal form.
The grammar is in Greibach normal form if every production rule is of the form A ::= s A1 ... AN, where capital letters are variables, s ∈ Σ, and N ≥ 0; that is, the right side of the production is a single terminal symbol followed by zero or more (non-terminal) variables.
Now given a CFG in Greibach normal form, define a basic categorial grammar with a primitive type A for each non-terminal variable A, and with an entry in the lexicon s ◁ ((A/AN)/...)/A1 for each production rule A ::= s A1 ... AN.
It is fairly easy to see that this basic categorial grammar
generates the same language as the original CFG.
Note that the lexicon of this grammar will generally
assign multiple types to each symbol.
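The translation from Greibach normal form into a categorial lexicon can likewise be written down mechanically. The sketch below is illustrative only; it reuses the tuple encoding of types from the parser sketch above and builds the type ((A/AN)/...)/A1 for each production A ::= s A1 ... AN.

```python
# Convert a context-free grammar in Greibach normal form into the lexicon of
# a basic categorial grammar.  A production A ::= s A1 ... AN becomes the
# lexicon entry s : ((A/AN)/...)/A1, using the same tuple encoding of types
# as in the parser sketch above.

def gnf_to_lexicon(productions):
    """productions: list of (A, s, [A1, ..., AN]) with A, Ai variables, s a terminal."""
    lexicon = {}
    for head, terminal, variables in productions:
        t = head
        for var in reversed(variables):     # innermost slash consumes AN, ...
            t = ("/", t, var)
        # ... so the outermost slash consumes A1, the first following variable
        lexicon.setdefault(terminal, []).append(t)
    return lexicon

# Toy GNF grammar for a^n b^n (n >= 1):  S ::= a B | a S B,  B ::= b
toy = [("S", "a", ["B"]), ("S", "a", ["S", "B"]), ("B", "b", [])]
print(gnf_to_lexicon(toy))
# {'a': [('/', 'S', 'B'), ('/', ('/', 'S', 'B'), 'S')], 'b': ['B']}
```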
The same construction works for Lambek grammars, since they are an extension of basic categorial grammars. It is necessary to verify that the extra inference rules do not change the generated language. This can be done and shows that every context-free language is generated by some Lambek grammar.
To show the converse, that every language generated by a Lambek grammar is context-free, is much more difficult.
It was an open problem for nearly thirty years, from the early 1960s until about 1991 when it was proven by Pentus.
The basic idea is, given a Lambek grammar,
construct a context-free grammar
with the same set of terminal symbols, the same start symbol, with variables some (not all) types T, and with a production rule T ::= s for each entry s ◁ T in the lexicon, and production rules for certain sequents that are derivable in the Lambek calculus.
Of course, there are infinitely many types and infinitely many derivable sequents, so in
order to make a finite grammar it is necessary put a bound on the size of the types and sequents
that are needed. The heart of Pentus's proof is to show that there is such a finite bound.
Notation
The notation in this field is not standardized. The notations used in
formal language theory, logic, category theory, and linguistics, conflict
with each other. In logic, arrows point to the more general from the more particular,
that is, to the conclusion from the hypotheses. In this article,
this convention is followed, i.e. the target of the arrow is the more general (inclusive) type.
In logic, arrows usually point left to right. In this article this convention is
reversed for consistency with the notation of context-free grammars, where the
single non-terminal symbol is always on the left. We use the symbol ::= in a production rule as in Backus–Naur form. Some authors use an arrow, which
unfortunately may point in either direction, depending on whether the grammar is
thought of as generating or recognizing the language.
Some authors on categorial grammars write B\A instead of A\B. The convention used here follows Lambek and algebra.
Historical notes
The basic ideas of categorial grammar date from work by Kazimierz Ajdukiewicz (in 1935) and other scholars from the Polish tradition of mathematical logic including Stanisław Leśniewski, Emil Post and Alfred Tarski. Ajdukiewicz's formal approach to syntax was influenced by Edmund Husserl's pure logical grammar, which was formalized by Rudolph Carnap. It represents a development in the historical idea of universal logical grammar as an underlying structure of all languages. A core concept of the approach is the substitutability of syntactic categories—hence the name categorial grammar. The membership of an element (e.g., word or phrase) in a syntactic category (word class, phrase type) is established by the commutation test, and the formal grammar is constructed through series of such tests.
The term categorial grammar was coined by Yehoshua Bar-Hillel (in 1953). In 1958, Joachim Lambek introduced a syntactic calculus that formalized the function type constructors along with various rules for the combination of functions. This calculus is a forerunner of linear logic in that it is a substructural logic.
Montague grammar is based on the same principles as categorial grammar. Montague's work helped to bolster interest in categorial grammar by associating it with his highly successful formal treatment of natural language semantics. Later work in categorial grammar has focused on the improvement of syntactic coverage. One formalism that has received considerable attention in recent years is Steedman and Szabolcsi's combinatory categorial grammar, which builds on combinatory logic invented by Moses Schönfinkel and Haskell Curry.
There are a number of related formalisms of this kind in linguistics, such as type logical grammar and abstract categorial grammar.
Some definitions
Derivation A derivation is a binary tree that encodes a proof.
Parse tree A parse tree displays a derivation, showing the syntactic structure of a sentence.
Functor and argument In a right (left) function application, the node of the type A\B (B/A) is called the functor, and the node of the type A is called an argument.
Functor–argument structure
Refinements of categorical grammar
A variety of changes to categorial grammar have been proposed to improve syntactic coverage. Some of the most common are listed below.
Features and subcategories
Most systems of categorial grammar subdivide categories. The most common way to do this is by tagging them with features, such as person, gender, number, and tense. Sometimes only atomic categories are tagged in this way. In Montague grammar, it is traditional to subdivide function categories using a multiple slash convention, so A/B and A//B would be two distinct categories of left-applying functions, that took the same arguments but could be distinguished between by other functions taking them as arguments.
Function composition
Rules of function composition are included in many categorial grammars. An example of such a rule would be one that allowed the concatenation of a constituent of type A/B with one of type B/C to produce a new constituent of type A/C. The semantics of such a rule would simply involve the composition of the functions involved. Function composition is important in categorial accounts of conjunction and extraction, especially as they relate to phenomena like right node raising. The introduction of function composition into a categorial grammar leads to many kinds of derivational ambiguity that are vacuous in the sense that they do not correspond to semantic ambiguities.
Conjunction
Many categorial grammars include a typical conjunction rule, of the general form X CONJ X → X, where X is a category. Conjunction can generally be applied to nonstandard constituents resulting from type raising or function composition.
Discontinuity
The grammar is extended to handle linguistic phenomena such as discontinuous idioms, gapping and extraction.
See also
Combinatory categorial grammar
Link grammar
Noncommutative logic
Pregroup Grammar
Scope
Type shifter
References
Further reading
Michael Moortgat, Categorial Type Logics, Chapter 2 in J. van Benthem and A. ter Meulen (eds.) Handbook of Logic and Language. Elsevier, 1997,
Wojciech Buszkowski, Mathematical linguistics and proof theory, Chapter 12 in J. van Benthem and A. ter Meulen (eds.) Handbook of Logic and Language. Elsevier, 1997,
External links
Grammar, categorial at Springer Encyclopaedia of Mathematics
Typelogical Grammar at Stanford Encyclopedia of Philosophy
Grammar frameworks
Formal languages
Computational linguistics
Type theory
Semantics | Categorial grammar | [
"Mathematics",
"Technology"
] | 3,233 | [
"Mathematical structures",
"Mathematical linguistics",
"Applied mathematics",
"Formal languages",
"Mathematical logic",
"Mathematical objects",
"Computational linguistics",
"Type theory",
"Natural language and computing"
] |
882,729 | https://en.wikipedia.org/wiki/High-throughput%20screening | High-throughput screening (HTS) is a method for scientific discovery especially used in drug discovery and relevant to the fields of biology, materials science and chemistry. Using robotics, data processing/control software, liquid handling devices, and sensitive detectors, high-throughput screening allows a researcher to quickly conduct millions of chemical, genetic, or pharmacological tests. Through this process one can quickly recognize active compounds, antibodies, or genes that modulate a particular biomolecular pathway. The results of these experiments provide starting points for drug design and for understanding the noninteraction or role of a particular location.
Assay plate preparation
The key labware or testing vessel of HTS is the microtiter plate, which is a small container, usually disposable and made of plastic, that features a grid of small, open divots called wells. In general, microplates for HTS have either 96, 192, 384, 1536, 3456 or 6144 wells. These are all multiples of 96, reflecting the original 96-well microplate with spaced wells of 8 x 12 with 9 mm spacing. Most of the wells contain test items, depending on the nature of the experiment. These could be different chemical compounds dissolved e.g. in an aqueous solution of dimethyl sulfoxide (DMSO). The wells could also contain cells or enzymes of some type. (The other wells may be empty or contain pure solvent or untreated samples, intended for use as experimental controls.)
A screening facility typically holds a library of stock plates, whose contents are carefully catalogued, and each of which may have been created by the lab or obtained from a commercial source. These stock plates themselves are not directly used in experiments; instead, separate assay plates are created as needed. An assay plate is simply a copy of a stock plate, created by pipetting a small amount of liquid (often measured in nanoliters) from the wells of a stock plate to the corresponding wells of a completely empty plate.
Reaction observation
To prepare for an assay, the researcher fills each well of the plate with some biological entity that they wish to conduct the experiment upon, such as a protein, cells, or an animal embryo. After some incubation time has passed to allow the biological matter to absorb, bind to, or otherwise react (or fail to react) with the compounds in the wells, measurements are taken across all the plate's wells, either manually or by a machine. Manual measurements are often necessary when the researcher is using microscopy to (for example) seek changes or defects in embryonic development caused by the wells' compounds, looking for effects that a computer could not easily determine by itself. Otherwise, a specialized automated analysis machine can run a number of experiments on the wells (such as shining polarized light on them and measuring reflectivity, which can be an indication of protein binding). In this case, the machine outputs the result of each experiment as a grid of numeric values, with each number mapping to the value obtained from a single well. A high-capacity analysis machine can measure dozens of plates in the space of a few minutes like this, generating thousands of experimental datapoints very quickly.
Depending on the results of this first assay, the researcher can perform follow up assays within the same screen by "cherrypicking" liquid from the source wells that gave interesting results (known as "hits") into new assay plates, and then re-running the experiment to collect further data on this narrowed set, confirming and refining observations.
Automation systems
Automation is an essential element in HTS's usefulness. Typically, an integrated robot system consisting of one or more robots transports assay-microplates from station to station for sample and reagent addition, mixing, incubation, and finally readout or detection. An HTS system can usually prepare, incubate, and analyze many plates simultaneously, further speeding the data-collection process. HTS robots that can test up to 100,000 compounds per day currently exist. Automatic colony pickers pick thousands of microbial colonies for high throughput genetic screening. The term uHTS or ultra-high-throughput screening refers (circa 2008) to screening in excess of 100,000 compounds per day.
Experimental design and data analysis
With the ability to rapidly screen diverse compounds (such as small molecules or siRNAs) to identify active compounds, HTS has led to an explosion in the rate of data generated in recent years. Consequently, one of the most fundamental challenges in HTS experiments is to glean biochemical significance from mounds of data, which relies on the development and adoption of appropriate experimental designs and analytic methods for both quality control and hit selection.
HTS research is one of the fields that have a feature described by John Blume, Chief Science Officer for Applied Proteomics, Inc., as follows: Soon, if a scientist does not understand some statistics or rudimentary data-handling technologies, he or she may not be considered to be a true molecular biologist and, thus, will simply become "a dinosaur."
Quality control
High-quality HTS assays are critical in HTS experiments. The development of high-quality HTS assays requires the integration of both experimental and computational approaches for quality control (QC). Three important means of QC are (i) good plate design, (ii) the selection of effective positive and negative chemical/biological controls, and (iii) the development of effective QC metrics to measure the degree of differentiation so that assays with inferior data quality can be identified.
A good plate design helps to identify systematic errors (especially those linked with well position) and determine what normalization should be used to remove/reduce the impact of systematic errors on both QC and hit selection.
Effective analytic QC methods serve as a gatekeeper for excellent quality assays. In a typical HTS experiment, a clear distinction between a positive control and a negative reference such as a negative control is an index for good quality. Many quality-assessment measures have been proposed to measure the degree of differentiation between a positive control and a negative reference. Signal-to-background ratio, signal-to-noise ratio, signal window, assay variability ratio, and Z-factor have been adopted to evaluate data quality.
Strictly standardized mean difference (SSMD) has recently been proposed for assessing data quality in HTS assays.
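These control-based metrics are straightforward to compute from plate data. The sketch below uses made-up control readings and the common textbook forms of the metrics (Z-factor = 1 − 3(σp + σn)/|μp − μn|; a control-based SSMD estimate (μp − μn)/√(σp² + σn²)); it is an illustration, not a validated QC pipeline.

```python
import statistics

def z_factor(pos, neg):
    """Z-factor = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    sp, sn = statistics.stdev(pos), statistics.stdev(neg)
    return 1.0 - 3.0 * (sp + sn) / abs(statistics.mean(pos) - statistics.mean(neg))

def ssmd(pos, neg):
    """Control-based SSMD estimate: (mean_pos - mean_neg) / sqrt(var_pos + var_neg)."""
    return (statistics.mean(pos) - statistics.mean(neg)) / (
        statistics.variance(pos) + statistics.variance(neg)) ** 0.5

# Made-up raw signals from control wells on one plate.
positive = [980, 1010, 995, 1025, 990, 1005, 1015, 985]
negative = [110, 95, 105, 120, 100, 90, 115, 108]

print(f"Z-factor: {z_factor(positive, negative):.2f}")  # > 0.5 is usually taken as an excellent assay
print(f"SSMD:     {ssmd(positive, negative):.1f}")
```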
Hit selection
A compound with a desired size of effects in an HTS is called a hit. The process of selecting hits is called hit selection. The analytic methods for hit selection in screens without replicates (usually in primary screens) differ from those with replicates (usually in confirmatory screens). For example, the z-score method is suitable for screens without replicates whereas the t-statistic is suitable for screens with replicates. The calculation of SSMD for screens without replicates also differs from that for screens with replicates.
For hit selection in primary screens without replicates, the easily interpretable ones are average fold change, mean difference, percent inhibition, and percent activity. However, they do not capture data variability effectively. The z-score method and SSMD can capture data variability; they rely on the assumption that every compound has the same variability as a negative reference in the screens.
However, outliers are common in HTS experiments, and methods such as z-score are sensitive to outliers and can be problematic. As a consequence, robust methods such as the z*-score method, SSMD*, B-score method, and quantile-based method have been proposed and adopted for hit selection.
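The difference between the ordinary z-score and its outlier-resistant counterpart comes down to the location and scale estimates used. The sketch below contrasts the two on made-up plate readings; the median/MAD version stands in for the robust z*-score idea mentioned above and is illustrative only.

```python
import statistics

def z_scores(values):
    """Ordinary z-scores: (x - mean) / sd, sensitive to outliers."""
    mu, sd = statistics.mean(values), statistics.stdev(values)
    return [(x - mu) / sd for x in values]

def robust_z_scores(values):
    """z*-style scores using median and MAD (scaled to match sd for normal data)."""
    med = statistics.median(values)
    mad = statistics.median(abs(x - med) for x in values) * 1.4826
    return [(x - med) / mad for x in values]

# Made-up percent-activity readings for one plate; wells 11 and 14 are strong actives.
readings = [2, -1, 0, 3, 1, -2, 0, 4, -3, 1, 2, 95, -1, 0, 60]
for name, scores in [("z", z_scores(readings)), ("z*", robust_z_scores(readings))]:
    hits = [i for i, s in enumerate(scores) if abs(s) >= 3]
    print(name, "hits at wells:", hits)
# The ordinary z-score is inflated by the extreme values themselves and can
# miss real hits; the median/MAD version is far less affected.
```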
In a screen with replicates, we can directly estimate variability for each compound; as a consequence, we should use SSMD or t-statistic that does not rely on the strong assumption that the z-score and z*-score rely on. One issue with the use of t-statistic and associated p-values is that they are affected by both sample size and effect size.
They come from testing for no mean difference, and thus are not designed to measure the size of compound effects. For hit selection, the major interest is the size of effect in a tested compound. SSMD directly assesses the size of effects.
SSMD has also been shown to be better than other commonly used effect sizes.
The population value of SSMD is comparable across experiments and, thus, we can use the same cutoff for the population value of SSMD to measure the size of compound effects.
Techniques for increased throughput and efficiency
Unique distributions of compounds across one or many plates can be employed either to increase the number of assays per plate or to reduce the variance of assay results, or both. The simplifying assumption made in this approach is that any N compounds in the same well will not typically interact with each other, or the assay target, in a manner that fundamentally changes the ability of the assay to detect true hits.
For example, imagine a plate wherein compound A is in wells 1–2–3, compound B is in wells 2–3–4, and compound C is in wells 3–4–5. In an assay of this plate against a given target, a hit in wells 2, 3, and 4 would indicate that compound B is the most likely agent, while also providing three measurements of compound B's efficacy against the specified target. Commercial applications of this approach involve combinations in which no two compounds ever share more than one well, to reduce the (second-order) possibility of interference between pairs of compounds being screened.
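Decoding such a shared-well layout is a small exercise in set arithmetic. The sketch below uses the A/B/C layout from the example above and calls a compound a likely agent when all of its wells register as hits.

```python
# Decoding a shared-well ("pooled") plate layout.  Layout from the example in
# the text: compound A in wells 1-2-3, B in wells 2-3-4, C in wells 3-4-5.
LAYOUT = {
    "A": {1, 2, 3},
    "B": {2, 3, 4},
    "C": {3, 4, 5},
}

def likely_agents(hit_wells):
    """Compounds all of whose wells registered as hits."""
    hits = set(hit_wells)
    return [c for c, wells in LAYOUT.items() if wells <= hits]

print(likely_agents({2, 3, 4}))        # ['B']        -- matches the example in the text
print(likely_agents({1, 2, 3, 4}))     # ['A', 'B']   -- ambiguous without more wells
```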
Recent advances
Automation and low volume assay formats were leveraged by scientists at the NIH Chemical Genomics Center (NCGC) to develop quantitative HTS (qHTS), a paradigm to pharmacologically profile large chemical libraries through the generation of full concentration-response relationships for each compound. With accompanying curve fitting and cheminformatics software qHTS data yields half maximal effective concentration (EC50), maximal response, Hill coefficient (nH) for the entire library enabling the assessment of nascent structure activity relationships (SAR).
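Each concentration-response series in qHTS is typically summarized by fitting a Hill (four-parameter logistic) curve. The sketch below fits such a curve to made-up data with SciPy; the parameter names, data and starting values are illustrative, and the accompanying curve-fitting and cheminformatics software mentioned above is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill (four-parameter logistic) model summarizing a concentration-response curve.
def hill(c, bottom, top, ec50, n_h):
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** n_h)

# Made-up responses for one compound across a dilution series (molar).
conc = np.array([1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 3e-7, 1e-6, 3e-6, 1e-5])
resp = np.array([2, 4, 9, 22, 48, 71, 88, 95, 97], dtype=float)

params, _ = curve_fit(hill, conc, resp, p0=[0.0, 100.0, 1e-7, 1.0], maxfev=10000)
bottom, top, ec50, n_h = params
print(f"EC50 ~ {ec50:.2e} M, maximal response ~ {top:.0f}, Hill coefficient ~ {n_h:.2f}")
```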
In March 2010, research was published demonstrating an HTS process allowing 1,000 times faster screening (100 million reactions in 10 hours) at 1-millionth the cost (using 10−7 times the reagent volume) than conventional techniques using drop-based microfluidics. Drops of fluid separated by oil replace microplate wells and allow analysis and hit sorting while reagents are flowing through channels.
In 2010, researchers developed a silicon sheet of lenses that can be placed over microfluidic arrays to allow the fluorescence measurement of 64 different output channels simultaneously with a single camera. This process can analyze 200,000 drops per second.
In 2013, researchers disclosed an approach with small molecules from plants. In general, it is essential to provide high-quality proof-of-concept validations early in the drug discovery process. Here technologies that enable the identification of potent, selective, and bioavailable chemical probes are of crucial interest, even if the resulting compounds require further optimization for development into a pharmaceutical product. Nuclear receptor RORα, a protein that has been targeted for more than a decade to identify potent and bioavailable agonists, was used as an example of a very challenging drug target. Hits are confirmed at the screening step due to the bell-shaped curve. This method is very similar to the quantitative HTS method (screening and hit confirmation at the same time), except that this approach greatly decreases the number of data points and can easily screen more than 100,000 biologically relevant compounds.
Switching from an orbital shaker, which required milling times of 24 hours and at least 10 mg of drug compound to a ResonantAcoustic mixer, Merck reported reduced processing time to less than 2 hours on only 1-2 mg of drug compound per well. Merck also indicated the acoustic milling approach allows for the preparation of high dose nanosuspension formulations that could not be obtained using conventional milling equipment.
Whereby traditional HTS drug discovery uses purified proteins or intact cells, recent development of the technology is associated with the use of intact living organisms, like the nematode Caenorhabditis elegans and zebrafish (Danio rerio).
In 2016-2018 plate manufacturers began producing specialized chemistry to allow for mass production of ultra-low adherent cell repellent surfaces which facilitated the rapid development of HTS amenable assays to address cancer drug discovery in 3D tissues such as organoids and spheroids; a more physiologically relevant format.
Increasing use of HTS in academia for biomedical research
HTS is a relatively recent innovation, made feasible largely through modern advances in robotics and high-speed computer technology. It still takes a highly specialized and expensive screening lab to run an HTS operation, so in many cases a small- to moderate-size research institution will use the services of an existing HTS facility rather than set up one for itself.
There is a trend in academia for universities to be their own drug discovery enterprise. These facilities, which normally are found only in industry, are now increasingly found at universities as well. UCLA, for example, features an open access HTS laboratory Molecular Screening Shared Resources (MSSR, UCLA), which can screen more than 100,000 compounds a day on a routine basis. The open access policy ensures that researchers from all over the world can take advantage of this facility without lengthy intellectual property negotiations. With a compound library of over 200,000 small molecules, the MSSR has one of the largest compound deck of all universities on the west coast. Also, the MSSR features full functional genomics capabilities (genome wide siRNA, shRNA, cDNA and CRISPR) which are complementary to small molecule efforts: Functional genomics leverages HTS capabilities to execute genome wide screens which examine the function of each gene in the context of interest by either knocking each gene out or overexpressing it. Parallel access to high-throughput small molecule screen and a genome wide screen enables researchers to perform target identification and validation for given disease or the mode of action determination on a small molecule. The most accurate results can be obtained by use of "arrayed" functional genomics libraries, i.e. each library contains a single construct such as a single siRNA or cDNA. Functional genomics is typically paired with high content screening using e.g. epifluorescent microscopy or laser scanning cytometry.
The University of Illinois also has a facility for HTS, as does the University of Minnesota. The Life Sciences Institute at the University of Michigan houses the HTS facility in the Center for Chemical Genomics. Columbia University has an HTS shared resource facility with ~300,000 diverse small molecules and ~10,000 known bioactive compounds available for biochemical, cell-based and NGS-based screening. The Rockefeller University has an open-access HTS Resource Center HTSRC (The Rockefeller University, HTSRC), which offers a library of over 380,000 compounds. Northwestern University's High Throughput Analysis Laboratory supports target identification, validation, assay development, and compound screening. The non-profit Sanford Burnham Prebys Medical Discovery Institute also has a long-standing HTS facility in the Conrad Prebys Center for Chemical Genomics which was part of the MLPCN. The non-profit Scripps Research Molecular Screening Center (SRMSC) continues to serve academia across institutes post-MLPCN era. The SRMSC uHTS facility maintains one of the largest library collections in academia, presently at well-over 665,000 small molecule entities, and routinely screens the full collection or sub-libraries in support of multi-PI grant initiatives.
In the United States, the National Institutes of Health or NIH has created a nationwide consortium of small-molecule screening centers to produce innovative chemical tools for use in biological research. The Molecular Libraries Probe Production Centers Network, or MLPCN, performs HTS on assays provided by the research community, against a large library of small molecules maintained in a central molecule repository. In addition, the NIH created the National Center for Advancing Translational Sciences or NCATS, housed in Shady Grove Maryland, that carries out small molecule and RNAi screens in collaboration with academic laboratories. Of note, the small molecule screening uses 1536 well plates, a capability rarely seen in academic screening laboratories that allows one to carry out quantitative HTS in which each compound is tested across four- to five-orders of magnitude of concentrations.
See also
Chemoproteomics
Compound management
DNA-encoded chemical library
Drug discovery hit to lead
Dual-flashlight plot
High-content screening
High throughput biology
IC50 / EC50
Laboratory automation
Reverse pharmacology
Synthetic genetic array
Virtual high throughput screening
Yeast two-hybrid screening
References
Further reading
Zhang XHD (2011) "Optimal High-Throughput Screening: Practical Experimental Design and Data Analysis for Genome-scale RNAi Research, Cambridge University Press"
Flow cytometry enables a high-throughput homogeneous fluorescent antibody-binding assay for cytotoxic T cell lytic granule exocytosis
External links
Open Screening Environment
Setting up High-Throughput Screening Laboratory (Koppal, Lab Manager Magazine)
Assay Guidance Manual (NIH, NCGC)
OncoSignature High Throughput Screening
Scientific techniques
Pharmaceutics
Drug discovery | High-throughput screening | [
"Chemistry",
"Biology"
] | 3,572 | [
"Life sciences industry",
"Medicinal chemistry",
"Drug discovery"
] |
882,793 | https://en.wikipedia.org/wiki/Poisson%20superalgebra | In mathematics, a Poisson superalgebra is a Z2-graded generalization of a Poisson algebra. Specifically, a Poisson superalgebra is an (associative) superalgebra A together with a second product, a Lie superbracket
such that (A, [·,·]) is a Lie superalgebra and the operator
is a superderivation of A:
Here, is the grading of a (pure) element .
A supercommutative Poisson algebra is one for which the (associative) product is supercommutative.
This is one of two possible ways of "super"izing the Poisson algebra. This gives the classical dynamics of fermion fields and classical spin-1/2 particles. The other way is to define an antibracket algebra or Gerstenhaber algebra, used in the BRST and Batalin-Vilkovisky formalism. The difference between these two is in the grading of the Lie bracket. In the Poisson superalgebra, the grading of the bracket is zero:
|[x, y]| = |x| + |y|
whereas in the Gerstenhaber algebra, the bracket decreases the grading by one:
|[x, y]| = |x| + |y| − 1.
Examples
If A is any associative Z2 graded algebra, then, defining a new product [·,·], called the super-commutator, by [x, y] = xy − (−1)^{|x||y|} yx for any pure graded x, y, turns A into a Poisson superalgebra.
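This construction can be checked numerically on a small example. The sketch below takes the Z2-graded algebra of 2×2 matrices (even part diagonal, odd part off-diagonal), forms the super-commutator, and verifies the superderivation property on random homogeneous elements; it illustrates the definition rather than proving anything in general.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_homogeneous(parity):
    """Random 2x2 real matrix of given Z2 degree: 0 = diagonal, 1 = off-diagonal."""
    a, b = rng.normal(size=2)
    return (np.diag([a, b]), 0) if parity == 0 else (np.array([[0, a], [b, 0]]), 1)

def supercommutator(x, y):
    (X, px), (Y, py) = x, y
    return X @ Y - (-1) ** (px * py) * Y @ X   # [x, y] = xy - (-1)^{|x||y|} yx

# Check the superderivation property  [x, yz] = [x, y] z + (-1)^{|x||y|} y [x, z]
# on random homogeneous elements of every parity combination.
for px in (0, 1):
    for py in (0, 1):
        for pz in (0, 1):
            x, y, z = random_homogeneous(px), random_homogeneous(py), random_homogeneous(pz)
            yz = (y[0] @ z[0], (py + pz) % 2)          # product of pure elements is pure
            lhs = supercommutator(x, yz)
            rhs = supercommutator(x, y) @ z[0] + (-1) ** (px * py) * y[0] @ supercommutator(x, z)
            assert np.allclose(lhs, rhs)
print("graded Leibniz rule verified on random homogeneous elements")
```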
See also
Poisson supermanifold
References
Super linear algebra
Symplectic geometry | Poisson superalgebra | [
"Physics"
] | 311 | [
"Supersymmetry",
"Symmetry",
"Super linear algebra"
] |
882,936 | https://en.wikipedia.org/wiki/Nambu%20mechanics | In mathematics, Nambu mechanics is a generalization of Hamiltonian mechanics involving multiple Hamiltonians. Recall that Hamiltonian mechanics is based upon the flows generated by a smooth Hamiltonian over a symplectic manifold. The flows are symplectomorphisms and hence obey Liouville's theorem. This was soon generalized to flows generated by a Hamiltonian over a Poisson manifold. In 1973, Yoichiro Nambu suggested a generalization involving Nambu–Poisson manifolds with more than one Hamiltonian.
Nambu bracket
Specifically, consider a differential manifold M, for some integer N ≥ 2; one has a smooth N-linear map from N copies of C∞(M) to itself, such that it is completely antisymmetric:
the Nambu bracket,
{f1, ..., fN},
which acts as a derivation
{f1, ..., fN−1, gh} = {f1, ..., fN−1, g} h + g {f1, ..., fN−1, h};
whence the Filippov Identities (FI) (evocative of the Jacobi identities, but unlike them, not antisymmetrized in all arguments, for N ≥ 3):
{f1, ..., fN−1, {g1, ..., gN}} = {{f1, ..., fN−1, g1}, g2, ..., gN} + {g1, {f1, ..., fN−1, g2}, g3, ..., gN} + ... + {g1, ..., gN−1, {f1, ..., fN−1, gN}},
so that {f1, ..., fN−1, •} acts as a generalized derivation over the N-fold product {•, ..., •}.
Hamiltonians and flow
There are N − 1 Hamiltonians, H1, ..., HN−1, generating an incompressible flow,
The generalized phase-space velocity is divergenceless, enabling Liouville's theorem. The case N = 2 reduces to a Poisson manifold, and conventional Hamiltonian mechanics.
For larger even N, the Hamiltonians identify with the maximal number of independent invariants of motion (cf. Conserved quantity) characterizing a superintegrable system that evolves in N-dimensional phase space. Such systems are also describable by conventional Hamiltonian dynamics; but their description in the framework of Nambu mechanics is substantially more elegant and intuitive, as all invariants enjoy the same geometrical status as the Hamiltonian: the trajectory in phase space is the intersection of the hypersurfaces specified by these invariants. Thus, the flow is perpendicular to all gradients of these Hamiltonians, whence parallel to the generalized cross product specified by the respective Nambu bracket.
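As an illustration, the best-known concrete case is N = 3 on R3 with the canonical Jacobian-determinant bracket; the free rigid body (Euler top) is the standard textbook example. The sketch below follows that conventional presentation, with H1, H2 and the moments of inertia I1, I2, I3 introduced here only for the example.

```latex
% Canonical N = 3 Nambu bracket on R^3 and the resulting evolution equation:
\[
  \{f, g, h\} \;=\; \frac{\partial(f, g, h)}{\partial(x, y, z)}
              \;=\; \nabla f \cdot \left( \nabla g \times \nabla h \right),
  \qquad
  \frac{\mathrm{d}f}{\mathrm{d}t} \;=\; \{f, H_1, H_2\}.
\]
% Free rigid body in angular-momentum coordinates (L_1, L_2, L_3):
% H_1 = L_1^2/(2I_1) + L_2^2/(2I_2) + L_3^2/(2I_3)  (kinetic energy),
% H_2 = (L_1^2 + L_2^2 + L_3^2)/2                    (Casimir),
% so each trajectory is the intersection of an energy ellipsoid
% with an angular-momentum sphere.
```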
Nambu mechanics can be extended to fluid dynamics, where the resulting Nambu brackets are non-canonical and the Hamiltonians are identified with the Casimir of the system, such as enstrophy or helicity.
Quantizing Nambu dynamics leads to intriguing structures that coincide with conventional quantization ones when superintegrable systems are involved—as they must.
See also
Hamiltonian mechanics
Symplectic manifold
Poisson manifold
Poisson algebra
Integrable system
Conserved quantity
Hamiltonian Fluid Mechanics
Notes
References
Hamiltonian mechanics
Mathematical physics | Nambu mechanics | [
"Physics",
"Mathematics"
] | 504 | [
"Applied mathematics",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mathematical physics",
"Dynamical systems"
] |
883,034 | https://en.wikipedia.org/wiki/Gerstenhaber%20algebra | In mathematics and theoretical physics, a Gerstenhaber algebra (sometimes called an antibracket algebra or braid algebra) is an algebraic structure discovered by Murray Gerstenhaber (1963) that combines the structures of a supercommutative ring and a graded Lie superalgebra. It is used in the Batalin–Vilkovisky formalism. It appears also in the generalization of Hamiltonian formalism known as the De Donder–Weyl theory as the algebra of generalized Poisson brackets defined on differential forms.
Definition
A Gerstenhaber algebra is a graded-commutative algebra with a Lie bracket of degree −1 satisfying the Poisson identity. Everything is understood to satisfy the usual superalgebra sign conventions. More precisely, the algebra has two products, one written as ordinary multiplication and one written as [,], and a Z-grading called degree (in theoretical physics sometimes called ghost number). The degree of an element a is denoted by |a|. These satisfy the identities
(ab)c = a(bc) (The product is associative)
ab = (−1)|a||b|ba (The product is (super) commutative)
|ab| = |a| + |b| (The product has degree 0)
|[a,b]| = |a| + |b| − 1 (The Lie bracket has degree −1)
[a,bc] = [a,b]c + (−1)(|a|−1)|b|b[a,c] (Poisson identity)
[a,b] = −(−1)(|a|−1)(|b|−1) [b,a] (Antisymmetry of Lie bracket)
[a,[b,c]] = [[a,b],c] + (−1)(|a|−1)(|b|−1)[b,[a,c]] (The Jacobi identity for the Lie bracket)
Gerstenhaber algebras differ from Poisson superalgebras in that the Lie bracket has degree −1 rather than degree 0. The Jacobi identity may also be expressed in a symmetrical form
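In the usual convention, with the degrees shifted by one as in the identities above, that symmetric form reads as follows (a standard statement rather than a quotation):

```latex
% Cyclically symmetric form of the graded Jacobi identity for a degree -1 bracket:
\[
  (-1)^{(|a|-1)(|c|-1)}\,[a,[b,c]]
  \;+\; (-1)^{(|b|-1)(|a|-1)}\,[b,[c,a]]
  \;+\; (-1)^{(|c|-1)(|b|-1)}\,[c,[a,b]] \;=\; 0.
\]
```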
Examples
Gerstenhaber showed that the Hochschild cohomology H*(A,A) of an algebra A is a Gerstenhaber algebra.
A Batalin–Vilkovisky algebra has an underlying Gerstenhaber algebra if one forgets its second order Δ operator.
The exterior algebra of a Lie algebra is a Gerstenhaber algebra.
The differential forms on a Poisson manifold form a Gerstenhaber algebra.
The multivector fields on a manifold form a Gerstenhaber algebra using the Schouten–Nijenhuis bracket
References
Algebras
Theoretical physics
Symplectic geometry | Gerstenhaber algebra | [
"Physics",
"Mathematics"
] | 602 | [
"Algebra stubs",
"Mathematical structures",
"Algebras",
"Theoretical physics",
"Algebraic structures",
"Theoretical physics stubs",
"Algebra"
] |
883,189 | https://en.wikipedia.org/wiki/Massless%20particle | In particle physics, a massless particle is an elementary particle whose invariant mass is zero. At present the only confirmed massless particle is the photon.
Other particles and quasiparticles
Standard Model gauge bosons
The photon (carrier of electromagnetism) is one of two known gauge bosons believed to be massless; the other is the gluon (carrier of the strong force). The only other confirmed gauge bosons are the W and Z bosons, which are known from experiment to be extremely massive. Of the two presumed massless gauge bosons, only the photon has been experimentally confirmed to be massless.
Although there are compelling theoretical reasons to believe that gluons are massless, they can never be observed as free particles due to being confined within hadrons, and hence their presumed lack of rest mass cannot be confirmed by any feasible experiment.
Hypothetical graviton
The graviton is a hypothetical tensor boson proposed to be the carrier of gravitational force in some quantum theories of gravity, but no such theory has been successfully incorporated into the Standard Model, so the Standard Model neither predicts any such particle nor requires it, and no gravitational quantum particle has been indicated by experiment. Whether a graviton would be massless if it existed is likewise an open question.
Quasiparticles
The Weyl fermions discovered in 2015 are also expected to be massless, but these are not actual particles. At one time neutrinos were thought to perhaps be Weyl fermions, but when they were discovered to have mass, that left no fundamental particles of the Weyl type.
The Weyl fermions discovered in 2015 are merely quasiparticles – composite motions found in the structure of crystal lattices that have particle-like behavior, but are not themselves real particles. Weyl fermions in matter are like phonons, which are also quasiparticles. No real particle that is a Weyl fermion has been found to exist, and there is no compelling theoretical reason that requires them to exist.
Neutrinos were originally thought to be massless – and possibly Weyl fermions. However, because neutrinos change flavour as they travel, at least two of the types of neutrinos must have mass (and cannot be Weyl fermions).
The discovery of this phenomenon, known as neutrino oscillation, led to Canadian scientist Arthur B. McDonald and Japanese scientist Takaaki Kajita sharing the 2015 Nobel Prize in Physics.
See also
Relativistic particle
Gravitational waves
References
Special relativity
Particle physics | Massless particle | [
"Physics"
] | 530 | [
"Special relativity",
"Particle physics",
"Theory of relativity"
] |
883,560 | https://en.wikipedia.org/wiki/Gamma-ray%20spectrometer | A gamma-ray spectrometer (GRS) is an instrument for measuring the distribution (or spectrum—see figure) of the intensity of gamma radiation versus the energy of each photon.
The study and analysis of gamma-ray spectra for scientific and technical use is called gamma spectroscopy, and gamma-ray spectrometers are the instruments which observe and collect such data.
Because the energy of each photon of EM radiation is proportional to its frequency, gamma rays have sufficient energy that they are typically observed by counting individual photons.
Some notable gamma-ray spectrometers are Gammasphere, AGATA, and GRETINA.
Gamma-ray spectroscopy
Atomic nuclei have an energy-level structure somewhat analogous to the energy levels of atoms, so that they may emit (or absorb) photons of particular energies, much as atoms do, but at energies that are thousands to millions of times higher than those typically studied in optical spectroscopy.
(Note that photons in the short-wavelength, high-energy end of the atomic spectroscopy energy range (a few eV to a few hundred keV), generally termed X-rays, overlap somewhat with the low end of the nuclear gamma-ray range (~10 keV to ~10 MeV), so that the terminology used to distinguish X-rays from gamma rays can be arbitrary or ambiguous in the overlap region.)
As with atoms, the particular energy levels of nuclei are characteristic of each species, so that the photon energies of the gamma rays emitted, which correspond to the energy differences of the nuclei, can be used to identify particular elements and isotopes.
Distinguishing between gamma-rays of slightly different energy is an important consideration in the analysis of complex spectra, and the ability of a GRS to do so is characterized by the instrument's spectral resolution, or the accuracy with which the energy of each photon is measured.
Semi-conductor detectors, based on cooled germanium or silicon detecting elements, have been invaluable for such applications.
Because the energy-level spectrum of nuclei typically dies out above about 10 MeV, gamma-ray instruments looking to still higher energies generally observe only continuum spectra, so that the moderate spectral resolution of scintillation spectrometers (often sodium iodide (NaI) or caesium iodide (CsI)) often suffices for such applications.
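As a rough illustration of what spectral resolution buys, the following sketch compares a scintillator and a semiconductor detector on the task of separating two nearby lines. It is a toy calculation, not any instrument's software, and the fractional-resolution figures and line energies are illustrative assumptions rather than measured specifications.

```python
# Toy comparison of detector energy resolution (illustrative values only).

def fwhm_kev(energy_kev: float, fractional_resolution: float) -> float:
    """Photopeak full width at half maximum at a given energy."""
    return energy_kev * fractional_resolution

def resolvable(e1_kev: float, e2_kev: float, fractional_resolution: float) -> bool:
    """Crude criterion: peaks separated by more than one FWHM count as resolved."""
    mean_energy = 0.5 * (e1_kev + e2_kev)
    return abs(e1_kev - e2_kev) > fwhm_kev(mean_energy, fractional_resolution)

# Typical textbook fractional resolutions near 1 MeV (assumed, not measured):
detectors = {"NaI(Tl) scintillator": 0.07, "HPGe semiconductor": 0.002}
e1, e2 = 1170.0, 1185.0  # two hypothetical gamma lines 15 keV apart

for name, frac in detectors.items():
    verdict = "resolved" if resolvable(e1, e2, frac) else "blended"
    print(f"{name}: lines at {e1:.0f} and {e2:.0f} keV -> {verdict}")
```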
Astronomical spectrometers
A number of investigations have been performed to observe the gamma-ray spectra of the Sun and other astronomical sources, both galactic and extra-galactic. The Gamma-Ray Imaging Spectrometer, the Hard X-ray/Low-Energy Gamma-ray experiment (A-4) on HEAO 1, the Burst and Transient Source Experiment (BATSE) and the OSSE (Oriented Scintillation Spectrometer Experiment) on CGRO, the C1 germanium (Ge) gamma-ray instrument on HEAO 3, and the Ge gamma-ray spectrometer (SPI) on the ESA INTEGRAL mission are examples of cosmic spectrometers, while the GRS on the SMM and the imaging Ge spectrometer on the RHESSI satellite have been devoted to solar observations.
Planetary gamma-ray spectrometers
Gamma-ray spectrometers have been widely used for the elemental and isotopic analysis of bodies in the Solar System, especially the Moon and Mars.
These surfaces are subjected to a continual bombardment of high-energy cosmic rays, which excite nuclei in them to emit characteristic gamma-rays which can be detected from orbit.
Thus an orbiting instrument can in principle map the surface distribution of the elements for an entire planet.
Examples include the mapping of 20 elements during the exploration of Mars, Eros, and the Moon. Gamma-ray spectrometers are usually paired with neutron detectors, which can look for water and ice in the soil by measuring neutrons. Together they are able to measure the abundance and distribution of about 20 primary elements of the periodic table, including silicon, oxygen, iron, magnesium, potassium, aluminum, calcium, sulfur, and carbon. Knowing what elements are at or near the surface gives detailed information about how planetary bodies have changed over time. To determine the elemental makeup of the Martian surface, the Mars Odyssey mission used a gamma-ray spectrometer and two neutron detectors.
GRS instruments supply data on the distribution and abundance of chemical elements, much as the Lunar Prospector mission did on the Moon. In this case, the chemical element thorium was mapped, with higher concentrations shown as yellow/orange/red in the left image.
How a GRS works
Some constructions of scintillation counters can be used as gamma-ray spectrometers. The gamma photon energy is discerned from the intensity of the flash of the scintillator, that is, from the number of low-energy photons produced by the single high-energy one. Another approach relies on germanium detectors: a crystal of hyperpure germanium produces pulses proportional to the captured photon energy; while it offers much finer energy resolution, it has to be cooled to a low temperature, requiring a bulky cryogenic apparatus. Handheld and many laboratory gamma spectrometers are therefore of the scintillator kind, mostly using thallium-doped sodium iodide, thallium-doped caesium iodide, or, more recently, cerium-doped lanthanum bromide. Spectrometers for space missions, conversely, tend to be of the germanium kind.
When exposed to cosmic rays (charged particles from space thought to possibly originate in supernovae and active galactic nuclei), chemical elements in soils and rocks emit uniquely identifiable signatures of energy in the form of gamma rays. The gamma-ray spectrometer looks at these signatures, or energies, coming from the elements present in the target soil.
By measuring gamma rays coming from the target body, it is possible to calculate the abundance of various elements and how they are distributed around the planet's surface. Gamma rays, emitted from the nuclei of atoms, show up as sharp emission lines on the instrument's spectrum output. While the energy represented in these emissions determines which elements are present, the intensity of the spectrum reveals the elements' concentrations. Spectrometers are expected to add significantly to the growing understanding of the origin and evolution of planets like Mars and the processes shaping them today and in the past.
Gamma rays and neutrons are produced by cosmic rays. Incoming cosmic rays—some of the highest-energy particles—collide with the nucleus of atoms in the soil. When nuclei are hit with such energy, neutrons are released, which scatter and collide with other nuclei. The nuclei get "excited" in the process, and emit gamma rays to release the extra energy so they can return to their normal rest state. Some elements like potassium, uranium, and thorium are naturally radioactive and give off gamma rays as they decay, but all elements can be excited by collisions with cosmic rays to produce gamma rays. The HEND and Neutron Spectrometers on GRS directly detect scattered neutrons, and the gamma sensor detects the gamma rays.
Water detection
By measuring neutrons, it is possible to calculate the abundance of hydrogen, thus inferring the presence of water. The neutron detectors are sensitive to concentrations of hydrogen in the upper meter of the surface. When cosmic rays hit the surface of Mars, neutrons and gamma-rays come out of the soil. The GRS measured their energies. Certain energies are produced by hydrogen. Since hydrogen is most likely present in the form of water ice, the spectrometer will be able to measure directly the amount of permanent ground ice and how it changes with the seasons. Like a virtual shovel "digging into" the surface, the spectrometer will allow scientists to peer into this shallow subsurface of Mars and measure the existence of hydrogen.
GRS will supply data similar to that of the successful Lunar Prospector mission, which told us how much hydrogen, and thus water, is likely on the Moon.
The gamma-ray spectrometer used on the Odyssey spacecraft consists of four main components: the gamma sensor head, the neutron spectrometer, the high-energy neutron detector, and the central electronics assembly. The sensor head is separated from the rest of the spacecraft by a 6.2-meter (20 ft) boom, which was extended after Odyssey entered the mapping orbit at Mars. This maneuver was done to minimize interference from any gamma rays coming from the spacecraft itself. The initial spectrometer activity, lasting between 15 and 40 days, performed an instrument calibration before the boom was deployed. After about 100 days of the mapping mission, the boom was deployed and remained in this position for the duration of the mission. The two neutron detectors, the neutron spectrometer and the high-energy neutron detector, are mounted on the main spacecraft structure and operated continuously throughout the mapping mission.
GRS specifications for the Odyssey mission
The Gamma-Ray Spectrometer weighs 30.5 kilograms (67.2 lb) and uses 32 watts of power. Along with its cooler, it measures 468 by 534 by 604 mm (18.4 by 21.0 by 23.8 in). The detector is a photodiode made of a 1.2 kg germanium crystal, reverse biased to about 3 kilovolts, mounted at the end of a six-meter boom to minimize interferences from the gamma radiation produced by the spacecraft itself. Its spatial resolution is about 300 km.
The neutron spectrometer is 173 by 144 by 314 mm (6.8 by 5.7 by 12.4 in).
The high-energy neutron detector measures 303 by 248 by 242 mm (11.9 by 9.8 by 9.5 in). The instrument's central electronics box is 281 by 243 by 234 mm (11.1 by 9.6 by 9.2 in).
See also
Total absorption spectroscopy
Pandemonium effect
References
External links
NASA Jet Propulsion Laboratory Gamma Ray Spectrometer page
Mars Odyssey GRS instrument site at the University of Arizona
Apollo 16 Gamma Ray Spectrometer
NEAR Science instruments (including GRS)
Lunar Prospector's GRS at NASA Ames Research Center
Lunar Prospector's GRS at National Space Science Data Center (NSSDC)
Spectrometers | Gamma-ray spectrometer | [
"Physics",
"Chemistry"
] | 2,090 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
883,683 | https://en.wikipedia.org/wiki/Tropospheric%20Emission%20Spectrometer | Tropospheric Emission Spectrometer or TES was a satellite instrument designed to measure the state of the earth's troposphere.
Overview
TES was a high-resolution infrared Fourier Transform spectrometer and provided key data for studying tropospheric chemistry, troposphere-biosphere interaction, and troposphere-stratosphere exchanges. It was built for NASA by the Jet Propulsion Laboratory, California Institute of Technology in Pasadena, California. It was successfully launched into polar orbit by a Delta II 7920-10L rocket aboard NASA's third Earth Observing Systems spacecraft (EOS-Aura) at 10:02 UTC on July 15, 2004. Originally planned as a 5-year mission, it was decommissioned after almost 14 years on January 31, 2018.
References
External links
NASA JPL's TES page
NASA Aura TES page
Spectrometers
Earth observation satellite sensors | Tropospheric Emission Spectrometer | [
"Physics",
"Chemistry"
] | 186 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
883,703 | https://en.wikipedia.org/wiki/Submarine%20%28baseball%29 | In baseball, a submarine is a pitch in which the ball is often released just above the ground, but not underhanded, with the torso bent at a right angle and the shoulders tilted so severely that they rotate around a nearly horizontal axis. This is in stark contrast to the underhand softball pitch, in which the torso remains upright, the shoulders are level, and the hips do not rotate.
Description
The "upside down" release of the submariner causes balls to move differently from pitches generated by other arm slots. Gravity plays a significant role, for the submariner's ball must be thrown considerably above the strike zone, after which it drops rapidly back through. The sinking motion of the submariner's fastball is enhanced by forward rotation, in contrast with the overhand pitcher's hopping backspin.
Submarine pitches are often the toughest for same-side batters to hit (i.e., a right-handed submarine pitcher is the more difficult for a right-handed batter to hit, and likewise for left-handed pitchers and batters). This is because the submariner's spin is not perfectly level; the ball rotates forward and toward the pitching arm side, jamming same-sided hitters at the last moment, even as the ball drops rapidly through the zone.
Though the bending motion required to pitch effectively as a submariner means that submariners may be more at risk of developing back problems, it is commonly thought that the submarine motion is less injurious to the elbow and shoulder. Kent Tekulve and Gene Garber, two former submarine pitchers, were among the most durable pitchers in baseball history with 1,944 appearances between the two.
Past major league submariners include Carl Mays, Ted Abernathy, Elden Auker, Chad Bradford, Mark Eichhorn, Gene Garber, Kent Tekulve, Todd Frohwirth, and Dan Quisenberry. Steve Olin was also a submarine pitcher.
Japanese pitcher Shunsuke Watanabe is known as "Mr. Submarine" in Japan. Watanabe has an even lower release point than the typical submarine pitcher, dropping his pivot knee so low that it scrapes the ground. He now wears a pad under his uniform to avoid injuring his knee. His release is so low that his knuckles often become raw from their periodic drag on the ground.
Submarine pitchers
Current players
Major League Baseball
Adam Cimber
Ryan Middendorf
Brian Moran
Tyler Rogers
Ryan Thompson
Zach Vennaro
Tim Hill
Nippon Professional Baseball
Kazuhisa Makita
Rei Takahashi
Hirofumi Yamanaka
Kaito Yoza
KBO League
Park Jong-hoon
Dae-woo Kim
Chinese Professional Baseball League (Taiwan)
Lin Chen-hua
Huang Tzu-Peng
Former players
Ted Abernathy
Elden Auker
Chad Bradford
Tae-Hyon Chong
Mark Eichhorn
Craig Feltner
Todd Frohwirth
Gene Garber
Byung-hyun Kim
Terry Leach
Carl Mays
Porter Moss
Mike Myers (baseball)
Darren O'Day
Steve Olin
Dan Quisenberry
Gus Schlosser
Kent Tekulve
Jack Warhop
Shunsuke Watanabe
Kelly Wunsch
Brad Ziegler
Joe Smith
Eric Yardley
See also
Sidearm
References
Pitching (baseball)
Biomechanics
Motor skills
Motor control
Throwing | Submarine (baseball) | [
"Physics",
"Biology"
] | 673 | [
"Biomechanics",
"Behavior",
"Motor skills",
"Motor control",
"Mechanics"
] |
884,142 | https://en.wikipedia.org/wiki/Klein%E2%80%93Nishina%20formula | In particle physics, the Klein–Nishina formula gives the differential cross section (i.e. the "likelihood" and angular distribution) of photons scattered from a single free electron, calculated in the lowest order of quantum electrodynamics. It was first derived in 1928 by Oskar Klein and Yoshio Nishina, constituting one of the first successful applications of the Dirac equation. The formula describes both the Thomson scattering of low energy photons (e.g. visible light) and the Compton scattering of high energy photons (e.g. x-rays and gamma-rays), showing that the total cross section and expected deflection angle decrease with increasing photon energy.
Formula
For an incident unpolarized photon of energy Eγ, the differential cross section is:
where
re is the classical electron radius (~2.82 fm; re2 is about 7.94 × 10−30 m2, or 79.4 mb)
λ′/λ is the ratio of the wavelengths of the scattered and incident photons
θ is the scattering angle (0 for an undeflected photon).
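Written out explicitly, the expression takes the following standard form; the shorthand R ≡ λ′/λ for the wavelength ratio is introduced here only for compactness.

```latex
% Klein–Nishina differential cross section for an unpolarized incident photon,
% with R = lambda'/lambda the scattered-to-incident wavelength ratio:
\[
  \frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}
  \;=\; \frac{r_e^{2}}{2}\,\frac{1}{R^{2}}
        \left( R + \frac{1}{R} - \sin^{2}\theta \right),
  \qquad
  R \;=\; 1 + \varepsilon\,(1 - \cos\theta),
  \quad \varepsilon \;=\; \frac{E_\gamma}{m_e c^{2}}.
\]
```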
The angle-dependent photon wavelength (or energy, or frequency) ratio is λ′/λ = 1 + ε(1 − cos θ), as required by the conservation of relativistic energy and momentum (see Compton scattering). The dimensionless quantity ε = Eγ/(mec2) expresses the energy of the incident photon in terms of the electron rest energy (~511 keV), and may also be expressed as ε = λC/λ, where λC is the Compton wavelength of the electron (~2.42 pm). Notice that the scatter ratio λ′/λ increases monotonically with the deflection angle, from 1 (forward scattering, no energy transfer) to 1 + 2ε (180-degree backscatter, maximum energy transfer).
In some cases it is convenient to express the classical electron radius in terms of the Compton wavelength: re = αλC/(2π), where α is the fine structure constant (~1/137) and λC/(2π) is the reduced Compton wavelength of the electron (~0.386 pm), so that the constant in the cross section may be given as re2/2 = α2(λC/2π)2/2.
Polarized photons
If the incoming photon is polarized, the scattered photon is no longer isotropic with respect to the azimuthal angle. For a linearly polarized photon scattered with a free electron at rest, the differential cross section is instead given by:
where φ is the azimuthal scattering angle. Note that the unpolarized differential cross section can be obtained by averaging over φ.
Limits
Low energy
For low energy photons the wavelength shift becomes negligible (λ′/λ → 1) and the Klein–Nishina formula reduces to the classical Thomson expression, dσ/dΩ = (re2/2)(1 + cos2θ),
which is symmetrical in the scattering angle, i.e. the photon is just as likely to scatter backwards as forwards. With increasing energy this symmetry is broken and the photon becomes more likely to scatter in the forward direction.
High energy
For high energy photons it is useful to distinguish between small and large angle scattering. For large angles, where , the scatter ratio is large and
showing that the (large angle) differential cross section is inversely proportional to the photon energy.
The differential cross section has a constant peak in the forward direction:
independent of . From the large angle analysis it follows that this peak can only extend to about . The forward peak is thus confined to a small solid angle of approximately , and we may conclude that the total small angle cross section decreases with .
Total cross section
The differential cross section may be integrated to find the total cross section:
In the low-energy limit there is no energy dependence, and we recover the Thomson cross section (~66.5 fm2).
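For reference, that Thomson limit takes the standard form below (the full energy-dependent total Klein–Nishina cross section is considerably lengthier).

```latex
% Thomson cross section: the low-energy limit of the total Klein–Nishina cross section.
\[
  \sigma_T \;=\; \frac{8\pi}{3}\, r_e^{2}
           \;\approx\; 66.5\ \mathrm{fm}^{2}
           \;\approx\; 0.665\ \mathrm{barn}.
\]
```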
History
The Klein–Nishina formula was derived in 1928 by Oskar Klein and Yoshio Nishina, and was one of the first results obtained from the study of quantum electrodynamics. Consideration of relativistic and quantum mechanical effects allowed development of an accurate equation for the scattering of radiation from a target electron. Before this derivation, the electron cross section had been classically derived by the British physicist and discoverer of the electron, J.J. Thomson. However, scattering experiments showed significant deviations from the results predicted by the Thomson cross section. Further scattering experiments agreed perfectly with the predictions of the Klein–Nishina formula.
See also
Synchrotron radiation
Yoshio Nishina
Oskar Klein
References
Further reading
Quantum electrodynamics
Scattering | Klein–Nishina formula | [
"Physics",
"Chemistry",
"Materials_science"
] | 876 | [
"Condensed matter physics",
"Scattering",
"Particle physics",
"Nuclear physics"
] |
884,375 | https://en.wikipedia.org/wiki/Ely%20Cathedral | Ely Cathedral, formally the Cathedral Church of the Holy and Undivided Trinity of Ely, is an Anglican cathedral in the city of Ely, Cambridgeshire, England.
The cathedral can trace its origin to the abbey founded in Ely in 672 by St Æthelthryth (also called Etheldreda). The earliest parts of the present building date to 1083, and it was granted cathedral status in 1109. Until the Reformation, the cathedral was dedicated to St Etheldreda and St Peter, at which point it was refounded as the Cathedral Church of the Holy and Undivided Trinity of Ely. It is the cathedral of the Diocese of Ely, which covers most of Cambridgeshire and western Norfolk, Essex, and Bedfordshire. It is the seat of the Bishop of Ely and a suffragan bishop, the Bishop of Huntingdon.
Architecturally, Ely Cathedral is outstanding both for its scale and stylistic details. Having been built in a monumental Romanesque style, the galilee porch, lady chapel and choir were rebuilt in an exuberant Decorated Gothic. Its most notable feature is the central octagonal tower, with lantern above, which provides a unique internal space and, along with the West Tower, dominates the surrounding landscape.
The cathedral is a major tourist destination, receiving around 250,000 visitors per year, and sustains a daily pattern of morning and evening services.
Anglo-Saxon abbey
Ely Abbey was founded in 672, by Æthelthryth (St Etheldreda), a daughter of Anna, King of East Anglia. It was a mixed community of men and women. Later accounts suggest her three successor abbesses were also members of the East Anglian Royal family. In later centuries, the depredations of Viking raids may have resulted in its destruction, or at least the loss of all records. It is possible that some monks provided a continuity through to its refoundation in 970, under the Rule of St Benedict. The precise siting of Æthelthryth's original monastery is not known. The presence of her relics, bolstered by the growing body of literature on her life and miracles, was a major driving force in the success of the refounded abbey. The church building of 970 was within or near the nave of the present building, and was progressively demolished from 1102 alongside the construction of the Norman church. The obscure Ermenilda of Ely also became an abbess sometime after her husband, Wulfhere of Mercia, died in 675.
Present-day church
The cathedral is built from stone quarried from Barnack in Northamptonshire (bought from Peterborough Abbey, whose lands included the quarries, for 8,000 eels a year), with decorative elements carved from Purbeck Marble and local clunch. The plan of the building is cruciform (cross-shaped), with an additional transept at the western end. The total length is , and the nave at over long remains one of the longest in Britain. The west tower is high. The unique Octagon 'Lantern Tower' is wide and is high. Internally, from the floor to the central roof boss the lantern is high. The cathedral is known locally as "the ship of the Fens", because of its prominent position above the surrounding flat landscape.
Norman abbey church
Having a pre-Norman history spanning 400 years and a re-foundation in 970, Ely over the course of the next hundred years had become one of England's most successful Benedictine abbeys, with a famous saint, treasures, library, book production of the highest order and lands exceeded only by Glastonbury. However the imposition of Norman rule was particularly problematic at Ely. Newly arrived Normans such as Picot of Cambridge were taking possession of abbey lands, there was appropriation of daughter monasteries such as Eynesbury by French monks, and interference by the Bishop of Lincoln was undermining its status. All this was exacerbated when, in 1071, Ely became a focus of English resistance, through such people as Hereward the Wake, culminating in the Siege of Ely, for which the abbey suffered substantial fines.
Under the Normans almost every English cathedral and major abbey was rebuilt from the 1070s onwards. If Ely was to maintain its status then it had to initiate its own building work, and the task fell to Abbot Simeon. He was the brother of Walkelin, the then Bishop of Winchester, and had himself been the prior at Winchester Cathedral when the rebuilding began there in 1079. In 1083, a year after Simeon's appointment as abbot of Ely, and when he was 90 years old, building work began. The years since the conquest had been turbulent for the Abbey, but the unlikely person of an aged Norman outsider effectively took sides with the Ely monks, reversed the decline in the abbey's fortunes, and found the resources, administrative capacity, identity and purpose to begin a mighty new building.
The design had many similarities to Winchester, a cruciform plan with central crossing tower, aisled transepts, a three-storey elevation and a semi-circular apse at the east end. It was one of the largest buildings under construction north of the Alps at the time. The first phase of construction took in the eastern arm of the church, and the north and south transepts. However, a significant break in the way the masonry is laid indicates that, with the transepts still unfinished, there was an unplanned halt to construction that lasted several years. It would appear that when Abbott Simeon died in 1093, an extended interregnum caused all work to cease. The administration of Ranulf Flambard may have been to blame. He illegally kept various posts unfilled, including that of Abbot of Ely, so he could appropriate the income. In 1099 he got himself appointed Bishop of Durham, in 1100 Abbot Richard was appointed to Ely and building work resumed. It is Abbot Richard who asserted Ely's independence from the Diocese of Lincoln, and pressed for it to be made a diocese in its own right, with the abbey church as its cathedral. Although Abbot Richard died in 1107, his successor Hervey le Breton was able to achieve this and become the first Bishop of Ely in 1109. This period at the start of the twelfth century was when Ely re-affirmed its link with its Anglo-Saxon past. The struggle for independence coincided with the period when resumption of building work required the removal of the shrines from the old building and the translation of the relics into the new church. This appears to have allowed, in the midst of a Norman-French hierarchy, an unexpectedly enthusiastic development of the cult of these pre-Norman saints and benefactors.
The Norman east end and the whole of the central area of the crossing are now entirely gone, but the architecture of the transepts survives in a virtually complete state, to give a good impression of how it would have looked. Massive walls pierced by Romanesque arches would have formed aisles running around all sides of the choir and transepts. Three tiers of archways rise from the arcaded aisles. Galleries with walkways could be used for liturgical processions, and above that is the Clerestory with a passage within the width of the wall.
Construction of the nave was underway from around 1115, and roof timbers dating to 1120 suggest that at least the eastern portion of the nave roof was in place by then. The great length of the nave required that it was tackled in phases and after completing four bays, sufficient to securely buttress the crossing tower and transepts, there was a planned pause in construction. By 1140 the nave had been completed together with the western transepts and west tower up to triforium level, in the fairly plain early Romanesque style of the earlier work. Another pause now occurred, for over 30 years, and when it resumed, the new mason found ways to integrate the earlier architectural elements with the new ideas and richer decorations of early Gothic.
The West Tower
The half-built west tower and upper parts of the two western transepts were completed under Bishop Geoffrey Ridel (1174–89), to create an exuberant west front, richly decorated with intersecting arches and complex mouldings. The new architectural details were used systematically to the higher storeys of the tower and transepts. Rows of trefoil heads and use of pointed instead of semicircular arches, results in a west front with a high level of orderly uniformity.
Originally the west front had transepts running symmetrically either side of the west tower. Stonework details on the tower show that an octagonal tower was part of the original design, although the current western octagonal tower was installed in 1400. Numerous attempts were made, during all phases of its construction to correct problems from subsidence in areas of soft ground at the western end of the cathedral. In 1405–1407, to cope with the extra weight from the octagonal tower, four new arches were added at the west crossing to strengthen the tower. The extra weight of these works may have added to the problem, as at the end of the fifteenth century the north-west transept collapsed. A great sloping mass of masonry was built to buttress the remaining walls, which remain in their broken-off state on the north side of the tower.
Galilee Porch
The Galilee Porch is now the principal entrance into the cathedral for visitors. Its original liturgical functions are unclear, but its location at the west end meant it may have been used as a chapel for penitents, a place where liturgical processions could gather, or somewhere the monks could hold business meetings with women, who were not permitted into the abbey. It also has a structural role in buttressing the west tower. The walls stretch over two storeys, but the upper storey now has no roof, it having been removed early in the nineteenth century. Its construction dating is also uncertain. Records suggest it was initiated by Bishop Eustace (1197–1215), and it is a notable example of Early English Gothic style. But there are doubts about just how early, especially as Eustace had taken refuge in France in 1208, and had no access to his funds for the next 3 years. George Gilbert Scott argued that details of its decoration, particularly the 'syncopated arches' and the use of Purbeck marble shafts, bear comparison with St Hugh's Choir, Lincoln Cathedral, and the west porch at St Albans, which both predate Eustace, whereas the foliage carvings and other details offer a date after 1220, suggesting it could be a project taken up, or re-worked by Bishop Hugh of Northwold.
Presbytery and East end
The first major reworking of an element of the Norman building was undertaken by Hugh of Northwold (bishop 1229–54). The eastern arm had been only four bays, running from the choir (then located at the crossing itself) to the high altar and the shrine to Etheldreda. In 1234 Northwold began an eastward addition of six further bays, which were built over 17 years, in a richly ornamented style with extensive use of Purbeck marble pillars and foliage carvings. It was built using the same bay dimensions, wall thicknesses and elevations as the Norman parts of the nave, but with an Early English Gothic style that makes it 'the most refined and richly decorated English building of its period'. St Etheldreda's remains were translated to a new shrine immediately east of the high altar within the new structure, and on completion of these works in 1252 the cathedral was reconsecrated in the presence of King Henry III and Prince Edward. As well as a greatly expanded presbytery, the new east end had the effect of inflating still further the significance of St Etheldreda's shrine. Surviving fragments of the shrine pedestal suggest its decoration was similar to the interior walls of the Galilee porch. The relics of the saints Wihtburh, Seaxburh (sisters of St Etheldreda) and Ermenilda (daughter of St Seaxburh of Ely) would also have been accommodated, and the new building provided much more space for pilgrims to visit the shrines, via a door in the North Transept. The presbytery has subsequently been used for the burials and memorials of over 100 individuals connected with the abbey and cathedral.
Lady Chapel
In 1321, under the sacrist Alan of Walsingham, work began on a large free-standing Lady Chapel, linked to the north aisle of the chancel by a covered walkway. The chapel is long and wide, and was built in an exuberant 'Decorated' Gothic style over the course of the next 30 years. Masons and finances were unexpectedly required for the main church from 1322, which must have slowed the progress of the chapel. The north and south wall each have five bays, comprising large traceried windows separated by pillars each of which has eight substantial niches and canopies which once held statues.
Below the window line, and running round three sides of the chapel is an arcade of richly decorated 'nodding ogees', with Purbeck marble pillars, creating scooped out seating booths. There are three arches per bay plus a grander one for each main pillar, each with a projecting pointed arch covering a subdividing column topped by a statue of a bishop or king. Above each arch is a pair of spandrels containing carved scenes which create a cycle of 93 carved relief sculptures of the life and miracles of the Virgin Mary. The carvings and sculptures would all have been painted. The window glass would all have been brightly coloured with major schemes perhaps of biblical narratives, of which a few small sections have survived. At the reformation, the edict to remove images from the cathedral was carried out very thoroughly by Bishop Thomas Goodrich. The larger statues have gone. The relief scenes were built into the wall, so each face or statue was individually hacked off, but leaving many finely carved details, and numerous puzzles as to what the original scenes showed. After the reformation it was redeployed as the parish church (Holy Trinity) for the town, a situation which continued up to 1938.
In 2000 a life-size statue of the Virgin Mary by David Wynne was installed above the lady chapel altar. The statue was criticised by local people and the cathedral dean said he had been inundated with letters of complaint.
Octagon
The central octagonal tower, with its vast internal open space and its pinnacles and lantern above, forms the most distinctive and celebrated feature of the cathedral. However, what Pevsner describes as Ely's 'greatest individual achievement of architectural genius' came about through a disaster at the centre of the cathedral. On the night of 12–13 February 1322, possibly as a result of digging foundations for the Lady Chapel, the Norman central crossing tower collapsed. Work on the Lady Chapel was suspended as attention transferred to dealing with this disaster. Instead of being replaced by a new tower on the same ground plan, the crossing was enlarged to an octagon, removing all four of the original tower piers and absorbing the adjoining bays of the nave, chancel and transepts to define an open area far larger than the square base of the original tower. The construction of this unique and distinctive feature was overseen by Alan of Walsingham. The extent of his influence on the design continues to be a matter of debate, as are the reasons such a radical step was taken. Mistrust of the soft ground under the failed tower piers may have been a major factor in moving all the weight of the new tower further out.
The large stone octagonal tower, with its eight internal archways, leads up to timber vaulting that appears to allow the large glazed timber lantern to balance on its slender struts. The roof and lantern are actually held up by a complex timber structure above the vaulting, which could not be built in this way today because there are no trees big enough. The central lantern, also octagonal in form, but with angles offset from the great Octagon, has panels showing pictures of musical angels, which can be opened, with access from the Octagon roof-space, so that real choristers can sing from on high. More wooden vaulting forms the lantern roof. At the centre is a wooden boss carved from a single piece of oak, showing Christ in Majesty. The elaborate joinery and timberwork were the work of William Hurley, master carpenter in the royal service.
It is unclear what damage was caused to the Norman chancel by the fall of the tower, but the three remaining bays were reconstructed under Bishop John Hotham (1316–1337) in an ornate Decorated style with flowing tracery. Structural evidence shows that this work was a remodelling rather than a total rebuilding. New choirstalls with carved misericords and canopy work were installed beneath the octagon, in a similar position to their predecessors. Work was resumed on the Lady Chapel, and the two westernmost bays of Northwold's presbytery were adapted by unroofing the triforia so as to enhance the lighting of Etheldreda's shrine. Starting at about the same time the remaining lancet windows of the aisles and triforia of the presbytery were gradually replaced by broad windows with flowing tracery. At the same period extensive work took place on the monastic buildings, including the construction of the elegant chapel of Prior Crauden.
Chantry Chapels
In the late fifteenth and early sixteenth centuries elaborate chantry chapels were inserted in the easternmost bays of the presbytery aisles, on the north for Bishop John Alcock (1486–1500) and on the south for Bishop Nicholas West (1515–33).
John Alcock was born in around 1430, the son of a Hull merchant, but achieved high office in both church and state. Amongst his many duties and posts he was given charge of Edward IV's sons, who became known as the Princes in the Tower. That Alcock faithfully served Edward IV and his sons as well as Henry VII adds to the mystery of how their fate was kept secret. Appointed bishop of Rochester and then Worcester by Edward IV, he was also declared 'Lord President of Wales' in 1476. On Henry VII's victory over Richard III in 1485, Alcock became interim Lord Chancellor and in 1486 was appointed Bishop of Ely. As early as 1476 he had endowed a chantry for his parents at Hull, but the resources Ely put at his disposal allowed him to found Jesus College, Cambridge, and build his own fabulous chantry chapel in an ornate style. The statue niches with their architectural canopies are crammed so chaotically together that some of the statues never got finished as they were so far out of sight. Others, although completed, were overlooked by the destructions of the reformation, and survived when all the others were destroyed. The extent to which the chapel is squashed in, despite cutting back parts of the Norman walls, raises the possibility that the design, and perhaps even some of the stonework, was done with a more spacious bay at Worcester in mind. On his death in 1500 he was buried within his chapel.
Nicholas West had studied at Cambridge, Oxford and Bologna, had been a diplomat in the service of Henry VII and Henry VIII, and became Bishop of Ely in 1515. For the remaining 19 years of his life he 'lived in greater splendour than any other prelate of his time, having more than a hundred servants.' He was able to build the magnificent Chantry chapel at the south-east corner of the presbytery, panelled with niches for statues (which were destroyed or disfigured just a few years later at the reformation), and with fan tracery forming the ceiling, and West's tomb on the south side.
In 1771 the chapel was also used to house the bones of seven Saxon 'benefactors of the church'. These had been translated from the old Saxon Abbey into the Norman building, and had been placed in a wall of the choir when it stood in the Octagon. When the choir stalls were moved, their enclosing wall was demolished, and the bones of Wulfstan (died 1023), Osmund of Sweden, Athelstan of Elmham, Ælfwine of Elmham, Ælfgar of Elmham, Eadnoth of Dorchester and Byrhtnoth, ealdorman of Essex, were found, and relocated into West's chapel. Also sharing Nicholas West's chapel, against the east wall, is the tomb memorial to the bishop Bowyer Sparke, who died in 1836.
Dissolution and Reformation
On 18 November 1539 the royal commissioners took possession of the monastery and all its possessions, and for nearly two years its future hung in the balance as Henry VIII and his advisers considered what role, if any, Cathedrals might play in the emerging Protestant church. On 10 September 1541 a new charter was granted to Ely, at which point Robert Steward, the last prior, was re-appointed as the first dean, who, with eight prebendaries formed the dean and chapter, the new governing body of the cathedral. Under Bishop Thomas Goodrich's orders, first the shrines to the Anglo-Saxon saints were destroyed, and as iconoclasm increased, nearly all the stained glass and much of the sculpture in the cathedral was destroyed or defaced during the 1540s. In the Lady Chapel the free-standing statues were destroyed and all 147 carved figures in the frieze of St Mary were decapitated, as were the numerous sculptures on West's chapel. The Cathedrals were eventually spared on the basis of three useful functions: propagation of true worship of God, educational activity, and care of the poor. To this end, vicars choral, lay clerks and boy choristers were all appointed (many having previously been members of the monastic community), to assist in worship. A grammar school with 24 scholars was established in the monastic buildings, and in the 1550s plate and vestments were sold to buy books and establish a library. The passageway running to the Lady Chapel was turned into an almshouse for six bedemen. The Lady Chapel itself was handed over to the town as Holy Trinity Parish Church in 1566, replacing a very unsatisfactory lean-to structure that stood against the north wall of the nave. Many of the monastic buildings became the houses of the new Cathedral hierarchy, although others were demolished. Much of the Cathedral itself had little purpose. The whole East end was used simply as a place for burials and memorials. The cathedral was damaged in the Dover Straits Earthquake of 6 April 1580, where stones fell from the vaulting.
Difficult as the sixteenth century had been for the cathedral, it was the period of the Commonwealth that came nearest to destroying both the institution and the buildings. Throughout the 1640s, with Oliver Cromwell's army occupying the Isle of Ely, a puritanical regime of worship was imposed. Bishop Matthew Wren was arrested in 1642 and spent the next 18 years in the Tower of London. That no significant destruction of images occurred during the Civil War and the Commonwealth would appear to be because it had been done so thoroughly 100 years before. In 1648 parliament encouraged the demolition of the buildings, so that the materials could be sold to pay for 'relief of sick and maimed soldiers, widows and children'. That this did not happen, and that the building suffered nothing worse than neglect, may have been due to protection by Oliver Cromwell, although the uncertainty of the times, and apathy rather than hostility to the building may have been as big a factor.
Restoration
When Charles II was invited to return to Britain, alongside the political restoration there began a process of re-establishing the Church of England. Matthew Wren, whose high church views had kept him in prison throughout the period of the Commonwealth, was able to appoint a new cathedral chapter. The dean, by contrast was appointed by the crown. The three big challenges for the new hierarchy were to begin repairs on the neglected buildings, to re-establish Cathedral services, and to recover its lands, rights and incomes. The search for lost deeds and records to establish their rights took over 20 years but most of the rights to the dispersed assets appear to have been regained.
In the 1690s a number of very fine baroque furnishings were introduced, notably a marble font (for many years kept in St Peter’s Church, Prickwillow) and an organ case mounted on the Romanesque pulpitum (the stone screen dividing the nave from the liturgical choir) with trumpeting angels and other embellishments. In 1699 the north-west corner of the north transept collapsed and had to be rebuilt. The works included the insertion of a fine classical doorway in the north face. Christopher Wren has sometimes been associated with this feature, and he may have been consulted by Robert Grumbold, the mason in charge of the project. Grumbold had worked with Wren on Trinity College Library in Cambridge a few years earlier, and Wren would have been familiar with the Cathedral through his uncle Matthew Wren, bishop from 1638 to 1667. He was certainly among the people with whom the dean (John Lambe, 1693–1708) discussed the proposed works during a visit to London. The damaged transept took from 1699 to 1702 to rebuild, and with the exception of the new doorway, the works faithfully re-instated the Romanesque walls, windows, and detailing. This was a landmark approach in the history of restoration.
Bentham and Essex
Two people stand out in Ely Cathedral's eighteenth-century history, one a minor canon and the other an architectural contractor. James Bentham (1709–1794), building on the work of his father Samuel, studied the history of both the institution and architecture of the cathedral, culminating in 1771 with his publication of The History and Antiquities of the Conventual and Cathedral Church of Ely. He sought out original documents to provide definitive biographical lists of abbots, priors, deans and bishops, alongside a history of the abbey and cathedral, and was able to set out the architectural development of the building with detailed engravings and plans. These plans, elevations and sections had been surveyed by the architect James Essex (1722–1784), who by this means was able to both highlight the poor state of parts of the building, and understand its complex interdependencies.
The level of expertise that Bentham and Essex brought to the situation enabled a well-prioritised series of repairs and sensitive improvements to be proposed that occupied much of the later eighteenth century. Essex identified the decay of the octagon lantern as the starting point of a major series of repairs, and was appointed in 1757 to oversee the work. 400 years of weathering and decay may have removed many of the gothic features, and shortage of funds allied to a Georgian suspicion of ornament resulted in plain and pared down timber and leadwork on the lantern. He was then able to move on to re-roof the entire eastern arm and restore the eastern gable which had been pushed outwards some .
Bentham and Essex were both enthusiastic proponents of a longstanding plan to relocate the 14th-century choir stalls from under the octagon. With the octagon and east roof dealt with, the scheme was embarked on in 1769, with Bentham, still only a minor canon, appointed as clerk of works. By moving the choir stalls to the far east end of the cathedral, the octagon became a spacious public area for the first time, with vistas to east and west and views of the octagon vaulting. They also removed the Romanesque pulpitum and put in a new choir screen two bays east of the octagon, surmounted by the 1690s organ case. Despite their antiquarian interests, Bentham and Essex appear to have dismantled the choir stalls with alarming lack of care, and saw no problem in clearing away features at the east end, and removing the pulpitum and medieval walls surrounding the choir stalls. The north wall turned out to incorporate the bones of seven 'Saxon worthies' which would have featured on the pilgrim route into the pre-Reformation cathedral. The bones were rehoused in Bishop West's Chapel. The choir stalls, with their misericords were however retained, and the restoration as a whole was relatively sympathetic by the standards of the period.
The Victorians
The next major period of restoration began in the 1840s and much of the oversight was the responsibility of Dean George Peacock (1839–58). In conjunction with the Cambridge Professor Robert Willis, he undertook thorough investigations into the structure, archaeology and artistic elements of the building, and made a start on what became an extensive series of refurbishments by restoring the south-west transept. This had been used as a 'workshop', and by stripping out more recent material and restoring the Norman windows and arcading, they set a pattern that would be adopted in much of the Victorian period works. In 1845, by which time the cathedral had works underway in many areas, a visiting architect, George Basevi, who was inspecting the west tower, tripped, and fell 36 feet to his death. He was given a burial in the north choir aisle. Works at this time included cleaning back thick layers of limewash, polishing pillars of Purbeck marble, painting and gilding roof bosses and corbels in the choir, and a major opening up of the West tower. A plaster vault was removed that had been put in only 40 years before, and the clock and bells were moved higher. The addition of iron ties and supports allowed removal of vast amounts of infill that was supposed to strengthen the tower, but had simply added more weight and compounded the problems.
George Gilbert Scott
George Gilbert Scott was, by 1847, emerging as a successful architect and keen exponent of the Gothic Revival. He was brought in, as a professional architect, to bolster the enthusiastic amateur partnership of Peacock and Willis, initially in the re-working of the fourteenth-century choir stalls. The stalls had stood at the east end for 80 years, and Scott oversaw their move back towards the Octagon, this time remaining within the eastern arm and keeping the open space of the Octagon clear. This was Scott's first cathedral commission. He went on to work on a new carved wooden screen and brass gates, moved the high altar two bays westwards, and installed a lavishly ornamented alabaster reredos carved by Rattee and Kett, a new font for the south-west transept, a new organ case and later a new pulpit, replacing the neo-Norman pulpit designed by John Groves in 1803. In 1876 Scott's designs for the octagon lantern parapet and pinnacles were implemented, returning it to a form which, to judge from pre-Essex depictions, seems to be genuinely close to the original. Various new furnishings replaced the baroque items installed in the 1690s.
Stained glass
In 1845 Edward Sparke, son of the bishop Bowyer Sparke, and himself a canon, spearheaded a major campaign to re-glaze the cathedral with coloured glass. At that time there was hardly any medieval glass (mostly a few survivals in the Lady Chapel) and not much of post-reformation date. An eighteenth-century attempt to get James Pearson to produce a scheme of painted glass had produced only one window and some smaller fragments. With the rediscovery of staining techniques, and the renewed enthusiasm for stained glass that swept the country as the nineteenth century progressed, almost all areas of the cathedral received new glazing. Under Sparke's oversight, money was found from donors, groups, bequests, even gifts by the artists themselves, and by Edward Sparke himself. A wide variety of designers and manufacturers were deliberately used, to help find the right firm to fill the great lancets at the east end. In the event, it was William Wailes who undertook this in 1857, having already begun the four windows of the octagon, as well as contributions to the south west transept, south aisle and north transept. Other windows were by the Gérente brothers, William Warrington, Alexander Gibbs, Clayton and Bell, Ward and Nixon, Hardman & Co., and numerous other individuals and firms from England and France.
A timber boarded ceiling was installed in the nave and painted with scenes from the Old and New Testaments, first by Henry Styleman Le Strange and then, after Le Strange's death in 1862, completed by Thomas Gambier Parry, who also repainted the interior of the octagon.
A further major programme of structural restoration took place between 1986 and 2000 under Deans William Patterson (1984–90) and Michael Higgins (1991–2003), directed by successive Surveyors to the Fabric, initially Peter Miller and from 1994 Jane Kennedy. Much of this restoration work was carried out by Rattee and Kett. In 2000 a Processional Way was built, restoring the direct link between the north choir aisle and the Lady Chapel.
In 1972, the Stained Glass Museum was established to preserve windows from churches across the country that were being closed by redundancy. It opened to the public in 1979 in the north triforium of Ely Cathedral and following an appeal, an improved display space was created in the south triforium opening in 2000. Besides rescued pieces, the collection includes examples from Britain and abroad that have been donated or purchased through bequests, or are on loan from the Victoria and Albert Museum, the Royal Collection, and Friends of Friendless Churches.
Religious community
Ely has been an important centre of Christian worship since the seventh century AD. Most of what is known about its history before the Norman Conquest comes from Bede's Historia ecclesiastica gentis Anglorum written early in the eighth century and from the Liber Eliensis, an anonymous chronicle written at Ely some time in the twelfth century, drawing on Bede for the very early years, and covering the history of the community until the twelfth century. According to these sources the first Christian community here was founded by Æthelthryth (romanised as "Etheldreda"), daughter of the Anglo-Saxon King Anna of East Anglia, who was born at Exning near Newmarket. She may have acquired land at Ely from her first husband Tondberht, described by Bede as a "prince" of the South Gyrwas. After the end of her second marriage to Ecgfrith, a prince of Northumbria, in 673 she set up and ruled as abbess a dual monastery at Ely for men and for women. When she died, a shrine was built there to her memory. This monastery is recorded as having been destroyed in about 870 in the course of Danish invasions. However, while the lay settlement of the time would have been a minor one, it is likely that a church survived there until its refoundation in the tenth century. The history of the religious community during that period is unclear, but accounts of the refoundation in the tenth century suggest that there had been an establishment of secular priests.
In the course of the revival of the English church under Dunstan, Archbishop of Canterbury, and Aethelwold, Bishop of Winchester, Ely Abbey was reestablished in 970 as a community of Benedictine monks. This was one of a wave of monastic refoundations which locally included Peterborough and Ramsey (see English Benedictine Reform). Ely became one of the leading Benedictine houses in late Anglo-Saxon England. Following the Norman conquest of England in 1066 the abbey allied itself with the local resistance to Norman rule led by Hereward the Wake. Once the new regime had established control of the area, and following the death of the abbot Thurstan, a Norman successor, Theodwine, was installed. In 1109 Ely attained cathedral status with the appointment of Hervey le Breton as bishop of the new diocese which was taken out of the very large diocese of Lincoln. This involved a division of the monastic property between the bishopric and the monastery, whose establishment was reduced from 70 to 40 monks, headed by a prior; the bishop being titular abbot. From 1216 the cathedral priory was part of the Canterbury Province of the English Benedictine Congregation, an umbrella chapter made up of the abbots and priors of the Benedictine houses of England, remaining so until the dissolution.
In 1539, during the Dissolution of the Monasteries, Ely Cathedral Priory surrendered to Henry VIII's commissioners. The cathedral was refounded by royal charter in 1541 with the former prior Robert Steward as dean and the majority of the former monks as prebendaries and minor canons, supplemented by Matthew Parker, later Archbishop of Canterbury, and Richard Cox, later Bishop of Ely. With a brief interruption from 1649 to 1660 during the Commonwealth, when all cathedrals were abolished, this foundation has continued in its essentials to the twenty-first century, with a reduced number of residentiary canons now supplemented by a number of lay canons appointed under a Church Measure of 1999.
As with other cathedrals, Ely's pattern of worship centres around the Opus Dei, the daily programme of services drawing significantly on the Benedictine tradition. It also serves as the mother church of the diocese and ministers to a substantial local congregation. At the Dissolution the veneration of St Etheldreda was suppressed, her shrine in the cathedral was destroyed, and the dedication of the cathedral to her and St Peter was replaced by the present dedication to the Holy and Undivided Trinity. Since 1873 the practice of honouring her memory has been revived, and annual festivals are celebrated, commemorating events in her life and the successive "translations" – removals of her remains to new shrines – which took place in subsequent centuries.
Dean and chapter
Dean – Mark Bonney (since 22 September 2012 installation)
Precentor – James Garrard (since 29 November 2008 installation)
Canon residentiary – James Reveley
Canon residentiary and (Diocesan) Initial Ministerial Education (IME) co-ordinator – Jessica Martin (since 10 September 2016 installation)
Burials
The burials below are listed in date order.
Æthelthryth – Abbess of Ely in 679. The shrine was destroyed in 1541; her relics are alleged to be in St Etheldreda's Church, Ely Place, London and St Etheldreda's Roman Catholic Church, Ely
Seaxburh – Abbess of Ely in about 699
Wihtburh – possible sister of Æthelthryth, founder and abbess of convent in Dereham. Died 743 and buried in the cemetery of Ely Abbey, reinterred in her church in Dereham 798, remains stolen in 974 and buried in Ely Abbey
Byrhtnoth – patron of Ely Abbey, died leading Anglo-Saxon forces at the Battle of Maldon in 991
Eadnoth the Younger – Abbot of Ramsey, Bishop of Dorchester, killed in 1016 fighting against Cnut, his body was seized and hidden by Ely monks and subsequently venerated as Saint Eadnoth the Martyr
Wulfstan II – Archbishop of York (1002–1023), he died in York but according to his wishes he was buried in the monastery of Ely. Miracles are ascribed to his tomb by the Liber Eliensis
Alfred Aetheling – son of the English king Æthelred the Unready (1012–1037)
Hervey le Breton – First Bishop of Ely (1109–1131)
Nigel – Bishop of Ely (1133–1169), may have been buried here
Geoffrey Ridel – the nineteenth Lord Chancellor of England and Bishop of Ely (1173–1189)
Eustace – Bishop of Ely (1197–1215), also the twenty-third Lord Chancellor of England and Lord Keeper of the Great Seal. Buried near the altar of St Mary
John of Fountains – Bishop of Ely (1220–1225), "in the pavement" near the high altar
Geoffrey de Burgo – Bishop of Ely (1225–1228), buried in north choir but no surviving tomb or monument has been identified as his
Hugh of Northwold – Bishop of Ely (1229–1254), buried next to a shrine to St Etheldreda in the presbytery that he built, his tomb was moved to the north choir aisle but the location of his remains is unclear
William of Kilkenny – Lord Chancellor of England and Bishop of Ely (1254–1256), his heart was buried here, having died in Spain on a diplomatic mission for the king
Hugh de Balsham – Bishop of Ely (1256–1286), founder of Peterhouse, his tomb has not been firmly identified
John Kirkby – Lord High Treasurer of England and Bishop of Ely (1286–1290), a marble tomb slab located in the north choir aisle may possibly be from his tomb
William of Louth – Bishop of Ely (1290–1298), his elaborate tomb is near the entrance to the Lady Chapel in the south choir aisle
John Hotham – Chancellor of the Exchequer, Lord High Treasurer, Lord Chancellor and Bishop of Ely (1316–1337), died after two years of paralysis
John Barnet – Bishop of Ely (1366–1373)
Louis II de Luxembourg – Cardinal, Archbishop of Rouen and Bishop of Ely (1437–1443). He is not known to have ever visited the cathedral; after his death at Hatfield his bowels were interred in the church there, his heart at Rouen and his body at Ely on the south side of the Presbytery
John Tiptoft – 1st Earl of Worcester ('The Butcher of England') (1427–1470), in a large tomb in the South Choir Aisle
William Grey – Lord High Treasurer of England and Bishop of Ely (1454–1478)
John Alcock – Lord Chancellor of England and Bishop of Ely (1486–1500), in the Alcock Chantry
Richard Redman – Bishop of Ely (1501–1505)
Nicholas West – Bishop of Ely (1515–1534), buried in the Bishop West Chantry Chapel, which he built, at the eastern end of the South Choir Aisle
Thomas Goodrich – Bishop of Ely (1534–1554), buried in the South Choir
Robert Steward – First Dean of Ely (1541–1557)
Richard Cox – Bishop of Ely (1559–1581), buried in a tomb over which the choir box was built
Martin Heton – Bishop of Ely (1599–1609)
Humphrey Tyndall – Dean of Ely (1591–1614)
Henry Caesar – Dean of Ely (1614–1636)
Benjamin Lany – Bishop of Ely (1667–1675)
Peter Gunning – Bishop of Ely (1675–1684)
Simon Patrick – Bishop of Ely (1691–1707)
William Marsh – Gentleman of Ely (1642–1708); a marble mural was erected above the entrance to the Lady Chapel.
John Moore – Bishop of Ely (1707–1714)
William Fleetwood – Bishop of Ely (1714–1723), in the north chancel aisle
Robert Moss – Dean of Ely (1713–1729)
Thomas Green – Bishop of Ely (1723–1738)
Robert Butts – Bishop of Ely (1738–1748)
Matthias Mawson – Bishop of Ely (1754–1771)
Edmund Keene – Bishop of Ely (1771–1781), in the Bishop West Chantry Chapel (his wife, Mary, was buried in the south side of the choir)
Bowyer Sparke – Bishop of Ely (1812–1836), in the Bishop West Chantry Chapel
George Basevi – Architect. Died 1845, aged 51, after falling through an opening in the floor of the old bell chamber of the west tower of Ely Cathedral while inspecting repairs. Buried in North Choir Aisle under a monumental brass
Joseph Allen – Bishop of Ely (1836–1845)
William Hodge Mill – (1792–1853) the first principal of Bishop's College, Calcutta, and later Regius Professor of Hebrew at Cambridge and Canon at Ely Cathedral
James Woodford – Bishop of Ely (1873–1885), in Matthew Wren's chapel on the south side of the choir
Harry Legge-Bourke – died 1973 while Member of Parliament for the Isle of Ely
Music
The cathedral retains six professional adult lay clerks who sing in the Cathedral Choir along with boy and girl choristers aged 7 to 13, who receive choristerships funded by the cathedral to attend the King's Ely school as day or boarding pupils. From 2021, boy and girl choristers sing an equal number of services and receive equal scholarships towards school fees at King's Ely. The Director of Music leads the boy choristers, and the girl choristers are led by Sarah MacDonald.
The Octagon Singers and Ely Imps are voluntary choirs of local adults and children respectively.
Organ
Details of the organ from the National Pipe Organ Register
Organists
The following is a list of organists recorded since the cathedral was refounded in 1541 following the Second Act of Dissolution. Where not directly appointed as Organist, the position is inferred by virtue of their appointment as Master of the Choristers, or most recently as Director of Music.
Stained Glass Museum
The south triforium is home to the Stained Glass Museum, a collection of stained glass from the thirteenth century to the present that is of national importance and includes works from notable contemporary artists including Ervin Bossanyi.
In popular culture
The cathedral was the subject of a watercolour by J. M. W. Turner, in about 1796.
The cathedral appears on the horizon in the cover photo of Pink Floyd's 1994 album The Division Bell, and in the music video of a single from that album, "High Hopes".
Pink Floyd's David Gilmour recorded orchestral and choral parts for his 2024 album Luck and Strange at the cathedral.
The covers of a number of John Rutter's choral albums feature an image of the cathedral, a reference to early recordings of his music being performed and recorded in the Lady chapel.
Direct references to the cathedral appear in the children's book Tom's Midnight Garden by Philippa Pearce. A full-length movie with the same title was released in 1999.
A section of the film Elizabeth: The Golden Age was filmed at the cathedral in June 2006.
Filming for The Other Boleyn Girl took place at the cathedral in August 2007.
Parts of Marcus Sedgwick's 2000 novel Floodland take place at the cathedral after the sea has consumed the land around it, turning Ely into an island.
Direct references to Ely Cathedral are made in Jill Dawson's 2006 novel Watch Me Disappear.
A week's filming took place in November 2009 at the cathedral, when it substituted for Westminster Abbey in The King's Speech.
In April 2013 Mila Kunis was at the cathedral filming Jupiter Ascending.
In 2013, in the movie Snowpiercer, the west tower appeared in a collection of frozen ruined man-made structures in the dystopian future when a view of the outside world was briefly shown as the train Snowpiercer was encircling the globe.
The film Assassin's Creed shot scenes in Ely Cathedral in July 2013.
The film Macbeth used the cathedral for filming in February and March 2014.
In 2016 the cathedral was substituted for Westminster Abbey again in the Netflix original series The Crown.
Shooting for the 2023 film Maestro took place at the cathedral between 20 and 22 October 2022.
See also
References
Further reading
W. E. Dickson. Ely Cathedral (Isbister & Co., 1897).
Richard John King. Handbook to the Cathedrals of England – Vol. 3, (John Murray, 1862).
D. J. Stewart. On the architectural history of Ely cathedral (J. Van Voorst, 1868).
Peter Meadows and Nigel Ramsay, eds., A History of Ely Cathedral (The Boydell Press, 2003).
Lynne Broughton, Interpreting Ely Cathedral (Ely Cathedral Publications, 2008).
John Maddison, Ely Cathedral: Design and Meaning (Ely Cathedral Publications, 2000).
Janet Fairweather, trans., Liber Eliensis: A History of the Isle of Ely from the Seventh Century to the Twelfth Compiled by a Monk of Ely in the Twelfth Century (The Boydell Press, 2005).
Peter Meadows, ed., Ely: Bishops and Diocese, 1109–2009 (The Boydell Press, 2010).
External links
Descriptive tour of Ely Cathedral
The Stained Glass Museum at Ely Cathedral
A history of the choristers of Ely Cathedral
Flickr images tagged Ely Cathedral
Discussion of the lady chapel by Janina Ramirez and Will Shank: Art Detective Podcast, 20 Feb 2017
Anglican cathedrals in England
Buildings and structures in Cambridgeshire
Ely, Cambridgeshire
Monasteries in Cambridgeshire
Anglo-Saxon monastic houses
English churches with Norman architecture
English Gothic architecture in Cambridgeshire
Tourist attractions in Cambridgeshire
Benedictine monasteries in England
Pre-Reformation Roman Catholic cathedrals
Grade I listed cathedrals
Grade I listed churches in Cambridgeshire
Museums in Cambridgeshire
Art museums and galleries in Cambridgeshire
Glass museums and galleries
Edward Blore buildings
Burial sites of the House of Wuffingas
Basilicas (Church of England)
Burial sites of the House of Luxembourg
ja:イーリー#イーリー大聖堂 | Ely Cathedral | [
"Materials_science",
"Engineering"
] | 10,061 | [
"Glass engineering and science",
"Glass museums and galleries"
] |
885,382 | https://en.wikipedia.org/wiki/Acoustic%20mirror | An acoustic mirror is a passive device used to reflect and focus (concentrate) sound waves. Parabolic acoustic mirrors are widely used in parabolic microphones to pick up sound from great distances, employed in surveillance and reporting of outdoor sporting events. Pairs of large parabolic acoustic mirrors which function as "whisper galleries" are displayed in science museums to demonstrate sound focusing.
Between the World Wars, before the invention of radar, parabolic sound mirrors were used experimentally as early-warning devices by military air defence forces to detect incoming enemy aircraft by listening for the sound of their engines.
In the years before World War II, a network of large concrete acoustic mirrors was being built on the coast of southern England when the project was cancelled owing to the development of the Chain Home radar system. Some of these mirrors are still standing today.
Acoustic aircraft detection
Before World War II and the invention of radar, acoustic mirrors were built as early warning devices around the coasts of Great Britain, with the aim of detecting incoming enemy aircraft by the sound of their engines. The most famous of these devices still stand at Denge on the Dungeness peninsula and at Hythe in Kent. Other examples exist in other parts of Britain (including Sunderland, Redcar, Boulby, Kilnsea and Selsey Bill), and Baħar iċ-Ċagħaq in Malta. The Maltese sound mirror is known locally as "the ear" (il-Widna).
The Dungeness mirrors, known colloquially as the "listening ears", consist of three large concrete reflectors built in the 1920s–1930s. Their experimental nature can be discerned from the different shapes of the three reflectors: one is a long curved wall, while the other two are dish-shaped constructions. Microphones placed at the foci of the reflectors enabled a listener to detect the sound of aircraft far out over the English Channel. The reflectors are not parabolic, but are actually spherical mirrors.
Spherical mirrors can be used for direction finding by moving the sensor rather than the mirror; another unusual example was the Arecibo Observatory.
Acoustic mirrors had limited effectiveness, and the increasing speed of aircraft in the 1930s meant that incoming aircraft would already be too close to engage by the time they had been detected. The development of radar put an end to further experimentation with the technique. Nevertheless, there were long-lasting benefits. The acoustic mirror programme, led by Dr William Sansome Tucker, had given Britain the methodology to use interconnected stations to pinpoint the position of an enemy in the sky.
The system they developed for linking the stations and plotting aircraft movements was given to the early radar team and contributed to their success in World War II.
Modern uses
Parabolic acoustic mirrors called "whisper dishes" are used as participatory exhibits in science museums to demonstrate focusing of sound. Examples are located at Bristol's We The Curious, Ontario Science Centre, Albuquerque's ¡Explora!, Baltimore's Maryland Science Center, Oklahoma City's Science Museum Oklahoma, San Francisco's Exploratorium, the Science Museum of Minnesota, the Museum of Science and Industry in Chicago, Pacific Science Center in Seattle, Jodrell Bank Observatory, St. Louis Science Center, Parkes Observatory in Australia and on the north campus lawn of North Carolina State University.
A pair of dishes is installed facing each other, separated by around a hundred metres. A person standing at the focus of one can hear another person speaking in a whisper at the focus of the other, despite the wide separation between them.
Parabolic microphones depend on a parabolic dish to reflect sound coming from a specific direction into the microphone placed at the focus. They are extremely directional: sensitive to sounds coming from a specific direction. However, they generally have poor bass response because a dish small enough to be portable cannot focus long wavelengths (= low frequencies).
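The size constraint can be illustrated with a rough, order-of-magnitude sketch (not from the source): assuming, as a rule of thumb, that a dish must be at least about one wavelength across to focus a sound, the lowest useful frequency follows from the speed of sound divided by the dish diameter.

```python
# Rough estimate of the lowest frequency a parabolic dish can usefully focus,
# under the assumed rule of thumb that the dish must be ~1 wavelength across.
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def low_frequency_cutoff(dish_diameter_m: float) -> float:
    """Return the approximate lowest focused frequency (Hz) for a given dish."""
    longest_wavelength = dish_diameter_m          # assumed rule of thumb
    return SPEED_OF_SOUND / longest_wavelength

if __name__ == "__main__":
    for d in (0.5, 1.0, 5.0):   # portable dish ... large fixed mirror
        print(f"{d:4.1f} m dish -> useful above ~{low_frequency_cutoff(d):5.0f} Hz")
```

Under this rule of thumb a half-metre portable dish only works well above roughly 700 Hz, whereas the multi-metre concrete mirrors described above could respond to the much lower frequencies produced by aircraft engines.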
Small portable parabolic microphones are used to record wildlife sounds such as bird song, in televised sports events to pick up the conversations of players, such as in the huddle during American Football games, or to record the sounds of the sport, and in audio surveillance to record speech without the knowledge of the speaker.
Locations
Acoustic aircraft detection mirrors are known to have been built at:
Denge, Kent
Abbot's Cliff, Kent (at OS grid reference TR27083867)
Boulby, Yorkshire
Dover, Kent, at Fan Bay (OS grid reference TR352428)
Hartlepool, County Durham, in the Clavering area
Hythe, Kent
Malta – five sound mirrors were planned for Malta, serialled alphabetically, but only the Magħtab wall is known to have been built:
A. Il-Widna, "the ear": Magħtab
B. Zonkor
C. Ta Karach
D. Ta Zura
E. Tal Merhla
Joss Gap, Kent
Kilnsea, Yorkshire
Leros, Greece
Redcar, Yorkshire
Seaham, County Durham
Selsey, Sussex – converted into a residence
Sunderland, at Namey Hill (OS grid reference NZ38945960)
Warden Point, Isle of Sheppey, Kent – the Warden Point mirror, sited on a cliff-top, fell onto the beach below ca 1978-9
Modern acoustic mirrors built for entertainment
Pennypot, Royal Military Canal
Wat Tyler country park, near Pitsea, Essex – modern sculpture in the form of functional sonic mirrors
The Brickyard (NC State) – North Carolina State University campus
Discovery Green in downtown Houston has a sculpture made of limestone called the Listening Vessels.
Very Large Array – Socorro County, New Mexico Visitor Center has a 'whispering gallery' pair of dishes
Planetanya, Israel – at the space and science visiting center, in the scientific garden.
See also
Acoustic location
Sound ranging, for the artillery use
Parabolic microphone
Whispering gallery
Whispering-gallery wave
References
Further reading
External links
Acoustic mirrors in Britain
Military acoustic locators
White Cliffs Underground further details of variety of East Kent defences
Visiting information for UK (and Maltese) sound mirrors
Acoustics
Anti-aircraft warfare
Warning systems | Acoustic mirror | [
"Physics",
"Technology",
"Engineering"
] | 1,256 | [
"Safety engineering",
"Classical mechanics",
"Acoustics",
"Measuring instruments",
"Warning systems"
] |
31,313,402 | https://en.wikipedia.org/wiki/Ballistic%20conduction%20in%20single-walled%20carbon%20nanotubes | Single-walled carbon nanotubes in the fields of quantum mechanics and nanoelectronics, have the ability to conduct electricity. This conduction can be ballistic, diffusive, or based on scattering. When ballistic in nature conductance can be treated as if the electrons experience no scattering.
Conductance quantization and Landauer formula
Conduction in single-walled carbon nanotubes is quantized due to their one-dimensionality, and the number of allowed electronic states is limited compared to bulk graphite. The nanotubes consequently behave as quantum wires, and charge carriers are transmitted through discrete conduction channels. This conduction mechanism can be either ballistic or diffusive in nature, or based on tunneling. When conduction is ballistic, the electrons travel through the nanotube channel without experiencing scattering from impurities, local defects or lattice vibrations. As a result, the electrons encounter no resistance and no energy dissipation occurs in the conduction channel.
In order to estimate the current in the carbon nanotube channel, the Landauer formula can be applied, which considers a one-dimensional channel, connected to two contacts – source and drain.
Assuming no scattering and ideal (transparent) contacts, the conductance of the one-dimensional system is given by G = G0NT, where T is the probability that an electron will be transmitted along the channel, N is the number of channels available for transport, and G0 is the conductance quantum 2e²/h = (12.9 kΩ)⁻¹. Perfect contacts, with reflection R = 0, and no back-scattering along the channel result in a transmission probability T = 1, and the conductance of the system becomes G = (2e²/h)N. Thus each channel contributes G0 to the total conductance.
For metallic armchair nanotubes there are two subbands that cross the Fermi level, whereas for semiconducting nanotubes no bands cross the Fermi level. There are therefore two conducting channels, and each band accommodates two electrons of opposite spin, so the value of the conductance is G = 2G0 = (6.45 kΩ)⁻¹.
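A minimal numerical sketch of the Landauer expression above (illustrative only; the choice of N = 2 channels with ideal transmission corresponds to the metallic nanotube case just described):

```python
# Landauer conductance G = G0 * N * T for a ballistic conductor,
# with G0 = 2e^2/h the (spin-degenerate) conductance quantum.
E_CHARGE = 1.602176634e-19       # C
PLANCK_H = 6.62607015e-34        # J s
G0 = 2 * E_CHARGE**2 / PLANCK_H  # ~77.5 microsiemens, i.e. (12.9 kOhm)^-1

def landauer_conductance(n_channels: int, transmission: float) -> float:
    """Two-terminal conductance (S) for n_channels with average transmission."""
    return G0 * n_channels * transmission

# Metallic SWCNT: two crossing subbands, ideal contacts (T = 1)
g = landauer_conductance(2, 1.0)
print(f"G = {g*1e6:.1f} uS, R = {1/g/1e3:.2f} kOhm")   # about 6.45 kOhm
```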
In a non-ideal system, T in the Landauer formula is replaced by the sum of the transmission probabilities for each conduction channel. When the value of the conductance for the above example approaches the ideal value of 2G0, the conduction along the channel is said to be ballistic. This happens when the scattering length in the nanotube is much greater than the distance between the contacts.
If a carbon nanotube is a ballistic conductor, but the contacts are nontransparent, the transmission probability, T, is reduced by back-scattering in the contacts. If the contacts are perfect, the reduced T is due to back-scattering along the nanotube only.
When the resistance measured at the contacts is high, one can infer the presence of Coulomb blockade and Luttinger liquid behavior for different temperatures. Low contact resistance is a prerequisite for investigating conduction phenomena in CNTs in the high transmission regime.
Quantum Interference
When the size of the CNT device is comparable to the electron coherence length, the interference pattern that arises when measuring the differential conductance as a function of the gate voltage becomes important in the ballistic conduction regime. This pattern is due to the quantum interference of multiply reflected electrons in the CNT channel. Effectively, this corresponds to a Fabry–Perot resonator, where the nanotube acts as a coherent waveguide and the resonant cavity is formed between the two CNT–electrode interfaces. Phase-coherent transport, electron interference, and localized states have been observed in the form of fluctuations in the conductance as a function of the Fermi energy.
Phase coherent electrons give rise to the observed interference effect at low temperatures. Coherence then corresponds to a decrease in the occupation numbers of phonon modes and a decreased rate of inelastic scattering. Correspondingly, increased conduction is reported for low temperatures.
Ballistic Conduction in CNT Field-Effect Transistors
CNT FETs exhibit four regimes of charge transport:
ohmic contact ballistic
ohmic contact diffusive
Schottky barrier ballistic
Schottky barrier diffusive
The ohmic-contact ballistic regime requires no scattering as the charge carriers are transported through the channel, i.e. the length of the CNT should be much smaller than the mean free path (L << lm). The opposite holds for diffusive transport.
In semiconducting CNTs at room temperature and for low energies, the mean free path is determined by the electron scattering from acoustic phonons, which results in lm ≈ 0.5μm. In order to satisfy the conditions for ballistic transport, one has to take care of the channel length and the properties of the contacts, while the geometry of the device could be any top-gated doped CNT FET.
Ballistic transport in a CNT FET takes place when the length of the conducting channel is much smaller than the mean free path of the charge carrier, lm.
Ballistic conduction in Ohmic Contact FETs
Ohmic i.e. transparent contacts are most favorable for an optimized current flow in a FET.
In order to derive the current–voltage (I–V) characteristics for a ballistic CNT FET, one can start with Planck's postulate, which relates the energy of the i-th state to its frequency, Ei = hνi.
The total current for a many-state system is then the sum over the energy of each state multiplied by the occupation probability function, in this case the Fermi–Dirac statistics:
For a system with dense states, the discrete sum can be approximated by an integral:
In CNT FETs, the charge carriers move either left (negative velocity) or right (positive velocity) and the resulting net current is called drain current. The source potential controls the right-moving, and the drain potential - the left moving carriers and if the source potential is set to zero, the Fermi energy at the drain subsequently decreases to yield positive drain voltage. The total drain current is computed as a sum of all contributing subbands in the semiconductor CNT, but given the low voltages used with nanoscale electronics, higher subbands can be effectively ignored and the drain current is given only by the contribution of the first subband:
The prefactor of this expression is set by the quantum resistance, the inverse of the conductance quantum defined above. The resulting expression gives the ballistic current as a function of voltage in a CNT FET with ideal contacts.
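The derivation sketched above can be illustrated numerically with a generic single-subband Landauer-type integral weighted by Fermi–Dirac occupations. This is a hedged sketch rather than the article's exact expression: the subband degeneracy factor of 4, the band-edge position, the Fermi level and the drain voltage used below are all assumed example values.

```python
# Sketch of a ballistic, single-subband Landauer current:
#   I = (4e/h) * integral over E of [f_source(E) - f_drain(E)] dE,
# where f is the Fermi-Dirac function and the factor 4 is the assumed
# spin and valley degeneracy of a CNT subband.
import math

E_CHARGE = 1.602176634e-19   # C
PLANCK_H = 6.62607015e-34    # J s
KB_T = 0.0259                # eV, room temperature

def fermi(e_ev: float, mu_ev: float) -> float:
    return 1.0 / (1.0 + math.exp((e_ev - mu_ev) / KB_T))

def drain_current(e_sub: float, e_f: float, v_ds: float, n: int = 4000) -> float:
    """Ballistic current (A) carried by the first subband with edge e_sub (eV)."""
    e_max = e_sub + 1.0                       # integrate 1 eV above the band edge
    de = (e_max - e_sub) / n
    # source chemical potential e_f; drain lowered by the drain voltage v_ds
    total = sum(fermi(e_sub + (i + 0.5) * de, e_f)
                - fermi(e_sub + (i + 0.5) * de, e_f - v_ds)
                for i in range(n)) * de        # integral in eV
    return (4 * E_CHARGE / PLANCK_H) * total * E_CHARGE  # convert eV -> J

print(f"I_D ~ {drain_current(e_sub=0.0, e_f=0.1, v_ds=0.3)*1e6:.1f} uA")
```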
Ballistic conduction with Optical Phonon Scattering
Ideally, ballistic transport in CNT FETs requires no scattering from optical or acoustic phonons, however the analytical model yields only partial agreement with experimental data. Thus, one needs to consider a mechanism, which would improve the agreement and recalibrate the definition of ballistic conduction in CNTs. Partially ballistic transport is modeled to involve optical phonon scattering. Scattering of electrons by optical phonons in carbon nanotube channels has two requirements:
The traveled length in the conduction channel between source and drain has to be greater than the optical phonon mean free path
The electron energy has to be greater than the critical optical phonon emission energy
Schottky barrier Ballistic conduction
CNT FETs with Schottky contacts are easier to fabricate than those with ohmic contacts. In these transistors, the gate voltage controls the thickness of the barrier, and the drain voltage can lower the barrier height at the drain electrode. Quantum tunneling of the electrons through the barrier should also be taken into account here. In order to understand the charge conduction in Schottky barrier CNT FETs, we need to study the band schemes under different bias conditions (Fig 2):
the net current is a result of electrons tunneling from the source and holes tunneling from the drain
ON-state: electrons tunneling from the source
OFF-state: holes tunneling from the drain
Thus, the Schottky barrier CNT FET is effectively an ambipolar transistor, since the ON electron current is opposed by an OFF hole current, which flows at values smaller than the critical gate voltage value.
From the band diagrams, one can deduce the characteristics of Schottky-barrier CNT FETs. Starting in the OFF state, there is a hole current, which gradually decreases as the gate voltage is increased until it is opposed with equal strength by the electron current coming from the source. Above the critical gate voltage, in the ON state, the electron current prevails and grows with increasing gate voltage, so the curve roughly has a V-shape.
References
Carbon nanotubes
Quantum mechanics
Nanoelectronics
Charge carriers | Ballistic conduction in single-walled carbon nanotubes | [
"Physics",
"Materials_science"
] | 1,812 | [
"Physical phenomena",
"Charge carriers",
"Theoretical physics",
"Quantum mechanics",
"Electrical phenomena",
"Condensed matter physics",
"Nanoelectronics",
"Nanotechnology"
] |
31,314,192 | https://en.wikipedia.org/wiki/Spring%20horizon | A spring horizon or spring line is an impervious layer of rock reaching the surface, along which springs emerge. Since aquifers and impervious strata often lie on top of one another in horizontal layers, adjacent contact springs often emerge at the same height along a line called the spring horizon.
References
Springs (hydrology) | Spring horizon | [
"Environmental_science"
] | 68 | [
"Hydrology",
"Hydrology stubs",
"Springs (hydrology)"
] |
30,178,320 | https://en.wikipedia.org/wiki/Vertex%20of%20a%20representation | In mathematical finite group theory, the vertex of a representation of a finite group is a subgroup associated to it, that has a special representation called a source. Vertices and sources were introduced by .
References
Representation theory
Finite groups | Vertex of a representation | [
"Mathematics"
] | 45 | [
"Mathematical structures",
"Finite groups",
"Fields of abstract algebra",
"Algebraic structures",
"Representation theory"
] |
30,185,050 | https://en.wikipedia.org/wiki/Empty%20lattice%20approximation | The empty lattice approximation is a theoretical electronic band structure model in which the potential is periodic and weak (close to constant). One may also consider an empty irregular lattice, in which the potential is not even periodic. The empty lattice approximation describes a number of properties of energy dispersion relations of non-interacting free electrons that move through a crystal lattice. The energy of the electrons in the "empty lattice" is the same as the energy of free electrons. The model is useful because it clearly illustrates a number of the sometimes very complex features of energy dispersion relations in solids which are fundamental to all electronic band structures.
Scattering and periodicity
The periodic potential of the lattice in this free electron model must be weak because otherwise the electrons wouldn't be free. The strength of the scattering mainly depends on the geometry and topology of the system. Topologically defined parameters, like scattering cross sections, depend on the magnitude of the potential and the size of the potential well. For 1-, 2- and 3-dimensional spaces potential wells do always scatter waves, no matter how small their potentials are, what their signs are or how limited their sizes are. For a particle in a one-dimensional lattice, like the Kronig–Penney model, it is possible to calculate the band structure analytically by substituting the values for the potential, the lattice spacing and the size of potential well. For two and three-dimensional problems it is more difficult to calculate a band structure based on a similar model with a few parameters accurately. Nevertheless, the properties of the band structure can easily be approximated in most regions by perturbation methods.
In theory the lattice is infinitely large, so a weak periodic scattering potential will eventually be strong enough to reflect the wave. The scattering process results in the well-known Bragg reflections of electrons by the periodic potential of the crystal structure. This is the origin of the periodicity of the dispersion relation and the division of k-space into Brillouin zones. The periodic energy dispersion relation is expressed as:

En(k) = ħ²(k + Gn)²/(2m)

The Gn are the reciprocal lattice vectors to which the bands belong.
The figure on the right shows the dispersion relation for three periods in reciprocal space of a one-dimensional lattice with lattice cells of length a.
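The folding of free-electron parabolas described here can be reproduced with a few lines of code. The sketch below assumes an arbitrary lattice constant and a small number of reciprocal lattice vectors; it is illustrative only.

```python
# Empty-lattice bands in 1D: E_n(k) = hbar^2 (k + G_n)^2 / (2m),
# with G_n = 2*pi*n/a, evaluated for k inside the first Brillouin zone.
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def empty_lattice_bands(a=3e-10, n_bands=3, n_k=201):
    """Return k values (1/m) and an array of band energies (eV), one row per band."""
    k = np.linspace(-np.pi / a, np.pi / a, n_k)          # first Brillouin zone
    bands = []
    for n in range(-n_bands, n_bands + 1):
        G = 2 * np.pi * n / a                            # reciprocal lattice vector
        bands.append(HBAR**2 * (k + G) ** 2 / (2 * M_E) / EV)
    return k, np.array(bands)

k, bands = empty_lattice_bands()
print("lowest band energies at the zone boundary (eV):",
      np.round(np.sort(bands[:, -1])[:3], 2))
```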
The energy bands and the density of states
In a one-dimensional lattice the number of reciprocal lattice vectors that determine the bands in an energy interval is limited to two when the energy rises. In two- and three-dimensional lattices the number of reciprocal lattice vectors that determine the free electron bands increases more rapidly when the length of the wave vector increases and the energy rises. This is because the number of reciprocal lattice vectors that lie in an interval increases. The density of states in an energy interval depends on the number of states in an interval in reciprocal space and the slope of the dispersion relation E(k).
Though the lattice cells are not spherically symmetric, the dispersion relation still has spherical symmetry from the point of view of a fixed central point in a reciprocal lattice cell if the dispersion relation is extended outside the central Brillouin zone. The density of states in a three-dimensional lattice will be the same as in the case of the absence of a lattice. For the three-dimensional case the density of states is the free-electron result:

D(E) = (V/2π²)(2m/ħ²)^(3/2) √E

where V is the volume of the system.
In three-dimensional space the Brillouin zone boundaries are planes. The dispersion relations show conics of the free-electron energy dispersion parabolas for all possible reciprocal lattice vectors. This results in a very complicated set of intersecting curves when the dispersion relations are calculated, because there is a large number of possible angles between evaluation trajectories, first- and higher-order Brillouin zone boundaries and dispersion parabola intersection cones.
Second, third and higher Brillouin zones
"Free electrons" that move through the lattice of a solid with wave vectors far outside the first Brillouin zone are still reflected back into the first Brillouin zone. See the external links section for sites with examples and figures.
The nearly free electron model
In most simple metals, like aluminium, the screening effect strongly reduces the electric field of the ions in the solid. The electrostatic potential is expressed as a screened Coulomb (Yukawa-type) potential,

V(r) = −(Ze²/4πε₀r) e^(−qr)

where Z is the atomic number, e is the elementary unit charge, r is the distance to the nucleus of the embedded ion and q is a screening parameter that determines the range of the potential. The Fourier transform, V(G), of the lattice potential, V(r), is proportional to 1/(q² + G²). When the screening parameter q is large, the Fourier components of the potential become small and the values of the off-diagonal elements between the reciprocal lattice vectors in the Hamiltonian almost go to zero. As a result, the magnitude of the band gap collapses and the empty lattice approximation is obtained.
The electron bands of common metal crystals
Apart from a few exotic exceptions, metals crystallize in three kinds of crystal structures: the BCC and FCC cubic crystal structures and the hexagonal close-packed HCP crystal structure.
References
External links
Brillouin Zone simple lattice diagrams by Thayer Watkins
Brillouin Zone 3d lattice diagrams by Technion.
DoITPoMS Teaching and Learning Package- "Brillouin Zones"
Quantum models
Electronic band structures | Empty lattice approximation | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,033 | [
"Electron",
"Quantum mechanics",
"Quantum models",
"Electronic band structures",
"Condensed matter physics"
] |
30,188,279 | https://en.wikipedia.org/wiki/Paleostress | Paleostress is a term used in geology (specifically in the fields of structural geology and tectonics) to indicate mechanical stress that has affected rock formations in the geological past.
In practice, a paleostress tensor may be quantified based on the measurement of certain geological structures (e.g. faults), whose specific geometries and spatial organization are theoretically linked to the parameters of the tensor (see paleostress inversion). The latter are quantified through inversion of the structures measured in the field (or potentially on rock samples in the lab).
Paleostress is a subset of mechanical stress within geology. Variations in stress fields within the Earth's crust can result in a variety of mechanical responses:
Microscopic:
Crystal deformation, including twinning,
Pressure solution
Microfractures,
Aligned fluid inclusions.
Macroscopic:
Folding
Fracturing
Faulting (fracturing accompanied by offset of rock bodies on either side of the fracture surface)
Traditionally, deformations (either folding or fracturing—without dissolution) are collectively termed mechanical strain.
Both macroscopic and microscopic strain may be elastic, existing only as long as a differential stress exists, or inelastic, that is, the deformation due to a particular stress event remains even after the stress is removed. In the latter case of inelastic deformation, the stress field responsible for the deformation, if it can be inferred, is the paleostress. Anderson's classic analysis of faulting serves as a simple application of paleostress analysis in terms of principal components of stress.
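Anderson's scheme rests on the observation that at the Earth's free surface one principal stress (with σ1 ≥ σ2 ≥ σ3) must be approximately vertical, and the faulting regime follows from which one it is. The sketch below is a minimal illustration of that classification, not an inversion routine; the function name is hypothetical.

```python
# Andersonian fault regimes: with principal stresses s1 >= s2 >= s3,
# the regime depends on which principal stress is oriented vertically.
def anderson_regime(vertical: str) -> str:
    """vertical is 's1', 's2' or 's3', the principal stress that is vertical."""
    regimes = {
        "s1": "normal faulting (extension)",
        "s2": "strike-slip faulting",
        "s3": "reverse / thrust faulting (compression)",
    }
    return regimes[vertical]

for axis in ("s1", "s2", "s3"):
    print(axis, "vertical ->", anderson_regime(axis))
```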
Zoback and Zoback's (1986) synthesis of contemporary stress measurements in North America was subsequently expanded to a global study which continues as the World Stress Map project.
A number of regional studies of paleostresses has been undertaken, including Europe; North America; and Australia.
References
Aleksandrowski, P. 1985. Graphical determination of principal stress
Pascal, C., 2021. Paleostress Inversion Techniques: Methods and Applications for Tectonics, Elsevier, 400 p. https://www.elsevier.com/books/paleostress-inversion-techniques/pascal/978-0-12-811910-5
Sippel, J., 2009, The Paleostress History of the Central European Basin System, Scientific Technical Report STR09/06, Dissertation zur *Erlangung des akademischen Grades doctor rerum naturalium im Fachbereich Geowissenschaften an der Freien Universität Berlin, http://bib.gfz-potsdam.de/pub/str0906/0906.pdf.
Structural geology
Deformation (mechanics) | Paleostress | [
"Materials_science",
"Engineering"
] | 557 | [
"Deformation (mechanics)",
"Materials science"
] |
30,189,546 | https://en.wikipedia.org/wiki/Fluorescent%20chloride%20sensor | Fluorescent chloride sensors are used for chemical analysis. The discoveries of chloride (Cl−) participations in physiological processes stimulates the measurements of intracellular Cl− in live cells and the development of fluorescent tools referred below.
Quinoline-based dyes
Quinolinium-based Cl− indicators rely on the capability of halides to quench the fluorescence of heterocyclic organic compounds with quaternary nitrogen. Fluorescence is quenched by a collision mechanism following a linear Stern–Volmer relationship:

F₀/F = 1 + KSV[Cl−]

where:
F₀ is the fluorescence in the absence of halide
F is the fluorescence in the presence of halide
KSV is the Stern–Volmer quenching constant, so that the ratio F₀/F depends on the chloride concentration, [Cl−], in a linear manner.
Thus, quinoline-based indicators are one-wavelength dyes: the signal results from monitoring the fluorescence at a single wavelength, and ratiometric measurement of halide concentration is not possible with quinolinium dyes. The kinetics of collision quenching are diffusion-limited only, and these indicators provide submillisecond time resolution. Quinolinium-based dyes are insensitive to physiological changes in pH, but they are prone to strong bleaching and demand ultraviolet excitation, which is harmful for living organisms. Because quinolinium does not occur in cells naturally, cell loading is necessary. However, quinolinium-based dyes are not retained perfectly in the cell and cannot easily be targeted to subcellular organelles. Also, they cannot be made specific to a certain type of cell.
The most used quinolinium-based Cl− indicators are 6-methoxy-1-(3-sulfonatopropyl) quinolinium (SPQ), 6-methoxy-N-ethylquinolium Cl− (MEQ), and N-(6-methoxyquinolyl)-acetoethyl ester (MQAE).
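In practice, the Stern–Volmer relation above is inverted to turn a measured quenching ratio into a chloride concentration. A minimal sketch follows; the quenching constant used is an assumed example value, since real values are dye- and calibration-dependent.

```python
# Invert the Stern-Volmer relation F0/F = 1 + Ksv*[Cl-]
# to estimate the chloride concentration from measured intensities.
def chloride_from_quenching(f0: float, f: float, ksv_per_molar: float) -> float:
    """Return [Cl-] in mol/L from fluorescence without (f0) and with (f) halide."""
    return (f0 / f - 1.0) / ksv_per_molar

# Example with an assumed quenching constant of 118 M^-1
print(f"[Cl-] ~ {chloride_from_quenching(1000.0, 400.0, 118.0)*1e3:.0f} mM")
```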
YFP based Cl− sensors
Cl− indicators can be designed on the basis of endogenously expressed fluorescent proteins such as yellow fluorescent protein (YFP). An advantage of endogenously expressed probes over dye-based probes is their ability to achieve cell-type specificity through the choice of promoter. YFP-based indicators are mutated forms of green fluorescent protein (GFP). YFP contains four point mutations and has a red-shifted excitation and emission spectrum compared with GFP. YFP fluorescence is sensitive to various small anions, with relative potencies iodide > nitrate > chloride > bromide > formate > acetate. YFP sensitivity to these small anions results from ground-state binding near the chromophore, which apparently alters the chromophore ionization constant and hence the fluorescence emission. The fluorescence of YFP is sensitive to [Cl−] and pH. The effect is fully reversible.
YFP is excited in the visible range and is a genetically encoded probe. YFP-based Cl− sensors have rather slow kinetics of Cl− association/dissociation; the half-time association/dissociation constants for YFP mutants range from 50 ms (YFP-H148Q I152L) to 2 s (YFP-H148Q V163S). If a fluorescent indicator is based on one fluorescent protein only, it does not allow ratiometric measurements, hence the rationale for ratiometric fluorescent indicators.
FRET-based, genetically encoded Cl− indicators
Förster resonance energy transfer (FRET)-based Cl− indicators consist of two fluorescent proteins, cyan fluorescent protein (CFP) and YFP, connected via a polypeptide linker. This allows ratiometric Cl− measurements based on the Cl− sensitivity of YFP and the Cl− insensitivity of CFP. Clomeleon and Cl-Sensor are FRET-based Cl− indicators that allow ratiometric non-invasive monitoring of chloride activity in living cells.
Notes
References
Analytical chemistry
Chlorides
Fluorescent dyes | Fluorescent chloride sensor | [
"Chemistry"
] | 851 | [
"Chlorides",
"Inorganic compounds",
"nan",
"Salts"
] |
20,135,689 | https://en.wikipedia.org/wiki/Boilover | A boilover (or boil-over) is an extremely hazardous phenomenon in which a layer of water under a pool fire (e.g., an open-top tank fire) starts boiling, which results in a significant increase in fire intensity accompanied by violent expulsion of burning fluid to the surrounding areas. Boilover can only occur if the liquid fluid is a mixture of different chemical species with sufficiently diverse boiling points, although a so-called thin-layer boilover – a far less hazardous phenomenon – can arise from any water-immiscible liquid fuel. Crude oil, kerosene and some diesel oils are examples of fuels giving rise to boilover.
Boilovers at industrial scale are rare but can lead to serious plant damage. Given the sudden and not easily predictable onset of the phenomenon, fatalities can occur, especially among firefighters and bystanders that have not been made to leave the area.
Slopover and frothover are phenomena similar to boilover but distinct from it. A slopover occurs when pouring water over a liquid pool fire, which may result in sudden expulsion of blazing fluid as well as considerable flame growth if the fire is small, as is the case when dousing water over a chip pan fire. A frothover is a situation occurring when there is a layer of water under a layer of a viscous fuel that, although not on fire, is at higher temperature than the boiling point of water.
Features
The extreme violence of boilovers is due to the expansion of water from liquid to steam, which is by a factor of 1500 or more. In practical storage scenarios, the presence of water under the burning fluid is sometimes due to spurious accumulation during plant operation (e.g., rainwater entering a seam in the tank roof, off-specification products from the source, residual water from an oil reservoir, or humidity condensation) or as a consequence of attempts to extinguish the fire with water. A typical scenario for a tank fire that may eventually result in boilover is an initial confined explosion blowing off the tank roof.
Pure chemical species are not liable to boilover. In order for one to occur, the material must be a mixture of species with sufficiently diverse boiling points. Crude oil and some commercial hydrocarbon mixtures, such as kerosene and some diesel oils, are examples of such materials. The fact that these are stored in large atmospheric tanks in refineries, tank farms, power stations, etc. makes boilover a hazard of interest in terms of process safety. During a pool fire, a distillation process takes place in the fuel. Separation of light components from heavier ones occurs thanks to convective fluid motion. An intermediate fuel layer, called the hot zone or heat wave, is formed, which becomes progressively richer in higher-boiling-point species. Its temperature, as well as thickness, progressively increase. Its lower boundary moves downwards towards the fuel–water interface at a speed higher than the overall level of fuel decreases due to the fire burning it. As a result, when the hot zone reaches the water layer, a considerable amount of unburnt fuel may still be present above the water. Upon the water contacting the hot zone, some steam forms. The resulting turbulence promotes mixing of the water into the hot fuel. This can result in rapid water vaporization. The violent expansion of the steam bubbles will push out a significant part of the fuel above it, causing a violent overflow of flaming liquid. In these conditions water may be superheated, in which case part of it goes through an explosive boiling with homogeneous nucleation of steam. When this happens, the abruptness of the expansion further enhances the expulsion of blazing fuel. Typical hot-zone speeds are 0.3–0.5 meters per hour (1.0–1.7 ft/h), although speeds of up to 1.2 meters per hour (4.0 ft/h) have been recorded.
Apart from the presence of a water layer under the fuel, other conditions must be met for a hot-zone boilover to occur:
Since the upper fuel layers, including the hot zone, are at or near their boiling temperature, it is necessary for the boiling point of the fuel to be high enough, such that the hot zone temperature is higher than the water boiling temperature. Both the effect of the static head of fuel above the water and the fact that the hot zone composition is different from that of the initial fuel have to be considered. In general, boilover is possible if the fuel mean boiling point (calculated as a geometric mean of its lower and upper boiling points, i.e. the temperatures at which the mixture, respectively, starts to boil and is completely vaporized) is higher than :
As mentioned above, the composition of the fuel mixture must be sufficiently varied. It has been observed that the gap between and the higher value between and the boiling point of water at the fuel–water interface has to be higher than 60 °C (108 °F):Some sources indicate that the upper range of the boiling temperature has to be above :
The fuel viscosity must be sufficiently high to oppose the upwards movement of the steam bubbles. Otherwise, these may flow through the fuel without projecting it out of the blazing tank. Low viscosity may also make it difficult for a stable heavy-components hot zone to form, owing to more efficient natural convection. Thus, experiments on gasoline (kinematic viscosity ≈ 0.37 cSt) pool fires have shown that boilover does not occur. In general, the fuel kinematic viscosity has to be at least 0.73 cSt, which is the viscosity of kerosene.
The hazards posed by a hot-zone boilover are significant for several reasons. At industrial scale, hydrocarbon tanks can contain up to hundreds of thousands of barrels of fluid. If a boilover occurs, the amount of blazing liquid erupting from the tank can therefore be huge. Ejected blazing fluids can be thrown at high speed and attain distances well in excess of the limits of secondary containment bunding, often hundreds of meters or on the order of ten tank diameters downwind. Bunding, however, remains an important measure to reduce fire spread. Moreover, since boilover inception is sometimes unpredictable, either in terms of time to onset or whether it will occur at all (because the presence of water in the tank bottom may not be a known factor), the impact on the firefighters who have intervened to control the fire can be deadly. In some cases, bystanders were caught in the blaze and perished.
Tank fires that appear to be relatively stable may burst into massive boilovers several hours after the fire starts, as it occurred in the Tacoa disaster. Failure to appreciate the hazards posed by a water layer underneath the fuel has been a significant contributing cause to the aftermath of boilover accidents, in terms of human and material losses. Uncertainty surrounding the time to boilover onset adds unpredictability that further complicates the efforts of the firefighting services. Mathematical models for boilover have been developed that predict the time necessary for boilover to initiate, among other things.
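A very crude kinematic sketch of such a time-to-onset estimate can be built from the hot-zone and fuel-regression speeds quoted above. This is only an illustration of the idea, not one of the published models, and the tank figures below are assumptions.

```python
# Rough time-to-boilover estimate: the hot zone descends towards the water
# layer faster than the fuel surface regresses, and boilover can occur when
# it reaches the fuel-water interface.
def time_to_boilover_h(fuel_depth_m: float,
                       hot_zone_speed_m_per_h: float = 0.4,
                       regression_speed_m_per_h: float = 0.2) -> float:
    """Hours until the hot zone reaches the water layer (very crude estimate)."""
    if hot_zone_speed_m_per_h <= regression_speed_m_per_h:
        return float("inf")   # hot zone never outruns the burning surface
    # the estimate itself uses only the hot-zone descent speed
    return fuel_depth_m / hot_zone_speed_m_per_h

# Example: 8 m of fuel above the water layer, hot zone descending at 0.4 m/h
print(f"onset after roughly {time_to_boilover_h(8.0):.0f} hours")
```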
Notable accidents
The following are some notable accidents in which a standard, or hot-zone, boilover occurred:
20 January 1968, Shell refinery, Pernis, The Netherlands – Water emulsion and hot crude oil mixed and produced frothing, vapor release and boilover. The fire spread, destroying several refinery units and 80 tanks.
26 June 1971, Czechowice-Dziedzice oil refinery, Poland – A crude oil tank was hit by lightning, which caused a roof collapse and an open-top tank fire. After extended firefighting and a decrease in the fire intensity, boilover occurred, spewing flaming liquids a considerable distance away. A nearby tank exploded due to ignition of flammable vapors inside. Thirty-three people died.
19 December 1982, Ricardo Zuloaga thermal power plant in Tacoa, Vargas, Venezuela – In the Tacoa disaster more than 150 people, including journalists and bystanders not involved in fighting the fire, died when a massive boilover developed from a fuel oil tank. It is the worst tank fire ever occurred worldwide.
30 August 1983, Amoco oil refinery, Milford Haven, Wales – An open-top tank fire occurred at a crude storage tank. Filled with more than 46,000 tons of oil, the flaming storage tank experienced multiple boilovers, spreading the fire into the containment dyke. However, the fire did not propagate further. In all, 150 firefighters and 120 fire appliances were needed to tackle the blaze. While six firefighters were injured during the two-day fire, no one was killed.
Related phenomena
Thin-layer boilover
A thin-layer boilover occurs in one of two situations:
When the fuel layer is thin, such as in the case of spillage on a wet surface. In this case the boilover onset time is very short, typically about one minute.
When, regardless of the thickness of the fuel layer, distillation does not occur and a heat wave is not formed. In such a situation, for a boilover to occur, the fuel has to burn down until its warmer top layer reaches the fuel–water interface.
In a thin-layer boilover, the size of the flames increases upon boilover onset, and a characteristic crackling sound is produced. However, due to the little amount of fuel left, this phenomenon is far less hazardous than a standard boilover. The study of thin-layer boilover is of interest in the context of in-situ burning of oil spills over water.
Slopover
A slopover is a phenomenon similar to boilover, although distinct from it. It occurs when water is poured onto the fuel while a pool fire is occurring. If the fire is small enough, the water that instantly boils in contact with the fire or with the lower layers of blazing liquid (which are themselves not on fire but may be hotter than the water boiling point) can extend the flames, especially in the upwards direction.
In industrial-scale tank fires, there is no noticeable effect when water is applied to the fire, although water sinking to the bottom of the tank may contribute to a later boilover. However, at smaller scale, slopovers pose significant hazards. Trying to extinguish a chip pan or cooking oil fire with water, for example, causes slopover, which can harm people and spread the fire in the kitchen. Serious burn incidents have also occurred during Mid-Autumn Festival celebrations, where the practice of boiling candle wax and pouring water onto it for entertainment has become common.
Frothover
A frothover occurs when a water layer is present under a layer of a viscous oil that is not on fire and whose temperature is higher than the water boiling point. An example is hot asphalt loaded into a tank car containing some water. Although nothing may happen at first, water may eventually superheat and later start to boil violently, resulting in overflow.
Fire protection
Water is generally unsuitable for extinguishing liquid fires. In the context of boilovers and slopovers, the fuel is generally lighter than water. At industrial scale, this means that water applied to an open-top tank fire will sink to the bottom of the tank, which can cause boilover at a later stage. At small/domestic scale, assuming the water can find its way down through the fuel, use of water may cause the content of the vessel to spill over and spread the fire. If water does not sink efficiently to the bottom, then a violent slopover may occur. This makes water both inefficient as an extinguishing agent and potentially very hazardous.
Industrial-scale storage sites
Hot-zone boilovers of large tanks are relatively rare events. However, they can be extremely disruptive. Therefore, prevention and control are very important.
Boilover can be prevented by regularly checking for and draining water in the tank bottoms.
In terms of plant layout, intertank distances would have to exceed five tank diameters in order to prevent escalation to adjacent tanks. In most cases, it is not feasible to design for such an arrangement.
Open-top crude oil tank fires can be tackled using firefighting foam at rates of 10–12 L/(min × m2). However, it is not clear if these rates are adequate to minimize the potential for a boilover event, especially in cases where foam attack is initiated long after the inception of the tank fire. It has been suggested that foam firefighting should be started within 2–4 hours from ignition.
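As a simple illustration of what such application rates imply, the sketch below computes the foam-solution demand for a burning tank surface; the tank diameter and attack duration are assumed example values, not recommendations.

```python
# Foam solution demand for an open-top tank fire at a given application rate.
import math

def foam_demand_litres(tank_diameter_m: float,
                       rate_l_per_min_m2: float = 10.0,
                       duration_min: float = 60.0) -> float:
    """Total foam solution (L) to cover the burning fuel surface."""
    surface_area = math.pi * (tank_diameter_m / 2.0) ** 2
    return rate_l_per_min_m2 * surface_area * duration_min

# Example: 40 m diameter tank, 10 L/(min*m2), one hour of application
print(f"~{foam_demand_litres(40.0)/1000:.0f} m3 of foam solution")
```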
Thermal radiation during a boilover is considerably higher than during the pool fire that precedes it. Although the event is short-lived, emergency response activities, for which tenable levels of thermal radiations are typically 6.3 kW/m2, cannot be safely accomplished, so operations should take place from a safe distance.
Some approaches are available to assess the probability of and the proximity to boilover in tank fires. An estimation can be made a priori from the distillation curve and the properties of the fuel, with the aid of mathematical formulas, including the ones given above. However, this approach requires knowledge of the depth of the water layer at the bottom of the tank. Further, it does not consider the potential for a layer of water–fuel emulsion being present above the water. Progression of the hot zone can be monitored by using vertical strips of intumescent paint applied to the tank walls, or applying a water jet to the walls to assess at what height it starts boiling. Use of thermographic cameras or pyrometers has also been proposed. However, uncertainty regarding the presence and depth of a water or a water–fuel emulsion layer remains, and unpredictability about boilover onset cannot be completely dispelled. Draining the product from the tank may reduce accidental consequences, because less fluid would be subject to boilover. However, pumping out product may also reduce the time to boilover onset.
See also
Phreatic eruption, a similar concept in volcanic eruption.
Explanatory notes
References
Sources
External links
Types of fire
Petroleum production
Process safety | Boilover | [
"Chemistry",
"Engineering"
] | 2,898 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
20,137,770 | https://en.wikipedia.org/wiki/X-bracing | X-bracing is a structural engineering practice where the lateral load on a building is reduced by transferring the load into the exterior columns.
X-bracing was used in the construction of the 1908 Singer Building, then the tallest building in the world.
Some skyscrapers by engineer Fazlur Khan, such as the 1969 John Hancock Center, have a distinctive X-bracing exterior, allowing for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires.
References
Structural engineering | X-bracing | [
"Engineering"
] | 110 | [
"Construction",
"Civil engineering",
"Structural engineering"
] |
20,142,316 | https://en.wikipedia.org/wiki/Extrasolar%20Planets%20Encyclopaedia | The Extrasolar Planets Encyclopaedia (also known as Encyclopaedia of exoplanetary systems and Catalogue of Exoplanets) is an astronomy website, founded in Paris, France at the Meudon Observatory by Jean Schneider in February 1995, which maintains a database of all the currently known and candidate extrasolar planets, with individual pages for each planet and a full list interactive catalog spreadsheet. The main catalogue comprises databases of all of the currently confirmed extrasolar planets as well as a database of unconfirmed planet detections. The databases are frequently updated with new data from peer-reviewed publications and conferences.
In their respective pages, the planets are listed along with their basic properties, including the year of planet's discovery, mass, radius, orbital period, semi-major axis, eccentricity, inclination, longitude of periastron, time of periastron, maximum time variation, and time of transit, including all error range values.
The individual planet data pages also contain the data on the parent star, including name, distance in parsecs, spectral type, effective temperature, apparent magnitude, mass, radius, age, and celestial coordinates (Right Ascension and Declination). Even when they are known, not all of these figures are listed in the interactive spreadsheet catalog, and many missing planet figures that would simply require the application of Kepler's third law of planetary motion are left blank. Most notably absent on all pages is the star's luminosity.
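As an illustration of the kind of figure that can be filled in from Kepler's third law, the sketch below computes a planet's semi-major axis from its orbital period and the host star's mass, using solar units (a in AU, P in years, mass in solar masses, planetary mass neglected). The numerical inputs are illustrative and not taken from any particular catalogue entry.

```python
def semi_major_axis_au(period_days, star_mass_msun):
    """Kepler's third law in solar units: a^3 = M * P^2 (planet mass neglected)."""
    period_years = period_days / 365.25
    return (star_mass_msun * period_years ** 2) ** (1.0 / 3.0)

print(semi_major_axis_au(365.25, 1.0))  # Earth-like check: 1.0 AU
print(semi_major_axis_au(3.5, 1.1))     # a hot-Jupiter-like orbit: roughly 0.047 AU
```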
As of June 2011, the catalog includes objects up to 25 Jupiter masses, an increase on the previous inclusion criteria of 20 Jupiter masses.
As of 2016 this limit was increased to 60 Jupiter masses based on a study of mass–density relationships.
See also
NASA Exoplanet Archive
Exoplanet Data Explorer
References
External links
Astronomy websites
Exoplanet catalogues
Astronomical databases
20th-century encyclopedias
French online encyclopedias
Paris Observatory
Creative Commons-licensed websites
Creative Commons-licensed databases | Extrasolar Planets Encyclopaedia | [
"Astronomy"
] | 406 | [
"Astronomical databases",
"Works about astronomy",
"Astronomy websites"
] |
20,144,342 | https://en.wikipedia.org/wiki/Bacterial%20anaerobic%20corrosion | Bacterial anaerobic corrosion is the bacterially-induced oxidation of metals. Corrosion of metals typically alters the metal to a form that is more stable. Thus, bacterial anaerobic corrosion typically occurs in conditions favorable to the corrosion of the underlying substrate. In humid, anoxic conditions the corrosion of metals occurs as a result of a redox reaction. This redox reaction generates molecular hydrogen from local hydrogen ions. Conversely, anaerobic corrosion occurs spontaneously. Anaerobic corrosion primarily occurs on metallic substrates but may also occur on concrete.
Details
Bacterial anaerobic corrosion typically impacts metallic substrates but may also occur in concrete. Corrosion of concrete leads to considerable losses in industrial settings; structural degradation is well documented in concrete wastewater infrastructure where wastewater is collected or treated. Similarly, biofilms are important for bacterial anaerobic corrosion of metals in wastewater pipes.
For bacterial anaerobic corrosion there is general corrosion of substrates as well as another form of corrosion known as pitting. In both general and pitting corrosion, the breakdown process occurs in aqueous conditions. Bacteria tend to form biofilms as their primary means of corroding metals, with different bacteria dominating across different settings. In municipal wastewater, Desulfovibrio desulfuricans is the main contributor to corrosion.
Chemistry
A base metal, such as iron (Fe) goes into aqueous solution as positively charged cation, Fe2+. As the metal is oxidized under anaerobic conditions by the protons of water, H+ ions are reduced to form molecular H2. This can be written in the following ways under acidic and neutral conditions respectively:
Fe + 2 H+ → Fe2+ + H2
Fe + 2 H2O → Fe(OH)2 + H2
Usually, a thin film of molecular hydrogen forms on the metal. Sulfate-reducing bacteria oxidize the molecular hydrogen to produce hydrogen sulfide ions (HS−) and water:
4 H2 + SO42− → HS− + 3 H2O + OH−
The iron ions partly precipitate to form iron(II) sulfide. Another reaction occurs between the iron(II) ions and water, producing iron(II) hydroxide.
Fe2+ + HS− → FeS + H+
3 Fe2+ + 6 H2O → 3 Fe(OH)2 + 6 H+
The net equation comes to:
4 Fe + SO42− + H+ + 3 H2O → FeS + 3 Fe(OH)2 + OH−
In this way, corrosion driven by sulfate-reducing bacteria can be far more harmful than purely abiotic anaerobic corrosion.
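The net equation can be verified mechanically by summing atoms and charge on each side. The short Python sketch below (not part of the original article; species and coefficients are transcribed from the equations above) performs that bookkeeping.

```python
from collections import Counter

# 4 Fe + SO4^2- + H+ + 3 H2O -> FeS + 3 Fe(OH)2 + OH-
# Each species is written as (element counts, charge, stoichiometric coefficient).
left = [({"Fe": 1}, 0, 4), ({"S": 1, "O": 4}, -2, 1),
        ({"H": 1}, +1, 1), ({"H": 2, "O": 1}, 0, 3)]
right = [({"Fe": 1, "S": 1}, 0, 1), ({"Fe": 1, "O": 2, "H": 2}, 0, 3),
         ({"O": 1, "H": 1}, -1, 1)]

def totals(side):
    atoms, charge = Counter(), 0
    for elements, q, n in side:
        charge += q * n
        for symbol, count in elements.items():
            atoms[symbol] += count * n
    return dict(atoms), charge

# Both sides give Fe:4, S:1, O:7, H:7 with net charge -1, so the equation balances.
print(totals(left))
print(totals(right))
```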
Biofilms and Bacterial Anaerobic Corrosion
Biofilms formed from diverse microbial communities have varying effects on local corrosion. For instance, a biofilm sampled from a pipe accelerated corrosion of the pipe during its first week of growth, yet by the end of a month the same biofilm began to act as a protective layer for the pipe. Variation between corrosion in similar environments might be attributed to the local bacterial communities. Biofilms further mediate corrosion by altering the electrochemical processes at the interface of the underlying substrate.
See also
Corrosion
Microbial corrosion
Methanogen
Denitrifying Bacteria
References
Bacteria
Biochemical reactions
Corrosion | Bacterial anaerobic corrosion | [
"Chemistry",
"Materials_science",
"Biology"
] | 678 | [
"Metallurgy",
"Prokaryotes",
"Corrosion",
"Biochemical reactions",
"Electrochemistry",
"Bacteria",
"Biochemistry",
"Materials degradation",
"Microorganisms"
] |
20,144,524 | https://en.wikipedia.org/wiki/Anaerobic%20corrosion | Anaerobic corrosion (also known as hydrogen corrosion) is a form of metal corrosion occurring in anoxic water. Typically following aerobic corrosion, anaerobic corrosion involves a redox reaction that reduces hydrogen ions and oxidizes a solid metal. This process can occur in either abiotic conditions through a thermodynamically spontaneous reaction or biotic conditions through a process known as bacterial anaerobic corrosion. Along with other forms of corrosion, anaerobic corrosion is significant when considering the safe, permanent storage of chemical waste.
Chemical mechanisms
The overall process of corrosion can be represented by a bimodal function, where the type of corrosion varies with time, including both oxygen-driven and anaerobic mechanisms. The dominant process will depend on the given conditions. During oxygen-driven corrosion, layers of rust form, creating various non-homogenous anoxic niches throughout the metal's surface. Within the niches the diffusion of oxygen is inhibited, leading to the ideal conditions for anaerobic corrosion to occur.
Abiotic
Under anoxic conditions, the mechanism for corrosion requires a substitute for oxygen as the oxidizing agent in the redox reaction. For abiotic anaerobic corrosion, that substitute is the hydrogen ion produced in the dissociation of water, with the subsequent reduction of the hydrogen ions into diatomic hydrogen gas. The anodic half-reaction involves the oxidation of a metal in aqueous solution into a metal hydroxide. A common reaction that represents this process is the transformation of solid iron in steel into ferrous hydroxide, as visualized in the following overall redox reaction:
Fe + 2 H2O → Fe(OH)2 + H2
The ferrous hydroxide may be oxidized further by additional hydrogen ions in water to form the mineral magnetite (Fe3O4) in the process called the Schikorr reaction:
3 Fe(OH)2 → Fe3O4 + H2 + 2 H2O
In general, the anaerobic corrosion of metals such as iron and copper occurs at very slow rates. However, in chloride-containing aqueous environments the rate increases, because the chloride anions introduce additional corrosion mechanisms.
Biotic
When in biotic conditions, anaerobic corrosion can be facilitated by the metabolic activity of microorganisms in the surrounding environment. This process is known as microbiologically-influenced corrosion or bacterial anaerobic corrosion. Most notably, the production of dissolved sulfides by sulfate-reducing bacteria (SRB) react with solid metals and hydrogen ions to form metal sulfides in a redox reaction.
Environmental significance
The effects of anaerobic corrosion are evident when evaluating the safety of chemical waste disposal. Currently, nuclear waste is commonly disposed of permanently in deep geological repositories (DGRs) that use copper coatings to prevent metal corrosion. In a DGR, four major types of corrosion are expected to occur: oxygen-driven, radiation-influenced, anaerobic, and microbiologically-influenced corrosion. Of these, microbiologically-influenced corrosion is the most notable in terms of the magnitude of corrosion. The ability of microorganisms such as SRB to survive in a wide range of environments also lends to their relevance when considering the threat of corrosion to permanent chemical waste disposal.
See also
Corrosion
Bacterial anaerobic corrosion
Electrochemistry
Sulfate-reducing microorganism
Redox reaction
References
Corrosion
Hydrogen production | Anaerobic corrosion | [
"Chemistry",
"Materials_science"
] | 691 | [
"Materials degradation",
"Electrochemistry",
"Metallurgy",
"Corrosion"
] |
3,723,978 | https://en.wikipedia.org/wiki/Physical%20Chemistry%20Chemical%20Physics | Physical Chemistry Chemical Physics is a weekly peer-reviewed scientific journal publishing research and review articles on any aspect of physical chemistry, chemical physics, and biophysical chemistry. It is published by the Royal Society of Chemistry on behalf of eighteen participating societies. The editor-in-chief is Anouk Rijs, (Vrije Universiteit Amsterdam).
The journal was established in 1999 as the result of a merger between Faraday Transactions and a number of other physical chemistry journals published by different societies.
Owner societies
The journal is run by an Ownership Board, on which all the member societies have equal representation. The eighteen participating societies are:
Canadian Society for Chemistry
Deutsche Bunsen-Gesellschaft für Physikalische Chemie (Germany)
Institute of Chemistry of Ireland
Israel Chemical Society
Kemian Seurat (Finland)
Kemisk Forening (Denmark)
Koninklijke Nederlandse Chemische Vereniging (Netherlands)
Korean Chemical Society
New Zealand Institute of Chemistry
Norsk Kjemisk Selskap (Norway)
Polskie Towarzystwo Chemiczne (Poland)
Real Sociedad Española de Química (Spain)
Royal Australian Chemical Institute
Royal Society of Chemistry (United Kingdom)
Società Chimica Italiana (Italy)
Svenska Kemisamfundet (Sweden)
Swiss Chemical Society
Türkiye Kimya Dernegi (Turkey)
Article types
The journal publishes the following types of articles
Research Papers, original scientific work that has not been published previously
Communications, original scientific work that has not been published previously and is of an urgent nature
Perspectives, review articles of interest to a broad readership which are commissioned by the editorial board
Comments, a medium for the discussion and exchange of scientific opinions, normally concerning material previously published in the journal
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2021 impact factor of 3.945.
See also
List of scientific journals in chemistry
Annual Reports on the Progress of Chemistry Section C
References
External links
Physical chemistry journals
Physics education in the United Kingdom
Academic journals established in 1999
Royal Society of Chemistry academic journals
Weekly journals
English-language journals
Chemical physics journals | Physical Chemistry Chemical Physics | [
"Chemistry"
] | 447 | [
"Physical chemistry journals",
"Chemical physics journals"
] |
3,728,000 | https://en.wikipedia.org/wiki/Hidden%20subgroup%20problem | The hidden subgroup problem (HSP) is a topic of research in mathematics and theoretical computer science. The framework captures problems such as factoring, discrete logarithm, graph isomorphism, and the shortest vector problem. This makes it especially important in the theory of quantum computing because Shor's algorithms for factoring and finding discrete logarithms in quantum computing are instances of the hidden subgroup problem for finite abelian groups, while the other problems correspond to finite groups that are not abelian.
Problem statement
Given a group G, a subgroup H ≤ G, and a set X, we say a function f : G → X hides the subgroup H if for all g1, g2 ∈ G, f(g1) = f(g2) if and only if g1H = g2H. Equivalently, f is constant on each coset of H, while it is different between the different cosets of H.
Hidden subgroup problem: Let G be a group, X a finite set, and f : G → X a function that hides a subgroup H ≤ G. The function f is given via an oracle, which uses O(log |G| + log |X|) bits. Using information gained from evaluations of f via its oracle, determine a generating set for H.
A special case is when X is a group and f is a group homomorphism, in which case H corresponds to the kernel of f.
Motivation
The hidden subgroup problem is especially important in the theory of quantum computing for the following reasons.
Shor's algorithm for factoring and for finding discrete logarithms (as well as several of its extensions) relies on the ability of quantum computers to solve the HSP for finite abelian groups.
The existence of efficient quantum algorithms for HSPs for certain non-abelian groups would imply efficient quantum algorithms for two major problems: the graph isomorphism problem and certain shortest vector problems (SVPs) in lattices. More precisely, an efficient quantum algorithm for the HSP for the symmetric group would give a quantum algorithm for the graph isomorphism problem. An efficient quantum algorithm for the HSP for the dihedral group would give a quantum algorithm for the unique SVP.
Quantum algorithms
There is an efficient quantum algorithm for solving the HSP over finite abelian groups in time polynomial in log |G|. For arbitrary groups, it is known that the hidden subgroup problem is solvable using a polynomial number of evaluations of the oracle. However, the circuits that implement this may be exponential in log |G|, making the algorithm not efficient overall; efficient algorithms must be polynomial in the number of oracle evaluations and running time. The existence of such an algorithm for arbitrary groups is open. Quantum polynomial time algorithms exist for certain subclasses of groups, such as semi-direct products of some abelian groups.
Algorithm for abelian groups
The algorithm for abelian groups uses representations, i.e. homomorphisms from G to GLk(ℂ), the general linear group over the complex numbers. A representation is irreducible if it cannot be expressed as the direct sum of two or more representations of G. For an abelian group, all the irreducible representations are the characters, which are the representations of dimension one; there are no irreducible representations of larger dimension for abelian groups.
Defining the quantum Fourier transform
The quantum Fourier transform can be defined in terms of ℤN, the additive cyclic group of order N. Introducing the character χj(k) = ωN^(j·k), where ωN = e^(2πi/N), the quantum Fourier transform has the definition F|j⟩ = (1/√N) Σk∈ℤN χj(k) |k⟩. Furthermore, we define |χj⟩ = F|j⟩. Any finite abelian group can be written as the direct product of multiple cyclic groups ℤN1 × ℤN2 × ⋯ × ℤNm. On a quantum computer, this is represented as the tensor product of multiple registers of dimensions N1, N2, …, Nm respectively, and the overall quantum Fourier transform is FN1 ⊗ FN2 ⊗ ⋯ ⊗ FNm.
Procedure
The set of characters of G forms a group called the dual group of G. We also have a subgroup H⊥ of size |G|/|H| defined by H⊥ = {g ∈ G : χg(h) = 1 for all h ∈ H}. For each iteration of the algorithm, the quantum circuit outputs an element g corresponding to a character χg with g ∈ H⊥, and since χg(h) = 1 for all h ∈ H, each such element helps to pin down what H is.
The algorithm is as follows:
Start with the state |0⟩|0⟩, where the left register's basis states are the elements of G, and the right register's basis states are the elements of X.
Create a superposition among the basis states of G in the left register, leaving the state (1/√|G|) Σg∈G |g⟩|0⟩.
Query the function f. The state afterwards is (1/√|G|) Σg∈G |g⟩|f(g)⟩.
Measure the output register. This gives some value f(g0) for some g0 ∈ G, and collapses the state to (1/√|H|) Σh∈H |g0h⟩|f(g0)⟩ because f has the same value for each element of the coset g0H. We discard the output register to get (1/√|H|) Σh∈H |g0h⟩.
Perform the quantum Fourier transform, getting the state (1/√(|G||H|)) Σg∈G (Σh∈H χg(g0h)) |g⟩.
This state is equal to (1/√|H⊥|) Σg∈H⊥ χg(g0) |g⟩, which can be measured to learn information about H⊥ and hence about H.
Repeat until H (or a generating set for H) is determined.
The state in step 5 is equal to the state in step 6 because of the following: (1/√(|G||H|)) Σg∈G (Σh∈H χg(g0h)) |g⟩ = (1/√(|G||H|)) Σg∈G χg(g0) (Σh∈H χg(h)) |g⟩ = (|H|/√(|G||H|)) Σg∈H⊥ χg(g0) |g⟩ = (1/√|H⊥|) Σg∈H⊥ χg(g0) |g⟩, where the first step uses the homomorphism property χg(g0h) = χg(g0)χg(h). For the second equality, we use the following identity: Σh∈H χg(h) = |H| if g ∈ H⊥, and 0 otherwise.
Each measurement of the final state will result in some information gained about H, since we know that χg(h) = 1 for all h ∈ H whenever g ∈ H⊥. H (or a generating set for H) will be found after a polynomial number of measurements. The size of a generating set will be logarithmically small compared to the size of G. Let T denote a generating set for H, meaning ⟨T⟩ = H. The size of the subgroup generated by T at least doubles whenever an element t outside ⟨T⟩ is added to it, because ⟨T⟩ and t⟨T⟩ are disjoint and because ⟨T ∪ {t}⟩ ⊇ ⟨T⟩ ∪ t⟨T⟩. Therefore, the size of a generating set for H satisfies |T| ≤ log2|H| ≤ log2|G|. Thus a generating set for H will be able to be obtained in polynomial time even if G is exponential in size.
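As a concrete illustration of the procedure, the sketch below classically simulates Fourier sampling for the cyclic case G = ℤN with hidden subgroup H = {0, r, 2r, …} (r dividing N), hidden by f(x) = x mod r. This is a small classical simulation for illustration only, not an efficient quantum implementation; the measured indices correspond to elements of H⊥ (the multiples of N/r), from which H is then recovered.

```python
import numpy as np
from math import gcd
from functools import reduce

def fourier_sample(N, r, rng):
    """One run of the abelian HSP procedure on Z_N with H = {0, r, 2r, ...},
    hidden by f(x) = x mod r (r must divide N). Returns the measured index."""
    assert N % r == 0
    # Steps 1-4: measuring f(x) collapses the first register to a random coset x0 + H.
    x0 = rng.integers(r)
    coset = np.arange(x0, N, r)
    state = np.zeros(N, dtype=complex)
    state[coset] = 1.0 / np.sqrt(len(coset))
    # Step 5: quantum Fourier transform over Z_N, then measure the register.
    x, k = np.arange(N), np.arange(N)
    qft_state = state @ np.exp(2j * np.pi * np.outer(x, k) / N) / np.sqrt(N)
    probs = np.abs(qft_state) ** 2
    return int(rng.choice(N, p=probs / probs.sum()))

rng = np.random.default_rng(0)
N, r = 24, 6                      # hidden subgroup H = {0, 6, 12, 18}; H^perp = multiples of 4
samples = [fourier_sample(N, r, rng) for _ in range(12)]
step = reduce(gcd, samples, N)    # generically N/r = 4; more samples sharpen the estimate
print(samples, "-> recovered H:", list(range(0, N, N // step)))
```

Every measured value lands on a multiple of N/r, and their greatest common divisor with N identifies H⊥, hence H, mirroring how Shor's period-finding routine extracts the period.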
Instances
Many algorithms where quantum speedups occur in quantum computing are instances of the hidden subgroup problem. The following list outlines important instances of the HSP, and whether or not they are solvable.
See also
Hidden shift problem
Pontryagin duality
References
External links
Richard Jozsa: Quantum factoring, discrete logarithms and the hidden subgroup problem
Chris Lomont: The Hidden Subgroup Problem - Review and Open Problems
Hidden subgroup problem on arxiv.org
Complete implementation of Shor's algorithm for finding discrete logarithms with Classiq
Group theory
Quantum algorithms
Quantum computing
Theoretical computer science | Hidden subgroup problem | [
"Mathematics"
] | 1,157 | [
"Group theory",
"Applied mathematics",
"Fields of abstract algebra",
"Theoretical computer science"
] |
3,728,323 | https://en.wikipedia.org/wiki/Type%20Ia%20supernova | A Type Ia supernova (read: "type one-A") is a type of supernova that occurs in binary systems (two stars orbiting one another) in which one of the stars is a white dwarf. The other star can be anything from a giant star to an even smaller white dwarf.
Physically, carbon–oxygen white dwarfs with a low rate of rotation are limited to below 1.44 solar masses. Beyond this "critical mass", they reignite and in some cases trigger a supernova explosion; this critical mass is often referred to as the Chandrasekhar mass, but is marginally different from the absolute Chandrasekhar limit, where electron degeneracy pressure is unable to prevent catastrophic collapse. If a white dwarf gradually accretes mass from a binary companion, or merges with a second white dwarf, the general hypothesis is that a white dwarf's core will reach the ignition temperature for carbon fusion as it approaches the Chandrasekhar mass. Within a few seconds of initiation of nuclear fusion, a substantial fraction of the matter in the white dwarf undergoes a runaway reaction, releasing enough energy to unbind the star in a supernova explosion.
The Type Ia category of supernova produces a fairly consistent peak luminosity because of the fixed critical mass at which a white dwarf will explode. Their consistent peak luminosity allows these explosions to be used as standard candles to measure the distance to their host galaxies: the visual magnitude of a type Ia supernova, as observed from Earth, indicates its distance from Earth.
Consensus model
The Type Ia supernova is a subcategory in the Minkowski–Zwicky supernova classification scheme, which was devised by German-American astronomer Rudolph Minkowski and Swiss astronomer Fritz Zwicky. There are several means by which a supernova of this type can form, but they share a common underlying mechanism. Theoretical astronomers long believed the progenitor star for this type of supernova is a white dwarf, and empirical evidence for this was found in 2014 when a Type Ia supernova was observed in the galaxy Messier 82. When a slowly-rotating carbon–oxygen white dwarf accretes matter from a companion, it can exceed the Chandrasekhar limit of about 1.44 solar masses, beyond which it can no longer support its weight with electron degeneracy pressure. In the absence of a countervailing process, the white dwarf would collapse to form a neutron star, in an accretion-induced non-ejective process, as normally occurs in the case of a white dwarf that is primarily composed of magnesium, neon, and oxygen.
The current view among astronomers who model Type Ia supernova explosions, however, is that this limit is never actually attained and collapse is never initiated. Instead, the increase in pressure and density due to the increasing weight raises the temperature of the core, and as the white dwarf approaches about 99% of the limit, a period of convection ensues, lasting approximately 1,000 years. At some point in this simmering phase, a deflagration flame front is born, powered by carbon fusion. The details of the ignition are still unknown, including the location and number of points where the flame begins. Oxygen fusion is initiated shortly thereafter, but this fuel is not consumed as completely as carbon.
Once fusion begins, the temperature of the white dwarf increases. A main sequence star supported by thermal pressure can expand and cool, which automatically regulates the increase in thermal energy. However, degeneracy pressure is independent of temperature; white dwarfs are unable to regulate temperature in the manner of normal stars, so they are vulnerable to runaway fusion reactions. The flame accelerates dramatically, in part due to the Rayleigh–Taylor instability and interactions with turbulence. It is still a matter of considerable debate whether this flame transforms into a supersonic detonation from a subsonic deflagration.
Regardless of the exact details of how the supernova ignites, it is generally accepted that a substantial fraction of the carbon and oxygen in the white dwarf fuses into heavier elements within a period of only a few seconds, with the accompanying release of energy increasing the internal temperature to billions of degrees. The energy released is more than sufficient to unbind the star; that is, the individual particles making up the white dwarf gain enough kinetic energy to fly apart from each other. The star explodes violently and releases a shock wave in which matter is typically ejected at speeds on the order of 18,000 km/s, roughly 6% of the speed of light. The energy released in the explosion also causes an extreme increase in luminosity. The typical visual absolute magnitude of Type Ia supernovae is Mv = −19.3 (about 5 billion times brighter than the Sun), with little variation. The Type Ia supernova leaves no compact remnant, but the whole mass of the former white dwarf dissipates through space.
The theory of this type of supernova is similar to that of novae, in which a white dwarf accretes matter more slowly and does not approach the Chandrasekhar limit. In the case of a nova, the infalling matter causes a hydrogen fusion surface explosion that does not disrupt the star.
Type Ia supernovae differ from Type II supernovae, which are caused by the cataclysmic explosion of the outer layers of a massive star as its core collapses, powered by release of gravitational potential energy via neutrino emission.
Formation
Single degenerate progenitors
One model for the formation of this category of supernova is a close binary star system. The progenitor binary system consists of main sequence stars, with the primary possessing more mass than the secondary. Being greater in mass, the primary is the first of the pair to evolve onto the asymptotic giant branch, where the star's envelope expands considerably. If the two stars share a common envelope then the system can lose significant amounts of mass, reducing the angular momentum, orbital radius and period. After the primary has degenerated into a white dwarf, the secondary star later evolves into a red giant and the stage is set for mass accretion onto the primary. During this final shared-envelope phase, the two stars spiral in closer together as angular momentum is lost. The resulting orbit can have a period as brief as a few hours. If the accretion continues long enough, the white dwarf may eventually approach the Chandrasekhar limit.
The white dwarf companion could also accrete matter from other types of companions, including a subgiant or (if the orbit is sufficiently close) even a main sequence star. The actual evolutionary process during this accretion stage remains uncertain, as it can depend both on the rate of accretion and the transfer of angular momentum to the white dwarf companion.
It has been estimated that single degenerate progenitors account for no more than 20% of all Type Ia supernovae.
Double degenerate progenitors
A second possible mechanism for triggering a Type Ia supernova is the merger of two white dwarfs whose combined mass exceeds the Chandrasekhar limit. The resulting merger is called a super-Chandrasekhar mass white dwarf. In such a case, the total mass would not be constrained by the Chandrasekhar limit.
Collisions of solitary stars within the Milky Way are exceedingly rare, occurring far less frequently than the appearance of novae. Collisions occur with greater frequency in the dense core regions of globular clusters (cf. blue stragglers). A likely scenario is a collision with a binary star system, or between two binary systems containing white dwarfs. This collision can leave behind a close binary system of two white dwarfs. Their orbit decays and they merge through their shared envelope. A study based on SDSS spectra found 15 double systems among the 4,000 white dwarfs tested, implying a double white dwarf merger every 100 years in the Milky Way: this rate matches the number of Type Ia supernovae detected in our neighborhood.
A double degenerate scenario is one of several explanations proposed for the anomalously massive progenitor of SN 2003fg. It is the only possible explanation for SNR 0509-67.5, as all possible models with only one white dwarf have been ruled out. It has also been strongly suggested for SN 1006, given that no companion star remnant has been found there. Observations made with NASA's Swift space telescope ruled out existing supergiant or giant companion stars of every Type Ia supernova studied. The supergiant companion's blown-out outer shell should emit X-rays, but this glow was not detected by Swift's XRT (X-ray telescope) in the 53 closest supernova remnants. For 12 Type Ia supernovae observed within 10 days of the explosion, the satellite's UVOT (ultraviolet/optical telescope) showed no ultraviolet radiation originating from the heated companion star's surface hit by the supernova shock wave, meaning there were no red giants or larger stars orbiting those supernova progenitors. In the case of SN 2011fe, the companion star must have been smaller than the Sun, if it existed. The Chandra X-ray Observatory revealed that the X-ray radiation of five elliptical galaxies and the bulge of the Andromeda Galaxy is 30–50 times fainter than expected. X-ray radiation should be emitted by the accretion discs of Type Ia supernova progenitors. The missing radiation indicates that few white dwarfs possess accretion discs, ruling out the common, accretion-based model of Ia supernovae. Inward-spiraling white dwarf pairs are strongly-inferred candidate sources of gravitational waves, although they have not been directly observed.
Double degenerate scenarios raise questions about the applicability of Type Ia supernovae as standard candles, since total mass of the two merging white dwarfs varies significantly, meaning luminosity also varies.
Type Iax
It has been proposed that a group of sub-luminous supernovae should be classified as Type Iax. This type of supernova may not always completely destroy the white dwarf progenitor, but instead leave behind a zombie star. Known examples of type Iax supernovae include: the historical supernova SN 1181, SN 1991T, SN 1991bg, SN 2002cx, and SN 2012Z.
The supernova SN 1181 is believed to be associated with the supernova remnant Pa 30 and its central star IRAS 00500+6713, which is the result of a merger of a CO white dwarf and an ONe white dwarf. This makes Pa 30 and IRAS 00500+6713 the only SN Iax remnant in the Milky Way.
Observation
Unlike the other types of supernovae, Type Ia supernovae generally occur in all types of galaxies, including ellipticals. They show no preference for regions of current stellar formation. As white dwarf stars form at the end of a star's main sequence evolutionary period, such a long-lived star system may have wandered far from the region where it originally formed. Thereafter a close binary system may spend another million years in the mass transfer stage (possibly forming persistent nova outbursts) before the conditions are ripe for a Type Ia supernova to occur.
A long-standing problem in astronomy has been the identification of supernova progenitors. Direct observation of a progenitor would provide useful constraints on supernova models. As of 2006, the search for such a progenitor had been ongoing for longer than a century. Observation of the supernova SN 2011fe has provided useful constraints. Previous observations with the Hubble Space Telescope did not show a star at the position of the event, thereby excluding a red giant as the source. The expanding plasma from the explosion was found to contain carbon and oxygen, making it likely the progenitor was a white dwarf primarily composed of these elements.
Similarly, observations of the nearby SN PTF 11kx, discovered January 16, 2011 (UT) by the Palomar Transient Factory (PTF), led to the conclusion that this explosion arose from a single-degenerate progenitor, with a red giant companion, thus suggesting there is no single progenitor path to SN Ia. Direct observations of the progenitor of PTF 11kx were reported in the August 24 edition of Science and support this conclusion, and also show that the progenitor star experienced periodic nova eruptions before the supernova – another surprising discovery.
However, later analysis revealed that the circumstellar material is too massive for the single-degenerate scenario, and fits better the core-degenerate scenario.
In May 2015, NASA reported that the Kepler space observatory observed KSN 2011b, a Type Ia supernova in the process of exploding. Details of the pre-nova moments may help scientists better judge the quality of Type Ia supernovae as standard candles, which is an important link in the argument for dark energy.
In July 2019, the Hubble Space Telescope took three images of a Type Ia supernova through a gravitational lens. This supernova appeared at three different times in the evolution of its brightness due to the differing path length of the light in the three images; at −24, 92, and 107 days from peak luminosity. A fourth image will appear in 2037 allowing observation of the entire luminosity cycle of the supernova.
Light curve
Type Ia supernovae have a characteristic light curve, their graph of luminosity as a function of time after the explosion. Near the time of maximal luminosity, the spectrum contains lines of intermediate-mass elements from oxygen to calcium; these are the main constituents of the outer layers of the star. Months after the explosion, when the outer layers have expanded to the point of transparency, the spectrum is dominated by light emitted by material near the core of the star, heavy elements synthesized during the explosion; most prominently isotopes close to the mass of iron (iron-peak elements). The radioactive decay of nickel-56 through cobalt-56 to iron-56 produces high-energy photons, which dominate the energy output of the ejecta at intermediate to late times.
The use of Type Ia supernovae to measure precise distances was pioneered by a collaboration of Chilean and US astronomers, the Calán/Tololo Supernova Survey. In a series of papers in the 1990s the survey showed that while Type Ia supernovae do not all reach the same peak luminosity, a single parameter measured from the light curve can be used to correct unreddened Type Ia supernovae to standard candle values. The original correction to standard candle value is known as the Phillips relationship and was shown by this group to be able to measure relative distances to 7% accuracy. The cause of this uniformity in peak brightness is related to the amount of nickel-56 produced in white dwarfs presumably exploding near the Chandrasekhar limit.
The similarity in the absolute luminosity profiles of nearly all known Type Ia supernovae has led to their use as a secondary standard candle in extragalactic astronomy.
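As a rough numerical illustration of the standard-candle idea, the sketch below converts an apparent peak magnitude into a distance via the distance modulus m − M = 5 log10(d / 10 pc), taking M = −19.3 as quoted earlier in the article. This is a simplification: a real measurement first standardizes the peak magnitude with the light-curve correction described above and corrects for extinction, and the apparent magnitude used here is an illustrative value.

```python
import math

def sn_ia_distance_mpc(apparent_peak_mag, absolute_mag=-19.3):
    """Distance (Mpc) from the distance modulus m - M = 5*log10(d / 10 pc)."""
    distance_pc = 10 ** ((apparent_peak_mag - absolute_mag + 5.0) / 5.0)
    return distance_pc / 1.0e6

# A Type Ia supernova peaking at apparent magnitude 15.7 would lie at about 100 Mpc.
print(f"{sn_ia_distance_mpc(15.7):.0f} Mpc")
```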
Improved calibrations of the Cepheid variable distance scale and direct geometric distance measurements to NGC 4258 from the dynamics of maser emission, when combined with the Hubble diagram of the Type Ia supernova distances, have led to an improved value of the Hubble constant.
In 1998, observations of distant Type Ia supernovae indicated the unexpected result that the universe seems to undergo an accelerating expansion.
Three members from two teams were subsequently awarded Nobel Prizes for this discovery.
Subtypes
There is significant diversity within the class of Type Ia supernovae. Reflecting this, a plethora of sub-classes have been identified. Two prominent and well-studied examples include 1991T-likes, an overluminous subclass that exhibits particularly strong iron absorption lines and abnormally small silicon features, and 1991bg-likes, an exceptionally dim subclass characterized by strong early titanium absorption features and rapid photometric and spectral evolution. Despite their abnormal luminosities, members of both peculiar groups can be standardized by use of the Phillips relation, defined at blue wavelengths, to determine distance.
See also
Carbon detonation
Cosmic distance ladder
History of supernova observation
List of supernova remnants
Near-Earth supernova
Supernova remnant
References
External links
List of all known Type Ia supernovae at The Open Supernova Catalog .
(A Type Ia progenitor found)
SNFactory Shows Type Ia ‘Standard Candles’ Have Many Masses (March 4, 2014)
Type 1a Supernova
Articles containing video clips | Type Ia supernova | [
"Chemistry",
"Astronomy"
] | 3,379 | [
"Supernovae",
"Astronomical events",
"Explosions"
] |
3,728,348 | https://en.wikipedia.org/wiki/SN%201994D | SN 1994D was a Type Ia supernova event in the outskirts of galaxy NGC 4526. It was offset to the west and south of the galaxy center and positioned near a prominent dust lane. It was caused by the explosion of a white dwarf star composed of carbon and oxygen. This event was discovered on March 7, 1994 by R. R. Treffers and associates using the automated 30-inch telescope at Leuschner Observatory. It reached peak visual brightness two weeks later on March 22. Modelling of the light curve indicates the explosion would have been visible around March 3–4. A possible detection of helium in the spectrum was made by W. P. S. Meikle and associates in 1996. At least 0.014 solar masses of helium would be needed to produce this feature.
See also
History of supernova observation
References
Further reading
External links
Light curves and spectra on the Open Supernova Catalog
Supernovae
19940307
Virgo (constellation)
1994 in outer space | SN 1994D | [
"Chemistry",
"Astronomy"
] | 200 | [
"Supernovae",
"Astronomical events",
"Virgo (constellation)",
"Constellations",
"Explosions"
] |
25,609,547 | https://en.wikipedia.org/wiki/Proton%20beam%20writing | Proton beam writing (or p-beam writing) is a direct-write lithography process that uses a focused beam of high-energy (MeV) protons to pattern resist material at nanodimensions. The process, although similar in many ways to direct writing using electrons, nevertheless offers some interesting and unique advantages.
Protons, which are approximately 1800 times more massive than electrons, have deeper penetration in materials and travel in an almost straight path. This feature allows the fabrication of three-dimensional, high-aspect-ratio structures with vertical, smooth sidewalls, and low line-edge roughness. Calculations have also indicated that p-beam writing exhibits minimal proximity effects (unwanted exposure due to secondary electrons), since the secondary electrons induced in proton–electron collisions have low energy. A further advantage stems from the ability of protons to displace atoms while traversing material, thereby increasing localized damage especially at the end of range. P-beam writing produces resistive patterns at depth in silicon, allowing patterning of selective regions with different optical properties as well as the removal of undamaged regions via electrochemical etching.
The primary mechanism for producing structures in resist materials is, in general, bond scissioning in positive resists such as polymethylmethacrylate (PMMA), or cross-linking in negative resists such as SU-8. In positive resists the regions damaged by protons are removed by chemical development to produce structures, whereas in negative resists the development procedure removes the undamaged resist, leaving the cross-linked structures behind. In e-beam writing, the primary and secondary electrons create the scissioning or cross-linking, whereas in p-beam writing the damage is caused by short-range proton-induced secondary electrons. The proton fluence required for exposure varies from 30–150 nC/mm² depending on the resist material, and is around 80–100 times less than that required by e-beam writing. Remark: the fluence in proton beam writing is usually given in units of "charge/area". It can be converted into "particles/area" by dividing "charge/area" by the charge of a proton, Q = 1.602·10−19 C.
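The conversion described in the remark amounts to dividing by the proton charge and rescaling the area unit; a minimal sketch:

```python
PROTON_CHARGE_C = 1.602e-19  # elementary charge in coulombs

def fluence_protons_per_cm2(fluence_nc_per_mm2):
    """Convert an exposure fluence from nC/mm^2 to protons/cm^2."""
    coulomb_per_cm2 = fluence_nc_per_mm2 * 1e-9 * 100.0  # 1 cm^2 = 100 mm^2
    return coulomb_per_cm2 / PROTON_CHARGE_C

# The 30-150 nC/mm^2 range quoted above is roughly 1.9e13 to 9.4e13 protons/cm^2.
for f in (30, 80, 150):
    print(f"{f} nC/mm^2 -> {fluence_protons_per_cm2(f):.2e} protons/cm^2")
```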
P-beam writing is a new technology of great potential, and both current experimental data and theoretical predictions indicate that sub-10 nm 3D structuring is feasible. However, the lack of a user friendly commercial instrument with a small footprint is currently holding back the potentially wide range of application fields in which p-beam writing could make a substantial impact. Hopefully, this will be addressed in the near future.
See also
Electron-beam lithography
Ion beam lithography
References
Lithography (microfabrication) | Proton beam writing | [
"Materials_science"
] | 565 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
25,613,762 | https://en.wikipedia.org/wiki/Solar%20air%20heat | Solar air heating is a solar thermal technology in which the energy from the sun, insolation, is captured by an absorbing medium and used to heat air. Solar air heating is a renewable energy heating technology used to heat or condition air for buildings or process heat applications. It is typically the most cost-effective out of all the solar technologies, especially in commercial and industrial applications, and it addresses the largest usage of building energy in heating climates, which is space heating and industrial process heating.
Solar air collectors can be divided into two categories:
Unglazed Air Collectors or Transpired Solar Collector (used primarily to heat ambient air in commercial, industrial, agriculture and process applications)
Glazed Solar Collectors (recirculating types that are usually used for space heating)
Collector types
Solar collectors for air heat may be classified by their air distribution paths or by their materials, such as glazed or unglazed. For example:
through-pass collectors
front-pass
back pass
combination front and back pass collectors
unglazed
glazed
Unglazed air collectors and transpired solar collectors
Background
The term "unglazed air collector" refers to a solar air heating system that consists of an absorber without any glass or glazing over top. The most common type of unglazed collector on the market is the transpired solar collector. This technology was invented and patented by Canadian engineer John Hollick of Conserval Engineering Inc. in the 1990s, who worked with the U.S. Department of Energy (NREL) and Natural Resources Canada on the commercialization of the technology around the world. The technology has been extensively monitored by these government agencies, and Natural Resources Canada developed the feasibility tool RETScreen to model the energy savings from transpired solar collectors. John Hollick and the transpired solar collector were honored by the American Society of Mechanical Engineers (ASME) in 2014 as being one of the best inventions of the industrialized age, alongside Thomas Edison, Henry Ford, the steam engine and the Panama Canal – in a New York exhibition recognizing the best inventions, inventors and engineering feats of the past two centuries.
Several thousand transpired solar collector systems have been installed in a variety of commercial, industrial, institutional, agricultural, and process applications in over 35 countries around the world. The technology was originally used primarily in industrial applications such as manufacturing and assembly plants where there were high ventilation requirements, stratified ceiling heat, and often negative pressure in the building. The first unglazed transpired collector in the world was installed by Ford Motor Company on their assembly plant in Oakville, Canada.
With the increasing drive to install renewable energy systems on buildings, transpired solar collectors are now used across the entire building stock because of high energy production (up to 500-600 peak thermal Watts/square metre), high solar conversion (up to 90%) and lower capital costs when compared against solar photovoltaic and solar water heating.
Method of operation
Unglazed air collectors heat ambient (outside) air instead of recirculated building air. Transpired solar collectors are usually wall-mounted to capture the lower sun angle in the winter heating months as well as sun reflection off the snow, and achieve their optimum performance and return on investment when operating at flow rates of between 4 and 8 CFM per square foot (72 to 144 m³/h·m²) of collector area.
The exterior surface of a transpired solar collector consists of thousands of tiny micro-perforations that allow the boundary layer of heat to be captured and uniformly drawn into an air cavity behind the exterior panels. This solar heated ventilation air is drawn into the building’s ventilation system from air outlets positioned along the top of the collector and the air is then distributed in the building via conventional means or using a solar ducting system.
The extensive monitoring by Natural Resources Canada and NREL has shown that transpired solar collector systems reduce between 10-50% of the conventional heating load and that RETScreen is an accurate predictor of system performance.
Transpired solar collectors act as a rainscreen and they also capture heat loss escaping from the building envelope which is collected in the collector air cavity and drawn back into the ventilation system. There is no maintenance required with solar air heating systems and the expected lifespan is over 30 years.
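A simple energy balance indicates how the quoted flow rates trade off against outlet temperature rise; see the sketch below. The collector efficiency, solar irradiance, air density and heat capacity used there are assumed round values for illustration, not figures from this article.

```python
def collector_delta_t(irradiance_w_m2, flow_m3_per_h_m2,
                      efficiency=0.7, air_density=1.2, cp_air=1005.0):
    """Approximate outlet temperature rise from eta * G = m_dot * cp * dT,
    per square metre of collector."""
    m_dot = flow_m3_per_h_m2 * air_density / 3600.0   # kg/(s*m^2)
    return efficiency * irradiance_w_m2 / (m_dot * cp_air)

# At the quoted 72-144 m^3/(h*m^2) and an assumed 800 W/m^2 of sun:
for flow in (72.0, 144.0):
    print(f"{flow:.0f} m^3/(h*m^2) -> about {collector_delta_t(800.0, flow):.0f} K rise")
```

Lower flow rates give a larger temperature rise but deliver less total airflow, which is why the optimum operating point quoted above is a compromise between outlet temperature and ventilation volume.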
Variations of transpired solar collectors
Unglazed transpired collectors can also be roof-mounted for applications in which there is not a suitable south facing wall or for other architectural considerations. A number of companies offer transpired air collectors suitable for roof mounting either mounted directly onto a sloped metal roof or as modules affixed to ducts and connected to nearby fans and HVAC units.
Higher temperatures are also possible with transpired collectors which can be configured to heat the air twice to increase the temperature rise making it suitable for space heating of larger buildings. In a 2-stage system, the first stage is the typical unglazed transpired collector and the second stage has glazing covering the transpired collector. The glazing allows all of that heated air from the first stage to be directed through a second set of transpired collectors for a second stage of solar heating.
Another innovation is to recover heat from the photovoltaic (PV) modules (which is often four times more than the electrical energy produced by the PV module) by mounting the PV modules onto the solar air system. In cases where there is a heating requirement, incorporating a solar air component into the PV system provides two technical advantages: it removes the PV heat and allows the PV system to operate closer to its rated efficiency (which is specified at 25 °C); and it decreases the total energy payback period associated with the combined system because the heat energy is captured and used to offset conventional heating.
Glazed air systems
Functioning in a similar manner to a conventional forced-air furnace, these systems provide heat by recirculating conditioned building air through solar collectors. Through the use of an energy-collecting surface to absorb the sun's thermal energy, and ducting air to come in contact with it, a simple and effective collector can be made for a variety of air conditioning and process applications.
A simple solar air collector consists of an absorber material, sometimes having a selective surface, to capture radiation from the sun and transfers this thermal energy to air via conduction heat transfer. This heated air is then ducted to the building space or to the process area where the heated air is used for space heating or process heating needs.
The pioneering figure for this type of system was George Löf, who built a solar-heated air system for a house in Boulder, Colorado, in 1945. He later included a gravel bed for heat storage.
Through-pass air collector
In the through-pass configuration, air ducted onto one side of the absorber passes through a perforated or fibrous type material and is heated from the conductive properties of the material and the convective properties of the moving air. Through-pass absorbers have the most surface area which enables relatively high conductive heat transfer rates, but significant pressure drop can require greater fan power, and deterioration of certain absorber material after many years of solar radiation exposure can additionally create problems with air quality and performance.
Back, front, and combination passage air collector
In back-pass, front-pass, and combination type configurations the air is directed on either the back, the front, or on both sides of the absorber to be heated from the return to the supply ducting headers. Although passing the air on both sides of the absorber will provide a greater surface area for conductive heat transfer, issues with dust (fouling) can arise from passing air on the front side of the absorber which reduces absorber efficiency by limiting the amount of sunlight received. In cold climates, air passing next to the glazing will additionally cause greater heat loss, resulting in lower overall performance of the collector.
Solar air heat applications
A variety of applications can utilize solar air heat technologies to reduce the carbon footprint from use of conventional heat sources, such as fossil fuels, to create a sustainable means to produce thermal energy. Applications such as space heating, greenhouse season extension, pre-heating ventilation makeup air, or process heat can be addressed by solar air heat devices. In the field of ‘solar co-generation’ solar thermal technologies are paired with photovoltaics (PV) to increase the efficiency of the system by cooling the PV panels to improve their electrical performance while simultaneously warming air for space heating.
Space heating applications
Space heating for residential and commercial applications can be done through the use of solar air heating panels. This configuration operates by drawing air from the building envelope or from the outdoor environment and passing it through the collector where the air warms via conduction from the absorber and is then supplied to the living or working space by either passive means or with the assistance of a fan.
Process heat applications
Solar air heat can also be used in process applications such as drying laundry, crops (i.e. tea, corn, coffee) and other drying applications. Air heated through a solar collector and then passed over a medium to be dried can provide an efficient means by which to reduce the moisture content of the material.
Night cooling applications
Radiation cooling to the night sky is based on the principle of heat loss by long-wave radiation from a warm surface (roof) to another body at a lower temperature (sky). On a clear night, a typical sky-facing surface can cool at a rate of about 75 W/m² (25 BTU/hr/ft²). This means that a metal roof facing the sky will be colder than the surrounding air temperature. Collectors can take advantage of this cooling phenomenon. As warm night air touches the cooler surface of a transpired collector, heat is transferred to the metal, radiated to the sky, and the cooled air is then drawn in through the perforated surface. Cool air may then be drawn into HVAC units.
Ventilation applications
By drawing air through a properly designed air collector or air heater, solar heated fresh air can reduce the heating load during sunny operation. Applications include transpired collectors preheating fresh air entering a heat recovery ventilator, or suction created by venting heated air out of some other solar chimney.
See also
Solarwall
Passive solar building design
Passive house
Low-energy house
Zero-energy building
List of low-energy building techniques
Sustainable architecture
References
Heating
Solar energy
Solar architecture
Sustainable building
Low-energy building | Solar air heat | [
"Engineering"
] | 2,117 | [
"Construction",
"Sustainable building",
"Building engineering"
] |
24,198,544 | https://en.wikipedia.org/wiki/H%C3%A1jek%E2%80%93Le%20Cam%20convolution%20theorem | In statistics, the Hájek–Le Cam convolution theorem states that any regular estimator in a parametric model is asymptotically equivalent to a sum of two independent random variables, one of which is normal with asymptotic variance equal to the inverse of Fisher information, and the other having arbitrary distribution.
The obvious corollary from this theorem is that the “best” among regular estimators are those with the second component identically equal to zero. Such estimators are called efficient and are known to always exist for regular parametric models.
The theorem is named after Jaroslav Hájek and Lucien Le Cam.
Statement
Let ℘ = {Pθ | θ ∈ Θ ⊂ ℝk} be a regular parametric model, and q(θ): Θ → ℝm be a parameter in this model (typically a parameter is just one of the components of vector θ). Assume that function q is differentiable on Θ, with the m × k matrix of derivatives denoted as q̇θ. Define
q̇θ I(θ)⁻¹ q̇θ′ — the information bound for q,
ψq(θ) = q̇θ I(θ)⁻¹ ℓ̇θ — the efficient influence function for q,
where I(θ) is the Fisher information matrix for model ℘, ℓ̇θ is the score function, and ′ denotes matrix transpose.
Theorem. Suppose Tn is a uniformly (locally) regular estimator of the parameter q. Then
(A) There exist independent random m-vectors Zθ and Δθ such that √n (Tn − q(θ)) →d Zθ + Δθ,
where →d denotes convergence in distribution. More specifically, Zθ has the normal distribution with mean zero and variance equal to the information bound: Zθ ~ N(0, q̇θ I(θ)⁻¹ q̇θ′).
(B) If the map θ → q̇θ is continuous, then the convergence in (A) holds uniformly on compact subsets of Θ. Moreover, in that case Δθ = 0 for all θ if and only if Tn is uniformly (locally) asymptotically linear with influence function ψq(θ)
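As an illustration of the corollary, the simulation sketched below uses the simplest regular model, Xi ~ N(θ, 1), where the Fisher information is 1: the sample mean attains the information bound, while another regular estimator (an equal-weight mix of mean and median, chosen purely for illustration) carries a non-degenerate Δθ component and therefore a strictly larger asymptotic variance.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.0, 2000, 4000

scaled_mean, scaled_mix = [], []
for _ in range(reps):
    x = rng.normal(theta, 1.0, size=n)
    # Efficient estimator: the sample mean.
    scaled_mean.append(np.sqrt(n) * (np.mean(x) - theta))
    # Another regular estimator: mix of mean and median.
    scaled_mix.append(np.sqrt(n) * (0.5 * (np.mean(x) + np.median(x)) - theta))

print("var of sqrt(n)(mean - theta):", round(float(np.var(scaled_mean)), 3))  # ~1.0, the bound
print("var of sqrt(n)(mix  - theta):", round(float(np.var(scaled_mix)), 3))   # ~1.14 > 1
```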
References
Theorems in statistics | Hájek–Le Cam convolution theorem | [
"Mathematics"
] | 369 | [
"Mathematical theorems",
"Mathematical problems",
"Theorems in statistics"
] |
24,200,136 | https://en.wikipedia.org/wiki/Epstein%E2%80%93Barr%20virus%20small%20nucleolar%20RNA%201 | V-snoRNA1 is a box CD-snoRNA identified in B lymphocytes infected with the Epstein–Barr virus (human herpesvirus 4 (HHV-4)). This snoRNA is the first known example of a snoRNA expressed from a viral genome. It is homologous to eukaryotic snoRNAs because it contains the C and D boxes sequence motifs but lacks a terminal stem-loop structure. The nucleolar localization of v-snoRNA1 was determined by in situ hybridization. V-snoRNA1 can form into a ribonucleoprotein complex (snoRNP) as co-immunoprecipitation (CoIP) assays showed that this snoRNA interacts with the snoRNA core proteins, fibrillarin, Nop56, Nop58. It has also been proposed that this snoRNA may act as a miRNA-like precursor that is processed into 24-nucleotide-sized RNA fragments that target the 3'UTR of viral DNA polymerase mRNA.
See also
Epstein–Barr virus stable intronic sequence RNAs
References
External links
Molecular genetics
Non-coding RNA
Epstein–Barr virus | Epstein–Barr virus small nucleolar RNA 1 | [
"Chemistry",
"Biology"
] | 257 | [
"Molecular genetics",
"Molecular biology"
] |