Dataset fields: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
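Purely as an illustration of the field layout above (not part of the dataset itself; the class name and type choices are assumptions), one record could be represented in Python as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WikiRecord:
    """One record of the dump: a Wikipedia article plus lightweight metadata."""
    id: int                   # numeric identifier, e.g. 617263
    url: str                  # source URL on en.wikipedia.org
    text: str                 # plain-text body of the article
    source: str               # article title, e.g. "Rotameter"
    categories: List[str]     # top-level labels, e.g. ["Chemistry"]
    token_count: int          # number of tokens in `text`
    subcategories: List[str] = field(default_factory=list)  # may be empty
```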
617,263
https://en.wikipedia.org/wiki/Burin%20%28lithic%20flake%29
In archaeology and the field of lithic reduction, a burin (from the French burin, meaning "cold chisel" or modern engraving burin) is a type of stone tool, a handheld lithic flake with a chisel-like edge which prehistoric humans used for carving or finishing wood or bone tools or weapons, and sometimes for engraving images. In archaeology, burin use is often associated with "burin spalls", which are a form of debitage created when toolmakers strike a small flake obliquely from the edge of the burin flake in order to form the graving edge. Documented use Standardized burin usage is typical of the Middle Paleolithic and Upper Paleolithic cultures in Europe, but archaeologists have also identified burins in North American cultural assemblages, and in his book Early Man in China, Jia Lanpo of Beijing University lists dihedral burins and burins for truncation among artifacts uncovered along the banks of the Liyigon river near Xujiayao. Burins can also be associated with compound microblade projectile technology, found with microblade cores and/or microblades. In these cases, their purpose is interpreted both as a rapid retouch and hafting preparation strategy for blade-based edge tools and bifaces and as a class of dedicated flake- or blade-based tools used to insert microblades and other microliths into organic armatures. Types Multiple types of burin exist. A type of burin diagnostic of the archaeological stratum where it is found is the "Noailles" burin, named for its original find-site, the Grotte de Noailles, in the commune of Brive-la-Gaillarde, Corrèze, in southwestern France. It consists of a small multiple burin characteristic of the Upper Paleolithic cultural stage called the Gravettian, ca. 28–23,000 BC; these flake tools have been restruck and refined to give several chisel-like edges and a blunt, grippable rear edge. Another type of burin is called the "ordinary burin", which occurs when a burin facet is backed against another burin facet. A bec-de-flûte burin, or "axial burin", began as a long flake, but one or both ends have been knocked off, giving two working facets meeting at an angle. References External links Archaeological artefact types Hand tools Lithics Chisels
Burin (lithic flake)
[ "Engineering" ]
524
[ "Human–machine interaction", "Hand tools" ]
617,307
https://en.wikipedia.org/wiki/Rotameter
A rotameter is a device that measures the volumetric flow rate of fluid in a closed tube. It belongs to a class of meters called variable-area flowmeters, which measure flow rate by allowing the cross-sectional area the fluid travels through to vary, causing a measurable effect. History The first variable area meter with rotating float was invented by Karl Kueppers (1874–1933) in Aachen in 1908. This is described in the German patent 215225. Recognizing the fundamental importance of this invention, Felix Meyer founded the company "Deutsche Rotawerke GmbH" in Aachen. They improved this invention with new shapes of the float and of the glass tube. Kueppers invented the special shape for the inside of the glass tube that realized a symmetrical flow scale. The brand name Rotameter was registered by the British company GEC Elliott Automation (Rotameter Co.). In many other countries the brand name Rotameter is registered by Rota Yokogawa GmbH & Co. KG in Germany, which is now owned by Yokogawa Electric Corp. Description A rotameter consists of a tapered tube, typically made of glass, with a 'float' (a shaped weight, made either of anodized aluminum or a ceramic) inside that is pushed up by the drag force of the flow and pulled down by gravity. The drag force for a given fluid and float cross section is a function of flow speed squared only; see drag equation. A higher volumetric flow rate through a given area increases flow speed and drag force, so the float will be pushed upwards. However, as the inside of the rotameter is cone-shaped (widens), the area around the float through which the medium flows increases, so the flow speed and drag force decrease until there is mechanical equilibrium with the float's weight. Floats are made in many different shapes, with spheres and ellipsoids being the most common. The float may be diagonally grooved and partially colored so that it rotates axially as the fluid passes. This shows whether the float is stuck, since it will only rotate if it is free. Readings are usually taken at the top of the widest part of the float; the center for an ellipsoid, or the top for a cylinder. Some manufacturers use a different standard. The "float" must not float in the fluid: it has to have a higher density than the fluid, otherwise it will float to the top even if there is no flow. The mechanical nature of the measuring principle provides a flow measurement device that does not require any electrical power. If the tube is made of metal, the float position is transferred to an external indicator via a magnetic coupling. This capability has considerably expanded the range of applications for the variable area flowmeter, since the measurement can be observed remotely from the process or used for automatic control. Advantages A rotameter requires no external power or fuel; it uses only the inherent properties of the fluid, along with gravity, to measure flow rate. A rotameter is also a relatively simple device that can be mass-manufactured out of cheap materials, allowing for its widespread use. Since the area of the flow passage increases as the float moves up the tube, the scale is approximately linear. The clear glass used is highly resistant to thermal shock and chemical action. Disadvantages Due to its reliance on the ability of the fluid or gas to displace the float, graduations on a given rotameter will only be accurate for a given substance at a given temperature. The main property of importance is the density of the fluid; however, viscosity may also be significant.
Floats are ideally designed to be insensitive to viscosity; however, this is seldom verifiable from manufacturers' specifications. Either separate rotameters for different densities and viscosities may be used, or multiple scales on the same rotameter can be used. Because the operation of a rotameter depends on gravity, it must be oriented vertically; significant error can result if the orientation deviates significantly from the vertical. Due to the direct flow indication, the resolution is relatively poor compared to other measurement principles, and readout uncertainty gets worse near the bottom of the scale. Oscillations of the float and parallax may further increase the uncertainty of the measurement. Since the float must be read through the flowing medium, some fluids may obscure the reading. A transducer may be required for electronically measuring the position of the float. Rotameters are not easily adapted for reading by machine, although magnetic floats that drive a follower outside the tube are available. Rotameters are not generally manufactured in sizes greater than 6 inches/150 mm, but bypass designs are sometimes used on very large pipes. See also Thorpe tube flowmeter References External links Rota Yokogawa GmbH & Co. KG: Rotameter measuring devices Rota Yokogawa GmbH & Co. KG: Company history of the founder of Rotameter eFunda: Introduction to Variable Area Flowmeters KROHNE: Measuring Principle Fluid dynamics Flow meters
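To make the force balance described above concrete, here is a minimal, illustrative Python sketch; the numerical values (float size, densities, drag coefficient, tube taper) are assumptions chosen only for the example, not manufacturer data, and the drag coefficient is treated as constant for simplicity.

```python
import math

# Illustrative force balance for a variable-area flowmeter (rotameter).
# At equilibrium: drag force = weight of float - buoyancy
#   0.5 * rho_fluid * Cd * A_float * v**2 = V_float * (rho_float - rho_fluid) * g
# Solving for v gives the flow speed past the float; multiplying by the
# annular area between float and tapered tube gives the volumetric flow rate.

g = 9.81                 # m/s^2
rho_fluid = 1000.0       # kg/m^3, water (assumed)
rho_float = 2700.0       # kg/m^3, aluminium float (assumed)
Cd = 0.8                 # drag coefficient of the float (assumed)

d_float = 0.010          # m, float diameter (assumed)
A_float = math.pi * (d_float / 2) ** 2
V_float = (4 / 3) * math.pi * (d_float / 2) ** 3   # sphere-shaped float

def tube_diameter(height_m, d_bottom=0.011, taper=0.10):
    """Tapered tube: inner diameter grows linearly with height (assumed geometry)."""
    return d_bottom + taper * height_m

def flow_rate_at(height_m):
    """Volumetric flow rate (m^3/s) that holds the float at a given height."""
    # Equilibrium flow speed past the float:
    v = math.sqrt(2 * V_float * (rho_float - rho_fluid) * g
                  / (rho_fluid * Cd * A_float))
    # The annular flow area between tube wall and float grows with height,
    # which is why a higher float reading corresponds to a larger flow rate.
    A_annulus = math.pi * (tube_diameter(height_m) / 2) ** 2 - A_float
    return v * A_annulus

for h in (0.00, 0.05, 0.10):   # float heights in metres
    print(f"height {h:.2f} m -> {flow_rate_at(h) * 1000 * 60:.2f} L/min")
```

The roughly linear growth of the annular area with height is what gives the approximately linear scale mentioned in the Advantages paragraph.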
Rotameter
[ "Chemistry", "Technology", "Engineering" ]
1,033
[ "Chemical engineering", "Measuring instruments", "Piping", "Fluid dynamics", "Flow meters" ]
617,379
https://en.wikipedia.org/wiki/Retrotransposon
Retrotransposons (also called Class I transposable elements) are mobile elements which move in the host genome by converting their transcribed RNA into DNA through reverse transcription. Thus, they differ from Class II transposable elements, or DNA transposons, in utilizing an RNA intermediate for the transposition and leaving the transposition donor site unchanged. Through reverse transcription, retrotransposons amplify themselves quickly to become abundant in eukaryotic genomes such as maize (49–78%) and humans (42%). They are only present in eukaryotes but share features with retroviruses such as HIV, for example, discontinuous reverse transcriptase-mediated extrachromosomal recombination. There are two main types of retrotransposons: those with long terminal repeats (LTR retrotransposons) and those without (non-LTR retrotransposons). Retrotransposons are classified based on sequence and method of transposition. Most retrotransposons in the maize genome are LTR, whereas in humans they are mostly non-LTR. LTR retrotransposons LTR retrotransposons are characterized by their long terminal repeats (LTRs), which are present at both the 5' and 3' ends of their sequences. These LTRs contain the promoters for these transposable elements (TEs), are essential for TE integration, and can vary in length from just over 100 base pairs (bp) to more than 1,000 bp. On average, LTR retrotransposons span several thousand base pairs, with the largest known examples reaching up to 30 kilobases (kb). LTRs are highly functional sequences, and for that reason LTR and non-LTR retrotransposons differ greatly in their reverse transcription and integration mechanisms. Non-LTR retrotransposons use a target-primed reverse transcription (TPRT) process, which requires the RNA of the TE to be brought to the cleavage site of the retrotransposon's integrase, where it is reverse transcribed. In contrast, LTR retrotransposons undergo reverse transcription in the cytoplasm, utilizing two rounds of template switching and the formation of a pre-integration complex (PIC) composed of double-stranded DNA and an integrase dimer bound to LTRs. This complex then moves into the nucleus for integration into a new genomic location. LTR retrotransposons typically encode the proteins gag and pol, which may be combined into a single open reading frame (ORF) or separated into distinct ORFs. Similar to retroviruses, the gag protein is essential for capsid assembly and the packaging of the TE's RNA and associated proteins. The pol protein is necessary for reverse transcription and includes these crucial domains: PR (protease), RT (reverse transcriptase), RH (RNase H), and INT (integrase). Additionally, some LTR retrotransposons have an ORF for an envelope (env) protein that is incorporated into the assembled capsid, facilitating attachment to cellular surfaces. Endogenous retrovirus An endogenous retrovirus is a retrovirus without pathogenic effects that has been integrated into the host genome, inserting its heritable genetic information into cells that can be passed on to the next generation, like a retrotransposon. Because of this, endogenous retroviruses share features with both retroviruses and retrotransposons. When retroviral DNA is integrated into the host genome in this way, it evolves into an endogenous retrovirus that influences the eukaryotic genome. So many endogenous retroviruses have inserted themselves into eukaryotic genomes that they provide insight into viral–host interactions and into the role of retrotransposons in evolution and disease.
Many retrotransposons share with endogenous retroviruses the property of recognising and fusing with the host genome. However, a key difference between retroviruses and retrotransposons is indicated by the env gene: although similar to the gene carrying out the same function in retroviruses, the presence of a functional env gene is used to determine whether an element is retroviral or a retrotransposon, and an element that acquires it can evolve from a retrotransposon into a retrovirus. They also differ in the order of sequences in their pol genes. Env genes are found in the LTR retrotransposon types Ty1-copia (Pseudoviridae), Ty3-gypsy (Metaviridae) and BEL/Pao. They encode glycoproteins on the retrovirus envelope needed for entry into the host cell. Retroviruses can move between cells, whereas LTR retrotransposons can only move themselves into the genome of the same cell. Many vertebrate genes were formed from retroviruses and LTR retrotransposons. Some endogenous retroviruses and LTR retrotransposons have the same function and genomic locations in different species, suggesting a role in evolution. Non-LTR retrotransposons Like LTR retrotransposons, non-LTR retrotransposons contain genes for reverse transcriptase, an RNA-binding protein, a nuclease, and sometimes a ribonuclease H domain, but they lack the long terminal repeats. RNA-binding proteins bind the RNA transposition intermediate, and nucleases are enzymes that break phosphodiester bonds between nucleotides in nucleic acids. Instead of LTRs, non-LTR retrotransposons have short repeats in which the order of bases may be inverted relative to each other, unlike the direct repeats of LTR retrotransposons, which are simply one sequence of bases repeated. Although they are retrotransposons, they cannot carry out reverse transcription using an RNA transposition intermediate in the same way as LTR retrotransposons. Those two key components of the retrotransposon are still necessary, but the way they are incorporated into the chemical reactions is different. This is because, unlike LTR retrotransposons, non-LTR retrotransposons do not contain sequences that bind tRNA. They mostly fall into two types – LINEs (Long interspersed nuclear elements) and SINEs (Short interspersed nuclear elements). SVA elements are an exception, sharing similarities with both LINEs and SINEs; they contain Alu elements and variable numbers of the same repeat. SVAs are shorter than LINEs but longer than SINEs. While historically viewed as "junk DNA", research suggests that in some cases both LINEs and SINEs were incorporated into novel genes to form new functions. LINEs When a LINE is transcribed, the transcript contains an RNA polymerase II promoter, which ensures the LINE can be copied at whichever location it inserts itself into. RNA polymerase II is the enzyme that transcribes genes into mRNA transcripts. The ends of LINE transcripts are rich in multiple adenines, bases that are added at the end of transcription so that the LINE transcript is not degraded. This transcript is the RNA transposition intermediate. The RNA transposition intermediate moves from the nucleus into the cytoplasm for translation. This produces the products of the two coding regions of a LINE, which in turn bind back to the RNA they were translated from. The LINE RNA then moves back into the nucleus to insert into the eukaryotic genome. LINEs insert themselves into regions of the eukaryotic genome that are AT-rich. At these AT-rich regions, the LINE uses its nuclease to cut one strand of the eukaryotic double-stranded DNA.
The adenine-rich sequence in the LINE transcript base-pairs with the cut strand, whose exposed hydroxyl groups flag where the LINE will be inserted. Reverse transcriptase recognises these hydroxyl groups and synthesises the LINE retrotransposon where the DNA is cut. As with LTR retrotransposons, this newly inserted LINE contains eukaryotic genome information, so it can be copied and pasted into other genomic regions easily. The information sequences are longer and more variable than those in LTR retrotransposons. Most LINE copies have variable length at the start because reverse transcription usually stops before DNA synthesis is complete. In some cases this causes the RNA polymerase II promoter to be lost, so the LINE cannot transpose further. Human L1 LINE-1 (L1) retrotransposons make up a significant portion of the human genome, with an estimated 500,000 copies per genome. Transcription of human LINE1 genes is usually inhibited by methyl groups attached to their DNA by PIWI proteins and DNA methyltransferase enzymes. L1 retrotransposition can disrupt genes by pasting copies inside or near them, which can in turn lead to human disease. LINE1s retrotranspose only in some cases, forming different chromosome structures that contribute to genetic differences between individuals. An estimated 80–100 L1s are active in the reference genome of the Human Genome Project, and an even smaller number of those active L1s retrotranspose often. L1 insertions have been associated with tumorigenesis by activating cancer-related genes (oncogenes) and diminishing tumor suppressor genes. Each human LINE1 contains two regions from which gene products can be encoded. The first coding region encodes a leucine zipper protein involved in protein–protein interactions and a protein that binds to the terminus of nucleic acids. The second coding region encodes a purine/pyrimidine nuclease, a reverse transcriptase, and a protein rich in the amino acids cysteine and histidine. The end of the human LINE1, as with other retrotransposons, is adenine-rich. Human L1 actively retrotransposes in the human genome. A recent study identified 1,708 somatic L1 retrotransposition events, especially in colorectal epithelial cells. These events occur from early embryogenesis onwards, and the retrotransposition rate is substantially increased during colorectal tumourigenesis. SINEs SINEs are much shorter (300 bp) than LINEs. They share similarity with genes transcribed by RNA polymerase II, the enzyme that transcribes genes into mRNA transcripts, and with the initiation sequence of RNA polymerase III, the enzyme that transcribes genes into ribosomal RNA, tRNA and other small RNA molecules. SINEs such as mammalian MIR elements have a tRNA-derived gene at the start and an adenine-rich end, as in LINEs. SINEs do not encode a functional reverse transcriptase protein and rely on other mobile transposons, especially LINEs. SINEs exploit LINE transposition components even though LINE-binding proteins prefer to bind LINE RNA. SINEs cannot transpose by themselves because they do not encode the proteins required for transposition. They usually consist of parts derived from tRNA and LINEs. The tRNA portion contains a promoter for RNA polymerase III, the same kind of enzyme as RNA polymerase II. This ensures the SINE copies are transcribed into RNA for further transposition. The LINE-derived component remains, so LINE-binding proteins can recognise the LINE part of the SINE. Alu elements Alus are the most common SINE in primates.
They are approximately 350 base pairs long, do not encode proteins and can be recognized by the restriction enzyme AluI (hence the name). Their distribution may be important in some genetic diseases and cancers. Copying and pasting Alu RNA requires the Alu's adenine-rich end, with the rest of the sequence bound to a signal. The signal-bound Alu can then associate with ribosomes. LINE RNA associates with the same ribosomes as the Alu, and binding to the same ribosome allows the Alu to interact with the LINE machinery. This simultaneous translation of the Alu element and the LINE enables SINE copy-and-pasting. SVA elements SVA elements are present at lower levels than SINEs and LINEs in humans. The starts of SVA and Alu elements are similar, followed by repeats and an end similar to an endogenous retrovirus. LINEs bind to sites flanking SVA elements to transpose them. SVAs are among the youngest transposons in the great ape genome and among the most active and polymorphic in the human population. SVA was created by a fusion between an Alu element, a VNTR (variable number tandem repeat), and an LTR fragment. Role in human disease Retrotransposons avoid being lost by chance by occurring in cells whose genetic material can be passed on from one generation to the next through parental gametes. However, LINEs can transpose into the human embryo cells that eventually develop into the nervous system, raising the question whether this LINE retrotransposition affects brain function. LINE retrotransposition is also a feature of several cancers, but it is unclear whether retrotransposition itself causes cancer or is merely a byproduct. Uncontrolled retrotransposition is harmful to both the host organism and the retrotransposons themselves, so it has to be regulated. Retrotransposons are regulated by RNA interference, which is carried out by a set of short non-coding RNAs. These short non-coding RNAs interact with the Argonaute protein to degrade retrotransposon transcripts and alter the histone structure of their DNA to reduce their transcription. Role in evolution LTR retrotransposons came about later than non-LTR retrotransposons, possibly from an ancestral non-LTR retrotransposon acquiring an integrase from a DNA transposon. Retroviruses gained additional properties for their virus envelopes by taking the relevant genes from other viruses, using the machinery of LTR retrotransposons. Due to their retrotransposition mechanism, retrotransposons amplify in number quickly, composing 40% of the human genome. The insertion rates for LINE1, Alu and SVA elements are 1/200–1/20, 1/20 and 1/900 respectively. LINE1 insertion rates have varied considerably over the past 35 million years, so they serve as markers of genome evolution. Notably, large regions on the order of 100 kilobases in the maize genome vary due to the presence or absence of retrotransposons. However, since maize is genetically unusual compared with other plants, it cannot be used to predict retrotransposition in other plants. Mutations caused by retrotransposons include gene inactivation, changes in gene regulation, changes in gene products, and acting as DNA repair sites. Role in biotechnology See also Copy-number variation Genomic organization Insertion sequences Interspersed repeat Paleogenetics Paleovirology RetrOryza Retrotransposon markers, a powerful method of reconstructing phylogenies. Tn3 transposon Transposon Retron References Mobile genetic elements Molecular biology Non-coding DNA
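The article notes above that LINEs preferentially insert at AT-rich stretches of the genome. Purely as an illustration (not part of the article, and not a real annotation tool), here is a short Python sketch of how one might flag AT-rich windows in a sequence; the window size, threshold and demo sequence are arbitrary assumptions.

```python
def at_rich_windows(seq, window=20, threshold=0.8):
    """Yield (start, fraction) for windows whose A/T content meets the threshold.

    Toy illustration of 'AT-rich region' scanning; real LINE target-site
    preference (e.g. the L1 endonuclease's 5'-TTAAAA consensus) is more
    specific than raw AT content.
    """
    seq = seq.upper()
    for start in range(0, len(seq) - window + 1):
        chunk = seq[start:start + window]
        at_fraction = (chunk.count("A") + chunk.count("T")) / window
        if at_fraction >= threshold:
            yield start, at_fraction

# Example with a made-up sequence:
demo = "GGCGCCGGATTTAAAATATATTTTAAGGCGCGCCG"
for pos, frac in at_rich_windows(demo, window=10, threshold=0.8):
    print(f"AT-rich window at {pos}: {frac:.0%}")
```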
Retrotransposon
[ "Chemistry", "Biology" ]
3,110
[ "Biochemistry", "Molecular genetics", "Mobile genetic elements", "Molecular biology" ]
617,475
https://en.wikipedia.org/wiki/Fenobucarb
Fenobucarb is a carbamate insecticide, also widely known as BPMC. It is a pale yellow or pale red liquid, insoluble in water, used as an agricultural insecticide, especially for the control of hemipteran pests on rice and cotton; it is moderately toxic to humans. Synonyms 2-(1-methylpropyl)phenol methylcarbamate; 2-(1-methylpropyl)phenyl methylcarbamate; 2-sec-Butylphenyl N-methylcarbamate; BPMC; fenocarb; N-methyl o-sec-butylphenyl carbamate Tradenames Fenobucarb, Osbac, Bassa, Bipvin, Baycarb, etc. LD50 Male mouse: 340 mg/kg; male rat: 410 mg/kg References Acetylcholinesterase inhibitors Carbamate insecticides Phenol esters Aromatic carbamates Sec-Butyl compounds
Fenobucarb
[ "Chemistry" ]
205
[]
617,522
https://en.wikipedia.org/wiki/Issai%20Schur
Issai Schur (10 January 1875 – 10 January 1941) was a Russian mathematician who worked in Germany for most of his life. He studied at the University of Berlin. He obtained his doctorate in 1901, became lecturer in 1903 and, after a stay at the University of Bonn, professor in 1919. As a student of Ferdinand Georg Frobenius, he worked on group representations (the subject with which he is most closely associated), but also in combinatorics and number theory and even theoretical physics. He is perhaps best known today for his result on the existence of the Schur decomposition and for his work on group representations (Schur's lemma). Schur published under both the names I. Schur and J. Schur, the latter especially in Journal für die reine und angewandte Mathematik. This has led to some confusion. Childhood Issai Schur was born into a Jewish family, the son of the businessman Moses Schur and his wife Golde Schur (née Landau). He was born in Mogilev on the Dnieper River in what was then the Russian Empire. Schur used the name Schaia (Isaiah, as in the epitaph on his grave) rather than Issai up to his middle twenties. Schur's father may have been a wholesale merchant. In 1888, at the age of 13, Schur went to Liepāja (Courland, now in Latvia), where his married sister and his brother lived, 640 km north-west of Mogilev. Courland was one of the three Baltic governorates of Tsarist Russia, and since the Middle Ages the Baltic Germans were the upper social class. The local Jewish community spoke mostly German and not Yiddish. Schur attended the German-speaking Nicolai Gymnasium in Libau from 1888 to 1894, reached the top grade in his final examination, and received a gold medal. Here he became fluent in German. Education In October 1894, Schur entered the University of Berlin, concentrating on mathematics and physics. In 1901, he graduated summa cum laude under Frobenius and Lazarus Immanuel Fuchs with his dissertation On a class of matrices that can be assigned to a given matrix, which contains a general theory of the representation of linear groups. According to Vogt, he began to use the name Issai at this time. Schur thought that his chance of success in the Russian Empire was rather poor, and because he spoke German so perfectly, he remained in Berlin. He completed his habilitation in 1903 and became a lecturer at the University of Berlin, a position he held for the ten years from 1903 to 1913. In 1913 he accepted an appointment as associate professor and successor of Felix Hausdorff at the University of Bonn. In the following years Frobenius tried various ways to get Schur back to Berlin. Among other things, Schur's name was mentioned in a letter dated 27 June 1913 from Frobenius to Robert Gnehm (the School Board President of the ETH) as a possible successor to Carl Friedrich Geiser. Frobenius complained that they had never followed his advice before and then said: "That is why I can't even recommend Prof. J. Schur (now in Bonn) to you. He's too good for Zurich, and should be my successor in Berlin". Hermann Weyl got the job in Zurich. The efforts of Frobenius were finally successful in 1916, when Schur succeeded Johannes Knoblauch as adjunct professor. Frobenius died a year later, on 3 August 1917. Schur and Carathéodory were both named as frontrunners to succeed him, but in the end Constantin Carathéodory was chosen. In 1919 Schur finally received a personal professorship, and in 1921 he took over the chair of the retired Friedrich Hermann Schottky.
In 1922, he was also elected to the Prussian Academy of Sciences. During the time of Nazism After the takeover by the Nazis and the elimination of the parliamentary opposition, the Law for the Restoration of the Professional Civil Service of 7 April 1933 prescribed the dismissal of all public servants who held unpopular political opinions or who were "Jewish" in origin; a subsequent regulation extended this to professors and therefore also to Schur. Schur was suspended and excluded from the university system. His colleague Erhard Schmidt fought for his reinstatement, and since Schur had been a Prussian official before the First World War, he was allowed to hold certain special lectures again in the winter semester of 1933/1934. Schur withdrew his application for leave from the Science Minister and passed up the offer of a visiting professorship at the University of Wisconsin–Madison for the academic year 1933–34. One element that likely played a role in the rejection of the offer was that Schur no longer felt he could cope with the requirements that would have come with a new beginning in an English-speaking environment. Already in 1932, Schur's daughter Hilde had married the doctor Chaim Abelin in Bern. As a result, Issai Schur visited his daughter in Bern several times. In Zurich he often met with George Pólya, with whom he had been on friendly terms since before the First World War. On such a trip to Switzerland in the summer of 1935, a letter from Ludwig Bieberbach, signed on behalf of the Rector, reached Schur, stating that he should urgently seek him out at the University of Berlin. They needed to discuss an important matter with him. It involved Schur's dismissal on 30 September 1935. Schur remained a member of the Prussian Academy of Sciences after his dismissal as a professor, but a little later he lost this last remnant of his official position. Due to an intervention by Bieberbach in the spring of 1938, he was forced to declare his resignation from the commissions of the Academy. His membership in the Advisory Board of Mathematische Zeitschrift was ended in early 1939. Emigration Schur found himself lonely after the flight of many of his students and the expulsion of renowned scientists from his previous place of work. Only Dr. Helmut Grunsky had been friendly to him, as Schur reported in the late thirties to his expatriate student Max Menachem Schiffer. The Gestapo was everywhere. Since Schur had told his wife that he intended to commit suicide if summoned by the Gestapo, in the summer of 1938 his wife intercepted his letters; when a summons from the Gestapo arrived, she sent Issai Schur for a restful stay in a home outside Berlin and, with a medical certificate allowing her to do so, went to meet the Gestapo in place of her husband. There she was asked flatly why they were still staying in Germany. But there were economic obstacles to the planned emigration: emigrating Germans had to pay a pre-departure Reich Flight Tax of a quarter of their assets. Schur's wife had inherited a mortgage on a house in Lithuania which, because of Lithuanian foreign exchange restrictions, could not be repaid. On the other hand, Schur was forbidden either to write off the mortgage or to cede it to the German Reich. Thus the Schurs lacked cash and cash equivalents. Finally, the missing sum of money was somehow supplied, and to this day it does not seem to be clear who the donors were. Schur was able to leave Germany in early 1939.
His health, however, was already severely compromised. He traveled in the company of a nurse to his daughter in Bern, where his wife also followed a few days later. There they remained for several weeks and then emigrated to Palestine. Two years later, on his 66th birthday, on 10 January 1941, he died in Tel Aviv of a heart attack. Work Schur continued the work of his teacher Frobenius with many important works on group theory and representation theory. In addition, he published important results and elegant proofs of known results in almost all branches of classical algebra and number theory. His collected works are proof of this. There, his work on the theory of integral equations and infinite series can be found. Linear groups In his doctoral thesis Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen, Issai Schur determined the polynomial representations of the general linear group GL_n(C) over the field of complex numbers. The results and methods of this work are still relevant today. In his book, J. A. Green determined the polynomial representations of GL_n(K) over infinite fields K of arbitrary characteristic. It is mainly based on Schur's dissertation. Green writes, "This remarkable work (of Schur) contained many very original ideas, developed with superb algebraic skill. Schur showed that these (polynomial) representations are completely reducible, that each irreducible one is "homogeneous" of some degree r, and that the equivalence types of irreducible polynomial representations of GL_n(K), of fixed homogeneous degree r, are in one-one correspondence with the partitions λ of r into not more than n parts. Moreover Schur showed that the character of an irreducible representation of type λ is given by a certain symmetric function in n variables (since described as a "Schur function")." According to Green, the methods of Schur's dissertation are important today for the theory of algebraic groups. In 1927 Schur, in his work On the rational representations of the general linear group, gave new proofs for the main results of his dissertation. If E is the natural n-dimensional vector space on which GL_n(C) operates, and if r is a natural number, then the r-fold tensor product E^⊗r over C is a GL_n(C)-module, on which the symmetric group S_r of degree r also operates by permutation of the tensor factors of each generator of E^⊗r. By exploiting these (GL_n(C), S_r)-bimodule actions on E^⊗r, Schur manages to find elegant proofs of his theorems. This work of Schur was once very well known. Professorship in Berlin Schur lived in Berlin as a highly respected member of the academic world, an apolitical scholar. A leading mathematician and an outstanding and very successful teacher, he held a prestigious chair at the University of Berlin for 16 years. Until 1933, his research group had an excellent reputation at the University of Berlin, in Germany, and beyond. With Schur at the center, his group worked on representation theory, which was extended by his students in different directions (including solvable groups, combinatorics, matrix theory). Schur made fundamental contributions to algebra and group theory which, according to Hermann Weyl, were comparable in scope and depth to those of Emmy Noether (1882–1935). When Schur's lectures were canceled in 1933, there was an outcry among the students and professors who appreciated and liked him. Through the efforts of his colleague Erhard Schmidt, Schur was allowed to continue lecturing until the end of September 1935. Schur was the last Jewish professor to lose his job at this time.
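As an aside to the Linear groups passage above (not part of the original article), the "Schur function" mentioned there can be illustrated with a standard small example, using the classical bialternant formula:

```latex
\[
  s_\lambda(x_1,\dots,x_n)
  \;=\;
  \frac{\det\left(x_i^{\,\lambda_j + n - j}\right)_{1 \le i,j \le n}}
       {\det\left(x_i^{\,n - j}\right)_{1 \le i,j \le n}}
\]
% Example: n = 2 variables, \lambda = (2,1), a partition of r = 3:
\[
  s_{(2,1)}(x_1,x_2)
  = \frac{x_1^{3} x_2 - x_1 x_2^{3}}{x_1 - x_2}
  = x_1^{2} x_2 + x_1 x_2^{2},
\]
% the character of the corresponding irreducible polynomial representation
% of GL_2, evaluated at the diagonal matrix diag(x_1, x_2).
```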
Zurich lecture In Switzerland, Schur's colleagues Heinz Hopf and George Pólya were informed of the dismissal of Schur in 1935. They tried to help as best they could. On 12 December 1935, the head of the mathematics seminar, Michel Plancherel, invited Schur on behalf of the seminar to give "une série de conférences sur la théorie de la représentation des groupes finis" (a series of lectures on the representation theory of finite groups). At the same time he asked that the formal invitation should come from the school board president Arthur Rohn, "comme le prof. Schur doit obtenir l'autorisation du ministère compétent de donner ces conférences" (since Prof. Schur had to obtain the authorization of the competent ministry to give these lectures). George Pólya had this invitation of the mathematics seminar endorsed by the conference of the Department of Mathematics and Physics on 16 December. Meanwhile, on 14 December the official invitation letter from President Rohn had already been dispatched to Schur. Schur was promised a fee of CHF 500 for his guest lectures. Schur did not reply until 28 January 1936, the day on which he first had the required approval of the relevant authority in hand. He declared himself willing to accept the invitation and envisaged beginning the lectures on 4 February. Schur spent most of the month of February in Switzerland. Before his return to Germany he visited his daughter in Bern for a few days, and on 27 February he returned via Karlsruhe, where his sister lived, to Berlin. In a letter to Pólya from Bern, he concluded with the words: "From Switzerland I take my farewell with a heavy heart." In Berlin, meanwhile, the mathematician and Nazi Ludwig Bieberbach, in a letter dated 20 February 1936, informed the Reich Minister for Science, Art, and Education about Schur's journey, and announced that he wanted to find out what the content of the Zurich lectures had been. Significant students Schur had a total of 26 graduate students, some of whom acquired a mathematical reputation. Among them are Alfred Brauer, University of Berlin (1928) Richard Brauer, University of Berlin (1925) , University of Berlin (1925) Bernhard Neumann, University of Berlin, Cambridge University (1932, 1935) Félix Pollaczek, University of Berlin (1922) Heinz Pruefer, University of Berlin, (1921) Richard Rado, University of Berlin, Cambridge University (1933, 1935) Isaac Jacob Schoenberg, Alexandru Ioan Cuza University of Iaşi (1926) Wilhelm Specht, University of Berlin (1932) Helmut Wielandt, University of Berlin (1935) Legacy Concepts named after Schur Among others, the following concepts are named after Issai Schur: List of things named after Issai Schur Schur algebra Schur complement Schur index Schur indicator Schur multiplier Schur orthogonality relations Schur polynomial Schur product Schur test Schur's inequality Schur's theorem Schur-convex function Schur–Weyl duality Lehmer–Schur algorithm Schur's property for normed spaces. Jordan–Schur theorem Schur–Zassenhaus theorem Schur triple Schur decomposition Schur's lower bound Quotes In his commemorative speech, Alfred Brauer (a PhD student of Schur) spoke about Issai Schur as follows: As a teacher, Schur was excellent. His lectures were very clear, but not always easy and required cooperation. During the winter semester of 1930, the number of students who wanted to attend Schur's number theory lecture was such that the second-largest university lecture hall, with about 500 seats, was too small. His most human characteristics were probably his great modesty, his helpfulness and his human interest in his students.
Heinz Hopf, who had been a Privatdozent in Berlin before his appointment to the ETH in Zurich, greatly appreciated Issai Schur as a mathematician and as a man, as is clear from oral statements and also from letters. This appreciation was entirely mutual: in a letter of 1930 to George Pólya on the occasion of the re-appointment of Hermann Weyl, Schur says of Hopf: "Hopf is a very excellent teacher, a mathematician of strong temperament and strong impact, a master of his discipline, excellently trained in other areas as well. If I have to characterize him as a man, it may suffice to say that I sincerely look forward to every occasion on which I meet him." Schur was, however, known for keeping a correct distance in personal affairs. Hopf's testimony accords with statements by Schur's former students in Berlin, Walter Ledermann and Bernhard Neumann. Publications Notes References Review External links 1875 births 1941 deaths People from Mogilev People from Mogilyovsky Uyezd (Mogilev Governorate) Belarusian Jews Emigrants from the Russian Empire to Germany German people of Belarusian-Jewish descent 19th-century German mathematicians Combinatorialists 20th-century German mathematicians Group theorists Linear algebraists Humboldt University of Berlin alumni Academic staff of the Humboldt University of Berlin Academic staff of the University of Bonn Members of the Prussian Academy of Sciences Corresponding Members of the USSR Academy of Sciences Jewish emigrants from Nazi Germany to Mandatory Palestine Deaths from coronary artery disease Burials at Trumpeldor Cemetery Issai Schur
Issai Schur
[ "Mathematics" ]
3,294
[ "Combinatorialists", "Combinatorics" ]
617,530
https://en.wikipedia.org/wiki/Lead%20glass
Lead glass, commonly called crystal, is a variety of glass in which lead replaces the calcium content of a typical potash glass. Lead glass typically contains 18–40% (by mass) lead(II) oxide (PbO), while modern lead crystal, historically also known as flint glass due to the original silica source, contains a minimum of 24% PbO. Lead glass is often desirable for a variety of uses due to its clarity. In marketing terms it is often called crystal glass. The term lead crystal is, technically, not an accurate term to describe lead glass, because glass lacks a crystalline structure and is instead an amorphous solid. The use of the term remains popular for historical and commercial reasons, but is sometimes changed to simply crystal because of lead's reputation as a toxic substance. The term is retained from the Venetian word cristallo, used to describe the rock crystal (quartz) imitated by Murano glassmakers. This naming convention has been maintained to the present day to describe decorative holloware. Lead crystal glassware was formerly used to store and serve drinks, but due to the health risks of lead, this has become rare. One alternative material is modern crystal glass, in which barium oxide, zinc oxide, or potassium oxide are employed instead of lead oxide. In the European Union, labelling of "crystal" products is regulated by Council Directive 69/493/EEC, which defines four categories, depending on the chemical composition and properties of the material. Only glass products containing at least 24% of lead oxide may be referred to as "lead crystal". Products with less lead oxide, or glass products with other metal oxides used in place of lead oxide, must be labelled "crystalline" or "crystal glass". Properties The addition of lead oxide to glass raises its refractive index and lowers its working temperature and viscosity. The attractive optical properties of lead glass result from the high content of the heavy metal lead. Lead also raises the density of the glass, lead being over seven times as dense as calcium: soda glass has the lowest density, typical lead crystal is substantially denser, and high-lead glass is denser still. The brilliance of lead crystal relies on the high refractive index caused by the lead content. Ordinary glass has a refractive index (n) of about 1.5, while the addition of lead produces a range of up to 1.7 or 1.8. This heightened refractive index also correlates with increased dispersion, which measures the degree to which a medium separates light into its component wavelengths, thus producing a spectrum, just as a prism does. Crystal cutting techniques exploit these properties to create a brilliant, sparkling effect as each cut facet in cut glass reflects and transmits light through the object. The high refractive index is useful for lens making, since a given focal length can be achieved with a thinner lens. However, the dispersion must be corrected by other components of the lens system if it is to be achromatic. The addition of lead oxide to potash glass also reduces its viscosity, rendering it more fluid than ordinary soda glass above its softening temperature. The viscosity of glass varies radically with temperature, but that of lead glass is roughly two orders of magnitude lower than that of ordinary soda glasses across working temperature ranges. From the glassmaker's perspective, this results in two practical developments.
First, lead glass may be worked at a lower temperature, leading to its use in enamelling, and second, clear vessels may be made without trapped air bubbles with less difficulty than ordinary glasses, allowing the manufacture of perfectly clear, flawless objects. When tapped, lead crystal makes a ringing sound, unlike ordinary glasses. Wine glasses were also always valued for the "ring" made when the glasses were clinked. The sound was better when a large quantity of lead oxide was present in the glassmaking material, as in the British and Irish wine glasses of the 17th–19th centuries with their "rich bell-notes of F and G sharp". Consumers still rely on this property to distinguish it from cheaper glasses. Emil Deeg published a major study of the ringing of lead crystal in 1958. Since the potassium ions are bound more tightly in a lead–silica matrix than in a soda–lime glass, the former absorbs more energy when struck. This causes the lead crystal to oscillate, thereby producing its characteristic sound. Lead also increases the solubility of tin, copper, and antimony, leading to its use in colored enamels and glazes. The low viscosity of the lead glass melt is the reason for the typically high lead oxide content of glass solders. The presence of lead is exploited in glasses that absorb gamma radiation and X-rays, used in radiation shielding as a form of lead shielding (e.g. in cathode-ray tubes, thus lowering the exposure of the viewer to soft X-rays). In particle physics, the combination of the low radiation length resulting from the high density and presence of heavy nuclei with the high refractive index, which leads to both pronounced Cherenkov radiation and containment of the Cherenkov light by total internal reflection, makes lead glass one of the prominent tools for photon detection by means of electromagnetic showers. The large ionic radius of the Pb2+ ion renders it highly immobile in the matrix and hinders the movement of other ions; lead glasses therefore have high electrical resistance, about two orders of magnitude higher than soda–lime glass (roughly 10^8.5 vs 10^6.5 ohm·cm, DC). Lead-containing glass is frequently used in light fixtures. History Lead may be introduced into glass either as an ingredient of the primary melt or added to preformed leadless glass or frit. The lead oxide used in lead glass could be obtained from a variety of sources. In Europe, galena, lead sulfide, was widely available, which could be smelted to produce metallic lead. The lead metal would be calcined to form lead oxide by roasting it and scraping off the litharge. In the medieval period lead metal could be obtained through recycling from abandoned Roman sites and plumbing, even from church roofs. Metallic lead was demanded in quantity for silver cupellation, and the resulting litharge could be used directly by glassmakers. Lead was also used for ceramic lead glazes. This material interdependence suggests a close working relationship between potters, glassmakers, and metalworkers. Glasses with lead oxide content first appeared in Mesopotamia, the birthplace of the glass industry. The earliest known example is a blue glass fragment from Nippur dated to 1400 BC containing 3.66% PbO. Glass is mentioned in clay tablets from the reign of Assurbanipal (668–631 BC), and a recipe for lead glaze appears in a Babylonian tablet of 1700 BC. A red sealing-wax cake found in the Burnt Palace at Nimrud, from the early 6th century BC, contains 10% PbO.
These low values suggest that lead oxide may not have been consciously added, and was certainly not used as the primary fluxing agent in ancient glasses. Lead glass also occurs in Han-period China (206 BC – 220 AD). There, it was cast to imitate jade, both for ritual objects such as big and small figures, as well as jewellery and a limited range of vessels. Since glass first occurs at such a late date in China, it is thought that the technology was brought along the Silk Road by glassworkers from the Middle East. The fundamental compositional difference between Western silica-natron glass and the unique Chinese lead glass, however, may indicate an autonomous development. In medieval and early modern Europe, lead glass was used as a base in coloured glasses, specifically in mosaic tesserae, enamels, stained-glass painting, and bijouterie, where it was used to imitate precious stones. Several textual sources describing lead glass survive. In the late 11th-early 12th century, Schedula Diversarum Artium (List of Sundry Crafts), the author known as "Theophilus Presbyter" describes its use as imitation gemstone, and the title of a lost chapter of the work mentions the use of lead in glass. The 12–13th century pseudonymous "Heraclius" details the manufacture of lead enamel and its use for window painting in his De coloribus et artibus Romanorum (Of Hues and Crafts of the Romans). This refers to lead glass as "Jewish glass", perhaps indicating its transmission to Europe. A manuscript preserved in the Biblioteca Marciana, Venice, describes the use of lead oxide in enamels and includes recipes for calcining lead to form the oxide. Lead glass was ideally suited for enamelling vessels and windows owing to its lower working temperature than the forest glass of the body. Antonio Neri devoted book four of his L’Arte Vetraria ("The Art of Glass-making", 1612) to lead glass. In this first systematic treatise on glass, he again refers to the use of lead glass in enamels, glassware, and for the imitation of precious stones. Christopher Merrett translated this into English in 1662 (The Art of Glass), paving the way for the production of English lead crystal glass by George Ravenscroft. George Ravenscroft (1618–1681) was the first to produce clear lead crystal glassware on an industrial scale. The son of a merchant with close ties to Venice, Ravenscroft had the cultural and financial resources necessary to revolutionise the glass trade, setting the basis from which England overtook Venice and Bohemia as the centre of the glass industry in the eighteenth and nineteenth centuries. With the aid of Venetian glassmakers, especially da Costa, and under the auspices of the Worshipful Company of Glass Sellers of London, Ravenscroft sought to find an alternative to Venetian cristallo. His use of flint as the silica source has led to the term flint glass to describe these crystal glasses, despite his later switch to sand. At first, his glasses tended to crizzle, developing a network of small cracks destroying its transparency, which was eventually overcome by replacing some of the potash flux with lead oxide to the melt, up to 30%. Crizzling results from the destruction of the glass network by an excess of alkali, and may be caused by excess humidity as well as inherent defects in glass composition. He was granted a protective patent in 1673, where production moved from his glasshouse in the precinct of the Savoy, London, to the seclusion of Henley-on-Thames. 
In 1676, having apparently overcome the crizzling problem, Ravenscroft was granted the use of a raven's head seal as a guaranty of quality. In 1681, the year of his death, the patent expired and operations quickly developed among several firms, where by 1696 twenty-seven of the eighty-eight glasshouses in England, especially at London and Bristol, were producing flint glass containing 30–35% PbO. At this period, glass was sold by weight, and the typical forms were rather heavy and solid with minimal decoration. Such was its success on the international market, however, that in 1746, the British Government imposed a lucrative tax by weight. Rather than drastically reduce the lead content of their glass, manufacturers responded by creating highly decorated, smaller, more delicate forms, often with hollow stems, known to collectors today as Excise glasses. In 1780, the government granted Ireland free trade in glass without taxation. English labour and capital then shifted to Dublin and Belfast, and new glassworks specialising in cut glass were installed in Cork and Waterford. In 1825, the tax was renewed, and gradually the industry declined until the mid-nineteenth century, when the tax was finally repealed. From the 18th century, English lead glass became popular throughout Europe, and was ideally suited to the new taste for wheel-cut glass decoration perfected on the Continent owing to its relatively soft properties. In Holland, local engraving masters such as David Wolff and Frans Greenwood stippled imported English glassware, a style that remained popular through the eighteenth century. Such was its popularity in Holland that the first Continental production of lead-crystal glass began there, probably as the result of imported English workers. Imitating lead-crystal à la façon d’Angleterre presented technical difficulties, as the best results were obtained with covered pots in a coal-fired furnace, a particularly English process requiring specialised cone-furnaces. Towards the end of the eighteenth century, lead-crystal glass was being produced in France, Hungary, Germany, and Norway. By 1800, Irish lead crystal had overtaken lime-potash glasses on the Continent, and traditional glassmaking centres in Bohemia began to focus on colored glasses rather than compete directly against it. The development of lead glass continued through the twentieth century, when in 1932 scientists at the Corning Glassworks, New York State, developed a new lead glass of high optical clarity. This became the focus of Steuben Glass Works, a division of Corning, which produced decorative vases, bowls, and glasses in Art Deco style. Lead-crystal continues to be used in industrial and decorative applications. Lead glazes The fluxing and refractive properties valued for lead glass also make it attractive as a pottery or ceramic glaze. Lead glazes first appear in first century BC to first century AD Roman wares, and occur nearly simultaneously in China. They were very high in lead, 45–60% PbO, with a very low alkali content, less than 2%. From the Roman period, they remained popular through the Byzantine and Islamic periods in the Near East, on pottery vessels and tiles throughout medieval Europe, and up to the present day. In China, similar glazes were used from the twelfth century for colored enamels on stoneware, and on porcelain from the fourteenth century. These could be applied in three different ways. 
Lead could be added directly to a ceramic body in the form of a lead compound in suspension, either from galena (PbS), red lead (Pb3O4), white lead (2PbCO3·Pb(OH)2), or lead oxide (PbO). The second method involves mixing the lead compound with silica, which is then placed in suspension and applied directly. The third method involves fritting the lead compound with silica, powdering the mixture, and suspending and applying it. The method used on a particular vessel may be deduced by analysing the interaction layer between the glaze and the ceramic body microscopically. Tin-opacified glazes appear in Iraq in the eighth century AD. These originally contained 1–2% PbO; by the eleventh century, high-lead glazes had developed, typically containing 20–40% PbO and 5–12% alkali. These were used throughout Europe and the Near East, especially in Iznik ware, and continue to be used today. Glazes with even higher lead content occur in Spanish and Italian maiolica, with up to 55% PbO and as low as 3% alkali. Adding lead to the melt allows the formation of tin oxide more readily than in an alkali glaze: tin oxide precipitates into crystals in the glaze as it cools, creating its opacity. The use of lead glaze has several advantages over alkali glazes in addition to their greater optical refractivity. Lead compounds in suspension may be added directly to the ceramic body. Alkali glazes must first be mixed with silica and fritted prior to use, since they are soluble in water, requiring additional labor. A successful glaze must not crawl, or peel away from the pottery surface upon cooling, leaving areas of unglazed ceramic. Lead reduces this risk by reducing the surface tension of the glaze. It must not craze, forming a network of cracks, caused when the thermal contraction of the glaze and the ceramic body do not match properly. Ideally, the glaze contraction should be 5–15% less than the body contraction, as glazes are stronger under compression than under tension. A high-lead glaze has a linear expansion coefficient of between 5 and 7×10⁻⁶/°C, compared to 9 to 10×10⁻⁶/°C for alkali glazes. Those of earthenware ceramics vary between 3 and 5×10⁻⁶/°C for non-calcareous bodies and 5 to 7×10⁻⁶/°C for calcareous clays, or those containing 15–25% CaO. Therefore, the thermal contraction of lead glaze matches that of the ceramic more closely than an alkali glaze, rendering it less prone to crazing. A glaze should also have a low enough viscosity to prevent the formation of pinholes as trapped gases escape during firing, typically between 900 and 1100 °C, but not so low as to run off. The relatively low viscosity of lead glaze mitigates this issue. It may also have been cheaper to produce than alkali glazes. Lead glass and glazes have a long and complex history, and continue to play new roles in industry and technology today.
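As a purely illustrative aid to the thermal-expansion discussion above (using the 5–15% rule of thumb and the coefficient ranges quoted in the article), here is a small Python sketch comparing hypothetical glaze/body pairings; the helper name and the sample values are assumptions picked from within those ranges, not measured data.

```python
def crazing_check(alpha_glaze, alpha_body, delta_T):
    """Compare linear thermal contraction of glaze and body on cooling.

    Rule of thumb from the text: glaze contraction should be roughly 5-15%
    less than body contraction, since glazes are stronger under compression
    than under tension. alpha values are linear expansion coefficients
    (1/degC); delta_T is the cooling range in degC.
    """
    glaze_contraction = alpha_glaze * delta_T
    body_contraction = alpha_body * delta_T
    ratio = glaze_contraction / body_contraction
    if ratio > 1.0:
        return ratio, "glaze ends up in tension -> likely to craze"
    if ratio >= 0.85:
        return ratio, "within the ~5-15% margin -> good fit"
    return ratio, "glaze contracts much less -> risk of other defects"

# Sample mid-range values taken from the ranges quoted above (assumed picks):
lead_glaze = 5.5e-6       # high-lead glaze, 5-7 x 10^-6 /degC
alkali_glaze = 9.5e-6     # alkali glaze, 9-10 x 10^-6 /degC
calcareous_body = 6e-6    # calcareous earthenware body, 5-7 x 10^-6 /degC

for name, alpha in (("lead glaze", lead_glaze), ("alkali glaze", alkali_glaze)):
    ratio, verdict = crazing_check(alpha, calcareous_body, delta_T=1000)
    print(f"{name}: glaze/body contraction ratio {ratio:.2f} -> {verdict}")
```

Under these assumed values the lead glaze falls inside the margin while the alkali glaze does not, which is the point the article makes about lead glazes being less prone to crazing.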
In cut glass, which has been hand- or machine-cut with facets, the presence of lead also makes the glass softer and easier to cut. Crystal can consist of up to 35% lead, at which point it has the most sparkle. Safety Several studies have demonstrated that serving food or drink in glassware containing lead oxide can cause lead to leach into the contents, even when the glassware has not been used for storage. Due to an inability to "indicate a threshold for the key effects of lead," a 2011 World Health Organization committee on food additives "concluded that it was not possible to establish a new PTWI (provisional tolerable weekly intake) that would be considered health protective." The amount of lead released from lead glass increases with the acidity of the substance being served. Vinegar, for example, has been shown to cause more rapid leaching than white wine, as vinegar is more acidic. Citrus juices and other acidic drinks leach lead from crystal as effectively as alcoholic beverages. Daily usage of lead crystalware (without longer-term storage) was found to add up to 14.5 μg of lead from drinking a 350 ml cola beverage. The amount of lead released into a food or drink increases with the amount of time it stays in the vessel. In a study performed at North Carolina State University, the amount of lead migration was measured for port wine stored in lead crystal decanters. After two days, lead levels were 89 μg/L (micrograms per liter). After four months, lead levels were between 2,000 and 5,000 μg/L. White wine doubled its lead content within an hour of storage and tripled it within four hours. Some brandy stored in lead crystal for over five years had lead levels around 20,000 μg/L. Lead leaching from the same decanter decreases with repeated uses. This finding is "consistent with ceramic chemistry theory, which predicts that leaching of lead from crystal is self-limiting exponentially as a function of increasing distance from the crystal-liquid interface." It has been proposed that the historic association of gout with the upper classes in Europe and America was, in part, caused by the extensive use of lead crystal decanters to store fortified wines and whisky. Statistical evidence linking gout to lead poisoning has been published. See also Steuben Crystal Waterford Crystal Edinburgh Crystal Swarovski Ajka Crystal List of indices of refraction Lead Hot cell Val Saint Lambert References Sources Glass compositions Lead(II) compounds
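To illustrate the connection drawn above between refractive index and "sparkle" via total internal reflection (an illustration only, using the representative index values quoted in the article), a short Python sketch computing the critical angle at a glass/air interface:

```python
import math

def critical_angle_deg(n_glass, n_outside=1.0):
    """Critical angle for total internal reflection at a glass/air interface.

    Light hitting an internal facet at an angle (from the normal) greater
    than this stays inside the glass, which is part of why a higher
    refractive index gives cut lead crystal more 'sparkle'.
    """
    return math.degrees(math.asin(n_outside / n_glass))

for label, n in (("ordinary glass", 1.5), ("lead crystal", 1.7)):
    print(f"{label} (n = {n}): critical angle ~ {critical_angle_deg(n):.1f} degrees")
# ordinary glass: ~41.8 degrees; lead crystal: ~36.0 degrees -> a wider range
# of internal angles undergoes total internal reflection in the higher-index glass.
```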
Lead glass
[ "Chemistry" ]
4,222
[ "Glass compositions", "Glass chemistry" ]
617,565
https://en.wikipedia.org/wiki/GenBank
The GenBank sequence database is an open access, annotated collection of all publicly available nucleotide sequences and their protein translations. It is produced and maintained by the National Center for Biotechnology Information (NCBI; a part of the National Institutes of Health in the United States) as part of the International Nucleotide Sequence Database Collaboration (INSDC). In October 2024, GenBank contained 34 trillion base pairs from over 4.7 billion nucleotide sequences and more than 580,000 formally described species. The database was started in 1982 by Walter Goad at Los Alamos National Laboratory. GenBank has become an important database for research in biological fields and has grown in recent years at an exponential rate, doubling roughly every 18 months. GenBank is built by direct submissions from individual laboratories, as well as from bulk submissions from large-scale sequencing centers. Submissions Only original sequences can be submitted to GenBank. Direct submissions are made to GenBank using BankIt, which is a Web-based form, or the stand-alone submission program, Sequin. Upon receipt of a sequence submission, the GenBank staff examines the originality of the data, assigns an accession number to the sequence, and performs quality assurance checks. The submissions are then released to the public database, where the entries are retrievable by Entrez or downloadable by FTP. Bulk submissions of Expressed Sequence Tag (EST), Sequence-tagged site (STS), Genome Survey Sequence (GSS), and High-Throughput Genome Sequence (HTGS) data are most often submitted by large-scale sequencing centers. The GenBank direct submissions group also processes complete microbial genome sequences. History Walter Goad of the Theoretical Biology and Biophysics Group at Los Alamos National Laboratory (LANL) and others established the Los Alamos Sequence Database in 1979, which culminated in 1982 with the creation of the public GenBank. Funding was provided by the National Institutes of Health, the National Science Foundation, the Department of Energy, and the Department of Defense. LANL collaborated on GenBank with the firm Bolt, Beranek, and Newman, and by the end of 1983 more than 2,000 sequences were stored in it. In the mid-1980s, the IntelliGenetics bioinformatics company at Stanford University managed the GenBank project in collaboration with LANL. As one of the earliest bioinformatics community projects on the Internet, the GenBank project started BIOSCI/Bionet news groups for promoting open access communications among bioscientists. Between 1989 and 1992, the GenBank project transitioned to the newly created National Center for Biotechnology Information (NCBI). Growth The GenBank release notes for release 250.0 (June 2022) state that "from 1982 to the present, the number of bases in GenBank has doubled approximately every 18 months". As of 15 June 2022, GenBank release 250.0 has over 239 million loci, with 1.39 trillion nucleotide bases from 239 million reported sequences. The GenBank database includes additional data sets that are constructed mechanically from the main sequence data collection, and therefore are excluded from this count. Limitations An analysis of GenBank and other services for the molecular identification of clinical blood culture isolates using 16S rRNA sequences showed that such analyses were more discriminative when GenBank was combined with other services such as EzTaxon-e and the BIBI databases.
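Evaluating identification problems of this kind generally begins with pulling the records themselves. The following is a minimal sketch of programmatic retrieval from GenBank through NCBI's Entrez interface using the Biopython library; the e-mail address and the accession number are placeholders chosen only for illustration.

```python
# Minimal sketch: fetch a single GenBank nucleotide record via Entrez (Biopython).
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # NCBI asks callers to identify themselves

handle = Entrez.efetch(db="nucleotide", id="U49845",   # placeholder accession
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, record.description)
print(len(record.seq), "bases,", len(record.features), "annotated features")
```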
GenBank may contain sequences wrongly assigned to a particular species, because the initial identification of the organism was wrong. A recent study showed that 75% of the mitochondrial cytochrome c oxidase subunit I sequences attributed to the fish Nemipterus mesoprion were wrongly assigned, resulting from continued use of sequences from initially misidentified individuals. The authors provide recommendations on how to avoid further distribution of publicly available sequences with incorrect scientific names. Numerous published manuscripts have identified erroneous sequences on GenBank. These are not only incorrect species assignments (which can have different causes) but also include chimeras and accession records with sequencing errors. A recent manuscript on the quality of all cytochrome b records of birds further showed that 45% of the identified erroneous records lack a voucher specimen, which prevents reassessment of the species identification. Another problem is that sequence records are often submitted as anonymous sequences without species names (e.g. as "Pelomedusa sp. A CK-2014") because the species are either unknown or withheld for publication purposes. However, even after the species have been identified or published, these sequence records are not updated and thus may cause ongoing confusion. See also Ensembl Human Protein Reference Database (HPRD) Sequence analysis UniProt List of sequenced eukaryotic genomes List of sequenced archaeal genomes RefSeq — the Reference Sequence Database Geneious — includes a GenBank Submission Tool Open science data Open Standard References External links GenBank Example sequence record, for hemoglobin beta BankIt Sequin — a stand-alone software tool developed by the NCBI for submitting and updating entries to the GenBank sequence database. EMBOSS — free, open source software for molecular biology GenBank, RefSeq, TPA and UniProt: What's in a Name? National Institutes of Health Genetics databases Genome databases Bioinformatics Biological databases
GenBank
[ "Engineering", "Biology" ]
1,094
[ "Bioinformatics", "Biological engineering", "Biological databases" ]
617,624
https://en.wikipedia.org/wiki/Cerussite
Cerussite (also known as lead carbonate or white lead ore) is a mineral consisting of lead carbonate with the chemical formula PbCO3, and is an important ore of lead. The name is from the Latin cerussa, white lead. Cerussa nativa was mentioned by Conrad Gessner in 1565, and in 1832 F. S. Beudant applied the name céruse to the mineral, whilst the present form, cerussite, is due to W. Haidinger (1845). Miners' names in early use were lead-spar and white-lead-ore. Cerussite crystallizes in the orthorhombic crystal system and is isomorphous with aragonite. Like aragonite it is very frequently twinned, the compound crystals being pseudo-hexagonal in form. Three crystals are usually twinned together on two faces of the prism, producing six-rayed stellate groups with the individual crystals intercrossing at angles of nearly 60°. Crystals are of frequent occurrence and they usually have very bright and smooth faces. The mineral also occurs in compact granular masses, and sometimes in fibrous forms. The mineral is usually colorless or white, sometimes grey or greenish in tint and varies from transparent to translucent with an adamantine lustre. It is very brittle, and has a conchoidal fracture. It has a Mohs hardness of 3 to 3.75 and a specific gravity of 6.5. A variety containing 7% of zinc carbonate, replacing lead carbonate, is known as iglesiasite, from Iglesias in Sardinia, where it is found. The mineral may be readily recognized by its characteristic twinning, in conjunction with the adamantine lustre and high specific gravity. It dissolves with effervescence in dilute nitric acid. A blowpipe test will cause it to fuse very readily, and gives indications for lead. Finely crystallized specimens have been obtained from the Friedrichssegen mine in Lahnstein in Rhineland-Palatinate, Johanngeorgenstadt in Saxony, Stříbro in the Czech Republic, Phoenixville in Pennsylvania, Broken Hill in New South Wales, and several other localities. Delicate acicular crystals of considerable length were found long ago in the Pentire Glaze mine near St Minver in Cornwall. Cerussite is often found in considerable quantities, and has a lead content of up to 77.5%. Lead(II) carbonate is practically insoluble in neutral water (solubility product [Pb2+][CO32−] ≈ 1.5×10−13 at 25 °C), but will dissolve in dilute acids. Commercial uses "White lead" is the key ingredient in (now discontinued) lead paints. Ingestion of lead-based paint chips is the most common cause of lead poisoning in children. Both "white lead" and lead acetate have been used in cosmetics throughout history, though this practice has ceased in Western countries. Gallery See also Venetian ceruse – Cerussite-based cosmetic popularly thought to be worn by Elizabeth I of England References External links Mineral galleries Carbonate minerals Gemstones Lead minerals Luminescent minerals Minerals in space group 62 Orthorhombic minerals Aragonite group Minerals described in 1845
Cerussite
[ "Physics", "Chemistry" ]
676
[ "Luminescence", "Luminescent minerals", "Materials", "Gemstones", "Matter" ]
617,777
https://en.wikipedia.org/wiki/Electron%20deficiency
In chemistry, electron deficiency (and electron-deficient) is jargon that is used in two contexts: chemical species that violate the octet rule because they have too few valence electrons and species that happen to follow the octet rule but have electron-acceptor properties, forming donor-acceptor charge-transfer salts. Octet rule violations Traditionally, "electron-deficiency" is used as a general descriptor for boron hydrides and other molecules which do not have enough valence electrons to form localized (2-centre 2-electron) bonds joining all atoms. For example, diborane (B2H6) would require a minimum of 7 localized bonds with 14 electrons to join all 8 atoms, but there are only 12 valence electrons. A similar situation exists in trimethylaluminium. The electron deficiency in such compounds is similar to metallic bonding. Electron-acceptor molecules Alternatively, electron-deficiency describes molecules or ions that function as electron acceptors. Such electron-deficient species obey the octet rule, but they have (usually mild) oxidizing properties. 1,3,5-Trinitrobenzene and related polynitrated aromatic compounds are often described as electron-deficient. Electron deficiency can be measured by linear free-energy relationships: "a strongly negative ρ value indicates a large electron demand at the reaction center, from which it may be concluded that a highly electron-deficient center, perhaps an incipient carbocation, is involved." References Chemical bonding
Electron deficiency
[ "Physics", "Chemistry", "Materials_science" ]
315
[ "Chemical bonding", "Condensed matter physics", "nan" ]
617,831
https://en.wikipedia.org/wiki/Semicircle
In mathematics (and more specifically geometry), a semicircle is a one-dimensional locus of points that forms half of a circle. It is a circular arc that measures 180° (equivalently, π radians, or a half-turn). It only has one line of symmetry (reflection symmetry). In non-technical usage, the term "semicircle" is sometimes used to refer either to a closed curve that also includes the diameter segment from one end of the arc to the other, or to the half-disk, which is a two-dimensional geometric region that further includes all the interior points. By Thales' theorem, any triangle inscribed in a semicircle with a vertex at each of the endpoints of the semicircle and the third vertex elsewhere on the semicircle is a right triangle, with a right angle at the third vertex. All lines intersecting the semicircle perpendicularly are concurrent at the center of the circle containing the given semicircle. Arithmetic and geometric means A semicircle can be used to construct the arithmetic and geometric means of two lengths using straight-edge and compass. For a semicircle with a diameter of a + b, the length of its radius is the arithmetic mean of a and b (since the radius is half of the diameter). The geometric mean can be found by dividing the diameter into two segments of lengths a and b, and then connecting their common endpoint to the semicircle with a segment perpendicular to the diameter. The length of the resulting segment is the geometric mean √(ab). This can be proven by applying the Pythagorean theorem to three similar right triangles, each having as vertices the point where the perpendicular touches the semicircle and two of the three endpoints of the segments of lengths a and b. The construction of the geometric mean can be used to transform any rectangle into a square of the same area, a problem called the quadrature of a rectangle. The side length of the square is the geometric mean of the side lengths of the rectangle. More generally, it is used as a lemma in a general method for transforming any polygonal shape into a similar copy of itself with the area of any other given polygonal shape. Farey diagram The Farey sequence of order n is the sequence of completely reduced fractions which when in lowest terms have denominators less than or equal to n, arranged in order of increasing size. With a restricted definition, each Farey sequence starts with the value 0, denoted by the fraction 0/1, and ends with the value 1, denoted by the fraction 1/1. Ford circles can be constructed tangent to their neighbours, and to the x-axis at these points. Semicircles joining adjacent points on the x-axis pass through the points of contact at right angles. Equation The equation of a semicircle with midpoint (x₀, y₀) on the diameter between its endpoints, and which is entirely concave from below, is y = y₀ + √(r² − (x − x₀)²). If it is entirely concave from above, the equation is y = y₀ − √(r² − (x − x₀)²). Arbelos An arbelos is a region in the plane bounded by three semicircles connected at their endpoints, all on the same side of a straight line (the baseline) that contains their diameters. See also Amphitheater Archimedes' twin circles Archimedes' quadruplets Great semicircle Salinon Wigner semicircle distribution References External links Elementary geometry
Semicircle
[ "Mathematics" ]
693
[ "Elementary mathematics", "Elementary geometry" ]
617,947
https://en.wikipedia.org/wiki/Weather%20lore
Weather lore is the body of informal folklore related to the prediction of the weather and its greater meaning. Much like regular folklore, weather lore is passed down through speech and writing from normal people without the use of external measuring instruments. The origin of weather lore can be dated back to primeval men and their usage of star studying in navigation. However, more recently during the Late Middle Ages, the works of two Greek philosopher-poets, Theophrastus of Eresus on Lesbos and Aratus of Macedonia, are known for shaping the prediction of weather. Theophrastus and Aratus collated their works in two main collections for weather lore: On Weather Signs and On Winds. These were used for helping farmers with harvest, merchants for trade and determining the weather the next day. Astrology and weather lore have been closely interlinked for many years - with each planet often being associated with a weather state. For example, Mars is red and must therefore be hot and dry. Prevalent in ancient Roman thought, astrologists used weather lore to teach commoners of the star and cloud formations and how they can be used to see the future. From this, three main schools of weather lore thoughts developed during the Late Middle Ages as Astrology became more popular throughout Europe. One which related to winds and clouds and had some scientific basis. A second type connected with saints' days possessed doubtful validity but was quite popular nonetheless during the Middle Ages. A third type treated the behaviour of birds and animals, which has been found to be controlled more by past and present weather rather than to be a true indication of the future. Before the invention of temperature measuring devices, such as the mercury thermometer, it was difficult to gather predictive, numerical data. Therefore, communities used their surroundings to predict and explain the weather in upcoming days. Today, the majority of weather lore can be found in proverbs. However, much of the weather lore fantasy is still prevalent in today's seasonal calendar, with mentions such as the annual saints' days, the passage of the months, and weather predictions made from animal behaviour. The creation of the astrological signs in Babylonian mythology can also be attributed to the study of stars and its association with weather lore. The occurrence of 'weather' Weather can be defined as the constant shift in conditions relating to temperature, cloudiness, rainfall etc. at a particular time and place. A significant portion of weather occurs in Earth's middle latitudes, between roughly 30° to 60° North and South. A great percentage of the world's population lives in the equatorial regions, but for the most part, these regions do not experience weather as it is understood by this definition. The Sahara in northern Africa, for instance, is almost uniformly hot, sunny and dry all year long especially due to the non-stop presence of high atmospheric pressure aloft, whereas weather trends on the Indian subcontinent and in the western Pacific, for instance, the monsoonal belt, occur gradually over the very long term, and the diurnal weather patterns remain constant. Weather folklore, therefore, refers to this mid-latitude region of daily variability. While most of it applies equally to the Southern Hemisphere, the Southern Hemisphere resident may need to take into account the fact that weather systems rotate opposite to those in the North. 
For instance, the "crossed winds" rule (see below) must be reversed for the Australian reader. Common proverbs When clouds look like black smoke When clouds look like black smoke A wise man will put on his cloak Thick, moisture-laden storm clouds absorb sunlight. It gives them an appearance that somewhat resembles black smoke. Red sky at night Red sky at night, shepherd's delight. Red sky in the morning, shepherd's warning. (In a common variation, "shepherd" is replaced by "sailor") A red sky – in the morning or evening – is a result of high pressure air in the atmosphere trapping particles of dust or soot. Air molecules scatter the shorter blue wavelengths of sunlight, but particles of dust, soot and other aerosols scatter the longer red wavelength of sunlight in a process called Rayleigh scattering. At sunrise and sunset, the sun is lower in the sky causing the sunlight to travel through more of the atmosphere so scattering more light. This effect is further enhanced when there are at least some high level clouds to reflect this light back to the ground. When weather systems predominantly move from west to east, a red sky at night indicates that the high pressure air (and better weather) is westwards. In the morning the light is eastwards, and so a red sky then indicates the high pressure (and better weather) has already passed, and an area of low pressure is following behind. Low-pressure regions When the wind is blowing in the North No fisherman should set forth, When the wind is blowing in the East, 'Tis not fit for man nor beast, When the wind is blowing in the South It brings the food over the fish's mouth, When the wind is blowing in the West, That is when the fishing's best! In western European seas, this description of wind direction is an excellent illustration of how the weather events of an active low pressure area present themselves. With the approach of a low, easterly winds typically pick up. These gusty winds can be unpleasant for a number of reasons; they are often uncomfortably warm, dry, and dusty in the summer and bitterly cold in the winter. Northerly winds, which follow around a low, are cold and blustery. Sailing in conditions of northerly winds requires expertise and a boat capable of handling heavy waves. Southerly winds usually bring warm temperatures, and though they may not necessarily feed the fish, they do provide pleasant fishing weather. Wind and weather observations will be different for a low passing to the north of the observer than for one passing to the south. When a low passes to the north, the winds typically pick up from the east, swing to southerly (possibly accompanied by light precipitation, usually not) with the passage of the low's warm front, and then switch to northwesterly or westerly as the cold front passes. Typically, if there is any heavy precipitation, it will accompany the passage of the cold front. When a low passes to the south, on the other hand, winds will initially pick up from the east, but will gradually shift to northerly. Overcast skies and steady precipitation often occur as the center of the low passes due south, but skies will clear and winds will gradually become westerly as the low moves off to the east. No observer will experience all the weather elements of a low in a single passage. Calm conditions No weather is ill if the wind be still. Calm conditions, especially with clear skies, indicate the dominance of a high pressure area. 
Because highs are broad regions of descending air, they discourage the formation of phenomena typically associated with weather, such as clouds, wind, and precipitation. Calm conditions, though, may also result from a circumstance known as "the calm before the storm," in which a large thunderstorm cell to the west may be updrafting the westerly surface wind before it can arrive locally. This situation is readily identifiable by looking to the west – such an approaching storm will be close enough to be unmistakable. In winter, though, calm air and clear skies may signal the presence of an Arctic high, typically accompanied by very cold air, and it is difficult to imagine describing a temperature of –35 °C (–31 °F) as pleasant. A ring around the Moon When halo rings the Moon or Sun, rain's approaching on the run A halo around the Sun or Moon is caused by the refraction of that body's light by ice crystals at high altitude. Such high-level moisture is a precursor to moisture moving in at increasingly lower levels, and is a good indicator that an active weather system is on its way. Halos typically evolve into what is known as "milk sky", when the sky appears clear, but the typical blue is either washed-out or barely noticeable. This high, thick cirrostratus cloud is a clear indicator of an approaching low. In the coldest days of winter, a halo around the Sun is evidence of very cold and typically clear air at and above the surface. But sun dogs are indicators that weather conditions are likely to change in the next 18 to 36 hours. Humidity indicators When windows won't open and the salt clogs the shaker, The weather will favour the umbrella maker! Moisture in the air causes wood to swell, making doors and windows sticky, and salt is a very effective absorber of moisture. With a high level of moisture in the air, the likelihood of precipitation is increased. The magnesium carbonate and later calcium silicate in iodized salt acts as an anti-clumping agent in humid conditions, leading to Morton Salt's umbrella girl logo and slogan "When it rains, it pours". Fog A summer fog for fair, A winter fog for rain. A fact most everywhere, In valley or on plain. Fog is formed when the air cools enough that the vapour pressure encourages condensation over evaporation. In order for the air to be cool on a summer night, the sky must be clear, so excess heat can be radiated into space. Cloudy skies act like a blanket, absorbing and reradiating the heat, keeping it in. So if it is cool enough (and clear enough) for fog to form, it will probably be clear the next day. Winter fog is the result of two entirely different circumstances. Above the ocean or a large lake, air is typically more humid than above land. When the humid air moves over cold land, it will form fog and precipitation. (To the east of the North American Great Lakes, this is a common phenomenon, and is known as the "lake effect.") In northerly climates, ice fog may form when the temperature drops substantially below freezing. It is almost exclusively an urban phenomenon, when the air is so cold that any vapor pressure results in condensation, and additional vapour emitted by automobiles, household furnaces, and industrial plants simply accumulates as fog. Cloud movement If clouds move against the wind, rain will follow. This rule may be true under a few special circumstances, otherwise it is false. 
By standing with one's back to the ground-level wind and observing the movement of the clouds, it is possible to determine whether the weather will improve or deteriorate. For the Northern Hemisphere, it works like this: if the upper-level clouds are moving from the right, a low-pressure area has passed and the weather will improve; if from the left, a low-pressure area is arriving and the weather will deteriorate. (Reverse for the Southern Hemisphere.) This is known as the "crossed-winds" rule. Clouds traveling parallel to but against the wind may indicate a thunderstorm approaching. Outflow winds typically blow opposite to the updraft zone, and clouds carried in the upper-level wind will appear to be moving against the surface wind. However, if such a storm is in the offing, it is not necessary to observe the cloud motions to know rain is a good possibility. The nature of airflows directly at a frontal boundary can also create conditions in which lower winds contradict the motions of upper clouds, and the passage of a frontal boundary is often marked by precipitation. Most often, however, this situation occurs in the lee of a low-pressure area, to the north of the frontal zones and convergence region, and does not indicate a change in weather, but rather that the weather, fair or showery, will remain so for a period of hours at least. Fallibility of lore One of the problems in testing the veracity of traditions about the weather is the wide variety to be found in the details of sayings and traditions. Some variations are regional, while others exhibit less of a pattern. Empirical studies One case where weather lore has been studied for reliability against actual weather observations is the Groundhog Day lore. It predicts that if the groundhog sees its shadow on this day (February 2), six weeks of winter remain. One analysis concluded the creature demonstrated no ability to predict. Other studies gave accuracy percentages, but with differing figures, and some of the numbers were only slightly better than a hazarded guess (33% accurate), according to one source. In other words, there is no appreciable correlation between cloud cover on that day and the imminence of springlike weather. Some meteorological basis has been suggested, but the mechanism is fuzzy, and fixing a precise date compromises its effectiveness. Calendrical lore There is weather lore marked by dates on the year's calendar. January The Hispanic tradition of cabañuelas predicts the weather for the year based on the first 12, 18 or 24 days of January or August. February There is weather lore around February 2, known as Candlemas, Brigid's Day, or St. Blaise's Day (St. Blaze's Day). One French lore says that if it rains on Candlemas (Chandeleur) there will be forty more days of rain: Quand il pleut pour la Chandeleur, il pleut pendant quarante jours. Groundhog Day also falls on this day. Candlemas and animals Groundhog Day, observed in the U.S. and Canada, also falls on February 2 and is thought to derive from the Candlemas weather lore in Europe, particularly the German version, which features the badger as the predictor. One such German weather rhyme translates as: If the badger is in the sun at Candlemas, he will have to go back into his hole for another four weeks. There are also French counterparts.
One for Saint-Vallier in Lorraine states: If it is fair weather on Candlemas, the bear returns to its cave for six weeks And another from Courbesseaux says that if it is sunny on Candlemas the wolf returns to its cave for six weeks, and if not, for forty days. In French Canada, it may be a marmot or groundhog (siffleux), bear, skunk, otter etc. which if it sees its shadow on Candlemas, causes winter to prolong for 40 days. English traditional weather lore recites, "If Candlemas Day be fair and bright Winter will have another fight If Candlemas Day brings cloud and rain Winter won't come again" March Lion and Lamb An English proverb describes typical March weather: March comes in like a lion and goes out like a lamb. In the 19th century it was used as a prediction contingent on a year's early March weather: If March comes in like a lion, it will go out like a lamb. March thunderstorms When March blows its horn, your barn will be filled with hay and corn. "Blows its horn" refers to thunderstorms. While March thunderstorms indicate that the weather is unusually warm for that time of year (thunderstorms can occur only with a sufficiently large temperature difference between ground and sky and sufficient amounts of moisture to produce charge differential within a cloud). July In the British Isles, Saint Swithun's day (July 15) is said to forecast the weather for the rest of the summer. If St Swithun's day is dry, then the legend says that the next forty days will also be dry: "St.Swithin's Day if thou be fair, 'Twill rain for forty days no mair; St. Swithin's Day if thou dost rain, For forty days it will remain.". If however it rains, the rain will continue for forty days. There is a scientific basis to the legend of St Swithun's day. Around the middle of July, the jet stream settles into a pattern which, in the majority of years, holds reasonably steady until the end of August. When the jet stream lies north of the British Isles then continental high pressure is able to move in; when it lies across or south of the British Isles, Arctic air and Atlantic weather systems predominate. August The Hispanic tradition of cabañuelas predicts the weather for the year based on the first 12, 18 or 24 days of January or August. November A Swedish proverb uses 30 November (St Andrew's day, referencing the name day of Anders, a localised variant) as an indicator of the weather over Christmas (Anders slaskar, julen braskar, translation: Slushy Anders [St. Andrew's day], frozen Christmas). Other feast days In France, Saint Medard (June 8), Urban of Langres (April 2), and Saint Gervase and Saint Protais (June 19) are credited with an influence on the weather almost identical with that attributed to St Swithun in England, while in Flanders there is St Godelieve (July 6) and in Germany the Seven Sleepers Day (June 27). In Russia, the weather on the feast of the Protecting Veil is popularly believed to indicate the severity of the forthcoming winter. There was an old proverb from Romagna that ran: "Par San Paternian e' trema la coda a e' can." ("On St. Paternian's day, the dog's tail wags"). This Cervian proverb refers to the fact that the cold began to be felt around the saint's feast day. A farmers' saying associated with Quirinus' feast day of March 30 was "Wie der Quirin, so der Sommer" ("As St. Quirinus' Day goes, so will the summer"). 
The Ice Saints is the name given in German, Austrian, and Swiss folklore to a period noted to bring a brief spell of colder weather in the Northern Hemisphere under the Julian Calendar in May, because the Roman Catholic feast days of St. Mamertus, St. Pancras, and St. Servatus fall on the days of May 11, May 12, and May 13 respectively. In Northern Spain, the four yearly periods of ember days (témporas) are used to predict the weather of the following season. Biological signs Animal signs Seagulls Sometimes the lore is concurrent with existing conditions, more than prediction, as in: Seagull, seagull sit on the sand. It's never good weather when you're on land. Seagulls tend to sleep on the water. However, seagulls, like people, find gusty, turbulent wind difficult to contend with, and under such circumstances, the water is also choppy and unpleasant. Seagulls huddled on the ground may be a sign that the weather is already bad. Cows in pasture A cow with its tail to the West makes the weather best, A cow with its tail to the East makes the weather least Cows prefer not to have the wind blowing in their faces, and so typically stand with their backs to the wind. Since westerly winds typically mean arriving or continuing fair weather and easterly winds usually indicate arriving or continuing unsettled weather, a cowvane is as good a way as any of knowing what the weather will be up to for the next few hours. Pets eating grass Cats and dogs eat grass before a rain. While it is true that cats and dogs eat grass, it has nothing to do with the weather and is because cats and dogs are not exclusively carnivorous. Some researchers believe that dogs eat grass as an emetic when feeling ill. Frogs Centered in the German-speaking world, there was the belief that frogs could predict the weather. It grew from observing European tree frogs climb up vegetation in sunny weather, and led to frogs being held inside jars equipped with a small ladder. The term Wetterfrosch (weather frog) has survived as a humorous, if somewhat derogatory epithet for meteorologists, insinuating their predictions can not be trusted. Leeches Some early barometers used leeches in a jar to predict when a storm was coming. This is because leeches tend to climb and become agitated when low pressure is approaching. Ref Plant signs Onion skins Onion skins very thin Mild winter coming in; Onion skins thick and tough Coming winter cold and rough. This verse, and so many others like it, attempts to predict long-range conditions. These predictions have stood the test of time only because they rely on selective memory: people remember when they have predicted correctly and forget when predictions don't hold. One possible factor which could provide these predictions with a thin edge of credibility is that there is some degree of consistency in weather from year to year. Drought cycles or El Niño winters are a perfect example of such circumstances. A pattern of cool summers and warm winters, for instance, can produce patterns in other natural events sensitive enough to be affected by changes in temperature or precipitation. Meteorological signs Early-morning rain Rain before seven, clear by eleven. Late-night rains and early morning rains may simply be the last precipitation of a passing weather front. However, since fronts pass at night as often as they do in the day, morning rain is no predictor of a dry afternoon. However, this lore can describe non-frontal weather. 
Given sufficient surface heating, a late-day rainstorm may continue to develop into the night, produce early precipitation, then dissipate by late morning. This, though, is the exception rather than the rule. Only 40% of rain is produced by convective events – 60% is the result of a frontal passage. Explanatory notes References Further reading Bill Giles. The Story of Weather. ). External links Weather lore and proverbs Skywatch – Signs of the Weather ParemioRom – Weather proverbs in the Romance languages Proverbs
Weather lore
[ "Physics" ]
4,472
[ "Weather", "Physical phenomena", "Weather lore" ]
618,009
https://en.wikipedia.org/wiki/Stabbing
A stabbing is penetration or rough contact with a sharp or pointed object at close range. Stab connotes purposeful action, as by an assassin or murderer, but it is also possible to accidentally stab oneself or others. Stabbing differs from slashing or cutting in that the motion of the object used in a stabbing generally moves perpendicular to and directly into the victim's body, rather than being drawn across it. Stabbings have been common among gangs and in prisons because knives are cheap, easy to acquire (or manufacture), easily concealable and relatively effective. Epidemiology In 2013, about 8 million stabbings occurred worldwide. In the US in 2020, 9% of the 22,429 homicides involved a sharp instrument; of these a larger proportion of females used a sharp instrument (13%) versus males (8.2%). History Stabbings have been common throughout human history and were the means used to assassinate a number of distinguished historical figures, such as Second Caliph Umar and Roman dictator Julius Caesar and emperor Caligula. In Japan, the historical practice of stabbing oneself deliberately in ritual suicide is known as seppuku (more colloquially hara-kiri, literally "belly-cutting" since it involves cutting open the abdomen). The ritual is highly codified, and the person committing suicide is assisted by a "second" who is entrusted to decapitate him cleanly (and thus expedite death and prevent an undignified spectacle) once he has made the abdominal wound. Mechanism The human skin has a somewhat elastic property as a self-defense; when the human body is stabbed by a thin object such as a small kitchen knife, the skin often closes tightly around the object and closes again if the object is removed, which can trap some blood within the body. It has thus been speculated that the fuller, an elongated concave depression in a metal blade, functions to let blood out of the body in order to cause more damage. This misconception has led to fullers becoming widely known as "blood grooves". The fuller is actually a structural reinforcement of the blade similar in design to a metal I-beam used in construction. However, internal bleeding is just as dangerous as external bleeding; if enough blood vessels are severed to cause serious injury, the skin's elasticity will do nothing to prevent blood from exiting the circulatory system and accumulating uselessly in other parts of the body. Death from stabbing is caused by shock, severe blood loss, infection, or loss of function of an essential organ such as the heart and/or lungs. Medical treatment Although previously a victim of abdominal stabbing would be subject to exploratory surgery laparotomy, it is now considered safe not to operate if the patient is stable. In that case, they should be observed for signs of decompensation indicating a serious injury. If the patient initially presents stabbing injuries and is unstable, then laparotomy should be initiated to discover and rectify any internal injury. Autopsy examination When someone who has sustained a stab wound dies, the body is autopsied and the wound is inspected by a forensic pathologist. Such examination can yield valuable information about the weapon used to produce the injury. From the external appearance and internal findings, the pathologist will usually be able to offer opinion about the dimensions of the weapon including the width and minimum possible length of the blade. It is possible to determine whether the weapon was single edged or double edged. 
Sometimes factors like the taper of the blade and movement of knife in the wound can also be determined. Bruises or abrasions may give information about the guard. See also Stabbing as a terrorist tactic Wound Impalement Mass stabbing References External links Injuries Violence Attacks by method Causes of death Suicide methods Murder Terrorism tactics
Stabbing
[ "Biology" ]
767
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
618,063
https://en.wikipedia.org/wiki/Satellite%20phone
A satellite telephone, satellite phone or satphone is a type of mobile phone that connects to other phones or the telephone network by radio link through satellites orbiting the Earth instead of terrestrial cell sites, as cellphones do. Therefore, they can work in most geographic locations on the Earth's surface, as long as open sky and the line-of-sight between the phone and the satellite are provided. Depending on the architecture of a particular system, coverage may include the entire Earth or only specific regions. Satellite phones provide similar functionality to terrestrial mobile telephones; voice calling, text messaging, and low-bandwidth Internet access are supported through most systems. The advantage of a satellite phone is that it can be used in such regions where local terrestrial communication infrastructures, such as landline and cellular networks, are not available. Satellite phones are popular on expeditions into remote locations where there is no reliable cellular service, such as recreational hiking, hunting, fishing, and boating trips, as well as for business purposes, such as mining locations and maritime shipping. Satellite phones rarely get disrupted by natural disasters on Earth or human actions such as war, so they have proven to be dependable communication tools in emergency and humanitarian situations, when the local communications system have been compromised. The mobile equipment, also known as a terminal, varies widely. Early satellite phone handsets had a size and weight comparable to that of a late-1980s or early-1990s mobile phone, but usually with a large retractable antenna. More recent satellite phones are similar in size to a regular mobile phone while some prototype satellite phones have no distinguishable difference from an ordinary smartphone. A fixed installation, such as one used aboard a ship, may include large, rugged, rack-mounted electronics, and a steerable microwave antenna on the mast that automatically tracks the overhead satellites. Smaller installations using VoIP over a two-way satellite broadband service such as BGAN or VSAT bring the costs within the reach of leisure vessel owners. Internet service satellite phones have notoriously poor reception indoors, though it may be possible to get a consistent signal near a window or in the top floor of a building if the roof is sufficiently thin. The phones have connectors for external antennas that can be installed in vehicles and buildings. The systems also allow for the use of repeaters, much like terrestrial mobile phone systems. In the early 2020s, various manufacturers began to integrate satellite messaging connectivity and satellite emergency services into conventional mobile phones for use in remote regions, where there is no reliable terrestrial network. History The first satellite relayed phone calls were achieved early on in the space age, after the first relay test was conducted by Pioneer 1 and the first broadcast by SCORE in 1958 at the end of the year, after Sputnik I became at the beginning of the year the first satellite in history. MARISAT was the first mobile communications satellite, eventually operated by the first privatized satellite communication INMARSAT organization, which was formed in 1979. Satellite network Satellite phone systems can be classified into two types: systems that use satellites in a high geostationary orbit, above the Earth's surface, and systems that use satellites in low Earth orbit (LEO), above the Earth. 
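The practical difference between the two orbit classes can be illustrated with Kepler's third law. The sketch below computes the orbital period for a representative altitude of each class; the altitudes used (about 780 km for an Iridium-style LEO constellation and 35,786 km for geostationary orbit) are standard reference values supplied here only for illustration, not figures taken from this article.

```python
import math

# Orbital period of a circular orbit from Kepler's third law:
# T = 2*pi*sqrt(a^3 / mu), where a is the orbital radius.
MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m

def period_minutes(altitude_km):
    a = R_EARTH + altitude_km * 1000.0
    return 2.0 * math.pi * math.sqrt(a ** 3 / MU_EARTH) / 60.0

print(f"LEO, ~780 km:   {period_minutes(780):.0f} min")     # ≈ 100 minutes
print(f"GEO, 35,786 km: {period_minutes(35_786):.0f} min")   # ≈ 1436 minutes (one sidereal day)
```

A period of roughly one sidereal day is what keeps a geostationary satellite fixed over one longitude, while the roughly 100-minute period of a low orbit is why such satellites are only briefly visible from any one point on the ground, as the following sections describe.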
Geostationary satellites Some satellite phones use satellites in geostationary orbit (GSO), which appear at a fixed position in the sky. These systems can maintain near-continuous global coverage with only three or four satellites, reducing the launch costs. The satellites used for these systems are very heavy (about 5000 kg) and expensive to build and launch. The satellites orbit at an altitude of above the Earth's surface; a noticeable delay is present while making a phone call or using data services due to the large distance from users. The amount of bandwidth available on these systems is substantially higher than that of the low Earth orbit systems; all three active systems provide portable satellite Internet using laptop-sized terminals with speeds ranging from 60 to 512 kbit per second (kbps). Geostationary satellite phones can only be used at lower latitudes, generally between 70 degrees north of the equator and 70 degrees south of the equator. At higher latitudes the satellite appears at such a low angle in the sky that radio frequency interference from terrestrial sources in the same frequency bands can interfere with the signal. Another disadvantage of geostationary satellite systems is that in many areas—even where a large amount of open sky is present—the line-of-sight between the phone and the satellite is broken by obstacles such as steep hills and forest. The user will need to find an area with line-of-sight before using the phone. This is not the case with LEO services: even if the signal is blocked by an obstacle, one can wait a few minutes until another satellite passes overhead, but a GSO satellite may drop a call when line of sight is lost. ACeS: This former Indonesia-based small regional operator provided voice and data services in East Asia, South Asia, and Southeast Asia using a single satellite. It ceased operations in 2014. Inmarsat: The oldest satellite phone operator, a British company founded in 1979. It originally provided large fixed installations for ships, but has recently entered the market of hand-held phones in a joint venture with ACeS. The company operates eleven satellites. Coverage is available on most of the Earth, except polar regions. Thuraya: Established in 1997, United Arab Emirates-based Thuraya's satellites provide coverage across Europe, Africa, the Middle East, Asia and Australia. MSAT / SkyTerra: An American satellite-phone company that uses equipment similar to Inmarsat, but plans to launch a service using hand-held devices in the Americas similar to Thuraya's. Terrestar: Satellite-phone system for North America. ICO Global Communications: An American satellite-phone company which has launched a single geosynchronous satellite, not yet active. Tiantong: A Chinese satellite-phone system is planned to provide satellite phone and short message sending and receiving functions to users in China and its surrounding areas, the Middle East, Africa and other related regions, as well as most of the Pacific and Indian Oceans. Low Earth orbit Satellite phones may utilize satellites in low Earth orbit (LEO). The advantages include the possibility of providing worldwide wireless coverage with no gaps. LEO satellites orbit the Earth in high-speed, low-altitude orbits with an orbital time of 70–100 minutes, an altitude of . Since the satellites are not geostationary, they move with respect to the ground. 
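How constellation sizes of the order quoted in the next paragraph arise can be sketched geometrically. The calculation below is a rough, illustrative lower bound on the number of satellites needed for continuous coverage at a given altitude and minimum elevation angle; the altitudes and elevation angles are assumed representative values, and real constellations need roughly twice this geometric minimum because circular footprints must overlap and orbital planes constrain satellite spacing.

```python
import math

# Rough geometric lower bound on constellation size for continuous coverage.
# A satellite at altitude h, usable down to a minimum elevation angle e,
# covers a spherical cap of Earth-central half-angle
#   lam = arccos(R/(R+h) * cos(e)) - e.
R = 6371.0  # mean Earth radius, km

def minimum_satellites(h_km, elev_deg):
    e = math.radians(elev_deg)
    lam = math.acos(R / (R + h_km) * math.cos(e)) - e
    cap_fraction = (1.0 - math.cos(lam)) / 2.0   # fraction of Earth's surface covered
    return 1.0 / cap_fraction                    # ignores overlap between footprints

print(f"{minimum_satellites(780, 8):.0f}")    # ≈ 33 at an Iridium-like altitude
print(f"{minimum_satellites(1400, 10):.0f}")  # ≈ 20 at a Globalstar-like altitude
```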
Any given satellite is only in view of a phone for a short time, so the call must be "handed off" electronically to another satellite when one passes beyond the local horizon. Depending on the positions of both the satellite and terminal, a usable pass of an individual LEO satellite will typically last 4–15 minutes on average. At least one satellite must have line-of-sight to every coverage area at all times to guarantee coverage; thus a constellation of satellites, typically 40 to 70, is required to maintain worldwide coverage. Globalstar: A network covering most of the world's landmass using 48 active satellites. However, many areas of the Earth's surface are left without coverage, since a satellite requires to be in range of an Earth station gateway. Satellites fly in an inclined orbit of 52 degrees, therefore polar regions cannot be covered. The network went into full commercial service in February 2000. A second-generation constellation consists of 24 low Earth orbiting (LEO) satellites. The launch of the second-generation constellation was completed on February 6, 2013. Iridium: A network operating 66 satellites in a polar orbit that claims to have coverage everywhere on Earth. Radio cross-links are used between satellites to relay data to the nearest satellite with a connection to an Earth station. Commercial service started in November 1998 and fell into Chapter 11 bankruptcy in August 1999. In 2001, service was re-established by Iridium Satellite LLC. Iridium NEXT, a second-generation constellation of the communications satellites, was completed on January 11, 2019. Both systems, based in the United States, started in the late 1990s, but soon went into bankruptcy after failing to gain enough subscribers to fund launch costs. They are now operated by new owners who bought the assets for a fraction of their original cost and are now both planning to launch replacement constellations supporting higher bandwidth. Data speeds for current networks are between 2200 and 9600 bit/s using a satellite handset. A third system was announced in 2022 when T-Mobile US and SpaceX announced a partnership to add satellite cellular service to Starlink second generation (Gen2) satellites that are to begin launching to orbit in late 2022. The service is aimed to provide dead-zone cell phone coverage across the US using existing midband PCS spectrum that T-Mobile owns. Cell coverage will begin with messaging and expand to include voice and limited data services later, with testing to begin in 2023. With Starlink Gen2 satellites in low Earth orbit using existing PCS spectrum, T-Mobile plans to be able to connect ordinary mobile phones to satellites, unlike earlier satellite phones in the market which used specialized radios to connect to geosynchronous-orbit satellites, which have longer communications latencies. T-Mobile has offered to extend the offering globally if cellular carriers in other countries wish to exchange roaming services via the T-Mobile partnership with SpaceX, with other carriers working with their regulators to enable midband communications landing rights on a country-by-country basis. Bandwidth will be limited to approximately 2 to 4 megabits per second spread across a very large cell coverage area, with thousands of voice calls or millions of text messages simultaneously in an area. The size of a single coverage area has not yet been specified. Geotracking LEO systems have the ability to track a mobile unit's location using Doppler navigation from the satellite. 
However, this method can be inaccurate by tens of kilometers. On some Iridium hardware the coordinates can be extracted using AT commands, while recent Globalstar handsets will display them on the screen. Most VSAT terminals can be reprogrammed in-field using AT-commands to bypass automatic acquisition of GPS coordinates and instead accept manually injected GPS coordinates. Virtual country codes Satellite phones are usually issued with numbers in a special country calling code. Inmarsat satellite phones are issued with codes +870. In the past, additional country codes were allocated to different satellites, but the codes +871 to +874 were phased out at the end of 2008 leaving Inmarsat users with the same country code, regardless of which satellite their terminal is registered with. Low Earth orbit systems including some of the defunct ones have been allocated number ranges in the International Telecommunication Union's Global Mobile Satellite System virtual country code +881. Iridium satellite phones are issued with codes +881 6 and +881 7. Globalstar, although allocated +881 8 and +881 9 use U.S. telephone numbers except for service resellers located in Brazil, which use the +881 range. Small regional satellite phone networks are allocated numbers in the +882 code designated for "international networks" which is not used exclusively for satellite phone networks. Cost While it is possible to obtain used handsets for the Thuraya, Iridium, and Globalstar networks for approximately , the newest handsets are quite expensive. The Iridium 9505A, released in 2001, sold in March 2010 for over $1,000. Satellite phones are purpose-built for one particular network and cannot be switched to other networks. The price of handsets varies with network performance. If a satellite phone provider encounters trouble with its network, handset prices will fall, then increase once new satellites are launched. Similarly, handset prices will increase when calling rates are reduced. Among the most expensive satellite phones are BGAN terminals, often costing several thousand dollars. These phones provide about 0.5 Mbit/s Internet and voice communications. Satellite phones are sometimes subsidised by the provider if one signs a post-paid contract, but subsidies are usually only a few hundred dollars or less. Since most satellite phones are built under license or the manufacturing of handsets is contracted out to OEMs, operators have a large influence over the selling price. Satellite networks operate under proprietary protocols, making it difficult for manufacturers to independently make handsets. A startup is proposing the use of standard mobile phone technology in satellites to enable low bandwidth text message with satellites from cheap mobile phones. Calling cost The cost of making voice calls from a satellite phone varies from around $0.15 to $2 per minute, while calling them from landlines and regular mobile phones is more expensive. Costs for data transmissions (particularly broadband data) can be much higher. Rates from landlines and mobile phones range from $3 to $14 per minute with Iridium, Thuraya and Inmarsat being some of the most expensive networks to call. The receiver of the call pays nothing, unless they are being called via a special reverse-charge service. Calls between different satellite phone networks are often very expensive, with calling rates of up to $15 per minute. Calls from satellite phones to landlines are usually around $0.80 to $1.50 per minute unless special offers are used. 
Such promotions are usually bound to a particular geographic area where traffic is low. Most satellite phone networks have pre-paid plans, with vouchers ranging from $100 to $5,000. One-way services Some satellite phone networks provide a one-way paging channel to alert users in poor coverage areas (such as indoors) of the incoming call. When the alert is received on the satellite phone it must be taken to an area with better coverage before the call can be accepted. Globalstar provides a one-way data uplink service, typically used for asset tracking. Iridium operates a one-way pager service as well as the call alert feature. Legal restrictions In some countries, possession of a satellite phone is illegal. Their signals will usually bypass local telecoms systems, hindering censorship and wiretapping attempts, which has led some intelligence agencies to believe that satellite phones aid terrorist activity. It is also common for restrictions to be in place in countries with oppressive governments regimes as a way to both expose subversive agents within their country and maximize the control of the information that makes it past their borders. China – Inmarsat became the first company permitted to sell satellite phones in 2016. China Telecom began selling satellite phones in 2018 and six other satellite phone companies expressed their interest in entering the Chinese market shortly after. Cuba India – only Inmarsat-based satellite services are permitted within territories and areas under Indian jurisdiction. Importation and operation of all other satellite services, including Thuraya and Iridium, is illegal. International shipping is obliged to comply with Indian Directorate-General of Shipping (DGS) Order No. 02 of 2012 which prohibits the unauthorised import and operation of Thuraya, Iridium and other such satellite phones in Indian waters. The legislation to this effect is Section 6 of Indian Wireless Act and Section 20 of Indian Telegraph Act. International Long Distance (ILD) licences and No Objection Certificates (NOC) issued by Indian Department of Telecommunications (DOT) are mandatory for satellite communication services on Indian territory. Mauritius – In 2022, the Information and Communications Authority started regulating the ownership and use of satellite phones. Myanmar North Korea – The US Bureau of Diplomatic Security advises visitors that they have "no right to privacy in North Korea and should assume your communications are monitored" which excludes the possibility of satellite phone technology. Russia – in 2012, new regulations governing the use of satellite phones inside Russia or its territories were developed with the stated aim of fighting terrorism by enabling the Russian government to intercept calls. These regulations allow non-Russian visitors to register their SIM cards for use within Russian territory for up to six months. Security concerns All modern satellite phone networks encrypt voice traffic to prevent eavesdropping. In 2012, a team of academic security researchers reverse-engineered the two major proprietary encryption algorithms in use. One algorithm (used in GMR-1 phones) is a variant of the A5/2 algorithm used in GSM (used in common mobile phones), and both are vulnerable to cipher-text only attacks. The GMR-2 standard introduced a new encryption algorithm which the same research team also cryptanalysed successfully. Thus satellite phones need additional encrypting if used for high-security applications. 
Use in disaster response Most mobile telephone networks operate close to capacity during normal times, and large spikes in call volumes caused by widespread emergencies often overload the systems when they are needed most. Examples reported in the media where this has occurred include the 1999 İzmit earthquake, the September 11 attacks, the 2006 Kiholo Bay earthquake, the 2003 Northeast blackouts, Hurricane Katrina, the 2007 Minnesota bridge collapse, the 2010 Chile earthquake, and the 2010 Haiti earthquake. Reporters and journalists have also been using satellite phones to communicate and report on events in war zones such as Iraq. Terrestrial cell antennas and networks can be damaged by natural disasters. Satellite telephony can avoid this problem and be useful during natural disasters. Satellite phone networks themselves are prone to congestion as satellites and spot beams cover a large area with relatively few voice channels. Integration into conventional mobile phones In the early 2020s, manufacturers began to integrate satellite connectivity into smartphone devices for use in remote areas, out of the cellular network range. The satellite-to-phone services use L band frequencies, which are compatible with most modern handsets. However, due to the antenna limitations in the conventional phones, in the early stages of implementation satellite connectivity is limited to satellite messaging and satellite emergency services. In 2022, the Apple iPhone 14 started supporting sending emergency text messages via Globalstar satellites. In 2023, the Apple iPhone 15 added satellite communication with roadside service in the United States. In 2022, T-Mobile formed a partnership to use Starlink services via existing LTE spectrum, expected in late 2024. In 2022, AST SpaceMobile started building a 3GPP standard-based cellular space network to allow existing, unmodified smartphones to connect to satellites in areas with coverage gaps. In 2023, Qualcomm announced Snapdragon Satellite, the service that will allow supported cellphones, starting with Snapdragon 8 Gen 2 chipset, to send and receive text messages via 5G non-terrestrial networks (NTN). In 2024, Iridium introduced Project Stardust, a standard-based satellite-to-cellphone service supported via NB-IoT for 5G non-terrestrial networks, which will be utilized over Iridium's existing low-earth orbit satellites. Scheduled for launch in 2026, the service provides messaging, emergency communications and IoT for devices like cars, smartphones, tablets and related consumer applications. See also Broadband Global Area Network (BGAN) Mobile-satellite service Satellite internet Telecommunications References External links University of Surrey pages with information on some satellite systems, including currently planned, and defunct proposals such as Teledesic (non-commercial) Satellite Phone FAQ (satellite phone services and equipment reviews, non-commercial) Satellite mobile system architecture (technical) History of the Handheld Satellite Phone (2018) at GlobalCom Emergency communication Mobile phones Mobile telecommunications
Satellite phone
[ "Technology" ]
3,938
[ "Mobile telecommunications", "Satellite telephony" ]
618,076
https://en.wikipedia.org/wiki/Opticks
Opticks: or, A Treatise of the Reflexions, Refractions, Inflexions and Colours of Light is a collection of three books by Isaac Newton that was published in English in 1704 (a scholarly Latin translation appeared in 1706). The treatise analyzes the fundamental nature of light by means of the refraction of light with prisms and lenses, the diffraction of light by closely spaced sheets of glass, and the behaviour of color mixtures with spectral lights or pigment powders. Opticks was Newton's second major work on physical science and it is considered one of the three major works on optics during the Scientific Revolution (alongside Johannes Kepler's Astronomiae Pars Optica and Christiaan Huygens' Treatise on Light). Overview The publication of Opticks represented a major contribution to science, different from but in some ways rivalling the Principia, yet Isaac Newton's name did not appear on the cover page of the first edition. Opticks is largely a record of experiments and the deductions made from them, covering a wide range of topics in what was later to be known as physical optics. That is, this work is not a geometric discussion of catoptrics or dioptrics, the traditional subjects of reflection of light by mirrors of different shapes and the exploration of how light is "bent" as it passes from one medium, such as air, into another, such as water or glass. Rather, the Opticks is a study of the nature of light and colour and the various phenomena of diffraction, which Newton called the "inflexion" of light. Newton sets forth in full his experiments, first reported to the Royal Society of London in 1672, on dispersion, or the separation of light into a spectrum of its component colours. He demonstrates how the appearance of color arises from selective absorption, reflection, or transmission of the various component parts of the incident light. The major significance of Newton's work is that it overturned the dogma, attributed to Aristotle or Theophrastus and accepted by scholars in Newton's time, that "pure" light (such as the light attributed to the Sun) is fundamentally white or colourless, and is altered into color by mixture with darkness caused by interactions with matter. Newton showed the opposite was true: light is composed of different spectral hues (he describes seven – red, orange, yellow, green, blue, indigo and violet), and all colours, including white, are formed by various mixtures of these hues. He demonstrates that color arises from a physical property of light – each hue is refracted at a characteristic angle by a prism or lens – but he clearly states that color is a sensation within the mind and not an inherent property of material objects or of light itself. For example, he demonstrates that a red violet (magenta) color can be mixed by overlapping the red and violet ends of two spectra, although this color does not appear in the spectrum and therefore is not a "color of light". By connecting the red and violet ends of the spectrum, he organised all colours as a color circle that both quantitatively predicts color mixtures and qualitatively describes the perceived similarity among hues. Newton's contribution to prismatic dispersion was the first to outline multiple-prism arrays. Multiple-prism configurations, as beam expanders, became central to the design of the tunable laser more than 275 years later and set the stage for the development of the multiple-prism dispersion theory. Comparison to the Principia Opticks differs in many respects from the Principia. 
It was first published in English rather than in the Latin used by European philosophers, contributing to the development of a vernacular science literature. The books were a model of popular science exposition: although Newton's English is somewhat dated—he shows a fondness for lengthy sentences with many embedded qualifications—the book can still be easily understood by a modern reader. In contrast, few readers of Newton's time found the Principia accessible or even comprehensible. His formal but flexible style shows colloquialisms and metaphorical word choice. Unlike the Principia, Opticks is not developed using the geometric convention of propositions proved by deduction from either previous propositions, lemmas or first principles (or axioms). Instead, axioms define the meaning of technical terms or fundamental properties of matter and light, and the stated propositions are demonstrated by means of specific, carefully described experiments. The first sentence of Book I declares "My Design in this Book is not to explain the Properties of Light by Hypotheses, but to propose and prove them by Reason and Experiments." In an Experimentum crucis or "critical experiment" (Book I, Part II, Theorem ii), Newton showed that the color of light corresponded to its "degree of refrangibility" (angle of refraction), and that this angle cannot be changed by additional reflection or refraction or by passing the light through a coloured filter. The work is a vade mecum of the experimenter's art, displaying in many examples how to use observation to propose factual generalisations about the physical world and then exclude competing explanations by specific experimental tests. Unlike the Principia, which vowed Hypotheses non fingo or "I make no hypotheses" outside the deductive method, the Opticks develops conjectures about light that go beyond the experimental evidence: for example, that the physical behaviour of light was due to its "corpuscular" nature as small particles, or that perceived colours were harmonically proportioned like the tones of a diatonic musical scale. Queries Newton originally considered writing four books, but he dropped the last book on action at a distance. Instead he concluded Opticks with a set of unanswered questions and positive assertions referred to as queries in Book III. The earliest queries were brief, but the later ones became short essays, filling many pages. In the first edition, there were sixteen such queries; that number was increased to 23 in the Latin edition, published in 1706, and again in the revised English edition, published in 1717/18. In the fourth edition of 1730, there were 31 queries. These queries, especially the later ones, deal with a wide range of physical phenomena that go beyond the topic of optics. They concern the nature and transmission of heat; the possible cause of gravity; electrical phenomena; the nature of chemical action; the way in which God created matter; the proper way to do science; and even the ethical conduct of human beings. These queries are not really questions in the ordinary sense; they are almost all posed in the negative, as rhetorical questions. That is, Newton does not ask whether light "is" or "may be" a "body." Rather, he declares: "Is not Light a Body?" Stephen Hales, a firm Newtonian of the early eighteenth century, declared that this was Newton's way of explaining "by Quaere."
The first query reads: "Do not Bodies act upon Light at a distance, and by their action bend its Rays; and is not this action (caeteris paribus) strongest at the least distance?", anticipating an effect of gravity on the trajectory of light rays. This query predates by two centuries the prediction of gravitational lensing in Albert Einstein's general relativity, a prediction later confirmed by the Eddington experiment in 1919. The first part of query 30 reads "Are not gross Bodies and Light convertible into one another", thereby anticipating mass-energy equivalence. Query 6 of the book reads "Do not black Bodies conceive heat more easily from Light than those of other Colours do, by reason that the Light falling on them is not reflected outwards, but enters into the Bodies, and is often reflected and refracted within them, until it be stifled and lost?", thereby introducing the concept of a black body. The last query (number 31) wonders if a corpuscular theory could explain how different substances react more to certain substances than to others, in particular how aqua fortis (nitric acid) reacts more with calamine than with iron. This 31st query has often been linked to the origin of the concept of affinity in chemical reactions. Various 18th-century historians and chemists, like William Cullen and Torbern Bergman, credited Newton with the development of affinity tables. Reception The Opticks was widely read and debated in England and on the Continent. The early presentation of the work to the Royal Society stimulated a bitter dispute between Newton and Robert Hooke over the "corpuscular" or particle theory of light, which prompted Newton to postpone publication of the work until after Hooke's death in 1703. On the Continent, and in France in particular, both the Principia and the Opticks were initially rejected by many natural philosophers, who continued to defend Cartesian natural philosophy and the Aristotelian version of color, and claimed to find Newton's prism experiments difficult to replicate. Indeed, the Aristotelian theory of the fundamental nature of white light was defended into the 19th century, for example by the German writer Johann Wolfgang von Goethe in his 1810 Theory of Colours. Newtonian science became a central issue in the assault waged by the philosophes in the Age of Enlightenment against a natural philosophy based on the authority of ancient Greek or Roman naturalists or on deductive reasoning from first principles (the method advocated by French philosopher René Descartes), rather than on the application of mathematical reasoning to experience or experiment. Voltaire popularised Newtonian science, including the content of both the Principia and the Opticks, in his Elements de la philosophie de Newton (1738), and after about 1750 the combination of the experimental methods exemplified by the Opticks and the mathematical methods exemplified by the Principia were established as a unified and comprehensive model of Newtonian science. Some of the primary adepts in this new philosophy were such prominent figures as Benjamin Franklin, Antoine-Laurent Lavoisier, and James Black. Subsequent to Newton, much has been amended. Thomas Young and Augustin-Jean Fresnel showed that the wave theory Christiaan Huygens described in his Treatise on Light (1690) could explain colour as the visible manifestation of light's wavelength. Science also slowly came to recognize the difference between perception of colour and mathematisable optics.
The German poet Goethe, with his epic diatribe Theory of Colours, could not shake the Newtonian foundation – but "one hole Goethe did find in Newton's armour... Newton had committed himself to the doctrine that refraction without colour was impossible. He therefore thought that the object-glasses of telescopes must for ever remain imperfect, achromatism and refraction being incompatible. This inference was proved by Dollond to be wrong." (John Tyndall, 1880) See also Color theory Luminiferous aether Prism (optics) Theory of Colours Book of Optics (Ibn al-Haytham) Elements of the Philosophy of Newton (Voltaire) Multiple-prism dispersion theory Notes References External links Full and free online editions of Newton's Opticks Rarebookroom, First edition ETH-Bibliothek, First edition Gallica, First edition Internet Archive, Fourth edition Project Gutenberg digitized text & images of the Fourth Edition Cambridge University Digital Library, Papers on Hydrostatics, Optics, Sound and Heat – Manuscript papers by Isaac Newton containing draft of Opticks 1704 non-fiction books 1704 in science English non-fiction literature Books by Isaac Newton History of optics Mathematics books Physics books Treatises Light
Opticks
[ "Physics" ]
2,411
[ "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Waves", "Light" ]
618,077
https://en.wikipedia.org/wiki/Power%20engineering
Power engineering, also called power systems engineering, is a subfield of electrical engineering that deals with the generation, transmission, distribution, and utilization of electric power, and the electrical apparatus connected to such systems. Although much of the field is concerned with the problems of three-phase AC power – the standard for large-scale power transmission and distribution across the modern world – a significant fraction of the field is concerned with the conversion between AC and DC power and the development of specialized power systems such as those used in aircraft or for electric railway networks. Power engineering draws the majority of its theoretical base from electrical engineering and mechanical engineering. History Pioneering years Electricity became a subject of scientific interest in the late 17th century. Over the next two centuries a number of important discoveries were made including the incandescent light bulb and the voltaic pile. Probably the greatest discovery with respect to power engineering came from Michael Faraday who in 1831 discovered that a change in magnetic flux induces an electromotive force in a loop of wire—a principle known as electromagnetic induction that helps explain how generators and transformers work. In 1881 two electricians built the world's first power station at Godalming in England. The station employed two waterwheels to produce an alternating current that was used to supply seven Siemens arc lamps at 250 volts and thirty-four incandescent lamps at 40 volts. However supply was intermittent and in 1882 Thomas Edison and his company, The Edison Electric Light Company, developed the first steam-powered electric power station on Pearl Street in New York City. The Pearl Street Station consisted of several generators and initially powered around 3,000 lamps for 59 customers. The power station used direct current and operated at a single voltage. Since the direct current power could not be easily transformed to the higher voltages necessary to minimise power loss during transmission, the possible distance between the generators and load was limited to around half-a-mile (800 m). That same year in London Lucien Gaulard and John Dixon Gibbs demonstrated the first transformer suitable for use in a real power system. The practical value of Gaulard and Gibbs' transformer was demonstrated in 1884 at Turin where the transformer was used to light up forty kilometres (25 miles) of railway from a single alternating current generator. Despite the success of the system, the pair made some fundamental mistakes. Perhaps the most serious was connecting the primaries of the transformers in series so that switching one lamp on or off would affect other lamps further down the line. Following the demonstration George Westinghouse, an American entrepreneur, imported a number of the transformers along with a Siemens generator and set his engineers to experimenting with them in the hopes of improving them for use in a commercial power system. One of Westinghouse's engineers, William Stanley, recognised the problem with connecting transformers in series as opposed to parallel and also realised that making the iron core of a transformer a fully enclosed loop would improve the voltage regulation of the secondary winding. Using this knowledge he built the world's first practical transformer based alternating current power system at Great Barrington, Massachusetts in 1886. 
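The transmission-distance limitation described above comes down to resistive line loss: for a given delivered power, raising the voltage lowers the line current, and the I²R loss falls with the square of that reduction. A minimal sketch of that arithmetic follows; the 110 V and 2,200 V figures, the 2 kW load, and the line resistance are purely illustrative assumptions, not historical data.

```python
# Illustrative comparison of resistive line loss at two transmission voltages.
# All numbers are hypothetical; the point is the 1/V^2 scaling of the loss.

def line_loss(power_w: float, voltage_v: float, line_resistance_ohm: float) -> float:
    """Return the I^2 * R loss in watts when delivering `power_w` at `voltage_v`."""
    current = power_w / voltage_v              # load current drawn through the line
    return current ** 2 * line_resistance_ohm

P = 2_000.0   # delivered power, watts (assumed)
R = 0.5       # total line resistance, ohms (assumed)

for v in (110.0, 2_200.0):                     # low-voltage DC vs. stepped-up AC (illustrative)
    loss = line_loss(P, v, R)
    print(f"{v:7.0f} V: loss = {loss:8.2f} W ({100 * loss / P:.2f}% of delivered power)")
```

Raising the voltage by a factor of 20 cuts the loss by a factor of 400, which is why practical transformers made wide-area alternating-current distribution feasible.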
In 1885 the Italian physicist and electrical engineer Galileo Ferraris demonstrated an induction motor and in 1887 and 1888 the Serbian-American engineer Nikola Tesla filed a range of patents related to power systems including one for a practical two-phase induction motor which Westinghouse licensed for his AC system. By 1890 the power industry had flourished and power companies had built thousands of power systems (both direct and alternating current) in the United States and Europe – these networks were effectively dedicated to providing electric lighting. During this time a fierce rivalry in the US known as the "war of the currents" emerged between Edison and Westinghouse over which form of transmission (direct or alternating current) was superior. In 1891, Westinghouse installed the first major power system that was designed to drive an electric motor and not just provide electric lighting. The installation powered a synchronous motor at Telluride, Colorado with the motor being started by a Tesla induction motor. On the other side of the Atlantic, Oskar von Miller built a 20 kV 176 km three-phase transmission line from Lauffen am Neckar to Frankfurt am Main for the Electrical Engineering Exhibition in Frankfurt. In 1895, after a protracted decision-making process, the Adams No. 1 generating station at Niagara Falls began transmitting three-phase alternating current power to Buffalo at 11 kV. Following completion of the Niagara Falls project, new power systems increasingly chose alternating current as opposed to direct current for electrical transmission. Twentieth century Power engineering and Bolshevism The generation of electricity was regarded as particularly important following the Bolshevik seizure of power. Lenin stated "Communism is Soviet power plus the electrification of the whole country." He was subsequently featured on many Soviet posters, stamps etc. presenting this view. The GOELRO plan was initiated in 1920 as the first Bolshevik experiment in industrial planning and in which Lenin became personally involved. Gleb Krzhizhanovsky was another key figure involved, having been involved in the construction of a power station in Moscow in 1910. He had also known Lenin since 1897 when they were both in the St. Petersburg chapter of the Union of Struggle for the Liberation of the Working Class. Power engineering in the USA In 1936 the first commercial high-voltage direct current (HVDC) line using mercury-arc valves was built between Schenectady and Mechanicville, New York. HVDC had previously been achieved by installing direct current generators in series (a system known as the Thury system) although this suffered from serious reliability issues. In 1957 Siemens demonstrated the first solid-state rectifier (solid-state rectifiers are now the standard for HVDC systems) however it was not until the early 1970s that this technology was used in commercial power systems. In 1959 Westinghouse demonstrated the first circuit breaker that used SF6 as the interrupting medium. SF6 is a far superior dielectric to air and, in recent times, its use has been extended to produce far more compact switching equipment (known as switchgear) and transformers. Many important developments also came from extending innovations in the ICT field to the power engineering field. For example, the development of computers meant load flow studies could be run more efficiently allowing for much better planning of power systems. 
Advances in information technology and telecommunication also allowed for much better remote control of the power system's switchgear and generators. Power Power Engineering deals with the generation, transmission, distribution and utilization of electricity as well as the design of a range of related devices. These include transformers, electric generators, electric motors and power electronics. Power engineers may also work on systems that do not connect to the grid. These systems are called off-grid power systems and may be used in preference to on-grid systems for a variety of reasons. For example, in remote locations it may be cheaper for a mine to generate its own power rather than pay for connection to the grid and in most mobile applications connection to the grid is simply not practical. Fields Electricity generation covers the selection, design and construction of facilities that convert energy from primary forms to electric power. Electric power transmission requires the engineering of high voltage transmission lines and substation facilities to interface to generation and distribution systems. High voltage direct current systems are one of the elements of an electric power grid. Electric power distribution engineering covers those elements of a power system from a substation to the end customer. Power system protection is the study of the ways an electrical power system can fail, and the methods to detect and mitigate for such failures. In most projects, a power engineer must coordinate with many other disciplines such as civil and mechanical engineers, environmental experts, and legal and financial personnel. Major power system projects such as a large generating station may require scores of design professionals in addition to the power system engineers. At most levels of professional power system engineering practice, the engineer will require as much in the way of administrative and organizational skills as electrical engineering knowledge. Professional societies and international standards organizations In both the UK and the US, professional societies had long existed for civil and mechanical engineers. The Institution of Electrical Engineers (IEE) was founded in the UK in 1871, and the AIEE in the United States in 1884. These societies contributed to the exchange of electrical knowledge and the development of electrical engineering education. On an international level, the International Electrotechnical Commission (IEC), which was founded in 1906, prepares standards for power engineering, with 20,000 electrotechnical experts from 172 countries developing global specifications based on consensus. See also Energy economics Industrial ecology Power electronics Power system simulation Power engineering software References External links IEEE Power Engineering Society Jadavpur University, Department of Power Engineering Power Engineering International Magazine Articles Power Engineering Magazine Articles American Society of Power Engineers, Inc. National Institute for the Uniform Licensing of Power Engineer Inc. Worcester Polytechnic Institute Power Systems Engineering P P
Power engineering
[ "Physics", "Engineering" ]
1,795
[ "Applied and interdisciplinary physics", "Energy engineering", "Mechanical engineering", "Power engineering", "Electrical engineering" ]
618,086
https://en.wikipedia.org/wiki/Interpretability%20logic
Interpretability logics comprise a family of modal logics that extend provability logic to describe interpretability or various related metamathematical properties and relations such as weak interpretability, Π1-conservativity, cointerpretability, tolerance, cotolerance, and arithmetic complexities. Main contributors to the field are Alessandro Berarducci, Petr Hájek, Konstantin Ignatiev, Giorgi Japaridze, Franco Montagna, Vladimir Shavrukov, Rineke Verbrugge, Albert Visser, and Domenico Zambella. Examples Logic ILM The language of ILM extends that of classical propositional logic by adding the unary modal operator □ and the binary modal operator ▷ (as always, ◇p is defined as ¬□¬p). The arithmetical interpretation of □p is “p is provable in Peano arithmetic (PA)”, and p ▷ q is understood as “PA + q is interpretable in PA + p”. Axiom schemata: All classical tautologies Rules of inference: “From p → q and p conclude q” “From p conclude □p”. The completeness of ILM with respect to its arithmetical interpretation was independently proven by Alessandro Berarducci and Vladimir Shavrukov. Logic TOL The language of TOL extends that of classical propositional logic by adding the modal operator ◊, which is allowed to take any nonempty sequence of arguments. The arithmetical interpretation of ◊(p₁, …, pₙ) is “(PA + p₁, …, PA + pₙ) is a tolerant sequence of theories”. Axioms (with p, q standing for any formulas, A, B for any sequences of formulas, and ◊ of the empty sequence identified with ⊤): All classical tautologies Rules of inference: “From p → q and p conclude q” “From p conclude ¬◊(¬p)”. The completeness of TOL with respect to its arithmetical interpretation was proven by Giorgi Japaridze. References Giorgi Japaridze and Dick de Jongh, The Logic of Provability. In Handbook of Proof Theory, S. Buss, ed., Elsevier, 1998, pp. 475-546. Provability logic Interpretation (philosophy)
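For reference, a standard axiomatization of ILM over GL, as it is usually presented in the provability-logic literature, is sketched below in LaTeX. The modal axioms intended in the list above are assumed to match this standard presentation (GL's axioms K and Löb, the interpretability axioms J1–J5, and Montagna's axiom M); it is a sketch, not a transcription of any particular source.

```latex
% Standard axiom schemata of ILM (GL plus J1-J5 plus Montagna's axiom M).
\begin{align*}
\text{(K)}      &\quad \Box(p \rightarrow q) \rightarrow (\Box p \rightarrow \Box q)\\
\text{(L\"ob)}  &\quad \Box(\Box p \rightarrow p) \rightarrow \Box p\\
\text{(J1)}     &\quad \Box(p \rightarrow q) \rightarrow (p \rhd q)\\
\text{(J2)}     &\quad (p \rhd q) \land (q \rhd r) \rightarrow (p \rhd r)\\
\text{(J3)}     &\quad (p \rhd r) \land (q \rhd r) \rightarrow ((p \lor q) \rhd r)\\
\text{(J4)}     &\quad (p \rhd q) \rightarrow (\Diamond p \rightarrow \Diamond q)\\
\text{(J5)}     &\quad \Diamond p \rhd p\\
\text{(M)}      &\quad (p \rhd q) \rightarrow ((p \land \Box r) \rhd (q \land \Box r))
\end{align*}
```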
Interpretability logic
[ "Mathematics" ]
411
[ "Provability logic", "Proof theory" ]
618,119
https://en.wikipedia.org/wiki/Provability%20logic
Provability logic is a modal logic, in which the box (or "necessity") operator is interpreted as 'it is provable that'. The point is to capture the notion of a proof predicate of a reasonably rich formal theory, such as Peano arithmetic. Examples There are a number of provability logics, some of which are covered in the literature listed in the references below. The basic system is generally referred to as GL (for Gödel–Löb) or L or K4W (W stands for well-foundedness). It can be obtained by adding the modal version of Löb's theorem to the logic K (or K4). Namely, the axioms of GL are all tautologies of classical propositional logic plus all formulas of one of the following forms: Distribution axiom: □(p → q) → (□p → □q) Löb's axiom: □(□p → p) → □p And the rules of inference are: Modus ponens: From p → q and p conclude q; Necessitation: From p conclude □p. History The GL model was pioneered by Robert M. Solovay in 1976. Since then, until his death in 1996, the prime inspirer of the field was George Boolos. Significant contributions to the field have been made by Sergei N. Artemov, Lev Beklemishev, Giorgi Japaridze, Dick de Jongh, Franco Montagna, Giovanni Sambin, Vladimir Shavrukov, Albert Visser and others. Generalizations Interpretability logics and Japaridze's polymodal logic present natural extensions of provability logic. See also Hilbert–Bernays provability conditions Interpretability logic Kripke semantics Japaridze's polymodal logic Löb's theorem Doxastic logic References George Boolos, The Logic of Provability. Cambridge University Press, 1993. Giorgi Japaridze and Dick de Jongh, The logic of provability. In: Handbook of Proof Theory, S. Buss, ed. Elsevier, 1998, pp. 475–546. Sergei N. Artemov and Lev Beklemishev, Provability logic. In: Handbook of Philosophical Logic, D. Gabbay and F. Guenthner, eds., vol. 13, 2nd ed., pp. 189–360. Springer, 2005. Per Lindström, Provability logic—a short introduction. Theoria 62 (1996), pp. 19–61. Craig Smoryński, Self-reference and modal logic. Springer, Berlin, 1985. Robert M. Solovay, "Provability Interpretations of Modal Logic", Israel Journal of Mathematics, Vol. 25 (1976): 287–304. Rineke Verbrugge, Provability logic, from the Stanford Encyclopedia of Philosophy. External links Modal logic Proof theory
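A worked reading of the modality may help to connect the axioms to arithmetic. Writing Prov_PA for a standard provability predicate of Peano arithmetic, the GL axioms mirror facts about PA, with Löb's axiom corresponding to Löb's theorem. The sketch below states this standard material informally in LaTeX; it is an illustration of the intended reading, not a derivation from any particular source.

```latex
% Arithmetical reading of the GL box and of Loeb's axiom.
\begin{align*}
\Box \varphi \;&\text{is read as}\; \mathrm{Prov}_{\mathsf{PA}}(\ulcorner \varphi \urcorner).\\
\text{L\"ob's theorem:}\quad
  &\text{if } \mathsf{PA} \vdash \mathrm{Prov}_{\mathsf{PA}}(\ulcorner \varphi \urcorner) \rightarrow \varphi,
   \text{ then } \mathsf{PA} \vdash \varphi.\\
\text{Formalized in PA:}\quad
  &\mathsf{PA} \vdash \mathrm{Prov}_{\mathsf{PA}}\bigl(\ulcorner \mathrm{Prov}_{\mathsf{PA}}(\ulcorner \varphi \urcorner) \rightarrow \varphi \urcorner\bigr)
   \rightarrow \mathrm{Prov}_{\mathsf{PA}}(\ulcorner \varphi \urcorner),
\end{align*}
% which has exactly the shape of the GL axiom  \Box(\Box p \rightarrow p) \rightarrow \Box p.
```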
Provability logic
[ "Mathematics" ]
599
[ "Mathematical logic", "Modal logic", "Provability logic", "Proof theory" ]
618,120
https://en.wikipedia.org/wiki/Electric%20power%20conversion
In electrical engineering, power conversion is the process of converting electric energy from one form to another. A power converter is an electrical device for converting electrical energy between alternating current (AC) and direct current (DC). It can also change the voltage or frequency of the current. Power converters include simple devices such as transformers, and more complex ones like resonant converters. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation. Power converters are classified based on the type of power conversion they perform. One way of classifying power conversion systems is based on whether the input and output is alternating or direct current. DC power conversion DC to DC The following devices can convert DC to DC: Linear regulator Voltage regulator Motor–generator Rotary converter Switched-mode power supply DC to AC The following devices can convert DC to AC: Power inverter Motor–generator Rotary converter Switched-mode power supply Chopper (electronics) AC power conversion AC to DC The following devices can convert AC to DC: Rectifier Mains power supply unit (PSU) Motor–generator Rotary converter Switched-mode power supply AC to AC The following devices can convert AC to AC: Transformer or autotransformer Voltage converter Voltage regulator Cycloconverter Variable-frequency transformer Motor–generator Rotary converter Switched-mode power supply Other systems There are also devices and methods to convert between power systems designed for single and three-phase operation. The standard power voltage and frequency vary from country to country and sometimes within a country. In North America and northern South America, it is usually 120 volts, 60 hertz (Hz), but in Europe, Asia, Africa, and many other parts of the world, it is usually 230 volts, 50 Hz. Aircraft often use 400 Hz power internally, so 50 Hz or 60 Hz to 400 Hz frequency conversion is needed for use in the ground power unit used to power the airplane while it is on the ground. Conversely, internal 400 Hz internal power may be converted to 50 Hz or 60 Hz for convenience power outlets available to passengers during flight. Certain specialized circuits can also be considered power converters, such as the flyback transformer subsystem powering a CRT, generating high voltage at approximately 15 kHz. Consumer electronics usually include an AC adapter (a type of power supply) to convert mains-voltage AC current to low-voltage DC suitable for consumption by microchips. Consumer voltage converters (also known as "travel converters") are used when traveling between countries that use ~120 V versus ~240 V AC mains power. (There are also consumer "adapters" which merely form an electrical connection between two differently shaped AC power plugs and sockets, but these change neither voltage nor frequency.) Why use transformers in power converters Transformers are used in power converters to incorporate electrical isolation and voltage step-down or step up. The secondary circuit is floating, when you touch the secondary circuit, you merely drag its potential to your body's potential or the earth's potential. There will be no current flowing through your body. That's why you can use your cellphone safely when it is being charged, even if your cellphone has a metal shell and is connected to the secondary circuit. 
Operating at high frequency and supplying low power, power converters have much smaller transformers compared with those of fundamental-frequency, high-power applications. The current in the primary winding of a transformer helps to set up the mutual flux in accordance with Ampere's law and balances the demagnetizing effect of the load current in the secondary winding. A flyback converter's transformer works differently, more like an inductor. In each cycle, the flyback converter's transformer first gets charged and then releases its energy to the load. Accordingly, the flyback converter's transformer air gap has two functions: it not only determines inductance but also stores energy. For the flyback converter, the transformer gap thus provides energy transmission through cycles of charging and discharging. The core's relative permeability can be > 1,000, even > 10,000, while the air gap has much lower permeability and accordingly a higher energy density. See also Power supply Cascade converter Motor-generator Resonant converter Rotary converter References Abraham I. Pressman (1997). Switching Power Supply Design. McGraw-Hill. Ned Mohan, Tore M. Undeland, William P. Robbins (2002). Power Electronics: Converters, Applications, and Design. Wiley. Fang Lin Luo, Hong Ye, Muhammad H. Rashid (2005). Digital Power Electronics and Applications. Elsevier. Fang Lin Luo, Hong Ye (2004). Advanced DC/DC Converters. CRC Press. Mingliang Liu (2006). Demystifying Switched-Capacitor Circuits. Elsevier. External links A general description of DC-DC converters U.S. based 50 Hz, 60 Hz, and 400 Hz frequency converter manufacturer GlobTek, Inc. Glossary of electric power supply and power conversion terms Electric power systems components Electronic engineering
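To make the "air gap stores energy" point above concrete, a flyback stage operating in discontinuous conduction transfers roughly E = ½·L·I²peak per switching cycle, so output power scales with switching frequency, which is also why high-frequency operation shrinks the magnetics. The sketch below uses purely illustrative component values; the inductance, peak current, frequency and efficiency are assumptions, not figures from any referenced design.

```python
# Back-of-the-envelope energy transfer of a flyback converter in discontinuous
# conduction mode: energy stored in the gapped primary inductance each cycle,
# released to the load, multiplied by the switching frequency.
# All component values are hypothetical.

L_primary = 100e-6   # primary inductance in henries (assumed)
I_peak = 1.5         # peak primary current in amperes (assumed)
f_switch = 100e3     # switching frequency in hertz (assumed)
efficiency = 0.85    # assumed conversion efficiency

energy_per_cycle = 0.5 * L_primary * I_peak ** 2        # joules stored in the gap
output_power = energy_per_cycle * f_switch * efficiency  # watts delivered to the load

print(f"Energy per cycle : {energy_per_cycle * 1e6:.1f} uJ")
print(f"Output power     : {output_power:.1f} W")
```

With these assumed values the stage moves about 112 µJ per cycle, roughly 10 W at 100 kHz; doubling the switching frequency would double the power for the same stored energy, or allow a smaller inductor for the same power.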
Electric power conversion
[ "Technology", "Engineering" ]
1,096
[ "Electrical engineering", "Electronic engineering", "Computer engineering" ]
618,171
https://en.wikipedia.org/wiki/Resource%20Reservation%20Protocol
The Resource Reservation Protocol (RSVP) is a transport layer protocol designed to reserve resources across a network using the integrated services model. RSVP operates over an IPv4 or IPv6 and provides receiver-initiated setup of resource reservations for multicast or unicast data flows. It does not transport application data but is similar to a control protocol, like Internet Control Message Protocol (ICMP) or Internet Group Management Protocol (IGMP). RSVP is described in . RSVP can be used by hosts and routers to request or deliver specific levels of quality of service (QoS) for application data streams. RSVP defines how applications place reservations and how they can relinquish the reserved resources once no longer required. RSVP operations will generally result in resources being reserved in each node along a path. RSVP is not a routing protocol but was designed to interoperate with current and future routing protocols. In 2003, development effort was shifted from RSVP to RSVP-TE for teletraffic engineering. Next Steps in Signaling (NSIS) was a proposed replacement for RSVP. Main attributes RSVP requests resources for simplex flows: a traffic stream in only one direction from sender to one or more receivers. RSVP is not a routing protocol but works with current and future routing protocols. RSVP is receiver oriented in that the receiver of a data flow initiates and maintains the resource reservation for that flow. RSVP maintains soft state (the reservation at each node needs a periodic refresh) of the host and routers' resource reservations, hence supporting dynamic automatic adaptation to network changes. RSVP provides several reservation styles (a set of reservation options) and allows for future styles to be added in protocol revisions to fit varied applications. RSVP transports and maintains traffic and policy control parameters that are opaque to RSVP. History and related standards The basic concepts of RSVP were originally proposed in 1993. RSVP is described in a series of RFC documents from the IETF: : The version 1 functional specification was described in RFC 2205 (Sept. 1997) by IETF. Version 1 describes the interface to admission (traffic) control that is based "only" on resource availability. Later RFC2750 extended the admission control support. defines the use of RSVP with controlled-load RFC 2211 and guaranteed RFC 2212 QoS control services. More details in Integrated Services. Also defines the usage and data format of the data objects (that carry resource reservation information) defined by RSVP in RFC 2205. specifies the network element behavior required to deliver Controlled-Load services. specifies the network element behavior required to deliver guaranteed QoS services. describes a proposed extension for supporting generic policy based admission control in RSVP. The extension included a specification of policy objects and a description on handling policy events. (January 2000). , "RSVP-TE: Extensions to RSVP for LSP Tunnels" (December 2001). , "Generalized Multi-Protocol Label Switching (GMPLS) Signaling Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Extensions" (January 2003). , "Procedures for Modifying the Resource reSerVation Protocol (RSVP)" (October 2004), describes current best practices and specifies procedures for modifying RSVP. 
, "A Resource Reservation Protocol (RSVP) Extension for the Reduction of Bandwidth of a Reservation Flow" (May 2006), extends RSVP to enable the bandwidth of an existing reservation to be reduced instead of tearing down the reservation. , "Node-ID Based Resource Reservation Protocol (RSVP) Hello: A Clarification Statement" (June 2006). Key concepts The two key concepts of RSVP reservation model are flowspec and filterspec. Flowspec RSVP reserves resources for a flow. A flow is identified by the destination address, the protocol identifier, and, optionally, the destination port. In Multiprotocol Label Switching (MPLS) a flow is defined as a label-switched path (LSP). For each flow, RSVP also identifies the particular quality of service (QoS) required by the flow. This QoS information is called a flowspec and RSVP passes the flowspec from the application to the hosts and routers along the path. Those systems then analyse the flowspec to accept and reserve the resources. A flowspec consists of: Service class Reservation spec - defines the QoS Traffic spec - describes the data flow Filterspec The filterspec defines the set of packets that shall be affected by a flowspec (i.e. the data packets to receive the QoS defined by the flowspec). A filterspec typically selects a subset of all the packets processed by a node. The selection can depend on any attribute of a packet (e.g. the sender IP address and port). The currently defined RSVP reservation styles are: Fixed filter - reserves resources for a specific flow. Shared explicit - reserves resources for several flows and all share the resources Wildcard filter - reserves resources for a general type of flow without specifying the flow; all flows share the resources An RSVP reservation request consists of a flowspec and a filterspec and the pair is called a flowdescriptor. The flowspec sets the parameters of the packet scheduler at a node and the filterspec sets the parameters at the packet classifier. Messages There are two primary types of messages: Path messages (path) The path message is sent from the sender host along the data path and stores the path state in each node along the path. The path state includes the IP address of the previous node, and some data objects: sender template to describe the format of the sender data in the form of a Filterspec sender tspec to describe the traffic characteristics of the data flow adspec that carries advertising data (see RFC 2210 for more details). Reservation messages (resv) The resv message is sent from the receiver to the sender host along the reverse data path. At each node the IP destination address of the resv message will change to the address of the next node on the reverse path and the IP source address to the address of the previous node address on the reverse path. The resv message includes the flowspec data object that identifies the resources that the flow needs. The data objects on RSVP messages can be transmitted in any order. For the complete list of RSVP messages and data objects see RFC 2205. Operation An RSVP host that needs to send a data flow with specific QoS will transmit an RSVP path message every 30 seconds that will travel along the unicast or multicast routes pre-established by the working routing protocol. If the path message arrives at a router that does not understand RSVP, that router forwards the message without interpreting the contents of the message and will not reserve resources for the flow. 
Receivers that want to receive the flow send a corresponding resv (short for reserve) message, which then traces the path back to the sender. The resv message contains a flowspec. The resv message also has a filterspec object; it defines the packets that will receive the requested QoS defined in the flowspec. A simple filterspec could be just the sender's IP address and optionally its UDP or TCP port. When a router receives the RSVP resv message it will: Make a reservation based on the request parameters. Admission control processes the request parameters and can either instruct the packet classifier to correctly handle the selected subset of data packets or negotiate with the upper layer how the packet handling should be performed. If the reservation cannot be supported, a reject message is sent to let the listener know. Forward the request upstream (in the direction of the sender). At each node the flowspec in the resv message can be modified by a forwarding node (e.g. in the case of a multicast flow reservation the reservation requests can be merged). The routers then store the nature of the flow and optionally set up policing according to the flowspec for it. If nothing is heard for a certain length of time the reservation will time out and will be canceled. This solves the problem if either the sender or the receiver crashes or is shut down without first canceling the reservation. Other features Integrity RSVP messages are appended with a message digest created by combining the message contents and a shared key using a message digest algorithm (commonly MD5). The key can be distributed and confirmed using two message types: integrity challenge request and integrity challenge response. Error reporting When a node detects an error, an error message is generated with an error code and is propagated upstream on the reverse path to the sender. Information on RSVP flow Two types of diagnostic messages allow a network operator to request the RSVP state information on a specific flow. Diagnostic facility An extension to the standard which allows a user to collect information about the RSVP state along a path. RFCs References External links Internet architecture Internet protocols Transport layer protocols
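A rough way to picture the objects described above is as nested records: a flowdescriptor pairs a flowspec with a filterspec, and the path and resv messages carry them together with previous-hop or next-hop state. The sketch below is illustrative only; the field names are simplified stand-ins for exposition, not the wire format defined in RFC 2205.

```python
# Simplified, illustrative model of RSVP flowspec / filterspec / message objects.
# Field names are stand-ins for exposition, not the RFC 2205 wire format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FlowSpec:                 # the QoS requested for the flow
    service_class: str          # e.g. "controlled-load" or "guaranteed"
    reservation_rate_bps: int   # reservation spec (defines the QoS level)
    traffic_rate_bps: int       # traffic spec (describes the data flow)

@dataclass
class FilterSpec:               # which packets the flowspec applies to
    sender_address: str
    sender_port: Optional[int] = None

@dataclass
class FlowDescriptor:           # flowspec + filterspec, as carried in a resv request
    flowspec: FlowSpec
    filterspec: FilterSpec

@dataclass
class PathMessage:              # sent downstream by the sender and refreshed periodically
    previous_hop: str
    sender_template: FilterSpec
    sender_tspec: FlowSpec

@dataclass
class ResvMessage:              # sent upstream by the receiver along the reverse path
    next_hop: str
    style: str                  # "fixed-filter", "shared-explicit" or "wildcard-filter"
    descriptors: List[FlowDescriptor]
```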
Resource Reservation Protocol
[ "Technology" ]
1,908
[ "Internet architecture", "IT infrastructure" ]
618,227
https://en.wikipedia.org/wiki/Superpartner
In particle physics, a superpartner (also sparticle) is a class of hypothetical elementary particles predicted by supersymmetry, which, among other applications, is one of the well-studied ways to extend the standard model of high-energy physics. When considering extensions of the Standard Model, the s- prefix from sparticle is used to form names of superpartners of the Standard Model fermions (sfermions), e.g. the stop squark. The superpartners of Standard Model bosons have an -ino (bosinos) appended to their name, e.g. gluino, the set of all gauge superpartners are called the gauginos. Theoretical predictions According to the supersymmetry theory, each fermion should have a partner boson, the fermion's superpartner, and each boson should have a partner fermion. Exact unbroken supersymmetry would predict that a particle and its superpartners would have the same mass. No superpartners of the Standard Model particles have yet been found. This may indicate that supersymmetry is incorrect, or it may also be the result of the fact that supersymmetry is not an exact, unbroken symmetry of nature. If superpartners are found, their masses would indicate the scale at which supersymmetry is broken. For particles that are real scalars (such as an axion), there is a fermion superpartner as well as a second, real scalar field. For axions, these particles are often referred to as axinos and saxions. In extended supersymmetry there may be more than one superparticle for a given particle. For instance, with two copies of supersymmetry in four dimensions, a photon would have two fermion superpartners and a scalar superpartner. In zero dimensions it is possible to have supersymmetry, but no superpartners. However, this is the only situation where supersymmetry does not imply the existence of superpartners. Recreating superpartners If the supersymmetry theory is correct, it should be possible to recreate these particles in high-energy particle accelerators. Doing so will not be an easy task; these particles may have masses up to a thousand times greater than their corresponding "real" particles. Some researchers have hoped the Large Hadron Collider at CERN might produce evidence for the existence of superpartner particles. However, as of 2018, no such evidence has been found. See also Chargino Gluino – as a superpartner of the Gluon Gravitino – as a superpartner of the hypothetical graviton Higgsino – as a superpartner of the Higgs Field Neutralino References Supersymmetric quantum field theory Particle physics
Superpartner
[ "Physics" ]
610
[ "Supersymmetric quantum field theory", "Supersymmetry", "Symmetry", "Particle physics" ]
618,241
https://en.wikipedia.org/wiki/Absorber
In high energy physics experiments, an absorber is a block of material used to absorb some of the energy of an incident particle. Absorbers can be made of a variety of materials, depending on the purpose; lead, tungsten and liquid hydrogen are common choices. Most absorbers are used as part of a particle detector; particle accelerators use absorbers to reduce radiation damage to accelerator components. Other uses of the same word Absorbers are used in ionization cooling, as in the International Muon Ionization Cooling Experiment. In solar power, a high degree of efficiency is achieved by using black absorbers, which reflect much less of the incoming energy. In sunscreen formulations, ingredients which absorb UVA/UVB rays, such as avobenzone and octyl methoxycinnamate, are known as absorbers. They are contrasted with physical "blockers" of UV radiation such as titanium dioxide and zinc oxide. References Particle detectors Accelerator physics
Absorber
[ "Physics", "Technology", "Engineering" ]
200
[ "Applied and interdisciplinary physics", "Measuring instruments", "Particle detectors", "Experimental physics", "Particle physics", "Particle physics stubs", "Accelerator physics" ]
618,303
https://en.wikipedia.org/wiki/Maiden%20flight
The maiden flight, also known as first flight, of an aircraft is the first occasion on which it leaves the ground under its own power. The same term is also used for the first launch of rockets. In the early days of aviation it could be dangerous, because the exact handling characteristics of the aircraft were generally unknown. The maiden flight of a new type is almost invariably flown by a highly experienced test pilot. Maiden flights are usually accompanied by a chase plane, to verify items like altitude, airspeed, and general airworthiness. A maiden flight is only one stage in the development of an aircraft type. Unless the type is a pure research aircraft (such as the X-15), the aircraft must be tested extensively to ensure that it delivers the desired performance with an acceptable margin of safety. In the case of civilian aircraft, a new type must be certified by a governing agency (such as the Federal Aviation Administration in the United States) before it can enter operation. Notable maiden flights (aircraft) An incomplete list of maiden flights of notable aircraft types, organized by date, follows. June, 1875 – Thomas Moy's Aerial Steamer, London, England (pilotless, tethered) October 9, 1890 – Clément Ader – took off from Gretz-Armainvilliers, Ouest of Paris, France. August 14, 1901 – Gustave Whitehead from Leutershausen, Bavaria. May 15, 1902 – Lyman Gilmore – took off from Grass Valley, California. March 31, 1903 – Richard Pearse – took off from Waitohi Flat, Temuka, South Island, New Zealand. December 17, 1903 – Wright brothers Wright Flyer – First successful piloted and controlled heavier-than-air powered aircraft; flights took place four miles south of Kitty Hawk, North Carolina. March 18, 1906 – Traian Vuia, a Romanian inventor and engineer, who flew 11 meters in his self-named monoplane at Montesson near Paris, France. October 23, 1906 – Alberto Santos-Dumont 14-bis made a manned powered flight in Bagatelle Park, Paris, France, that was the first to be publicly witnessed by a crowd. July 4, 1908 – Glenn Curtiss flew the first pre-announced public flight in the United States of America of a heavier-than-air flying machine. He flew 5,080 feet, to win the Scientific American Trophy and its $2,500 purse (). December 22, 1916 – Sopwith Camel – this iconic biplane first took off from Brooklands, Weybridge, Surrey. July 28, 1935 – Boeing B-17 Flying Fortress – World War II American heavy bomber. December 17, 1935 – Douglas DC-3 – propeller-driven passenger and cargo aircraft of which more than 10,000 were produced. December 29, 1939 – Consolidated B-24 – World War II American heavy bomber. November 2, 1947 – Hughes H-4 Hercules – only flight of this oversized flying boat whose common name is Spruce Goose. July 27, 1949 – de Havilland Comet – first jet airliner. August 23, 1954 – Lockheed C-130 Hercules – military transport plane. May 27, 1955 – Sud Aviation Caravelle – first jet airliner with engines mounted in the tail. March 25, 1958 – Avro Canada CF-105 Arrow – Canadian supersonic fighter interceptor. First non-experimental aircraft designed and equipped with a fly-by-wire flight control system. April 25, 1962 – Lockheed A-12 – supersonic reconnaissance aircraft. June 29, 1962 – Vickers VC10 – first airliner with 4 engines mounted in the tail. April 9, 1967 – Boeing 737 – short-to-medium-range airliner. October 4, 1968 – Tupolev Tu-154 – Soviet/Russian airliner, still in operation. December 31, 1968 – Tupolev Tu-144 – Soviet supersonic airliner. 
February 9, 1969 – Boeing 747 – first widebody airliner. March 2, 1969 – Anglo-French Concorde – supersonic airliner. September 19, 1969 – Mil Mi-24 – Russian/Soviet-made helicopter used by many countries to this day. October 28, 1972 – Airbus A300 – first Airbus aircraft, short- to medium-range wide-body jet airliner. February 22, 1987 – Airbus A320 airliner – first civilian aircraft to have an all-digital fly-by-wire system. December 21, 1988 – Antonov An-225 Mriya – jet with the longest fuselage and wingspan and overall heaviest aircraft. June 12, 1994 – Boeing 777 – long-range airliner with the most powerful jet engines ever made. May 20, 2003 – Scaled Composites SpaceShipOne – The first commercial sub-orbital space craft. April 27, 2005 – Airbus A380 – double-decker jet airliner, currently largest capacity in the world, took off from Toulouse–Blagnac Airport. December 11, 2009 – Airbus A400M – military cargo plane, Airbus' first propeller plane. December 15, 2009 – Boeing 787 Dreamliner – first major widebody airliner to use non-metal composite materials for most of its construction. November 11, 2015 – Mitsubishi Regional Jet – Japanese twin-engine regional jet, the first designed and built in Japan, took off from Mitsubishi Heavy Industries, Tokyo. May 5, 2017 – Comac C919 – Chinese commercial aircraft. April 13, 2019 – Scaled Composites Stratolaunch – The world's largest airplane January 25, 2020 – Boeing 777X – The world's longest and largest twin-engine airliner. April 19, 2021 – Ingenuity – an unmanned robotic helicopter, first aircraft to fly on Mars. July 19, 2022 – KF-21 Boramae - Advanced multirole fighter designed by the Agency for Defense Development (ADD) and Korea Aerospace Industries (KAI). Notable maiden flights (rockets) October 3, 1942 – V-2 Rocket made its first successful test flight. The nose cone crossed the Karman line, widely considered the end of Earth's atmosphere, making it the first human-made object to reach space. August 3, 1953 – PGM-11 Redstone, designed by Wernher von Braun, was the US's first large ballistic missile. Launched from Cape Canaveral Air Force Station Launch Complex 4, it flew for 80 seconds until an engine failure caused it to crash into the sea. October 4, 1957 – Sputnik, first orbital rocket. December 22, 1960 – Vostok-K, first human-rated rocket (first manned flight April 12, 1961). November 9, 1967 – Saturn V, was used to launch humans to the Moon. April 12, 1981 – Space Shuttle, first partially reusable launch system, largest payload at the time of its maiden flight. December 21, 2004 – Delta IV Heavy, largest payload at the time of its maiden flight. February 6, 2018 – Falcon Heavy, largest payload at the time of its maiden flight, partially reusable. November 16, 2022 - Space Launch System block 1, carried Artemis 1. April 20, 2023 - Starship, currently the most powerful launch vehicle. January 8, 2024 - Vulcan Centaur See also Flight test References Aerospace engineering
Maiden flight
[ "Engineering" ]
1,486
[ "Aerospace engineering" ]
618,347
https://en.wikipedia.org/wiki/E.%20M.%20Antoniadi
Eugène Michel Antoniadi (Greek: Ευγένιος Αντωνιάδης; 1 March 1870 – 10 February 1944) was a Greek-French astronomer. He is known for creating the Antoniadi scale as well as for his observations of the planets, and was a major opponent of the notion of Martian canals. He created some very detailed maps of Mars and many features on the planet are still known by the names he suggested. He also created the first map of Mercury, though it turned out to be incorrect. Biography Antoniadi was born in Istanbul (Constantinople) but spent most of his adult life in France after being invited there by Camille Flammarion. He became a Fellow of the Royal Astronomical Society on 10 February 1899, and in 1890 he became one of the founding members of the British Astronomical Association (BAA). In 1892, he joined the BAA's Mars Section and became that section's Director in 1896. He became a member of the Société astronomique de France (SAF) in 1891. Flammarion hired Antoniadi to work as an assistant astronomer in his private observatory in Juvisy-sur-Orge in 1893. Antoniadi worked there for nine years. In 1902, he resigned from both the Juvisy observatory and from SAF. Antoniadi rejoined SAF in 1909. That same year, Henri Deslandres, Director of the Meudon Observatory, provided him with access to the Grande Lunette (83-cm Great Refractor). He became a highly reputed observer of Mars, and at first supported the notion of Martian canals, but after using the 83 centimeter telescope at Meudon Observatory during the 1909 opposition of Mars, he came to the conclusion that canals were an optical illusion. He also observed Venus and Mercury. He made the first map of Mercury, but his maps were flawed by his incorrect assumption that Mercury had synchronous rotation with the Sun. The first standard nomenclature for Martian albedo features was introduced by the International Astronomical Union (IAU) when it adopted 128 names from Antoniadi's 1929 map La Planète Mars. He is also famed for creating the Antoniadi scale of seeing, which is commonly used by amateur astronomers. He was also a strong chess player. His best result was equal first with Frank Marshall in a tournament in Paris in 1907, a point ahead of Savielly Tartakower. He died in Paris, aged almost 74. Name His full name was Eugène Michel Antoniadi; however, he was also known as Eugenios Antoniadis. His name is also sometimes given as Eugène Michael Antoniadi or even (incorrectly) as Eugène Marie Antoniadi. Awards and honors 1925 – Prix Jules Janssen from the Société astronomique de France. 1926 – Prix Guzman of 2,500 Francs from the Académie des Sciences. 1932 – Prix La Caille from the Académie des Sciences. 1970 – Antoniadi crater on the Moon named in his honor by the International Astronomical Union. 1973 – Antoniadi crater on Mars named in his honor by the International Astronomical Union. 1976 – Antoniadi Dorsum wrinkle ridge on Mercury named in his honor by the International Astronomical Union. Publications Antoniadi was a prolific writer of articles and books (the Astrophysics Data System lists nearly 230 that he authored or co-authored). The subjects included astronomy, history, and architecture. He frequently wrote articles for L'Astronomie of the Société astronomique de France, Astronomische Nachrichten, and the Monthly Notices of the Royal Astronomical Society, among others. Notable works include: Sur une Anomalie de la phase dichotome de la planète Vénus (Paris: Gauthier-Villars, (s. d.)).
La planète Mars, 1659-1929 (Paris: Hermann & Cie, 1930). La Planète Mercure et la rotation des satellites. Etude basée sur les résultats obtenus avec la grande lunette de l'observatoire de Meudon (Paris: Gauthier-Villars, 1934). See also Demetrios Eginitis References Bibliography Further reading External links Edward Winter, A Chessplaying Astronomer (2002) Pictures of Mars kept at the Library of Paris Observatory 1870 births 1944 deaths 20th-century French astronomers 19th-century Greek scientists 19th-century Greek astronomers 20th-century Greek scientists 20th-century Greek astronomers Emigrants from the Ottoman Empire to Greece Constantinopolitan Greeks Scientists from Istanbul French chess players Astrophotographers Astronomers from the Ottoman Empire Chess players from Istanbul
E. M. Antoniadi
[ "Astronomy" ]
947
[ "People associated with astronomy", "Astrophotographers" ]
618,578
https://en.wikipedia.org/wiki/Fathom%20Five%20National%20Marine%20Park
Fathom Five National Marine Park is a National Marine Conservation Area in the Georgian Bay part of Lake Huron, Ontario, Canada, that seeks to protect and display shipwrecks and lighthouses, and conserve freshwater ecosystems. Parks Canada has management plans for the aquatic and terrestrial ecosystems, with a multi-species action plan for species that are at risk, including endemic species, the monarch butterfly, the eastern ribbonsnake, and the eastern whip-poor-will. The aquatic ecosystems in the park are also of particular interest. Many fish, shellfish, amphibians, and eels are an attraction for naturalists in the park. Much of this wildlife is accessible to scuba divers and snorkellers in the park. The many shipwrecks make the park a popular scuba diving destination, and glass bottom boat tours leave Tobermory regularly, allowing tourists to see the shipwrecks without having to get wet. Additionally, there are three main hiking trails within Fathom Five National Marine Park that provide visitors with views of old-growth forests and Georgian Bay. The Saugeen Ojibway Peoples have inhabited the Bruce Peninsula and the area that is now Fathom Five National Marine Park for thousands of years. This land provided for their communities and their people through its abundance of wildlife and plant life. They contribute local knowledge about Lake Huron and its ecological value to the reserve, the park, and their overall livelihood. The collaboration between Parks Canada and the Saugeen Ojibway Peoples is said to benefit both parties with regard to overall ecosystem knowledge. Many visitors camp at nearby Bruce Peninsula National Park and use the park as a base to explore Fathom Five and the surrounding area during the day. Fathom Five also contains numerous islands, notably Flowerpot Island, which has rough camping facilities, marked trails, and its namesake flowerpots, outlying stacks of escarpment cliff that stand a short distance from the island, most with vegetation (including trees) still growing on them. The park was established on 20 July 1987 using the area of the Fathom Five Provincial Park and the western portion of the Georgian Bay Islands National Park. The park represented a pioneering departure for the national park system, which had centred on land-based conservation until then. Its designation as a National Marine Park foresaw the creation of others, though the nomenclature for such units would morph into National Marine Conservation Areas, leaving Fathom Five as the only National Marine Park. Despite its unique name, it is categorized as an NMCA and is deemed the first one in the country. Visitors' centre In 2006, a new visitors' centre opened to serve Fathom Five National Marine Park and the Bruce Peninsula National Park. Designed by Andrew Frontini of Shore Tilbe Irwin + Partners, the CAD $7.82 million centre, approached by a boardwalk, features an information centre, reception area, exhibit hall and theatre. A 20-metre viewing tower was also constructed to provide visitors with aerial views of the surrounding park and Georgian Bay. The centre was designed with environmental sustainability in mind, receiving $224,000 from the Federal House in Order initiative for implementation of innovative greenhouse gas reduction technology. Recreation With an annual visitation of 490,388 from 2019 to 2021, Fathom Five National Marine Park is a popular destination among locals and tourists. The park has three main trails, which range in duration from five minutes to two hours.
The Bruce Trail to Little Dunks Bay is approximately two kilometres long and provides visitors with a panoramic view of Little Dunks Bay and Georgian Bay. The Bruce Trail Burnt Point Loop is the longest hike of the three, encompassing 4.8 km; it passes through cedar forests and provides a stunning view of Georgian Bay. The shortest hike is less than half a kilometre in length and passes by the visitor centre on the way to Tobermory Harbour. Park management Management plan The management plan for Fathom Five National Marine Park was made in 1998. The park was created to protect the longevity of the Georgian Bay marine biodiversity and environment. The aquatic ecosystems management plan was created to study the structure of the ecosystem and its resources, protect species and habitats, and identify the impact of non-native species, with provisions to take action if they negatively impact native species. The fish management plan was created to monitor populations and allow sustainable harvest through commercial and sport fishing. The terrestrial ecosystems management plan was created to monitor the islands' biogeography and to reduce human impact on the environment. This is done by preventing new species from being introduced and limiting public access to areas. Additionally, management requires environmental impact assessments to be done prior to any activities or development. Management progress The management progress was last reported by Parks Canada in 2010. The goals to conserve and monitor aquatic ecosystems are approximately 50% complete. The coastal ecosystems' water quality, water level, fish populations, and connectivity are in good condition. The island ecosystems' habitat and connectivity are in fair condition, and the offshore and social indicators are still being developed. The goals to preserve the terrestrial ecosystems are being met, and these ecosystems are in fair condition. The requirement for environmental impact assessments prior to activities is also being followed. Threatened and endangered species A multi-species action plan to conserve threatened and endangered species was created by Parks Canada to be implemented in Fathom Five National Marine Park and Bruce Peninsula National Park. The plan includes COSEWIC's (Committee on the Status of Endangered Wildlife in Canada) identification of each species' threat status, along with plans to recover the population size and distribution of the species. Endemic species are included in the plan, such as the Dwarf Lake Iris (Iris lacustris) and the Lakeside Daisy (Tetraneuris herbacea). The Dwarf Lake Iris's status is of special concern, as it is only found in the Great Lakes basin, with one of its locations being Lake Huron. It is a perennial plant with blue or purple petals that blooms between mid-May and early June. The Lakeside Daisy's status is also of special concern, as it is likewise only found near the Great Lakes. It is a perennial herb with yellow ray petals that blooms between May and early June. Monarch butterfly The monarch butterfly (Danaus plexippus) is a species of butterfly that is currently listed as a species of special concern in the province of Ontario. This migratory butterfly is found in Fathom Five Marine Park, as well as other parts of southeastern Canada and the northeastern United States, during its breeding season in the summer. After breeding, the monarch butterflies embark on a mass migration of approximately 4,500 kilometres to their overwintering grounds in central Mexico. 
As a species of special concern, the monarch butterfly is neither threatened nor endangered. Habitat loss and the use of pesticides and herbicides have dramatically affected the monarch butterfly's natural habitat. The monarch butterfly is a globally threatened species, and its numbers have declined dramatically throughout the past few decades, from 10 million butterflies in 1980 to 1,914 butterflies in 2021. Massasauga rattlesnake The massasauga rattlesnake (Sistrurus catenatus) is a species of snake listed as endangered under the Species at Risk Act (SARA). This snake has a long, grayish-brown body with semi-round spots along it, and ranges in size from 50 to 70 cm. The species is found in the Fathom Five Marine Park, in habitats such as tall grass, bogs, marshes, shorelines, and forests. In addition to habitat loss caused by human expansion, these snakes are also at risk of being killed by motor vehicles or ill-intentioned humans. There are approximately 10,000 adult massasauga rattlesnakes found throughout eastern Ontario and Quebec; however, a substantial portion of this population can be found within the Fathom Five Marine Park and the Bruce Peninsula. Both the Great Lakes / St. Lawrence population and the Carolinian population of the massasauga are experiencing steady declines in numbers. Eastern ribbon snake The eastern ribbon snake (Thamnophis saurita) is listed as a species of special concern, meaning it is likely to become endangered if proper precautions are not taken. On its sides and back, the snake has three yellow stripes that easily distinguish it from other snakes. The Fathom Five Marine Park is home to this species of snake, which is normally found in environments near water. The snake is threatened by habitat loss as a result of human development. In addition, the eastern ribbon snake relies heavily on hunting amphibians; as a result of habitat loss and degradation, it is experiencing a decline in food availability. Currently, there are an estimated 1,000-3,000 adult eastern ribbon snakes inhabiting Ontario, and their numbers are steadily declining. Eastern whip-poor-will The eastern whip-poor-will (Caprimulgus vociferus) is currently listed as threatened under the Species at Risk Act (SARA), and if proper measures are not taken it may become endangered. Eastern whip-poor-wills can be found in the Fathom Five Marine Park. They are distinguished by their medium size and brown and grey feathers that provide them with excellent camouflage against their surroundings. The eastern whip-poor-will is generally found in open woodlands with mixed conifers and deciduous trees. Threats to the eastern whip-poor-will are directly caused by the loss and degradation of its habitat. From 1968 to 2007, the eastern whip-poor-will population in Canada decreased by nearly 75%, and it continues to decline at a rate of 3.2% per year. Climate change Climate change poses a serious threat on a global scale, and especially within Canada. According to current projections, the province of Ontario will experience an increase in average temperatures of 2.6-2.7 degrees Celsius by 2030 and 5.9-7.4 degrees Celsius by 2080. A further consequence of climate change will be an increase in precipitation of 4.5%-7.1% in Ontario, with a possible increase of 3.2%-17.5% by 2080. 
It is anticipated that climate change could have dramatic effects on species such as the monarch butterfly, the massasauga rattlesnake, the eastern ribbon snake, and the eastern whip-poor-will, which are already listed as species at risk. Increasing temperatures and precipitation will lead to more frequent flooding, droughts, and extreme weather events. Due to these impacts, there will be a drastic decrease in viable food sources for monarch butterflies, such as milkweed. It is anticipated that the flooding will negatively impact the landscapes on which massasauga rattlesnakes, eastern ribbon snakes, and eastern whip-poor-wills rely heavily for shelter, food, and protection. Aquatic wildlife Native aquatic wildlife Lake Huron is home to 139 native fish species, many of which are found in Fathom Five National Marine Park. Some examples include sculpins, gizzard shad, shiners, and ciscoes. These fish sustain populations of larger predatory species such as pike, muskellunge, largemouth and smallmouth bass, brook trout, and walleye. These native species are dispersed throughout the Great Lakes watershed. Lake Huron is also home to eight native turtle species, including the spotted turtle (Clemmys guttata), Blanding’s turtle (Emydoidea blandingii), spiny softshell turtle (Apalone spinifera), northern map turtle (Graptemys geographica), eastern musk turtle (Sternotherus odoratus), snapping turtle (Chelydra serpentina), midland painted turtle (Chrysemys picta marginata), and wood turtle (Glyptemys insculpta). Of these species, two are listed as endangered, two as threatened, and three as species of special concern. The reduction of coastal wetlands has greatly impacted turtles in Lake Huron, including around the Bruce Peninsula. Fathom Five National Marine Park is home to several wetlands. These wetlands are critical habitat for sensitive species such as turtles, black terns, king rails, herons, black-crowned night herons, and other species of special concern. Non-native aquatic wildlife Lake Huron is home to several introduced and invasive species. Pacific salmon were introduced to Lake Huron; specifically, Chinook, coho and pink salmon were intentionally introduced by sport fishermen. Additionally, invasive species introduced via ballast water, man-made canals, aquaculture, and the pet trade have established large populations within the lakes. Lampreys, alewives, and quagga mussels are the most common examples of invasive species in the Great Lakes. Invasive species have affected the lake ecosystem considerably. Quagga mussels are filter feeders that draw water through their siphons in order to trap algae and plankton. These mussels are so prevalent that their filtration has drastically changed the clarity of the water, allowing algae to grow on rock structures on the lake bed where it would not previously be present. Predatory fish have also been affected greatly by invasive species. The increased water clarity created by quagga mussels makes ambush predators less successful at ambushing prey. Keystone native species such as lake trout, muskellunge, and pike have been greatly affected by this change. Large fish species have also been affected by the sea lamprey. Lampreys are parasitic predators that attach themselves to large fish and feed on their blood. Some lampreys are native to the Great Lakes: silver, chestnut, American brook, and northern brook lampreys are native to streams and rivers in the Great Lakes watershed, including Lake Huron. 
Native lampreys are not large enough to have a significant effect on the fish they prey on; invasive sea lampreys, however, are much larger, and the fish they prey on are much more likely to die because they are not adapted to such large parasites. It was estimated that only one in seven fish preyed on by sea lampreys would survive. First Nations Fathom Five National Marine Park is part of the traditional unceded territory of the Saugeen Ojibway people. Oral history dates the presence of the Saugeen Ojibway peoples to around 5480 BCE. The peninsula is a spiritual destination for many Ojibway Nations, who would travel to the peninsula to partake in potlatches and ceremonies throughout the seasons. The traditional territory of the Saugeen Ojibway included the modern-day towns of Collingwood, Arthur, Alliston, and Goderich, the watersheds of the Saugeen, Sauble, and Wasaga rivers, and the islands surrounding the Bruce Peninsula. The Saugeen people speak a dialect of Ojibwe, an Algonquian language. Food security Food security for the Saugeen Ojibway people has been an ongoing political issue. All the major fisheries are located in the Saugeen Ojibway people's territory, and fish are their main source of food. The local fisheries have been dominated by large corporations, and the Saugeen Ojibway people need the government to grant them legal access to this food resource. Prior to European arrival, the Saugeen Ojibway people's territory extended into southern Ontario and included 500 km of shoreline on Lake Huron, as well as harvesting rights on a hunting reserve. Commercial food markets do exist in the region, about a 25-minute drive from the reserve, though this creates a challenge for those without access to a motor vehicle. Older members of the Saugeen Ojibway community have reported a decline in Lake Huron's whitefish population. The whitefish is central to cultural and generational ceremonies for the Saugeen Ojibway people. It is a symbol of a successful harvest, and the Saugeen Ojibway people have a ceremony in which the chief summons the whitefish and thanks the lake for providing them with a source of food and livelihood. These are age-old rituals that have been practiced since the 1800s, when the Saugeen Ojibway people surrendered their land to the British Crown. Parks Canada and Saugeen Ojibway people Parks Canada is a federal agency that specializes in the protection and conservation of national parks throughout Canada. The agency was formed to ensure the preservation of ecological indicators and species. The Giigoonyang (Fishes) project is a collaboration between the Saugeen Ojibway people and Parks Canada. The collaboration is designed to combine Indigenous knowledge of the area with Western technology. Researchers will use this combination to monitor and analyze fisheries data and to forecast population growth or decline. This research is essential because it supports food security in the Saugeen Ojibway territory; since the fishery is the primary food source the community relies on, the collaboration will benefit both parties involved. Because Parks Canada is a federal agency, the partnership may help the Saugeen Ojibway people make progress in their legal demands concerning food security and territory; a federal agency is more likely to implement effective change than a provincial entity because of the hierarchical structure of government agencies in Canada. 
Parks Canada's secondary goal is to fill its knowledge gap regarding Fathom Five National Marine Park's lake systems, specifically fish migration and the ecosystems that directly affect the fisheries industry. The main aim is to establish sustainable fishing practices that safeguard Lake Huron's fish population. Shipwrecks The park is home to several shipwrecks, many of which are used for scuba diving, while some shallower ones are used for snorkelling. The park also has three non-shipwreck dive sites: Dunks Point, Big Tub Lighthouse Point and The Anchor. See also National Parks of Canada List of National Parks of Canada References External links Official site Friends of Fathom Five National Marine Conservation Areas Marine parks of Canada Ontario Parks in Bruce County Protected areas established in 1987 Dark-sky preserves in Canada 1987 establishments in Ontario Georgian Bay
Fathom Five National Marine Park
[ "Astronomy" ]
3,730
[ "Dark-sky preserves in Canada", "Dark-sky preserves" ]
618,579
https://en.wikipedia.org/wiki/Glass-bottom%20boat
A glass-bottom boat is a boat with sections of glass, panoramic bottom glass, or other suitable transparent material below the waterline, allowing passengers to observe the underwater environment from within the boat. The view through the glass bottom is better than simply looking into the water from above, because one does not have to look through optically erratic surface disturbances. The effect is similar to that achieved by a diving mask, while the passengers are able to stay dry and out of the water. Use Glass-bottom boats are used for giving tours, as they are usually designed to allow the maximum number of tourists to look out through the glass bottom. Glass-bottom boats are in use in many seaside tourist destinations as well as lake towns. Typical tours in these boats include views of underwater flora and fauna, reefs, shipwrecks, and other underwater sights. History The glass-bottom boat was invented in 1878 by two men, Hullam Jones and Philip Morrell, in Marion County, Florida. Jones outfitted a dugout canoe with a glass viewing box at the bottom, which allowed tourists to view the clear waters of Silver Springs, Florida. Eventually, the spring was purchased by Col. W.M. Davidson and Carl Ray, who developed a gasoline-powered glass-bottom boat in 1924. See also AquaDom References Boat types Glass applications Glass architecture Tourist activities
Glass-bottom boat
[ "Materials_science", "Engineering" ]
274
[ "Glass architecture", "Glass engineering and science" ]
618,584
https://en.wikipedia.org/wiki/Computational%20group%20theory
In mathematics, computational group theory is the study of groups by means of computers. It is concerned with designing and analysing algorithms and data structures to compute information about groups. The subject has attracted interest because for many interesting groups (including most of the sporadic groups) it is impractical to perform calculations by hand. Important algorithms in computational group theory include: the Schreier–Sims algorithm for finding the order of a permutation group the Todd–Coxeter algorithm and Knuth–Bendix algorithm for coset enumeration the product-replacement algorithm for finding random elements of a group Two important computer algebra systems (CAS) used for group theory are GAP and Magma. Historically, other systems such as CAS (for character theory) and Cayley (a predecessor of Magma) were important. Some achievements of the field include: complete enumeration of all finite groups of order less than 2000 computation of representations for all the sporadic groups See also Black box group References A survey of the subject by Ákos Seress from Ohio State University, expanded from an article that appeared in the Notices of the American Mathematical Society is available online. There is also a survey by Charles Sims from Rutgers University and an older survey by Joachim Neubüser from RWTH Aachen. There are three books covering various parts of the subject: Derek F. Holt, Bettina Eick, Eamonn A. O'Brien, "Handbook of computational group theory", Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton, Florida, 2005. Charles C. Sims, "Computation with Finitely-presented Groups", Encyclopedia of Mathematics and its Applications, vol 48, Cambridge University Press, Cambridge, 1994. Ákos Seress, "Permutation group algorithms", Cambridge Tracts in Mathematics, vol. 152, Cambridge University Press, Cambridge, 2003. . Computational fields of study
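To make the kind of computation described above concrete, here is a minimal sketch in Python using SymPy's permutation-group machinery rather than GAP or Magma (the systems the article names); SymPy builds a base and strong generating set with a Schreier–Sims-style algorithm internally, so computing the order of a permutation group and testing membership are inexpensive. The generators chosen below are illustrative, not taken from the article.

```python
# Minimal illustration (not from the article) of computational group theory in SymPy;
# GAP or Magma would typically be used for serious work of this kind.
from sympy.combinatorics import Permutation, PermutationGroup

a = Permutation([1, 2, 3, 4, 0])   # the 5-cycle (0 1 2 3 4)
b = Permutation([1, 0, 2, 3, 4])   # the transposition (0 1)
G = PermutationGroup([a, b])       # together these generate the symmetric group S5

print(G.order())                                   # 120, via a Schreier-Sims-style computation
print(G.is_abelian)                                # False
print(G.contains(Permutation([2, 1, 0, 4, 3])))    # True: a membership test in G
```

In GAP or Magma the analogous computation is a one-liner, but the Python sketch is enough to show the flavour of the permutation-group algorithms involved.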
Computational group theory
[ "Technology" ]
389
[ "Computational fields of study", "Computing and society" ]
618,731
https://en.wikipedia.org/wiki/Muffler
A muffler (North American and Australian English) or silencer (British English) is a device for reducing the noise emitted by the exhaust of an internal combustion engine, especially a noise-deadening device forming part of the exhaust system of an automobile. Operation Mufflers are installed within the exhaust system of most internal combustion engines. Mufflers are engineered as acoustic devices that reduce the loudness of the sound pressure created by the engine through acoustic quieting. Sound reduction techniques used in mufflers include reactive silencing, resistive silencing, absorptive silencing, and shell damping. The noise of the hot exhaust gas exiting the engine can be abated by a series of passages and chambers lined with roving fiberglass insulation and/or resonating chambers harmonically tuned to cause destructive interference, wherein opposite sound waves cancel each other out. The operation of an internal combustion engine produces distinct pulses of exhaust gas that exit through the exhaust pipes and the muffler. For example, a four-cylinder engine will have four high-pressure pulses for each operating cycle, a six-cylinder engine will emit six high-pressure pulses, and so on. These pulses are separated by low-pressure intervals that function as a scavenging mechanism for the next exhaust cycle from the cylinder. The exhaust system relies on these negative pressure waves to help empty the cylinder of gases. The more distinct and separate the pulses in the exhaust system, the more positive the exhaust flow, and the more efficiently the exhaust gas is scavenged from each cylinder. The design of exhaust systems and mufflers must balance effective exhaust gas extraction and exhaust gas pressure, engine fuel efficiency and power, and noise suppression. A side effect of noise reduction is the restriction of exhaust gas flow, which creates back pressure and can decrease engine efficiency. This is because the engine exhaust must share the same complex exit pathway built inside the muffler as the sound pressure that the muffler is designed to mitigate. However, having some back pressure helps, and higher back pressure can also aid nitrogen oxide (NOx) emission reduction in some engines. When engine performance is the main concern, the exhaust pipes and muffler should be large enough to facilitate engine breathing yet small enough to maintain a high exhaust gas velocity. The objective of a muffled high-performance exhaust system depends on two independent factors: "the pressure wave tuning from length/diameter selection, and minimizing backpressure by selecting mufflers of suitable flow capacity for the application." Some aftermarket mufflers claim to increase engine output and/or reduce fuel consumption through slightly reduced back pressure. This usually entails less noise reduction (i.e., more noise). Greater muffler flow may increase engine power, but excess muffler flow capability provides no additional benefits and can be more expensive as well as being noisier. Regulations On May 18, 1905, the state of Oregon passed a law that required vehicles to have "a light, a muffler, and efficient brakes". The legality of altering a motor vehicle's original equipment exhaust system varies by jurisdiction; in many developed countries such as the United States, Canada, and Australia, such modifications are highly regulated or strictly prohibited. 
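As a rough numerical illustration of the destructive-interference principle mentioned in the Operation section above, the sketch below adds a tone to a copy of itself delayed by half a wavelength; the frequency and amplitude are arbitrary, and this is a toy demonstration of the cancellation effect rather than a model of any particular muffler.

```python
# Illustrative only: two equal tones 180 degrees out of phase cancel, which is the
# effect a harmonically tuned resonating chamber aims to produce at its target frequency.
import numpy as np

t = np.linspace(0.0, 0.01, 1000)                 # 10 ms of signal
f = 200.0                                         # an arbitrary exhaust-note frequency in Hz
direct = np.sin(2 * np.pi * f * t)                # wave travelling straight through
reflected = np.sin(2 * np.pi * f * t + np.pi)     # wave returned with a half-wavelength delay

combined = direct + reflected
print(round(np.max(np.abs(direct)), 3))           # ~1.0
print(round(np.max(np.abs(combined)), 3))         # ~0.0: the opposite sound waves cancel out
```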
See also Noise control Hush kit Hush house Soundproofing Anechoic chamber Vibration isolation Shock absorber Cushioning Damped wave Damping ratio Detuner Sound attenuator Suppressor List of auto parts References External links Vehicle parts Noise control Exhaust systems 1897 introductions
Muffler
[ "Technology" ]
743
[ "Vehicle parts", "Components" ]
618,819
https://en.wikipedia.org/wiki/Triclopyr
Triclopyr (3,5,6-trichloro-2-pyridinyloxyacetic acid) is an organic compound in the pyridine group that is used as a systemic foliar herbicide and fungicide. Uses Triclopyr is a selective weedkiller used to control dicotyledonous weeds (i.e. broadleaf plants) while leaving monocotyledonous plants (mostly bulbs and grasses) and conifers unaffected, or to control rust fungus on soybean crops. In the USA, it is sold under the trade names Garlon, Remedy, Turflon, Weed-B-Gon (purple label), and Brush-B-Gon, among others, and in the UK as SBK Brushwood Killer. It is a major ingredient in Confront, which was withdrawn from most uses because of concerns about compost contamination from the other major ingredient, clopyralid. Environmental effects Triclopyr breaks down in soil with a half-life between 30 and 90 days. It degrades rapidly in water, and remains active in decaying vegetation for about 3 months. The compound is slightly toxic to ducks (LD50 = 1698 mg/kg) and quail (LD50 = 3000 mg/kg). It has been found nontoxic to bees and marginally toxic to fish (rainbow trout LC50 (96 hr) = 117 ppm). Garlon's fact sheet for their triclopyr ester product indicates that triclopyr is highly toxic to fish, aquatic plants, and aquatic invertebrates, and should never be used in waterways, wetlands, or other sensitive habitats. This warning applies only to the triclopyr ester product, not to the triclopyr amine product. References External links archived Triclopyr Technical Fact Sheet – National Pesticide Information Center Triclopyr General Fact Sheet – National Pesticide Information Center Triclopyr Pesticide Information Profile – Extension Toxicology Network Auxinic herbicides Chloropyridines Carboxylic acids Ethers Fungicides Pyridines
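As a back-of-the-envelope illustration of what the quoted soil half-life range implies, the short sketch below assumes simple first-order (exponential) decay, which is the usual model behind a half-life figure; it is not drawn from any product fact sheet, and the 180-day horizon is chosen only for illustration.

```python
# Hypothetical worked example: fraction of applied triclopyr remaining in soil after
# 180 days, for both ends of the reported 30-90 day half-life range, assuming
# first-order decay N(t) = N0 * 0.5 ** (t / t_half).
elapsed_days = 180

for t_half in (30, 90):
    remaining = 0.5 ** (elapsed_days / t_half)
    print(f"half-life {t_half} days: {remaining:.1%} remains after {elapsed_days} days")
# half-life 30 days: about 1.6% remains; half-life 90 days: about 25.0% remains
```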
Triclopyr
[ "Chemistry", "Biology" ]
443
[ "Fungicides", "Carboxylic acids", "Functional groups", "Organic compounds", "Ethers", "Biocides" ]
619,049
https://en.wikipedia.org/wiki/Rookery
A rookery is a colony of breeding rooks, and more broadly a colony of several types of breeding animals, generally gregarious birds. Coming from the nesting habits of rooks, the term is used for corvids and the breeding grounds of colony-forming seabirds, marine mammals (true seals or sea lions), and even some turtles. Rooks (northern-European and central-Asian members of the crow family) build multiple nests in prominent colonies at the tops of trees. Paleontological evidence points to the existence of rookery-like colonies in the pterosaur Pterodaustro. The term rookery was also borrowed as a name for dense slum housing in nineteenth-century cities, especially in London. See also Auca Mahuevo, for a titanosaurid sauropod dinosaur rookery Bird colony Heronry Rook shooting References Birds Reproductive ecology
Rookery
[ "Biology" ]
182
[ "Reproductive ecology", "Behavior", "Animals", "Reproduction", "Birds" ]
619,053
https://en.wikipedia.org/wiki/Distributed%20transaction
A distributed transaction operates within a distributed environment, typically involving multiple nodes across a network depending on the location of the data. A key aspect of distributed transactions is atomicity, which ensures that the transaction is completed in its entirety or not executed at all. Distributed transactions are not limited to databases. The Open Group, a vendor consortium, proposed the X/Open Distributed Transaction Processing Model (X/Open XA), which became a de facto standard for the behavior of transaction model components. Databases are common transactional resources, and transactions often span several such databases. In this case, a distributed transaction can be seen as a database transaction that must be synchronized (or provide ACID properties) among multiple participating databases which are distributed among different physical locations. The isolation property (the I of ACID) poses a special challenge for multi-database transactions, since the (global) serializability property could be violated, even if each database provides it (see also global serializability). In practice, most commercial database systems use strong strict two-phase locking (SS2PL) for concurrency control, which ensures global serializability if all the participating databases employ it. A common algorithm for ensuring correct completion of a distributed transaction is the two-phase commit (2PC). This algorithm is usually applied for updates able to commit in a short period of time, ranging from a couple of milliseconds to a couple of minutes. There are also long-lived distributed transactions, for example a transaction to book a trip, which consists of booking a flight, a rental car and a hotel. Since confirming the flight booking might take up to a day, two-phase commit is not applicable here, as it would lock the resources for too long. In this case, more sophisticated techniques involving multiple undo levels are used. Just as a hotel booking can be undone by calling the desk and cancelling the reservation, a system can be designed to undo certain operations (unless they are irreversibly finished). In practice, long-lived distributed transactions are implemented in systems based on web services. Usually these transactions utilize the principles of compensating transactions, optimism, and isolation without locking. The X/Open standard does not cover long-lived distributed transactions. Several technologies, including Jakarta Enterprise Beans and Microsoft Transaction Server, fully support distributed transaction standards. Synchronization In event-driven architectures, distributed transactions can be synchronized using the request-response paradigm, which can be implemented in two ways: creating two separate queues, one for requests and the other for replies, in which case the event producer must wait until it receives the response; or creating one dedicated ephemeral queue for each request. See also Java Transaction API Enduro/X open-source X/Open XA and XATMI implementation References Further reading Gerhard Weikum, Gottfried Vossen, Transactional information systems: theory, algorithms, and the practice of concurrency control and recovery, Morgan Kaufmann, 2002, Data management Transaction processing
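To illustrate the two-phase commit idea described above, here is a deliberately simplified, in-process sketch in Python. The participant class and its method names are hypothetical stand-ins for real resource managers; a production protocol would also need durable logging, timeouts, and recovery handling, none of which is modelled here.

```python
# Minimal sketch of two-phase commit: the coordinator asks every participant to
# prepare (phase 1), and only if all vote "yes" does it tell them to commit (phase 2);
# any "no" vote causes the whole distributed transaction to roll back.

class Participant:
    def __init__(self, name, can_commit=True):
        self.name = name
        self.can_commit = can_commit
        self.state = "idle"

    def prepare(self):
        # Phase 1: persist enough state to be able to commit later, then vote.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"


def two_phase_commit(participants):
    votes = [p.prepare() for p in participants]     # phase 1 (voting)
    if all(votes):
        for p in participants:                      # phase 2 (completion)
            p.commit()
        return "committed"
    for p in participants:                          # at least one "no" vote: abort everywhere
        p.rollback()
    return "aborted"


if __name__ == "__main__":
    dbs = [Participant("orders_db"), Participant("inventory_db")]
    print(two_phase_commit(dbs))                                        # committed
    dbs = [Participant("orders_db"), Participant("inventory_db", can_commit=False)]
    print(two_phase_commit(dbs))                                        # aborted
```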
Distributed transaction
[ "Technology" ]
606
[ "Data management", "Data" ]
619,064
https://en.wikipedia.org/wiki/G0%20phase
The G0 phase describes a cellular state outside of the replicative cell cycle. Classically, cells were thought to enter G0 primarily due to environmental factors, like nutrient deprivation, that limited the resources necessary for proliferation. Thus it was thought of as a resting phase. G0 is now known to take different forms and occur for multiple reasons. For example, most adult neuronal cells, among the most metabolically active cells in the body, are fully differentiated and reside in a terminal G0 phase. Neurons reside in this state, not because of stochastic or limited nutrient supply, but as a part of their developmental program. G0 was first suggested as a cell state based on early cell cycle studies. When the first studies defined the four phases of the cell cycle using radioactive labeling techniques, it was discovered that not all cells in a population proliferate at similar rates. A population's "growth fraction" – or the fraction of the population that was growing – was actively proliferating, but other cells existed in a non-proliferative state. Some of these non-proliferating cells could respond to extrinsic stimuli and proliferate by re-entering the cell cycle. Early contrasting views either considered non-proliferating cells to simply be in an extended G1 phase or in a cell cycle phase distinct from G1 – termed G0. Subsequent research pointed to a restriction point (R-point) in G1 where cells can enter G0 before the R-point but are committed to mitosis after the R-point. These early studies provided evidence for the existence of a G0 state to which access is restricted. Cells that do not divide further exit the G1 phase and enter an inactive stage called the quiescent stage. Diversity of G0 states Three G0 states exist and can be categorized as either reversible (quiescent) or irreversible (senescent and differentiated). Each of these three states can be entered from the G1 phase before the cell commits to the next round of the cell cycle. Quiescence refers to a reversible G0 state where subpopulations of cells reside in a 'quiescent' state before entering the cell cycle after activation in response to extrinsic signals. Quiescent cells are often identified by low RNA content, lack of cell proliferation markers, and increased label retention indicating low cell turnover. Senescence is distinct from quiescence because senescence is an irreversible state that cells enter in response to DNA damage or degradation that would make a cell's progeny nonviable. Such DNA damage can occur from telomere shortening over many cell divisions as well as reactive oxygen species (ROS) exposure, oncogene activation, and cell-cell fusion. While senescent cells can no longer replicate, they remain able to perform many normal cellular functions. Senescence is often a biochemical alternative to the self-destruction of such a damaged cell by apoptosis. In contrast to cellular senescence, quiescence is not a reactive event but part of the core programming of several different cell types. Finally, differentiated cells are stem cells that have progressed through a differentiation program to reach a mature – terminally differentiated – state. Differentiated cells continue to stay in G0 and perform their main functions indefinitely. 
Characteristics of quiescent stem cells Transcriptomes The transcriptomes of several types of quiescent stem cells, such as hematopoietic, muscle, and hair follicle, have been characterized through high-throughput techniques, such as microarray and RNA sequencing. Although variations exist in their individual transcriptomes, most quiescent tissue stem cells share a common pattern of gene expression that involves downregulation of cell cycle progression genes, such as cyclin A2, cyclin B1, cyclin E2, and survivin, and upregulation of genes involved in the regulation of transcription and stem cell fate, such as FOXO3 and EZH1. Downregulation of mitochondrial cytochrome C also reflects the low metabolic state of quiescent stem cells. Epigenetic Many quiescent stem cells, particularly adult stem cells, also share similar epigenetic patterns. For example, H3K4me3 and H3K27me3, are two major histone methylation patterns that form a bivalent domain and are located near transcription initiation sites. These epigenetic markers have been found to regulate lineage decisions in embryonic stem cells as well as control quiescence in hair follicle and muscle stem cells via chromatin modification. Regulation of quiescence Cell cycle regulators Functional tumor suppressor genes, particularly p53 and Rb gene, are required to maintain stem cell quiescence and prevent exhaustion of the progenitor cell pool through excessive divisions. For example, deletion of all three components of the Rb family of proteins has been shown to halt quiescence in hematopoietic stem cells. Lack of p53 has been shown to prevent differentiation of these stem cells due to the cells' inability to exit the cell cycle into the G0 phase. In addition to p53 and Rb, cyclin dependent kinase inhibitors (CKIs), such as p21, p27, and p57, are also important for maintaining quiescence. In mouse hematopoietic stem cells, knockout of p57 and p27 leads to G0 exit through nuclear import of cyclin D1 and subsequent phosphorylation of Rb. Finally, the Notch signaling pathway has been shown to play an important role in maintenance of quiescence. Post-transcriptional regulation Post-transcriptional regulation of gene expression via miRNA synthesis has been shown to play an equally important role in the maintenance of stem cell quiescence. miRNA strands bind to the 3′ untranslated region (3′ UTR) of target mRNAs, preventing their translation into functional proteins. The length of the 3′ UTR of a gene determines its ability to bind to miRNA strands, thereby allowing regulation of quiescence. Some examples of miRNA's in stem cells include miR-126, which controls the PI3K/AKT/mTOR pathway in hematopoietic stem cells, miR-489, which suppresses the DEK oncogene in muscle stem cells, and miR-31, which regulates Myf5 in muscle stem cells. miRNA sequestration of mRNA within ribonucleoprotein complexes allows quiescent cells to store the mRNA necessary for quick entry into the G1 phase. Response to stress Stem cells that have been quiescent for a long time often face various environmental stressors, such as oxidative stress. However, several mechanisms allow these cells to respond to such stressors. For example, the FOXO transcription factors respond to the presence of reactive oxygen species (ROS) while HIF1A and LKB1 respond to hypoxic conditions. In hematopoietic stem cells, autophagy is induced to respond to metabolic stress. 
Examples of reversible G0 phase Tissue stem cells Stem cells are cells with the unique ability to produce differentiated daughter cells and to preserve their stem cell identity through self-renewal. In mammals, most adult tissues contain tissue-specific stem cells that reside in the tissue and proliferate to maintain homeostasis for the lifespan of the organism. These cells can undergo immense proliferation in response to tissue damage before differentiating and engaging in regeneration. Some tissue stem cells exist in a reversible, quiescent state indefinitely until being activated by external stimuli. Many different types of tissue stem cells exist, including muscle stem cells (MuSCs), neural stem cells (NSCs), intestinal stem cells (ISCs), and many others. Stem cell quiescence has been recently suggested to be composed of two distinct functional phases, G0 and an 'alert' phase termed GAlert. Stem cells are believed to actively and reversibly transition between these phases to respond to injury stimuli and seem to gain enhanced tissue regenerative function in GAlert. Thus, transition into GAlert has been proposed as an adaptive response that enables stem cells to rapidly respond to injury or stress by priming them for cell cycle entry. In muscle stem cells, mTORC1 activity has been identified to control the transition from G0 into GAlert along with signaling through the HGF receptor cMet. Mature hepatocytes While a reversible quiescent state is perhaps most important for tissue stem cells to respond quickly to stimuli and maintain proper homeostasis and regeneration, reversible G0 phases can be found in non-stem cells such as mature hepatocytes. Hepatocytes are typically quiescent in normal livers but undergo limited replication (less than 2 cell divisions) during liver regeneration after partial hepatectomy. However, in certain cases, hepatocytes can experience immense proliferation (more than 70 cell divisions) indicating that their proliferation capacity is not hampered by existing in a reversible quiescent state. Examples of irreversible G0 phase Senescent cells Often associated with aging and age-related diseases in vivo, senescent cells can be found in many renewable tissues, including the stroma, vasculature, hematopoietic system, and many epithelial organs. Resulting from accumulation over many cell divisions, senescence is often seen in age-associated degenerative phenotypes. Senescent fibroblasts in models of breast epithelial cell function have been found to disrupt milk protein production due to secretion of matrix metalloproteinases. Similarly, senescent pulmonary artery smooth muscle cells caused nearby smooth muscle cells to proliferate and migrate, perhaps contributing to hypertrophy of pulmonary arteries and eventually pulmonary hypertension. Differentiated muscle During skeletal myogenesis, cycling progenitor cells known as myoblasts differentiate and fuse together into non-cycling muscle cells called myocytes that remain in a terminal G0 phase. As a result, the fibers that make up skeletal muscle (myofibers) are cells with multiple nuclei, referred to as myonuclei, since each myonucleus originated from a single myoblast. Skeletal muscle cells continue indefinitely to provide contractile force through simultaneous contractions of cellular structures called sarcomeres. Importantly, these cells are kept in a terminal G0 phase since disruption of muscle fiber structure after myofiber formation would prevent proper transmission of force through the length of the muscle. 
Muscle growth can be stimulated by growth or injury and involves the recruitment of muscle stem cells – also known as satellite cells – out of a reversible quiescent state. These stem cells differentiate and fuse to generate new muscle fibers both in parallel and in series to increase force generation capacity. Cardiac muscle is also formed through myogenesis but instead of recruiting stem cells to fuse and form new cells, heart muscle cells – known as cardiomyocytes – simply increase in size as the heart grows larger. Similarly to skeletal muscle, if cardiomyocytes had to continue dividing to add muscle tissue the contractile structures necessary for heart function would be disrupted. Differentiated bone Of the four major types of bone cells, osteocytes are the most common and also exist in a terminal G0 phase. Osteocytes arise from osteoblasts that are trapped within a self-secreted matrix. While osteocytes also have reduced synthetic activity, they still serve bone functions besides generating structure. Osteocytes work through various mechanosensory mechanisms to assist in the routine turnover over bony matrix. Differentiated nerve Outside of a few neurogenic niches in the brain, most neurons are fully differentiated and reside in a terminal G0 phase. These fully differentiated neurons form synapses where electrical signals are transmitted by axons to the dendrites of nearby neurons. In this G0 state, neurons continue functioning until senescence or apoptosis. Numerous studies have reported accumulation of DNA damage with age, particularly oxidative damage, in the mammalian brain. Mechanism of G0 entry Role of Rim15 Rim15 was first discovered to play a critical role in initiating meiosis in diploid yeast cells. Under conditions of low glucose and nitrogen, which are key nutrients for the survival of yeast, diploid yeast cells initiate meiosis through the activation of early meiotic-specific genes (EMGs). The expression of EMGs is regulated by Ume6. Ume6 recruits the histone deacetylases, Rpd3 and Sin3, to repress EMG expression when glucose and nitrogen levels are high, and it recruits the EMG transcription factor Ime1 when glucose and nitrogen levels are low. Rim15, named for its role in the regulation of an EMG called IME2, displaces Rpd3 and Sin3, thereby allowing Ume6 to bring Ime1 to the promoters of EMGs for meiosis initiation. In addition to playing a role in meiosis initiation, Rim15 has also been shown to be a critical effector for yeast cell entry into G0 in the presence of stress. Signals from several different nutrient signaling pathways converge on Rim15, which activates the transcription factors, Gis1, Msn2, and Msn4. Gis1 binds to and activates promoters containing post-diauxic growth shift (PDS) elements while Msn2 and Msn4 bind to and activate promoters containing stress-response elements (STREs). Although it is not clear how Rim15 activates Gis1 and Msn2/4, there is some speculation that it may directly phosphorylate them or be involved in chromatin remodeling. Rim15 has also been found to contain a PAS domain at its N terminal, making it a newly discovered member of the PAS kinase family. The PAS domain is a regulatory unit of the Rim15 protein that may play a role in sensing oxidative stress in yeast. Nutrient signaling pathways Glucose Yeast grows exponentially through fermentation of glucose. When glucose levels drop, yeast shift from fermentation to cellular respiration, metabolizing the fermentative products from their exponential growth phase. 
This shift is known as the diauxic shift after which yeast enter G0. When glucose levels in the surroundings are high, the production of cAMP through the RAS-cAMP-PKA pathway (a cAMP-dependent pathway) is elevated, causing protein kinase A (PKA) to inhibit its downstream target Rim15 and allow cell proliferation. When glucose levels drop, cAMP production declines, lifting PKA's inhibition of Rim15 and allowing the yeast cell to enter G0. Nitrogen In addition to glucose, the presence of nitrogen is crucial for yeast proliferation. Under low nitrogen conditions, Rim15 is activated to promote cell cycle arrest through inactivation of the protein kinases TORC1 and Sch9. While TORC1 and Sch9 belong to two separate pathways, namely the TOR and Fermentable Growth Medium induced pathways respectively, both protein kinases act to promote cytoplasmic retention of Rim15. Under normal conditions, Rim15 is anchored to the cytoplasmic 14-3-3 protein, Bmh2, via phosphorylation of its Thr1075. TORC1 inactivates certain phosphatases in the cytoplasm, keeping Rim15 anchored to Bmh2, while it is thought that Sch9 promotes Rim15 cytoplasmic retention through phosphorylation of another 14-3-3 binding site close to Thr1075. When extracellular nitrogen is low, TORC1 and Sch9 are inactivated, allowing dephosphorylation of Rim15 and its subsequent transport to the nucleus, where it can activate transcription factors involved in promoting cell entry into G0. It has also been found that Rim15 promotes its own export from the nucleus through autophosphorylation. Phosphate Yeast cells respond to low extracellular phosphate levels by activating genes that are involved in the production and upregulation of inorganic phosphate. The PHO pathway is involved in the regulation of phosphate levels. Under normal conditions, the yeast cyclin-dependent kinase complex, Pho80-Pho85, inactivates the Pho4 transcription factor through phosphorylation. However, when phosphate levels drop, Pho81 inhibits Pho80-Pho85, allowing Pho4 to be active. When phosphate is abundant, Pho80-Pho85 also inhibits the nuclear pool of Rim 15 by promoting phosphorylation of its Thr1075 Bmh2 binding site. Thus, Pho80-Pho85 acts in concert with Sch9 and TORC1 to promote cytoplasmic retention of Rim15 under normal conditions. Mechanism of G0 exit Cyclin C/Cdk3 and Rb The transition from G1 to S phase is promoted by the inactivation of Rb through its progressive hyperphosphorylation by the Cyclin D/Cdk4 and Cyclin E/Cdk2 complexes in late G1. An early observation that loss of Rb promoted cell cycle re-entry in G0 cells suggested that Rb is also essential in regulating the G0 to G1 transition in quiescent cells. Further observations revealed that levels of cyclin C mRNA are highest when human cells exit G0, suggesting that cyclin C may be involved in Rb phosphorylation to promote cell cycle re-entry of G0 arrested cells. Immunoprecipitation kinase assays revealed that cyclin C has Rb kinase activity. Furthermore, unlike cyclins D and E, cyclin C's Rb kinase activity is highest during early G1 and lowest during late G1 and S phases, suggesting that it may be involved in the G0 to G1 transition. The use of fluorescence-activated cell sorting to identify G0 cells, which are characterized by a high DNA to RNA ratio relative to G1 cells, confirmed the suspicion that cyclin C promotes G0 exit as repression of endogenous cyclin C by RNAi in mammalian cells increased the proportion of cells arrested in G0. 
Further experiments involving mutation of Rb at specific phosphorylation sites showed that cyclin C phosphorylation of Rb at S807/811 is necessary for G0 exit. It remains unclear, however, whether this phosphorylation pattern is sufficient for G0 exit. Finally, co-immunoprecipitation assays revealed that cyclin-dependent kinase 3 (cdk3) promotes G0 exit by forming a complex with cyclin C to phosphorylate Rb at S807/811. Interestingly, S807/811 are also targets of cyclin D/cdk4 phosphorylation during the G1 to S transition. This might suggest a possible compensation of cdk3 activity by cdk4, especially in light of the observation that G0 exit is only delayed, and not permanently inhibited, in cells lacking cdk3 but functional in cdk4. Despite the overlap of phosphorylation targets, it seems that cdk3 is still necessary for the most effective transition from G0 to G1. Rb and G0 exit Studies suggest that Rb repression of the E2F family of transcription factors regulates the G0 to G1 transition just as it does the G1 to S transition. Activating E2F complexes are associated with the recruitment of histone acetyltransferases, which activate gene expression necessary for G1 entry, while E2F4 complexes recruit histone deacetylases, which repress gene expression. Phosphorylation of Rb by Cdk complexes allows its dissociation from E2F transcription factors and the subsequent expression of genes necessary for G0 exit. Other members of the Rb pocket protein family, such as p107 and p130, have also been found to be involved in G0 arrest. p130 levels are elevated in G0 and have been found to associate with E2F-4 complexes to repress transcription of E2F target genes. Meanwhile, p107 has been found to rescue the cell arrest phenotype after loss of Rb even though p107 is expressed at comparatively low levels in G0 cells. Taken together, these findings suggest that Rb repression of E2F transcription factors promotes cell arrest while phosphorylation of Rb leads to G0 exit via derepression of E2F target genes. In addition to its regulation of E2F, Rb has also been shown to suppress RNA polymerase I and RNA polymerase III, which are involved in rRNA synthesis. Thus, phosphorylation of Rb also allows activation of rRNA synthesis, which is crucial for protein synthesis upon entry into G1. References Cell cycle
G0 phase
[ "Biology" ]
4,361
[ "Cell cycle", "Cellular processes" ]
619,137
https://en.wikipedia.org/wiki/Origin%20of%20replication
The origin of replication (also called the replication origin) is a particular sequence in a genome at which replication is initiated. Propagation of the genetic material between generations requires timely and accurate duplication of DNA by semiconservative replication prior to cell division to ensure each daughter cell receives the full complement of chromosomes. This can either involve the replication of DNA in living organisms such as prokaryotes and eukaryotes, or that of DNA or RNA in viruses, such as double-stranded RNA viruses. Synthesis of daughter strands starts at discrete sites, termed replication origins, and proceeds in a bidirectional manner until all genomic DNA is replicated. Despite the fundamental nature of these events, organisms have evolved surprisingly divergent strategies that control replication onset. Although the specific replication origin organization structure and recognition varies from species to species, some common characteristics are shared. Features A key prerequisite for DNA replication is that it must occur with extremely high fidelity and efficiency exactly once per cell cycle to prevent the accumulation of genetic alterations with potentially deleterious consequences for cell survival and organismal viability. Incomplete, erroneous, or untimely DNA replication events can give rise to mutations, chromosomal polyploidy or aneuploidy, and gene copy number variations, each of which in turn can lead to diseases, including cancer. To ensure complete and accurate duplication of the entire genome and the correct flow of genetic information to progeny cells, all DNA replication events are not only tightly regulated with cell cycle cues but are also coordinated with other cellular events such as transcription and DNA repair. Additionally, origin sequences commonly have high AT-content across all kingdoms, since repeats of adenine and thymine are easier to separate because their base stacking interactions are not as strong as those of guanine and cytosine. DNA replication is divided into different stages. During initiation, the replication machineries – termed replisomes – are assembled on DNA in a bidirectional fashion. These assembly loci constitute the start sites of DNA replication or replication origins. In the elongation phase, replisomes travel in opposite directions with the replication forks, unwinding the DNA helix and synthesizing complementary daughter DNA strands using both parental strands as templates. Once replication is complete, specific termination events lead to the disassembly of replisomes. As long as the entire genome is duplicated before cell division, one might assume that the location of replication start sites does not matter; yet, it has been shown that many organisms use preferred genomic regions as origins. The necessity to regulate origin location likely arises from the need to coordinate DNA replication with other processes that act on the shared chromatin template to avoid DNA strand breaks and DNA damage. Replicon model More than five decades ago, Jacob, Brenner, and Cuzin proposed the replicon hypothesis to explain the regulation of chromosomal DNA synthesis in E. coli. The model postulates that a diffusible, trans-acting factor, a so-called initiator, interacts with a cis-acting DNA element, the replicator, to promote replication onset at a nearby origin. 
Once bound to replicators, initiators (often with the help of co-loader proteins) deposit replicative helicases onto DNA, which subsequently drive the recruitment of additional replisome components and the assembly of the entire replication machinery. The replicator thereby specifies the location of replication initiation events, and the chromosome region that is replicated from a single origin or initiation event is defined as the replicon. A fundamental feature of the replicon hypothesis is that it relies on positive regulation to control DNA replication onset, which can explain many experimental observations in bacterial and phage systems. For example, it accounts for the failure of extrachromosomal DNAs without origins to replicate when introduced into host cells. It further rationalizes plasmid incompatibilities in E. coli, where certain plasmids destabilize each other's inheritance due to competition for the same molecular initiation machinery. By contrast, a model of negative regulation (analogous to the replicon-operator model for transcription) fails to explain the above findings. Nonetheless, research subsequent to Jacob's, Brenner's and Cuzin's proposal of the replicon model has discovered many additional layers of replication control in bacteria and eukaryotes that comprise both positive and negative regulatory elements, highlighting both the complexity and the importance of restricting DNA replication temporally and spatially. The concept of the replicator as a genetic entity has proven very useful in the quest to identify replicator DNA sequences and initiator proteins in prokaryotes, and to some extent also in eukaryotes, although the organization and complexity of replicators differ considerably between the domains of life. While bacterial genomes typically contain a single replicator that is specified by consensus DNA sequence elements and that controls replication of the entire chromosome, most eukaryotic replicators – with the exception of budding yeast – are not defined at the level of DNA sequence; instead, they appear to be specified combinatorially by local DNA structural and chromatin cues. Eukaryotic chromosomes are also much larger than their bacterial counterparts, raising the need for initiating DNA synthesis from many origins simultaneously to ensure timely replication of the entire genome. Additionally, many more replicative helicases are loaded than activated to initiate replication in a given cell cycle. The context-driven definition of replicators and selection of origins suggests a relaxed replicon model in eukaryotic systems that allows for flexibility in the DNA replication program. Although replicators and origins can be spaced physically apart on chromosomes, they often co-localize or are located in close proximity; for simplicity, we will thus refer to both elements as ‘origins’ throughout this review. Taken together, the discovery and isolation of origin sequences in various organisms represents a significant milestone towards gaining mechanistic understanding of replication initiation. In addition, these accomplishments had profound biotechnological implications for the development of shuttle vectors that can be propagated in bacterial, yeast and mammalian cells. Bacterial Most bacterial chromosomes are circular and contain a single origin of chromosomal replication (oriC). 
Bacterial oriC regions are surprisingly diverse in size (ranging from 250 bp to 2 kbp), sequence, and organization; nonetheless, their ability to drive replication onset typically depends on sequence-specific readout of consensus DNA elements by the bacterial initiator, a protein called DnaA. Origins in bacteria are either continuous or bipartite and contain three functional elements that control origin activity: conserved DNA repeats that are specifically recognized by DnaA (called DnaA-boxes), an AT-rich DNA unwinding element (DUE), and binding sites for proteins that help regulate replication initiation. Interactions of DnaA both with the double-stranded (ds) DnaA-box regions and with single-stranded (ss) DNA in the DUE are important for origin activation and are mediated by different domains in the initiator protein: a Helix-turn-helix (HTH) DNA binding element and an ATPase associated with various cellular activities (AAA+) domain, respectively. While the sequence, number, and arrangement of origin-associated DnaA-boxes vary throughout the bacterial kingdom, their specific positioning and spacing in a given species are critical for oriC function and for productive initiation complex formation. Among bacteria, E. coli is a particularly powerful model system to study the organization, recognition, and activation mechanism of replication origins. E. coli oriC comprises an approximately ~260 bp region containing four types of initiator binding elements that differ in their affinities for DnaA and their dependencies on the co-factor ATP. DnaA-boxes R1, R2, and R4 constitute high-affinity sites that are bound by the HTH domain of DnaA irrespective of the nucleotide-binding state of the initiator. By contrast, the I, τ, and C-sites, which are interspersed between the R-sites, are low-affinity DnaA-boxes and associate preferentially with ATP-bound DnaA, although ADP-DnaA can substitute for ATP-DnaA under certain conditions. Binding of the HTH domains to the high- and low-affinity DnaA recognition elements promotes ATP-dependent higher-order oligomerization of DnaA's AAA+ modules into a right-handed filament that wraps duplex DNA around its outer surface, thereby generating superhelical torsion that facilitates melting of the adjacent AT-rich DUE. DNA strand separation is additionally aided by direct interactions of DnaA's AAA+ ATPase domain with triplet repeats, so-called DnaA-trios, in the proximal DUE region. The engagement of single-stranded trinucleotide segments by the initiator filament stretches DNA and stabilizes the initiation bubble by preventing reannealing. The DnaA-trio origin element is conserved in many bacterial species, indicating it is a key element for origin function. After melting, the DUE provides an entry site for the E. coli replicative helicase DnaB, which is deposited onto each of the single DNA strands by its loader protein DnaC. Although the different DNA binding activities of DnaA have been extensively studied biochemically and various apo, ssDNA-, or dsDNA-bound structures have been determined, the exact architecture of the higher-order DnaA-oriC initiation assembly remains unclear. Two models have been proposed to explain the organization of essential origin elements and DnaA-mediated oriC melting. The two-state model assumes a continuous DnaA filament that switches from a dsDNA binding mode (the organizing complex) to an ssDNA binding mode in the DUE (the melting complex). 
By contrast, in the loop-back model, the DNA is sharply bent in oriC and folds back onto the initiator filament so that DnaA protomers simultaneously engage double- and single-stranded DNA regions. Elucidating how exactly oriC DNA is organized by DnaA remains thus an important task for future studies. Insights into initiation complex architecture will help explain not only how origin DNA is melted, but also how a replicative helicase is loaded directionally onto each of the exposed single DNA strands in the unwound DUE, and how these events are aided by interactions of the helicase with the initiator and specific loader proteins. Archaeal Archaeal replication origins share some but not all of the organizational features of bacterial oriC. Unlike bacteria, Archaea often initiate replication from multiple origins per chromosome (one to four have been reported); yet, archaeal origins also bear specialized sequence regions that control origin function. These elements include both DNA sequence-specific origin recognition boxes (ORBs or miniORBs) and an AT-rich DUE that is flanked by one or several ORB regions. ORB elements display a considerable degree of diversity in terms of their number, arrangement, and sequence, both among different archaeal species and among different origins in a single species. An additional degree of complexity is introduced by the initiator, Orc1/Cdc6 in archaea, which binds to ORB regions. Archaeal genomes typically encode multiple paralogs of Orc1/Cdc6 that vary substantially in their affinities for distinct ORB elements and that differentially contribute to origin activities. In Sulfolobus solfataricus, for example, three chromosomal origins have been mapped (oriC1, oriC2, and oriC3), and biochemical studies have revealed complex binding patterns of initiators at these sites. The cognate initiator for oriC1 is Orc1-1, which associates with several ORBs at this origin. OriC2 and oriC3 are bound by both Orc1-1 and Orc1-3. Conversely, a third paralog, Orc1-2, footprints at all three origins but has been postulated to negatively regulate replication initiation. Additionally, the WhiP protein, an initiator unrelated to Orc1/Cdc6, has been shown to bind all origins as well and to drive origin activity of oriC3 in the closely related Sulfolobus islandicus. Because archaeal origins often contain several adjacent ORB elements, multiple Orc1/Cdc6 paralogs can be simultaneously recruited to an origin and oligomerize in some instances; however, in contrast to bacterial DnaA, formation of a higher-order initiator assembly does not appear to be a general prerequisite for origin function in the archaeal domain. Structural studies have provided insights into how archaeal Orc1/Cdc6 recognizes ORB elements and remodels origin DNA. Orc1/Cdc6 paralogs are two-domain proteins and are composed of a AAA+ ATPase module fused to a C-terminal winged-helix fold. DNA-complexed structures of Orc1/Cdc6 revealed that ORBs are bound by an Orc1/Cdc6 monomer despite the presence of inverted repeat sequences within ORB elements. Both the ATPase and winged-helix regions interact with the DNA duplex but contact the palindromic ORB repeat sequence asymmetrically, which orients Orc1/Cdc6 in a specific direction on the repeat. Interestingly, the DUE-flanking ORB or miniORB elements often have opposite polarities, which predicts that the AAA+ lid subdomains and the winged-helix domains of Orc1/Cdc6 are positioned on either side of the DUE in a manner where they face each other. 
Since both regions of Orc1/Cdc6 associate with a minichromosome maintenance (MCM) replicative helicase, this specific arrangement of ORB elements and Orc1/Cdc6 is likely important for loading two MCM complexes symmetrically onto the DUE. Surprisingly, while the ORB DNA sequence determines the directionality of Orc1/Cdc6 binding, the initiator makes relatively few sequence-specific contacts with DNA. However, Orc1/Cdc6 severely underwinds and bends DNA, suggesting that it relies on a mix of both DNA sequence and context-dependent DNA structural features to recognize origins. Notably, base pairing is maintained in the distorted DNA duplex upon Orc1/Cdc6 binding in the crystal structures, whereas biochemical studies have yielded contradictory findings as to whether archaeal initiators can melt DNA similarly to bacterial DnaA. Although the evolutionary kinship of archaeal and eukaryotic initiators and replicative helicases indicates that archaeal MCM is likely loaded onto duplex DNA (see next section), the temporal order of origin melting and helicase loading, as well as the mechanism for origin DNA melting, in archaeal systems therefore remains to be clearly established. Likewise, how exactly the MCM helicase is loaded onto DNA needs to be addressed in future studies. Eukaryotic Origin organization, specification, and activation in eukaryotes are more complex than in bacterial or archaeal domains and significantly deviate from the paradigm established for prokaryotic replication initiation. The large genome sizes of eukaryotic cells, which range from 12 Mbp in S. cerevisiae to more than 100 Gbp in some plants, necessitate that DNA replication starts at several hundred (in budding yeast) to tens of thousands (in humans) origins to complete DNA replication of all chromosomes during each cell cycle. With the exception of S. cerevisiae and related Saccharomycotina species, eukaryotic origins do not contain consensus DNA sequence elements, but their location is influenced by contextual cues such as local DNA topology, DNA structural features, and chromatin environment. Eukaryotic origin function relies on a conserved initiator protein complex to load replicative helicases onto DNA during the late M and G1 phases of the cell cycle, a step known as origin licensing. In contrast to their bacterial counterparts, replicative helicases in eukaryotes are loaded onto origin duplex DNA in an inactive, double-hexameric form and only a subset of them (10-20% in mammalian cells) is activated during any given S phase, events that are referred to as origin firing. The location of active eukaryotic origins is therefore determined on at least two different levels, origin licensing to mark all potential origins, and origin firing to select a subset that permits assembly of the replication machinery and initiation of DNA synthesis. The extra licensed origins serve as backup and are activated only upon slowing or stalling of nearby replication forks, ensuring that DNA replication can be completed when cells encounter replication stress. In the absence of stress, firing of extra origins is suppressed by a replication-associated signaling mechanism. Together, the excess of licensed origins and the tight cell cycle control of origin licensing and firing embody two important strategies to prevent under- and overreplication and to maintain the integrity of eukaryotic genomes. Early studies in S. 
cerevisiae indicated that replication origins in eukaryotes might be recognized in a DNA-sequence-specific manner analogously to those in prokaryotes. In budding yeast, the search for genetic replicators led to the identification of autonomously replicating sequences (ARS) that support efficient DNA replication initiation of extrachromosomal DNA. These ARS regions are approximately 100-200 bp long and exhibit a multipartite organization, containing A, B1, B2, and sometimes B3 elements that together are essential for origin function. The A element encompasses the conserved 11 bp ARS consensus sequence (ACS), which, in conjunction with the B1 element, constitutes the primary binding site for the heterohexameric origin recognition complex (ORC), the eukaryotic replication initiator. Within ORC, five subunits are predicated on conserved AAA+ ATPase and winged-helix folds and co-assemble into a pentameric ring that encircles DNA. In budding yeast ORC, DNA binding elements in the ATPase and winged-helix domains, as well as adjacent basic patch regions in some of the ORC subunits, are positioned in the central pore of the ORC ring such that they aid the DNA-sequence-specific recognition of the ACS in an ATP-dependent manner. By contrast, the roles of the B2 and B3 elements are less clear. The B2 region is similar to the ACS in sequence and has been suggested to function as a second ORC binding site under certain conditions, or as a binding site for the replicative helicase core. Conversely, the B3 element recruits the transcription factor Abf1, although B3 is not found at all budding yeast origins and Abf1 binding does not appear to be strictly essential for origin function. Origin recognition in eukaryotes other than S. cerevisiae or its close relatives does not conform to the sequence-specific read-out of conserved origin DNA elements. Efforts to isolate specific chromosomal replicator sequences more generally in eukaryotic species, either genetically or by genome-wide mapping of initiator binding or replication start sites, have failed to identify clear consensus sequences at origins. Thus, sequence-specific DNA-initiator interactions in budding yeast signify a specialized mode for origin recognition in this system rather than an archetypal mode for origin specification across the eukaryotic domain. Nonetheless, DNA replication does initiate at discrete sites that are not randomly distributed across eukaryotic genomes, arguing that alternative means determine the chromosomal location of origins in these systems. These mechanisms involve a complex interplay between DNA accessibility, nucleotide sequence skew (both AT-richness and CpG islands have been linked to origins), nucleosome positioning, epigenetic features, DNA topology and certain DNA structural features (e.g., G4 motifs), as well as regulatory proteins and transcriptional interference. Importantly, origin properties vary not only between different origins in an organism and among species, but some can also change during development and cell differentiation. The chorion locus in Drosophila follicle cells constitutes a well-established example of spatial and developmental control of initiation events. This region undergoes DNA-replication-dependent gene amplification at a defined stage during oogenesis and relies on the timely and specific activation of chorion origins, which in turn is regulated by origin-specific cis-elements and several protein factors, including the Myb complex, E2F1, and E2F2. 
This combinatorial specification and multifactorial regulation of metazoan origins has complicated the identification of unifying features that determine the location of replication start sites across eukaryotes more generally. To facilitate replication initiation and origin recognition, ORC assemblies from various species have evolved specialized auxiliary domains that are thought to aid initiator targeting to chromosomal origins or chromosomes in general. For example, the Orc4 subunit in S. pombe ORC contains several AT-hooks that preferentially bind AT-rich DNA, while in metazoan (animal) ORC the TFIIB-like domain of Orc6 is thought to perform a similar function. Metazoan Orc1 proteins also harbor a bromo-adjacent homology (BAH) domain that interacts with H4K20me2-nucleosomes. Particularly in mammalian cells, H4K20 methylation has been reported to be required for efficient replication initiation, and the Orc1's BAH domain facilitates ORC association with chromosomes and Epstein-Barr virus origin-dependent replication. Therefore, it is intriguing to speculate that both observations are mechanistically linked at least in a subset of metazoa, but this possibility needs to be further explored in future studies. In addition to the recognition of certain DNA or epigenetic features, ORC also associates directly or indirectly with several partner proteins that could aid initiator recruitment, including LRWD1, PHIP (or DCAF14), HMGA1a, among others. Interestingly, Drosophila ORC, like its budding yeast counterpart, bends DNA and negative supercoiling has been reported to enhance DNA binding of this complex, suggesting that DNA shape and malleability might influence the location of ORC binding sites across metazoan genomes. A molecular understanding for how ORC's DNA binding regions might support the read out of structural properties of the DNA duplex in metazoans rather than of specific DNA sequences as in S. cerevisiae awaits high-resolution structural information of DNA-bound metazoan initiator assemblies. Likewise, whether and how different epigenetic factors contribute to initiator recruitment in metazoan systems is poorly defined and is an important question that needs to be addressed in more detail. Once recruited to origins, ORC and its co-factors Cdc6 and Cdt1 drive the deposition of the minichromosome maintenance 2-7 (Mcm2-7) complex onto DNA. Like the archaeal replicative helicase core, Mcm2-7 is loaded as a head-to-head double hexamer onto DNA to license origins. In S-phase, Dbf4-dependent kinase (DDK) and Cyclin-dependent kinase (CDK) phosphorylate several Mcm2-7 subunits and additional initiation factors to promote the recruitment of the helicase co-activators Cdc45 and GINS, DNA melting, and ultimately bidirectional replisome assembly at a subset of the licensed origins. In both yeast and metazoans, origins are free or depleted of nucleosomes, a property that is crucial for Mcm2-7 loading, indicating that chromatin state at origins regulates not only initiator recruitment but also helicase loading. A permissive chromatin environment is further important for origin activation and has been implicated in regulating both origin efficiency and the timing of origin firing. Euchromatic origins typically contain active chromatin marks, replicate early, and are more efficient than late-replicating, heterochromatic origins, which conversely are characterized by repressive marks. 
Not surprisingly, several chromatin remodelers and chromatin-modifying enzymes have been found to associate with origins and certain initiation factors, but how their activities impact different replication initiation events remains largely obscure. Remarkably, cis-acting “early replication control elements” (ECREs) have recently also been identified to help regulate replication timing and to influence 3D genome architecture in mammalian cells. Understanding the molecular and biochemical mechanisms that orchestrate this complex interplay between 3D genome organization, local and higher-order chromatin structure, and replication initiation is an exciting topic for further studies. Why have metazoan replication origins diverged from the DNA sequence-specific recognition paradigm that determines replication start sites in prokaryotes and budding yeast? Observations that metazoan origins often co-localize with promoter regions in Drosophila and mammalian cells and that replication-transcription conflicts due to collisions of the underlying molecular machineries can lead to DNA damage suggest that proper coordination of transcription and replication is important for maintaining genome stability. Recent findings also point to a more direct role of transcription in influencing the location of origins, either by inhibiting Mcm2-7 loading or by repositioning of loaded Mcm2-7 on chromosomes. Sequence-independent (but not necessarily random) initiator binding to DNA additionally allows for flexibility in specifying helicase loading sites and, together with transcriptional interference and the variability in activation efficiencies of licensed origins, likely determines origin location and contributes to the co-regulation of DNA replication and transcriptional programs during development and cell fate transitions. Computational modeling of initiation events in S. pombe, as well as the identification of cell-type specific and developmentally-regulated origins in metazoans, are in agreement with this notion. However, a large degree of flexibility in origin choice also exists among different cells within a single population, albeit the molecular mechanisms that lead to the heterogeneity in origin usage remain ill-defined. Mapping origins in single cells in metazoan systems and correlating these initiation events with single-cell gene expression and chromatin status will be important to elucidate whether origin choice is purely stochastic or controlled in a defined manner. Viral Viruses often possess a single origin of replication. A variety of proteins have been described as being involved in viral replication. For instance, Polyoma viruses utilize host cell DNA polymerases, which attach to a viral origin of replication if the T antigen is present. Variations Although DNA replication is essential for genetic inheritance, defined, site-specific replication origins are technically not a requirement for genome duplication as long as all chromosomes are copied in their entirety to maintain gene copy numbers. Certain bacteriophages and viruses, for example, can initiate DNA replication by homologous recombination independent of dedicated origins. Likewise, the archaeon Haloferax volcanii uses recombination-dependent initiation to duplicate its genome when its endogenous origins are deleted. Similar non-canonical initiation events through break-induced or transcription-initiated replication have been reported in E. coli and S. cerevisiae. 
Nonetheless, despite the ability of cells to sustain viability under these exceptional circumstances, origin-dependent initiation is a common strategy universally adopted across different domains of life. In addition, detailed studies of replication initiation have focused on a limited number of model systems. The extensively studied fungi and metazoa are both members of the opisthokont supergroup and exemplify only a small fraction of the evolutionary landscape in the eukaryotic domain. Comparably few efforts have been directed at other eukaryotic model systems, such as kinetoplastids or tetrahymena. Surprisingly, these studies have revealed interesting differences both in origin properties and in initiator composition compared to yeast and metazoans. See also OriDB the DNA Replication Origin Database Origin of transfer References Further reading External links Ori-Finder, an online software for prediction of bacterial and archaeal oriCs DNA replication
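As an aside on how origins like those described above are located computationally: one widely used signal in bacteria is strand-compositional asymmetry, because the leading and lagging strands accumulate different base biases, so the cumulative GC skew of a circular chromosome typically reaches a minimum near oriC and a maximum near the terminus. Prediction tools such as Ori-Finder (listed under External links) combine this kind of compositional signal with additional features such as the distribution of DnaA boxes. The sketch below is only a simplified Python illustration of the cumulative-skew idea, not the algorithm of Ori-Finder or any other specific tool; the toy sequence is an arbitrary placeholder, and a real analysis would use a complete chromosome sequence.

# Illustrative sketch: rough oriC localization by cumulative GC skew.
# The running (G - C) count typically dips to a minimum near the origin
# of a circular bacterial chromosome and peaks near the terminus.
def cumulative_gc_skew(genome):
    """Return the running (G - C) count along the sequence."""
    skew = [0]
    for base in genome.upper():
        if base == 'G':
            skew.append(skew[-1] + 1)
        elif base == 'C':
            skew.append(skew[-1] - 1)
        else:
            skew.append(skew[-1])
    return skew

def candidate_origin_position(genome):
    """Index at which the cumulative skew is minimal -- a crude oriC candidate."""
    skew = cumulative_gc_skew(genome)
    return skew.index(min(skew))

# Toy usage with a made-up sequence, not a real chromosome.
print(candidate_origin_position("CATGGGCATCGGCCATACGCC"))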
Origin of replication
[ "Biology" ]
5,863
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
619,178
https://en.wikipedia.org/wiki/Pre-replication%20complex
A pre-replication complex (pre-RC) is a protein complex that forms at the origin of replication during the initiation step of DNA replication. Formation of the pre-RC is required for DNA replication to occur. Complete and faithful replication of the genome ensures that each daughter cell will carry the same genetic information as the parent cell. Accordingly, formation of the pre-RC is a very important part of the cell cycle. Components As organisms evolved and became increasingly complex, so did their pre-RCs. The following is a summary of the components of the pre-RC amongst the different domains of life. In bacteria, the main component of the pre-RC is DnaA. The pre-RC is complete when DnaA occupies all of its binding sites within the bacterial origin of replication (oriC). The particular sites on the oriC that DnaA binds to determine whether the cell has a bORC (bacterial Origin Recognition Complex) or a pre-RC. The archaeal pre-RC is very different from the bacterial pre-RC and can serve as a simplified model of the eukaryotic pre-RC. It is composed of a single origin recognition complex (ORC) protein, Cdc6/ORC1, and a homohexamer of the minichromosome maintenance (MCM) protein. Sulfolobus islandicus also uses a Cdt1 homologue to recognize one of its replication origins. The eukaryotic pre-RC is the most complex and highly regulated pre-RC. In most eukaryotes it is composed of six ORC proteins (ORC1-6), Cdc6, Cdt1, and a heterohexamer of the six MCM proteins (MCM2-7). The MCM heterohexamer arguably arose via MCM gene duplication events and subsequent divergent evolution. The pre-RC of Schizosaccharomyces pombe (S. pombe) is notably different from that of other eukaryotes; Cdc6 is replaced by the homologous Cdc18 protein. Sap1 is also included in the S. pombe pre-RC because it is required for Cdc18 binding. The pre-RC of Xenopus laevis (X. laevis) also has an additional protein, MCM9, which helps load the MCM heterohexamer onto the origin of replication. The structures of the ORC, the MCM complex, and the intermediate ORC-Cdc6-Cdt1-Mcm2-7 (OCCM) complex have been resolved. Recognition of the origin of replication Recognition of the origin of replication is a critical first step in the formation of the pre-RC. In different domains of life this process is accomplished differently. In prokaryotes, origin recognition is accomplished by DnaA. DnaA binds tightly to a 9-base pair consensus sequence in oriC: 5' – TTATCCACA – 3'. There are 5 such 9-bp sequences (R1-R5) and 4 non-consensus sequences (I1-I4) within oriC that DnaA binds with differential affinity. DnaA binds R4, R1, and R2 with high affinity and R5, I1, I2, I3, and R3 with lesser affinity. In vivo, it has been observed that DnaA binding to the recognition sites occurs in the order R1, R2, then R4, which forms the bORC. Afterwards, DnaA occupies the other, lower-affinity 9 bp recognition sites, which forms the pre-RC. Archaea have 1–3 origins of replication. The origins are generally AT-rich tracts that vary based on the archaeal species. The single archaeal ORC protein recognizes the AT-rich tracts and binds DNA in an ATP-dependent fashion. Eukaryotes typically have multiple origins of replication, with at least one per chromosome. Saccharomyces cerevisiae (S. cerevisiae) is the only known eukaryote with a defined initiation sequence, TTTTTATG/ATTTA/T. This initiation sequence is recognized by ORC1-5. ORC6 is not known to bind DNA in S. cerevisiae. Initiation sequences in S. pombe and higher eukaryotes are not well defined. 
However, the initiation sequences are generally either AT-rich or exhibit bent or curved DNA topology. The ORC4 protein is known to bind the AT-rich portion of the origin of replication in S. pombe using AT-hook motifs. The mechanism of origin recognition in higher eukaryotes is not well understood, but it is thought that the ORC1-6 proteins depend on unusual DNA topology for binding. Loading Assembly of the pre-replication complex only occurs during late M phase and early G1 phase of the cell cycle when cyclin-dependent kinase (CDK) activity is low. This timing and other regulatory mechanisms ensure that DNA replication will only occur once per cell cycle. Assembly of the pre-RC relies on prior origin recognition, either by DnaA in prokaryotes or by ORC in archaea and eukaryotes. The pre-RC of prokaryotes is complete when DnaA occupies all possible binding sites within the oriC. DnaA can only bind to the low-affinity sites on the oriC once the protein Fis is removed from the oriC. Upon removal of Fis, the protein IHF (integration host factor) binds to a site between R1 and R2, which allows DnaA to bind to the low-affinity sites on the oriC. This completes the pre-RC. The pre-RC of archaea requires ORC binding of the origin. After this, Cdc6 and the MCM homohexameric complex bind in a sequential fashion. Eukaryotes have the most complex pre-RC. After ORC1-6 bind the origin of replication, Cdc6 is recruited. Cdc6 recruits the licensing factor Cdt1 and MCM2-7. Cdt1 binding and ATP hydrolysis by the ORC and Cdc6 load MCM2-7 onto DNA. There is a stoichiometric excess of the MCM proteins over the ORC and Cdc6 proteins, indicating that there may be multiple MCM heterohexamers bound to each origin of replication. Initiation of replication After the pre-RC is formed it must be activated and the replisome assembled in order for DNA replication to occur. In prokaryotes, DnaA hydrolyzes ATP in order to unwind DNA at the oriC. This denatured region is accessible to the DnaB helicase and DnaC helicase loader. Single-strand binding proteins stabilize the newly formed replication bubble and interact with the DnaG primase. DnaG recruits the replicative DNA polymerase III, and replication begins. In eukaryotes, the MCM heterohexamer is phosphorylated by CDC7 and CDK, which displaces Cdc6 and recruits MCM10. MCM10 cooperates with MCM2-7 in the recruitment of Cdc45. Cdc45 then recruits key components of the replisome: the replicative DNA polymerase α and its primase. DNA replication can then begin. Prevention of pre-replication complex re-assembly During each cell cycle, it is important that the genome be completely replicated once and only once. Formation of the pre-replication complex during late M and early G1 phase is required for genome replication, but after the genome has been replicated the pre-RC must not form again until the next cell cycle. In prokaryotes, various studies have demonstrated that the pre-RC is a complex that is only present for a fraction of the cell cycle. Once a cellular division occurs, the pre-RC must revert to the bORC to ensure that only one round of DNA replication occurs during division. In E. coli, there are 11 GATC sites in the oriC that undergo hemimethylation during DNA replication. The protein SeqA binds to these sites, preventing remethylation and blocking the binding of DnaA to low-affinity sites for approximately one third of the cell cycle. However, SeqA does not block DnaA from binding to the R1, R2, and R4 sites. 
Thus, the bORC is reset and is prepared to undergo another conversion to the pre-RC. In S. cerevisiae, CDKs prevent formation of the replication complex during late G1, S, and G2 phases by excluding MCM2-7 and Cdt1 from the nucleus, targeting Cdc6 for degradation by the proteasome, and dissociating ORC1-6 from chromatin via phosphorylation. Prevention of re-replication in S. pombe is slightly different; Cdt1 is degraded by the proteasome instead of merely being excluded from the nucleus. Proteolytic regulation of Cdt1 is shared by higher eukaryotes including Caenorhabditis elegans, Drosophila melanogaster, X. laevis, and mammals. Metazoans have a fourth mechanism to prevent re-replication: during S and G2, geminin binds to Cdt1 and prevents it from loading MCM2-7 onto the origin of replication. Meier-Gorlin syndrome Defects in components of the eukaryotic replication complex are known to cause Meier-Gorlin syndrome, which is characterized by dwarfism, absent or hypoplastic patellae, small ears, impaired pre- and post-natal growth, and microcephaly. Known mutations are in the ORC1, ORC4, ORC6, CDT1, and CDC6 genes. The disease phenotype probably originates from a reduced ability of cells to proliferate, leading to a reduction in cell number and general growth failure. References DNA replication
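To make the sequence elements discussed in this article concrete, the short sketch below shows how one might scan a DNA string for approximate matches to the 9 bp DnaA-box consensus quoted above (TTATCCACA), tolerating a few mismatches because the R- and I-sites deviate from the consensus to different extents, and how one might count GATC sites of the kind monitored by SeqA. This is only an illustrative sketch: the example sequence is a made-up placeholder rather than the real E. coli oriC, the mismatch cutoff is arbitrary, and a genuine analysis would also scan the reverse complement and rely on the experimentally mapped site positions.

# Illustrative sketch (not a validated analysis tool): scan for approximate
# matches to the DnaA-box consensus TTATCCACA and count GATC (Dam/SeqA) sites.
CONSENSUS = "TTATCCACA"

def find_dnaa_boxes(seq, max_mismatches=2):
    """Return (position, window, mismatches) for each near-consensus window."""
    seq = seq.upper()
    hits = []
    for i in range(len(seq) - len(CONSENSUS) + 1):
        window = seq[i:i + len(CONSENSUS)]
        mismatches = sum(1 for a, b in zip(window, CONSENSUS) if a != b)
        if mismatches <= max_mismatches:
            hits.append((i, window, mismatches))
    return hits

def count_gatc_sites(seq):
    """Count GATC motifs, the sites that become hemimethylated and bind SeqA."""
    seq = seq.upper()
    return sum(1 for i in range(len(seq) - 3) if seq[i:i + 4] == "GATC")

# Toy usage with a placeholder sequence, not the real oriC.
toy_sequence = "AAGATCTTATCCACAGGGATCTTATACACATT"
print(find_dnaa_boxes(toy_sequence))
print(count_gatc_sites(toy_sequence))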
Pre-replication complex
[ "Biology" ]
2,069
[ "Genetics techniques", "DNA replication", "Molecular genetics" ]
619,185
https://en.wikipedia.org/wiki/Bruce%20Perens%27%20Open%20Source%20Series
The Bruce Perens' Open Source Series was a series of books edited by Bruce Perens as series editor and published by Prentice Hall PTR. Principal topics were Linux and other open-source software. These books were intended for professional software developers, system and network administrators, and power users. The series was published between 2002 and 2006; there were 24 titles in the series. Each book was licensed under the Open Publication License and was made available as a free download several months after publication. It was the first book series to be published under an open content license. References Books about free software Book series Books about Linux System administration Open Publication License-licensed works Open content
Bruce Perens' Open Source Series
[ "Technology" ]
134
[ "Computing stubs", "Information systems", "Computer book stubs", "System administration" ]
619,195
https://en.wikipedia.org/wiki/Walter%20McCrone
Walter Cox McCrone Jr. (June 9, 1916 – July 10, 2002) was an American chemist who worked extensively on applications of polarized light microscopy and is sometimes characterized as the "father of modern microscopy". He was also an expert in electron microscopy, crystallography, ultra-microanalysis, and particle identification. In 1960 he founded the McCrone Research Institute, a non-profit educational and research organization for microscopy based in Chicago. McCrone's crystallographic work on polymorphism and its pharmaceutical applications played a central role in the subsequent development of the field. To the general public, McCrone was best known for his work in forensic science, especially his analyses of the Vinland Map and the Shroud of Turin. In 2000, he received the American Chemical Society's National Award in Analytical Chemistry. Biography Walter McCrone was born in Wilmington, Delaware, but he grew up mostly in New York State. His father was a civil engineer in charge of one of the first DuPont plants to manufacture cellophane. McCrone received a bachelor's degree in chemistry from Cornell University in 1938 and a Ph.D. in organic chemistry from the same institution in 1942. From 1942 to 1944 he was a post-doctoral researcher at Cornell. In 1944, McCrone published a detailed study on The Microscopic Examination of High Explosives and Boosters. In 1944, McCrone began to work as a microscopist and materials scientist at the Armour Research Foundation, now the Illinois Institute of Technology (IIT) Research Institute. He was also a professor at IIT and served as assistant chairman of its Chemistry and Chemical Engineering Department. In 1948, McCrone and IIT electron microscopist Charles F. Tufts organized the first of the meetings that are now the International Microscopy Conference (Inter/Micro). Among the speakers at the first conference was Nobel laureate Frits Zernike. In 1956, McCrone left IIT and founded an analytical consulting firm, McCrone Associates, which is now located in Westmont, Illinois. In 1960, he established the McCrone Research Institute, a nonprofit organization for teaching and research in microscopy and crystallography, based in Chicago. In 1979, he retired from McCrone Associates in order to dedicate himself to teaching full time. The proceeds from his work as a consulting chemist allowed McCrone to endow the Émile M. Chamot Professorship of Chemistry at Cornell, named in honor of McCrone's university mentor. According to chemist and forensic scientist John A. Reffner, "during McCrone's life, he taught microscopy to more students than anyone else in history". For more than thirty years McCrone edited and published The Microscope, an international quarterly journal of microscopy that had been established in 1937 by the British microscopist Arthur L. E. Barron. McCrone also wrote more than 400 technical articles along with sixteen books or chapters. He is credited with expanding the usefulness of optical microscopy to chemists, who had previously regarded it as primarily a tool for biologists. One of his publications was the Particle Atlas, first published in 1967, which provided an exhaustive description of small particles and how to identify them with the aid of a microscope. That work became widely used in forensic laboratories. The Particle Atlas, which was written in collaboration with other staff members of McCrone Associates, appeared in a six-volume second edition in 1973. In 1992, it became available on CD-ROM. 
Walter McCrone served on the board of directors and as president of the Ada S. McKinley Community Services, a nonprofit social services agency in Chicago. He died of congestive heart failure at his home in Chicago, at the age of 86. From 1957 until his death in 2002, he was married to Lucy B. McCrone, née Beman. The two had met while she was working as an analytical chemist for the management consulting firm Arthur D. Little, in Cambridge, Massachusetts. After their marriage, Lucy McCrone worked as a chemical microanalyst for McCrone Associates in Chicago and was co-founder and director of the McCrone Research Institute until 1984. Polymorphism In the 1950s and 1960s, McCrone conducted extensive research on the microscopic characterization of polymorphs, which he defined as materials that are "different in crystal structure but identical in the liquid or vapor states". He investigated the difference in the properties of polymorphs of medications, co-authoring with John Haleblian a review article on "the pharmaceutical application of polymorphism", published in 1969. McCrone's work on polymorphism exerted a strong influence upon the scientific career of Joel Bernstein. Vinland Map The Vinland Map appears to be a 15th-century mappa mundi showing a landmass in the Atlantic Ocean, directly south-west of Greenland, labelled Vinlanda Insula ("Isle of Vinland"). It first came to light in 1957 and was acquired by Yale University in 1964. The map's authenticity would have demonstrated the awareness of European cartographers of a part of the American continent before the voyages of Christopher Columbus. McCrone, already reputed for his expertise in authenticating ancient documents and works of art, was asked by Yale to analyze the map in 1972. In 1974, he published evidence that the ink of the map contained synthetic anatase (a form of titanium dioxide), a substance not used as a pigment until the 1920s. McCrone detected the anatase in the yellow ink that the forger used to simulate the natural discoloration that appears over long periods of time around lines drawn on parchment in medieval iron gall ink. McCrone's work on the Vinland Map led to a protracted controversy, with other researchers continuing to argue for the document's authenticity and discounting the presence of anatase as insignificant. In 2021, Raymond Clemens, the curator of early books and manuscripts at Yale's Beinecke Rare Book & Manuscript Library where the map is housed, declared that it had been conclusively shown to be a fake. That judgment was largely based on the presence of synthetic anatase in the ink, as first identified by McCrone. Shroud of Turin As a result of McCrone's work on the Vinland Map, British author and researcher Ian Wilson approached McCrone in 1974 about the possibility of scientifically analyzing the Shroud of Turin, a length of linen cloth that has been venerated for centuries as the burial shroud of Jesus upon which his image is miraculously imprinted. This led to McCrone's involvement with the Shroud of Turin Research Project (STURP). In 1977, a team of scientists affiliated with STURP proposed a barrage of tests to be carried out on the Shroud. With permission from the Archbishop of Turin, Cardinal Anastasio Ballestrero, STURP researchers conducted tests over a period of five days in October 1978, also using adhesive tape to obtain samples of the fibers from various parts on the Shroud's surface. 
Based on his microscopic and chemical analysis of the tape samples obtained by STURP, McCrone concluded that the image on the Shroud was painted with a dilute pigment of red ochre in a collagen tempera (i.e., gelatin) medium, using a technique similar to the grisaille employed in the 14th century by Simone Martini and other European artists. McCrone also found that the "bloodstains" in the image had been highlighted with vermilion (a bright red pigment made from mercury sulfide), also in a collagen tempera medium. McCrone reported that no actual blood was present in the samples taken from the Shroud. McCrone's results were rejected by other members of STURP and McCrone resigned from STURP in June 1980. Two other members of STURP, John Heller and Alan Adler, published their own analysis concluding that Shroud did show traces of blood. Other STURP members also disputed McCrone's conclusion that the Shroud image was painted, finding that physical analyses excluded the presence of pigments in sufficient quantities to account for the visible image. McCrone continued to defend his results and to insist that polarized light microscopy, in which he was the only expert among the original members of STURP, was the correct technique to apply to the study of the Shroud. In 1983, he confidently predicted that radiocarbon dating of the Shroud's linen would show that it had been made shortly before the first historically recorded exhibition of the Shroud in 1356. The results of the 1988 radiocarbon dating of the Shroud vindicated McCrone's microscopic and chemical analyses. McCrone re-stated and summarized his evidence that the Shroud was painted in an article published in 1990 in the journal Accounts of Chemical Research. He later wrote a book on the subject, Judgment Day for the Shroud of Turin, published in 1996 by the McCrone Research Institute's Microscope Publications and re-issued in 1999 by Prometheus Books. In 2000, the American Chemical Society presented McCrone with its National Award in Analytical Chemistry for his work on the Shroud and for "his enduring patience for the defense of his methodologies". Other investigations McCrone's work as a microscopist first attracted widespread public attention when he helped exonerate Lloyd Eldon Miller, a cabdriver who had been sentenced to death for the 1955 murder of an 8-year-old girl in Canton, Illinois. McCrone was able to show that the stains in a pair of undershorts that the prosecution had presented to the jury as blood were actually red paint. Miller's conviction was overturned by the US Supreme Court in 1967. In later life, McCrone microscopically examined the physical evidence (hairs, fibers, blood, etc.) that led to the conviction of Wayne Williams as the Atlanta child killer. That work earned him the 1982 Certificate of Merit from the Forensic Sciences Foundation. On occasion, McCrone was given hair samples of famous people to analyze. Based on such analysis, he rejected the hypothesis that Napoleon had been poisoned with arsenic, but concluded that Beethoven had suffered from lead poisoning. Posthumous recognition The executive council of the Committee for Skeptical Inquiry (CSI) voted in April 2011 to include Walter McCrone in its "Pantheon of Skeptics". The Pantheon of Skeptics commemorates deceased distinguished fellows of CSI and their exceptional contributions to the cause of scientific skepticism. 
References External links McCrone Research Institute website McCrone Associates website 1916 births 2002 deaths Analytical chemists 20th-century American chemists Microscopists Illinois Institute of Technology faculty Cornell University alumni Researchers of the Shroud of Turin
Walter McCrone
[ "Chemistry" ]
2,163
[ "Analytical chemists", "Microscopists", "Microscopy" ]
619,197
https://en.wikipedia.org/wiki/Cyclin
Cyclins are proteins that control the progression of a cell through the cell cycle by activating cyclin-dependent kinases (CDK). Etymology Cyclins were originally discovered by R. Timothy Hunt in 1982 while studying the cell cycle of sea urchins. In an interview for "The Life Scientific", which aired on December 13, 2011 and was hosted by Jim Al-Khalili, R. Timothy Hunt explained that he originally named "cyclin" after his hobby, cycling. It was only after the naming that its importance in the cell cycle became apparent. As it was appropriate, the name stuck. R. Timothy Hunt: "By the way, the name cyclin, which I coined, was really a joke, it's because I liked cycling so much at the time, but they did come and go in the cell..." Function Cyclins were originally named because their concentration varies in a cyclical fashion during the cell cycle. (Note that the cyclins are now classified according to their conserved cyclin box structure, and not all these cyclins alter in level through the cell cycle.) The oscillations of the cyclins, namely fluctuations in cyclin gene expression and destruction by the ubiquitin-mediated proteasome pathway, induce oscillations in Cdk activity to drive the cell cycle. A cyclin forms a complex with a Cdk, which begins to activate it, but complete activation also requires phosphorylation. Complex formation results in activation of the Cdk active site. Cyclins themselves have no enzymatic activity but have binding sites for some substrates and target the Cdks to specific subcellular locations. Cyclins, when bound to their dependent kinases, such as the p34/cdc2/cdk1 protein, form the maturation-promoting factor. MPFs activate other proteins through phosphorylation. These phosphorylated proteins, in turn, are responsible for specific events during cell division such as microtubule formation and chromatin remodeling. Cyclins can be divided into four classes based on their behaviour in the cell cycle of vertebrate somatic cells and yeast cells: G1 cyclins, G1/S cyclins, S cyclins, and M cyclins. This division is useful when talking about most cell cycles, but it is not universal as some cyclins have different functions or timing in different cell types. G1/S cyclins rise in late G1 and fall in early S phase. The Cdk-G1/S cyclin complex begins to induce the initial processes of DNA replication, primarily by arresting systems that prevent S phase Cdk activity in G1. The cyclins also promote other activities to progress the cell cycle, such as centrosome duplication in vertebrates or spindle pole body duplication in yeast. The rise in presence of G1/S cyclins is paralleled by a rise in S cyclins. G1 cyclins do not behave like the other cyclins, in that their concentrations increase gradually (with no oscillation) throughout the cell cycle based on cell growth and the external growth-regulatory signals. The presence of G1 cyclins coordinates cell growth with entry into a new cell cycle. S cyclins bind to Cdk and the complex directly induces DNA replication. The levels of S cyclins remain high, not only throughout S phase, but through G2 and early mitosis as well to promote early events in mitosis. M cyclin concentrations rise as the cell begins to enter mitosis and the concentrations peak at metaphase. Cell changes in the cell cycle like the assembly of mitotic spindles and alignment of sister-chromatids along the spindles are induced by M cyclin-Cdk complexes. 
The destruction of M cyclins during metaphase and anaphase, after the Spindle Assembly Checkpoint is satisfied, causes the exit from mitosis and cytokinesis. Expression of cyclins detected immunocytochemically in individual cells in relation to cellular DNA content (cell cycle phase), or in relation to initiation and termination of DNA replication during S-phase, can be measured by flow cytometry. Kaposi sarcoma herpesvirus (KSHV) encodes a D-type cyclin (ORF72) that binds CDK6 and is likely to contribute to KSHV-related cancers. Domain structure Cyclins are generally very different from each other in primary structure, or amino acid sequence. However, all members of the cyclin family are similar in 100 amino acids that make up the cyclin box. Cyclins contain two domains of a similar all-α fold, the first located at the N-terminus and the second at the C-terminus. All cyclins are believed to contain a similar tertiary structure of two compact domains of 5 α helices. The first of these is the conserved cyclin box, outside of which cyclins are divergent. For example, the amino-terminal regions of S and M cyclins contain short destruction-box motifs that target these proteins for proteolysis in mitosis. Types There are several different cyclins that are active in different parts of the cell cycle and that cause the Cdk to phosphorylate different substrates. There are also several "orphan" cyclins for which no Cdk partner has been identified. For example, cyclin F is an orphan cyclin that is essential for G2/M transition. A study in C. elegans revealed the specific roles of mitotic cyclins. Notably, recent studies have shown that cyclin A creates a cellular environment that promotes microtubule detachment from kinetochores in prometaphase to ensure efficient error correction and faithful chromosome segregation. Cells must separate their chromosomes precisely, an event that relies on the bi-oriented attachment of chromosomes to spindle microtubules through specialized structures called kinetochores. In the early phases of division, there are numerous errors in how kinetochores bind to spindle microtubules. The unstable attachments promote the correction of errors by causing a constant detachment, realignment and reattachment of microtubules from kinetochores in the cells as they try to find the correct attachment. The protein cyclin A governs this process by keeping it going until the errors are eliminated. In normal cells, persistent cyclin A expression prevents the stabilization of microtubules bound to kinetochores even in cells with aligned chromosomes. As levels of cyclin A decline, microtubule attachments become stable, allowing the chromosomes to be divided correctly as cell division proceeds. In contrast, in cyclin A-deficient cells, microtubule attachments are prematurely stabilized. Consequently, these cells may fail to correct errors, leading to higher rates of chromosome mis-segregation. Main groups There are two main groups of cyclins: G1/S cyclins – essential for the control of the cell cycle at the G1/S transition, Cyclin A / CDK2 – active in S phase. Cyclin D / CDK4, Cyclin D / CDK6, and Cyclin E / CDK2 – regulate transition from G1 to S phase. G2/M cyclins – essential for the control of the cell cycle at the G2/M transition (mitosis). G2/M cyclins accumulate steadily during G2 and are abruptly destroyed as cells exit from mitosis (at the end of the M-phase). Cyclin B / CDK1 – regulates progression from G2 to M phase. 
Subtypes The specific cyclin subtypes along with their corresponding CDK (in brackets) are: Other proteins containing this domain In addition, the following human protein contains a cyclin domain: CNTD1 History Leland H. Hartwell, R. Timothy Hunt, and Paul M. Nurse won the 2001 Nobel Prize in Physiology or Medicine for their discovery of cyclin and cyclin-dependent kinase. References Further reading External links Cell cycle Proteins Cell cycle regulators
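The "come and go" behaviour of cyclins described in this article can be caricatured numerically. The deliberately minimal toy model below (a rough sketch, not a quantitative model of any real cyclin) assumes constant cyclin synthesis and a threshold-triggered destruction phase standing in for ubiquitin-mediated proteolysis; once the cyclin pool is largely destroyed, synthesis takes over again, producing a sawtooth oscillation in cyclin level of the kind the article describes. All parameter values are arbitrary illustrative choices.

# Toy illustration (arbitrary parameters, not a real biochemical model):
# constant synthesis plus threshold-triggered destruction yields cyclin oscillations.
def simulate_cyclin(steps=300, dt=0.1, k_syn=1.0, k_deg=8.0, threshold=6.0, reset=0.5):
    cyclin = 0.0
    degrading = False
    trace = []
    for _ in range(steps):
        if cyclin > threshold:
            degrading = True      # destruction machinery switched on
        elif cyclin < reset:
            degrading = False     # destruction switched off once cyclin is low
        rate = k_syn - (k_deg * cyclin if degrading else 0.0)
        cyclin = max(0.0, cyclin + rate * dt)
        trace.append(cyclin)
    return trace

levels = simulate_cyclin()
print(round(min(levels), 2), round(max(levels), 2))  # cyclin repeatedly climbs to the threshold, then collapses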
Cyclin
[ "Chemistry", "Biology" ]
1,724
[ "Biomolecules by chemical classification", "Signal transduction", "Cellular processes", "Molecular biology", "Proteins", "Cell cycle", "Cell cycle regulators" ]
619,200
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase
Cyclin-dependent kinases (CDKs) are a predominant group of serine/threonine protein kinases involved in the regulation of the cell cycle and its progression, ensuring the integrity and functionality of cellular machinery. These regulatory enzymes play a crucial role in the regulation of the eukaryotic cell cycle and transcription, as well as DNA repair, metabolism, and epigenetic regulation, in response to several extracellular and intracellular signals. They are present in all known eukaryotes, and their regulatory function in the cell cycle has been evolutionarily conserved. The catalytic activities of CDKs are regulated by interactions with CDK inhibitors (CKIs) and regulatory subunits known as cyclins. Cyclins have no enzymatic activity themselves, but they become active once they bind to CDKs. Without cyclin, CDK is less active than in the cyclin-CDK heterodimer complex. CDKs phosphorylate proteins on serine (S) or threonine (T) residues. The specificity of CDKs for their substrates is defined by the S/T-P-X-K/R sequence, where S/T is the phosphorylation site, P is proline, X is any amino acid, and the sequence ends with lysine (K) or arginine (R). This motif ensures CDKs accurately target and modify proteins, crucial for regulating the cell cycle and other functions. Deregulation of CDK activity is linked to various pathologies, including cancer, neurodegenerative diseases, and stroke. Evolutionary history CDKs were initially identified through studies in model organisms such as yeasts and frogs, underscoring their pivotal role in cell cycle progression. These enzymes operate by forming complexes with cyclins, whose levels fluctuate throughout the cell cycle, thereby ensuring timely cell cycle transitions. Over the years, the understanding of CDKs has expanded beyond cell division to include roles in gene transcription and the integration of cellular signals. The evolutionary journey of CDKs has led to a diverse family with specific members dedicated to cell cycle phases or transcriptional control. For instance, budding yeast expresses six distinct CDKs, with some binding multiple cyclins for cell cycle control and others binding a single cyclin for transcription regulation. In humans, the expansion to 20 CDKs and 29 cyclins illustrates their complex regulatory roles. Key CDKs such as CDK1 are indispensable for cell cycle control, while others like CDK2 and CDK3 are not. Moreover, transcriptional CDKs, such as CDK7 in humans, play crucial roles in initiating transcription by phosphorylating RNA polymerase II (RNAPII), indicating the intricate link between cell cycle regulation and transcriptional management. This evolutionary expansion from simple regulators to multifunctional enzymes underscores the critical importance of CDKs in the complex regulatory networks of eukaryotic cells. Notable people In 2001, the scientists Leland H. Hartwell, Tim Hunt and Sir Paul M. Nurse were awarded the Nobel Prize in Physiology or Medicine for their discovery of key regulators of the cell cycle. Leland H. Hartwell (b. 1929): Through studies of yeast in 1971, Hartwell identified crucial genes for cell division, outlining the cell cycle's stages and essential checkpoints to prevent cancerous cell division. Tim Hunt (b. 1943): Through studies of sea urchins in the 1980s, Hunt discovered the role of cyclins in the regulation of cell cycle phases through their cyclical synthesis and degradation. Sir Paul M. Nurse (b. 
1949): In the mid-1970s, Nurse's studies uncovered the cdc2 gene in fission yeast, which is crucial for the progression of the cell cycle from G1 to S phase and from G2 to M phase. In 1987, he identified the corresponding gene in humans, CDK1, highlighting the conservation of cell cycle control mechanisms across species. CDKs and cyclins in the cell cycle CDK is one of the estimated 800 human protein kinases. CDKs have low molecular weight, and they are known to be inactive by themselves. They are characterized by their dependency on the regulatory subunit, cyclin. The activation of CDKs also requires post-translational modifications involving phosphorylation reactions. This phosphorylation typically occurs on a specific threonine residue, leading to a conformational change in the CDK that enhances its kinase activity. The activation forms a cyclin-CDK complex which phosphorylates specific regulatory proteins that are required to initiate steps in the cell cycle. In human cells, the CDK family comprises 20 different members that play a crucial role in the regulation of the cell cycle and transcription. These are usually separated into cell-cycle CDKs, which regulate cell-cycle transitions and cell division, and transcriptional CDKs, which mediate gene transcription. CDK1, CDK2, CDK3, CDK4, CDK6, and CDK7 are directly related to the regulation of cell-cycle events, while CDK7–11 are associated with transcriptional regulation. Different cyclin-CDK complexes regulate different phases of the cell cycle, known as G0/G1, S, G2, and M phases, featuring several checkpoints to maintain genomic stability and ensure accurate DNA replication. Cyclin-CDK complexes of earlier cell-cycle phases help activate cyclin-CDK complexes in later phases. CDK structure and activation Cyclin-dependent kinases (CDKs) mainly have a two-lobed configuration, which is characteristic of all kinases in general. CDKs have specific features in their structure that play a major role in their function and regulation. N-terminal lobe (N-lobe): In this part, the inhibitory element known as the glycine-rich G-loop is located. The inhibitory element is found within the beta-sheets in this N-terminal lobe. Additionally, there is a helix known as the C-helix. This helix contains the PSTAIRE sequence in CDK1. This region plays a crucial role in regulating the binding between cyclin-dependent kinases (CDKs) and cyclins. C-terminal lobe (C-lobe): This part contains α-helices and the activation segment, which extends from the DFG motif (D145 in CDK2) to the APE motif (E172 in CDK2). This segment also includes a phosphorylation-sensitive residue (T160 in CDK2) in the so-called T-loop. The activation segment in the C-lobe serves as a platform for the binding of the phospho-acceptor Ser/Thr region of substrates. Cyclin binding The active site, or ATP-binding site, in all kinases is a cleft located between a smaller amino-terminal lobe and a larger carboxy-terminal lobe. Research on the structure of human CDK2 has shown that CDKs have a specially adapted ATP-binding site that can be regulated through the binding of cyclin. Phosphorylation by CDK-activating kinase (CAK) at Thr160 in the T-loop helps to increase the complex's activity. Without cyclin, a flexible loop known as the activation loop or T-loop blocks the cleft, and the positioning of several key amino acids is not optimal for ATP binding. With cyclin, two alpha helices change position to enable ATP binding. 
One of them, the L12 helix located just before the T-loop in the primary sequence, is transformed into a beta strand and helps to reorganize the T-loop so that it no longer blocks the active site. The other alpha helix, known as the PSTAIRE helix, is reorganized and helps to change the position of the key amino acids in the active site. There is considerable specificity in which cyclin binds to which CDK. Furthermore, the cyclin binding determines the specificity of the cyclin-CDK complex for certain substrates, highlighting the importance of distinct activation pathways that confer cyclin-binding specificity on CDK1. This illustrates the complexity and fine-tuning in the regulation of the cell cycle through selective binding and activation of CDKs by their respective cyclins. Cyclins can directly bind the substrate or localize the CDK to a subcellular area where the substrate is found. Identification of the RXL-binding site was crucial in revealing how CDKs selectively enhance activity toward specific substrates by facilitating substrate docking. Substrate specificity of S cyclins is imparted by the hydrophobic patch, which has affinity for substrate proteins that contain a hydrophobic RXL (or Cy) motif. Cyclin B1 and B2 can localize CDK1 to the nucleus and the Golgi, respectively, through a localization sequence outside the CDK-binding region. Phosphorylation To achieve full kinase activity, an activating phosphorylation on a threonine adjacent to the CDK's active site is required. The identity of the CDK-activating kinase (CAK) that carries out this phosphorylation varies among different model organisms. The timing of this phosphorylation also varies; in mammalian cells, the activating phosphorylation occurs after cyclin binding, while in yeast cells, it occurs before cyclin binding. CAK activity is not regulated by known cell cycle pathways, and it is the cyclin binding that is the limiting step for CDK activation. Unlike activating phosphorylation, CDK inhibitory phosphorylation is crucial for cell cycle regulation. Various kinases and phosphatases control their phosphorylation state. For instance, the activity of CDK1 is controlled by the balance between the WEE1 and Myt1 kinases and the Cdc25c phosphatase. Wee1, a kinase preserved across all eukaryotes, phosphorylates CDK1 at Tyr 15. Myt1 can phosphorylate both the threonine (Thr 14) and the tyrosine (Tyr 15). The dephosphorylation is performed by Cdc25c phosphatases, which remove the phosphate groups from both the threonine and the tyrosine. This inhibitory phosphorylation helps prevent cell-cycle progression in response to events like DNA damage. The phosphorylation does not significantly alter the CDK structure, but reduces its affinity for the substrate, thereby inhibiting its activity. For the cell cycle to progress, the inhibitory phosphate groups must be removed by the Cdc25 phosphatases to reactivate the CDKs. CDK inhibitors A cyclin-dependent kinase inhibitor (CKI) is a protein that interacts with a cyclin-CDK complex to inhibit kinase activity, often during G1 phase or in response to external signals or DNA damage. In animal cells, two primary CKI families exist: the INK4 family (p16, p15, p18, p19) and the CIP/KIP family (p21, p27, p57). The INK4 family proteins specifically bind to CDK4 and CDK6 and prevent their activation by D-type cyclins or by CAK, while the CIP/KIP family proteins prevent the activation of CDK-cyclin heterodimers, disrupting both cyclin binding and kinase activity. 
CDK inhibitors A cyclin-dependent kinase inhibitor (CKI) is a protein that interacts with a cyclin-CDK complex to inhibit kinase activity, often during G1 phase or in response to external signals or DNA damage. In animal cells, two primary CKI families exist: the INK4 family (p16, p15, p18, p19) and the CIP/KIP family (p21, p27, p57). The INK4 family proteins bind specifically to CDK4 and CDK6 and prevent their activation by D-type cyclins or by CAK, while the CIP/KIP family proteins prevent the activation of CDK-cyclin heterodimers, disrupting both cyclin binding and kinase activity. These inhibitors have a KID (kinase inhibitory domain) at the N-terminus, facilitating their attachment to cyclins and CDKs. Their primary function occurs in the nucleus, supported by a C-terminal sequence that enables their nuclear translocation. In yeast and Drosophila, CKIs are strong inhibitors of S- and M-CDKs, but do not inhibit G1/S-CDKs. During G1, high levels of CKIs prevent cell cycle events from occurring out of order, but do not prevent transition through the Start checkpoint, which is initiated through G1/S-CDKs. Once the cell cycle is initiated, phosphorylation by early G1/S-CDKs leads to destruction of CKIs, relieving inhibition on later cell cycle transitions. In mammalian cells, CKI regulation works differently. The mammalian protein p27 (Dacapo in Drosophila) inhibits G1/S- and S-CDKs but does not inhibit M-CDKs. Ligand-based inhibition methods involve the use of small molecules or ligands that specifically bind to CDK2, which is a crucial regulator of the cell cycle. The ligands bind to the active site of CDK2, thereby blocking its activity. These inhibitors can either mimic the structure of ATP, competing for the active site and preventing protein phosphorylation needed for cell cycle progression, or bind to allosteric sites, altering the structure of CDK2 to decrease its efficiency. CDK subunits (CKS) CDKs are essential for the control and regulation of the cell cycle. They are associated with small regulatory subunits (CKSs). In mammalian cells, two CKSs are known: CKS1 and CKS2. These proteins are necessary for the proper functioning of CDKs, although their exact functions are not yet fully known. An interaction occurs between CKS1 and the carboxy-terminal lobe of CDKs, where they bind together. This binding increases the affinity of the cyclin-CDK complex for its substrates, especially those with multiple phosphorylation sites, thus contributing to the promotion of cell proliferation. Non-cyclin activators Viral cyclins Viruses can encode proteins with sequence homology to cyclins. One much-studied example is K-cyclin (or v-cyclin) from Kaposi sarcoma herpes virus (see Kaposi's sarcoma), which activates CDK6. The vCyclin-CDK6 complex promotes an accelerated transition from G1 to S phase in the cell by phosphorylating pRb and releasing E2F. This leads to the removal of inhibition on Cyclin E–CDK2's enzymatic activity. It has been shown that vCyclin contributes to promoting transformation and tumorigenesis, mainly through its effect on p27 pSer10 phosphorylation and cytoplasmic sequestration. CDK5 activators Two protein types, p35 and p39, are responsible for increasing the activity of CDK5 during neuronal differentiation in postnatal development. p35 and p39 play a crucial role in a unique mechanism for regulating CDK5 activity in neuronal development and network formation. The activation of CDK5 with these cofactors (p35 and p39) does not require phosphorylation of the activation loop, which is different from the traditional activation of many other kinases. This highlights a distinct mode of CDK5 activation, which is critical for proper neuronal development, dendritic spine and synapse formation, as well as in response to epileptic events. RINGO/Speedy Proteins of the RINGO/Speedy group are a notable example of CDK activators that share no amino acid sequence homology with the cyclin family. They play a crucial role in activating CDKs. 
Originally identified in Xenopus, these proteins primarily bind to and activate CDK1 and CDK2, despite lacking homology to cyclins. What is particularly interesting is that CDKs activated by RINGO/Speedy can phosphorylate different sites than those targeted by cyclin-activated CDKs, indicating a unique mode of action for these non-cyclin CDK activators. Medical significance CDKs and cancer The dysregulation of CDKs and cyclins disrupts cell-cycle coordination, implicating them in the pathogenesis of several diseases, mainly cancers. Thus, studies of cyclins and cyclin-dependent kinases (CDKs) are essential for advancing the understanding of cancer characteristics. Research has shown that alterations in cyclins, CDKs, and CDK inhibitors (CKIs) are common in most cancers, involving chromosomal translocations, point mutations, insertions, deletions, gene overexpression, frame-shift mutations, missense mutations, or splicing errors. The dysregulation of the CDK4/6-RB pathway is a common feature in many cancers, often resulting from various mechanisms that inactivate the cyclin D-CDK4/6 complex. Several signals can lead to overexpression of cyclin D and enhance CDK4/6 activity, contributing toward tumorigenesis. Additionally, the CDK4/6-RB pathway interacts with the p53 signaling pathway via p21CIP1 transcription, which can inhibit both cyclin D-CDK4/6 and cyclin E-CDK2 complexes. Mutations in p53 can deactivate the G1 checkpoint, further promoting uncontrolled proliferation. CDK inhibitors and therapeutic potential Due to their central role in regulating cell cycle progression and cell proliferation, CDKs are considered ideal therapeutic targets for cancer. The following CDK4/6 inhibitors mark a significant advancement in cancer treatment, offering targeted therapies that are effective and have a manageable side-effect profile. Palbociclib, one of the first CDK4/6 inhibitors approved by the FDA, has become essential in the treatment of HR+/HER2- advanced or metastatic breast cancer, often in combination with endocrine therapy. Ribociclib, demonstrating similar efficacy to palbociclib, is also approved for HR+/HER2- advanced breast cancer and offers benefits for a younger patient demographic. Abemaciclib stands out by being usable as monotherapy, in addition to combination treatment, for certain HR+/HER2- breast cancer patients. It has also shown effectiveness in treating patients with brain metastases. Trilaciclib has proven its value by improving patients' quality of life during cancer treatment, reducing the risk of chemotherapy-induced myelosuppression, a common side effect that can lead to treatment delays and dose reductions. Challenges and future potential Complications in developing a CDK drug include the fact that many CDKs are involved not in the cell cycle but in other processes such as transcription, neural physiology, and glucose homeostasis. More research is required, however, because disruption of CDK-mediated pathways has potentially serious consequences; while CDK inhibitors seem promising, it has to be determined how side effects can be limited so that only target cells are affected. Some such diseases are currently treated with glucocorticoids; the comparison with glucocorticoids serves to illustrate the potential benefits of CDK inhibitors, assuming their side effects can be more narrowly targeted or minimized. See also Cell cycle Protein kinase Enzyme catalysis Enzyme inhibitor References External links Cell cycle regulators Protein families EC 2.7.11
Cyclin-dependent kinase
[ "Chemistry", "Biology" ]
4,033
[ "Protein families", "Cell cycle regulators", "Protein classification", "Signal transduction" ]
619,201
https://en.wikipedia.org/wiki/Nebulizer
In medicine, a nebulizer (American English) or nebuliser (British English) is a drug delivery device used to administer medication in the form of a mist inhaled into the lungs. Nebulizers are commonly used for the treatment of asthma, cystic fibrosis, COPD and other respiratory diseases or disorders. They use oxygen, compressed air or ultrasonic power to break up solutions and suspensions into small aerosol droplets that are inhaled from the mouthpiece of the device. An aerosol is a mixture of gas and solid or liquid particles. Medical uses Guidelines Various asthma guidelines, such as the Global Initiative for Asthma Guidelines [GINA], the British Guidelines on the management of Asthma, The Canadian Pediatric Asthma Consensus Guidelines, and United States Guidelines for Diagnosis and Treatment of Asthma each recommend metered dose inhalers in place of nebulizer-delivered therapies. The European Respiratory Society acknowledges that although nebulizers are used in hospitals and at home, much of this use may not be evidence-based. Effectiveness Recent evidence shows that nebulizers are no more effective than metered-dose inhalers (MDIs) with spacers. An MDI with a spacer may offer advantages to children who have acute asthma. Those findings refer specifically to the treatment of asthma and not to the efficacy of nebulisers generally, for conditions such as COPD for example. For COPD, especially when assessing exacerbations or lung attacks, there is no evidence to indicate that MDI (with a spacer) delivered medicine is more effective than administration of the same medicine with a nebulizer. The European Respiratory Society highlighted a risk relating to droplet size reproducibility caused by selling nebulizer devices separately from nebulized solution. They found this practice could vary droplet size 10-fold or more by changing from an inefficient nebulizer system to a highly efficient one. Two advantages attributed to nebulizers, compared to MDIs with spacers (inhalers), are their ability to deliver larger dosages at a faster rate, especially in acute asthma; however, recent data suggest actual lung deposition rates are the same. In addition, another trial found that an MDI (with spacer) had a lower required dose for clinical result compared to a nebulizer. Beyond use in chronic lung disease, nebulizers may also be used to treat acute issues like the inhalation of toxic substances. One such example is the treatment of inhalation of toxic hydrofluoric acid (HF) vapors. Calcium gluconate is a first-line treatment for HF exposure to the skin. By using a nebulizer, calcium gluconate can be delivered to the lungs as an aerosol to counteract the toxicity of inhaled HF vapors. Aerosol deposition The lung deposition characteristics and efficacy of an aerosol depend largely on the particle or droplet size. Generally, the smaller the particle the greater its chance of peripheral penetration and retention. However, for very fine particles below 0.5 μm in diameter there is a chance of avoiding deposition altogether, as they may be exhaled. In 1966 the Task Group on Lung Dynamics, concerned mainly with the hazards of inhalation of environmental toxins, proposed a model for deposition of particles in the lung. This suggested that particles of more than 10 μm in diameter are most likely to deposit in the mouth and throat, that for those of 5–10 μm diameter a transition from mouth to airway deposition occurs, and that particles smaller than 5 μm in diameter deposit more frequently in the lower airways and are appropriate for pharmaceutical aerosols. Nebulizing processes have been modeled using computational fluid dynamics. 
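The size thresholds from this deposition model can be restated as a simple classification rule. The snippet below is an illustrative sketch only and is not from the source; the function name and return labels are hypothetical, and real deposition behavior also depends on factors such as breathing pattern and airway geometry.

```python
def likely_deposition_site(droplet_diameter_um: float) -> str:
    """Rough deposition region suggested by the 1966 Task Group model,
    based only on droplet diameter in micrometres."""
    if droplet_diameter_um > 10:
        return "mouth and throat"
    if droplet_diameter_um >= 5:
        return "transition from mouth to airway deposition"
    if droplet_diameter_um >= 0.5:
        return "lower airways (suitable for pharmaceutical aerosols)"
    return "largely exhaled (little deposition)"

# Example: a droplet of about 3 um would be expected to reach the lower airways.
print(likely_deposition_site(3.0))
```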
Types Pneumatic Jet nebulizer The most commonly used nebulizers are jet nebulizers, which are also called "atomizers". Jet nebulizers are connected by tubing to a supply of compressed gas, usually compressed air or oxygen, which flows at high velocity through a liquid medicine to turn it into an aerosol that is inhaled by the patient. Currently there seems to be a tendency among physicians to prefer prescribing a pressurized metered dose inhaler (pMDI) for their patients, instead of a jet nebulizer, which generates a lot more noise (often 60 dB during use) and is less portable due to its greater weight. However, jet nebulizers are commonly used in hospitals for patients who have difficulty using inhalers, such as in serious cases of respiratory disease, or severe asthma attacks. The main advantage of the jet nebulizer is related to its low operational cost. If the patient needs to inhale medicine on a daily basis the use of a pMDI can be rather expensive. Today several manufacturers have also managed to lower the weight of the jet nebulizer to just over half a kilogram (just under one and a half pounds), and have therefore started to label it as a portable device. Compared to all the competing inhalers and nebulizers, the noise and heavy weight are still the biggest drawbacks of the jet nebulizer. Mechanical Soft mist inhaler The medical company Boehringer Ingelheim also invented a device named the Respimat Soft Mist Inhaler in 1997. This new technology provides a metered dose to the user, as the liquid bottom of the inhaler is rotated clockwise 180 degrees by hand, building up tension in a spring around the flexible liquid container. When the user activates the bottom of the inhaler, the energy from the spring is released and imposes pressure on the flexible liquid container, causing liquid to spray out of 2 nozzles, thus forming a soft mist to be inhaled. The device features no gas propellant and no need for battery/power to operate. The average droplet size in the mist was measured at 5.8 micrometers, which could indicate some potential efficiency problems for the inhaled medicine to reach the lungs. Subsequent trials have proven this was not the case. Due to the very low velocity of the mist, the Soft Mist Inhaler in fact has a higher efficiency compared to a conventional pMDI. In 2000, arguments were put to the European Respiratory Society (ERS) to clarify or expand its definition of a nebulizer, as the new Soft Mist Inhaler in technical terms could be classified both as a "hand driven nebulizer" and as a "hand driven pMDI". Electrical Ultrasonic wave nebulizer Ultrasonic wave nebulizers were invented in 1965 as a new type of portable nebulizer. An ultrasonic wave nebulizer has an electronic oscillator that generates a high-frequency ultrasonic wave, which causes the mechanical vibration of a piezoelectric element. This vibrating element is in contact with a liquid reservoir and its high-frequency vibration is sufficient to produce a vapor mist via ultrasonic atomization. 
As they create aerosols from ultrasonic vibration instead of using a heavy air compressor, they only have a weight around . Another advantage is that the ultrasonic vibration is almost silent. Examples of this more modern type of nebulizer are: Omron NE-U17 and Beurer Nebulizer IH30. Vibrating mesh technology A significant innovation was made in the nebulizer market around 2005, with the creation of the ultrasonic Vibrating Mesh Technology (VMT). With this technology a mesh/membrane with 1000–7000 laser-drilled holes vibrates at the top of the liquid reservoir, and thereby presses out a mist of very fine droplets through the holes. This technology is more efficient than having a vibrating piezoelectric element at the bottom of the liquid reservoir, and thereby shorter treatment times are also achieved. The old problems found with the ultrasonic wave nebulizer, having too much liquid waste and undesired heating of the medical liquid, have also been solved by the new vibrating mesh nebulizers. Available VMT nebulizers include: Pari eFlow, Respironics i-Neb, Beurer Nebulizer IH50, and Aerogen Aeroneb. As the price of the ultrasonic VMT nebulizers is higher than models using previous technologies, most manufacturers continue to also sell the classic jet nebulizers. Use and attachments Nebulizers accept their medicine in the form of a liquid solution, which is often loaded into the device upon use. Corticosteroids and bronchodilators such as salbutamol (albuterol USAN) are often used, sometimes in combination with ipratropium. These pharmaceuticals are inhaled instead of ingested in order to target their effect to the respiratory tract, which speeds the onset of action of the medicine and reduces side effects, compared to other alternative intake routes. Usually, the aerosolized medicine is inhaled through a tube-like mouthpiece, similar to that of an inhaler. The mouthpiece, however, is sometimes replaced with a face mask, similar to that used for inhaled anesthesia, for ease of use with young children or the elderly. Pediatric masks are often shaped like animals such as fish, dogs or dragons to make children less resistant to nebulizer treatments. Many nebulizer manufacturers also offer pacifier attachments for infants and toddlers. However, mouthpieces are preferable if patients are able to use them, since face masks result in reduced lung delivery because of aerosol losses in the nose. After use with a corticosteroid, it is theoretically possible for patients to develop a yeast infection in the mouth (thrush) or hoarseness of voice (dysphonia), although these conditions are clinically very rare. To avoid these adverse effects, some clinicians suggest that the person who used the nebulizer should rinse his or her mouth. This is not true for bronchodilators; however, patients may still wish to rinse their mouths due to the unpleasant taste of some bronchodilating drugs. History The first "powered" or pressurized inhaler was invented in France by Sales-Girons in 1858. This device used pressure to atomize the liquid medication. The pump handle is operated like a bicycle pump. When the pump is pulled up, it draws liquid from the reservoir, and upon the force of the user's hand, the liquid is pressurized through an atomizer, to be sprayed out for inhalation near the user's mouth. In 1864, the first steam-driven nebulizer was invented in Germany. 
This inhaler, known as "Siegle's steam spray inhaler", used the Venturi principle to atomize liquid medication, and this was the very beginning of nebulizer therapy. The importance of droplet size was not yet understood, so the efficacy of this first device was unfortunately mediocre for many of the medical compounds. The Siegle steam spray inhaler consisted of a spirit burner, which boiled water in the reservoir into steam that could then flow across the top and into a tube suspended in the pharmaceutical solution. The passage of steam drew the medicine into the vapor, and the patient inhaled this vapor through a mouthpiece made of glass. The first pneumatic nebulizer fed from an electrically driven gas (air) compressor was invented in the 1930s and called a Pneumostat. With this device, a medical liquid (typically epinephrine chloride, used as a bronchial muscle relaxant to reverse constriction) was broken up into an aerosol for inhalation. As an alternative to the expensive electrical nebulizer, many people in the 1930s continued to use the much simpler and cheaper hand-driven nebulizer, known as the Parke-Davis Glaseptic. In 1956, a technology competing against the nebulizer was launched by Riker Laboratories (3M), in the form of pressurized metered-dose inhalers, with Medihaler-iso (isoprenaline) and Medihaler-epi (epinephrine) as the first two products. In these devices, the drug is cold-filled and delivered in exact doses through special metering valves, driven by a gas propellant technology (i.e. Freon or a less environmentally damaging HFA). In 1964, a new type of electronic nebulizer was introduced: the "ultrasonic wave nebulizer". Today the nebulizing technology is not only used for medical purposes. Ultrasonic wave nebulizers are also used in humidifiers, to spray out water aerosols to moisten dry air in buildings. Some of the first models of electronic cigarettes featured an ultrasonic wave nebulizer (having a piezoelectric element vibrating and creating high-frequency ultrasound waves, to cause vibration and atomization of liquid nicotine) in combination with a vaporizer (built as a spray nozzle with an electric heating element). The most common type of electronic cigarettes currently sold, however, omits the ultrasonic wave nebulizer, as it was not found to be efficient enough for this kind of device. Instead, electronic cigarettes now use an electric vaporizer, either in direct contact with the absorbent material in the "impregnated atomizer", or in combination with the nebulization technology of a "spraying jet atomizer" (in the form of liquid droplets sprayed out by a high-speed air stream that passes through small venturi injection channels drilled in a material soaked with nicotine liquid). See also Heated humidified high-flow therapy Inhaler Humidifier Vaporizer List of medical inhalants Spray bottle References Aerosols Respiratory therapy Drug delivery devices Medical equipment Dosage forms
Nebulizer
[ "Chemistry", "Biology" ]
2,831
[ "Pharmacology", "Drug delivery devices", "Colloids", "Medical equipment", "Aerosols", "Medical technology" ]
619,215
https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%20complex
A cyclin-dependent kinase complex (CDKC, cyclin-CDK) is a protein complex formed by the association of an inactive catalytic subunit of a protein kinase, cyclin-dependent kinase (CDK), with a regulatory subunit, cyclin. Once cyclin-dependent kinases bind to cyclin, the formed complex is in an activated state. Substrate specificity of the activated complex is mainly established by the associated cyclin within the complex. Activity of CDKCs is controlled by phosphorylation of target proteins, as well as binding of inhibitory proteins. Structure and Regulation The structure of CDKs in complex with cyclin subunits (CDKC) has long been a goal of structural and cellular biologists, starting in the 1990s when the structure of unbound cyclin A was solved by Brown et al. and, in the same year, Jeffery et al. solved the structure of the human cyclin A-CDK2 complex to 2.3 Angstrom resolution. Since this time, many CDK structures have been determined to higher resolution, including the structures of CDK2 and CDK2 bound to a variety of substrates. High resolution structures exist for approximately 25 CDK-cyclin complexes in total within the Protein Data Bank. Based on function, there are two general populations of CDK-cyclin complex structures, open and closed form. The difference between the forms lies within the binding of cyclin partners: closed-form complexes have CDK-cyclin binding at both the C- and N-termini of the activation loop of the CDK, whereas the open-form partners bind only at the N-terminus. Open-form structures correspond most often to those complexes involved in transcriptional regulation (CDK 8, 9, 12, and 13), while closed-form CDK-cyclin complexes are most often involved in cell cycle progression and regulation (CDK 1, 2, 6). These distinct roles, however, are not reflected in large differences in sequence homology between the CDK components. In particular, among these known structures there appear to be four major conserved regions: an N-terminal glycine-rich loop, a hinge region, an αC-helix, and a T-loop regulation site. Activation Loop The activation loop, also referred to as the T-loop, is the region of the CDK (between the DFG and APE motifs in many CDKs) that is enzymatically active when the CDK is bound to its function-specific partner. In CDK-cyclin complexes, this activation region is composed of a conserved αL-12 helix and contains a key phosphorylatable residue (usually threonine for CDK-cyclin partners, but also serine and tyrosine) that mediates the enzymatic activity of the CDK. It is at this essential residue (T160 in CDK2 complexes, T177 in CDK6 complexes) that ATP-dependent phosphorylation of CDK-cyclin complexes by CAK (CDK-activating kinase, referring to the CDK7-cyclin H complex in human cells) takes place. After the hydrolysis of ATP to phosphorylate this site, these complexes are able to complete their intended function, the phosphorylation of cellular targets. It is important to note that in CDK 1, 2 and 6, the T-loop and a separate C-terminal region are the major sites of cyclin binding in the CDK, and which cyclins are bound to each of these CDKs is mediated by the particular sequence of the activation site T-loop. These cyclin binding sites are the regions of highest variability in CDKs despite relatively high sequence homology surrounding the αL-12 helix motif of this structural component. 
Glycine-rich region The glycine-rich loop (Gly-rich loop), seen in residues 12-16 in CDK2, comprises a conserved GXGXXG motif across both yeast and animal models. This regulatory region is subject to differential phosphorylation at non-glycine residues within the motif, making it subject to Wee1 and/or Myt1 inhibitory kinase phosphorylation and Cdc25 de-phosphorylation in mammals. This reversible phosphorylation at the Gly-rich loop in CDK2 occurs at Y15, where activity has been further studied. Study of this residue has shown that phosphorylation promotes a conformational change that prevents ATP and substrate binding by steric interference with these necessary binding sites in the activation loop of the CDK-cyclin complexes. This activity is aided by the notable flexibility of the Gly-rich loop within the structure of most CDKs, allowing its rotation toward the activation loop to have a significant effect on reducing substrate affinity without major changes in the overall CDK-cyclin complex structure. Hinge Region The conserved hinge region of CDKs within eukaryotic cells acts as an essential bridge between the Gly-rich loop and the activation loop. CDKs are characterized by an N-terminal lobe that is primarily twisted beta-sheet, connected via this hinge region to an alpha-helix-dominated C-terminal lobe. In discussion of the T-loop and the Gly-rich loop, it is important to note that these regions, which must be able to spatially interact in order to carry out their biochemical functions, lie on opposite lobes of the CDK itself. Thus, this hinge region, which can vary slightly in length between CDK types and CDK-cyclin complexes, links these essential regulatory regions by bridging the two lobes, and plays a key role in the resulting structure of CDK-cyclin complexes by properly orienting ATP for easy catalysis of phosphorylation reactions by the assembled complex. αC-Helix The αC-helix region is highly conserved across much of the mammalian kinome (family of kinases). Its main responsibility is to maintain allosteric control of the kinase active site. This control manifests in CDK-cyclin complexes by specifically preventing CDK activity until the CDK binds to its partner regulator (i.e. cyclin or other partner protein). This binding causes a conformational change in the αC-helix region of the CDK and allows it to be moved from the active site cleft, completing the initial process of T-loop activation. Given that this region is so conserved across the protein superfamily of kinases, this mechanism, in which the αC-helix has been shown to fold out of the N-terminal lobe of the kinase, allowing increased access to the αL-12 helix that lies within the T-loop, is considered a potential target for drug development. The cell cycle Yeast cell cycle Although these complexes have a variety of functions, CDKCs are most known for their role in the cell cycle. Initially, studies were conducted in Schizosaccharomyces pombe and Saccharomyces cerevisiae (yeast). S. pombe and S. cerevisiae are most known for their association with a single Cdk, Cdc2 and Cdc28 respectively, which complexes with several different cyclins. Depending on the cyclin, various portions of the cell cycle are affected. For example, in S. pombe, Cdc2 associates with the cyclin Cdc13 to form the Cdc13-Cdc2 complex. In S. cerevisiae, the association of Cdc28 with the cyclins Cln1, Cln2, or Cln3 results in the transition from G1 phase to S phase. 
Once in the S phase, Cln1 and Cln2 dissociate from Cdc28 and complexes between Cdc28 and Clb5 or Clb6 are formed. In G2 phase, complexes formed from the association between Cdc28 and Clb1, Clb2, Clb3, or Clb4 result in the progression from G2 phase to M (mitotic) phase. These complexes are present in early M phase as well. See Table 1 for a summary of yeast CDKCs. Table 1. CDKCs Associated with Cell Cycle Phases in Yeast From what is known about the complexes formed during each phase of the cell cycle in yeast, proposed models have emerged based on important phosphorylation sites and the transcription factors involved. Mammalian cell cycle Using the information discovered through yeast cell cycle studies, significant progress has been made regarding the mammalian cell cycle. It has been determined that the cell cycles are similar and that CDKCs, either directly or indirectly, affect the progression of the cell cycle. As previously mentioned, in yeast, only one cyclin-dependent kinase (CDK) is associated with several different cyclins. However, in mammalian cells, several different CDKs bind to various cyclins to form CDKCs. For instance, Cdk1 (also known as human Cdc2), the first human CDK to be identified, associates with cyclins A or B. Cyclin A/B-Cdk1 complexes drive the transition between G2 phase and M phase, as well as early M phase. Another mammalian CDK, Cdk2, can form complexes with cyclins D1, D2, D3, E, or A. Cdk4 and Cdk6 interact with cyclins D1, D2, and D3. Studies have indicated that there is no functional difference between the cyclin D1-Cdk4 and cyclin D1-Cdk6 complexes; therefore, any unique properties can possibly be linked to substrate specificity or activation. While levels of CDKs remain fairly constant throughout the cell cycle, cyclin levels fluctuate. The fluctuation controls the activation of the cyclin-CDK complexes and ultimately the progression through the cycle. See Table 2 for a summary of mammalian cell CDKCs involved in the cell cycle. Table 2. CDKCs Associated with Cell Cycle Phases in Mammalian Cells G1 to S phase progression During late G1 phase, CDKCs bind and phosphorylate members of the retinoblastoma (Rb) protein family. Members of the Rb protein family are tumor suppressors, which prevent uncontrolled cell proliferation that would occur during tumor formation. However, pRbs are also thought to repress the genes required for the transition from G1 phase to S phase. When the cell is ready to transition into the next phase, the CDKCs cyclin D1-Cdk4 and cyclin D1-Cdk6 phosphorylate pRb, followed by additional phosphorylation from the cyclin E-Cdk2 CDKC. Once this phosphorylation occurs, pRb is irreversibly inactivated, transcription factors are released, and progression into the S phase of the cell cycle ensues. The cyclin E-Cdk2 CDKC formed in the G1 phase then aids in the initiation of DNA replication during S phase. G2 to M phase progression At the end of S phase, cyclin A is associated with Cdk1 and Cdk2. During G2 phase, cyclin A is degraded, while cyclin B is synthesized and cyclin B-Cdk1 complexes form. Not only are cyclin B-Cdk1 complexes important for the transition into M phase, but these CDKCs also play a role in the following regulatory and structural processes: Chromosomal condensation Fragmentation of Golgi network Breakdown of nuclear lamina Inactivation of the cyclin B-Cdk1 complex through the degradation of cyclin B is necessary for exit out of the M phase of the cell cycle. 
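Since the bodies of Table 1 and Table 2 are not reproduced here, the phase-to-complex associations described in the text above can be summarized in a small data structure. This is an illustrative sketch only, restating the pairings given in the prose; it is not an exhaustive or authoritative listing, and the dictionary names are hypothetical.

```python
# Cyclin-CDK complexes per cell-cycle phase, as described in the surrounding text.
YEAST_S_CEREVISIAE = {
    "G1 -> S": ["Cln1-Cdc28", "Cln2-Cdc28", "Cln3-Cdc28"],
    "S": ["Clb5-Cdc28", "Clb6-Cdc28"],
    "G2 -> M (and early M)": ["Clb1-Cdc28", "Clb2-Cdc28", "Clb3-Cdc28", "Clb4-Cdc28"],
}

MAMMALIAN = {
    "late G1": ["cyclin D1/D2/D3-Cdk4", "cyclin D1/D2/D3-Cdk6"],
    "G1 -> S": ["cyclin E-Cdk2"],
    "S": ["cyclin A-Cdk2"],
    "G2 -> M (and early M)": ["cyclin A-Cdk1", "cyclin B-Cdk1"],
}

if __name__ == "__main__":
    # Print the mammalian associations as a quick, readable summary.
    for phase, complexes in MAMMALIAN.items():
        print(f"{phase}: {', '.join(complexes)}")
```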
Other Even though the majority of the known CDKCs are involved in the cell cycle, not all kinase complexes function in this manner. Studies have shown that other CDKCs, such as cyclin K-Cdk9 and cyclin T1-Cdk9, are involved in the replication stress response and influence transcription. Additionally, cyclin H-Cdk7 complexes may play a role in meiosis in male germ cells, and have been shown to be involved in transcriptional activities as well. See also Cyclin Cyclin-dependent kinase Cyclin D/Cdk4 Cyclin E/Cdk2 References Cell cycle
Cyclin-dependent kinase complex
[ "Biology" ]
2,620
[ "Cell cycle", "Cellular processes" ]
619,275
https://en.wikipedia.org/wiki/Specificity%20factor
A specificity factor is a protein or protein subunit that mediates target (promoter) recognition by RNA polymerase. An example is the sigma (σ) subunit of the Escherichia coli RNA polymerase holoenzyme; the primary σ subunit has a molecular weight of 70 kDa. Under some conditions, some of the 70-kDa subunits are replaced by one of the other, more specific factors. For instance, when bacteria are subjected to heat stress, some of the 70-kDa subunits are replaced by a 32-kDa subunit; when bound to σ32, RNA polymerase is directed to a specialized set of promoters with a different consensus sequence. References Transcription factors DNA
Specificity factor
[ "Chemistry", "Biology" ]
137
[ "Gene expression", "Molecular biology stubs", "Signal transduction", "Induced stem cells", "Molecular biology", "Transcription factors" ]
619,320
https://en.wikipedia.org/wiki/Chemical%20ecology
Chemical ecology is a vast and interdisciplinary field utilizing biochemistry, biology, ecology, and organic chemistry to explain observed interactions of living things and their environment through chemical compounds (e.g. ecosystem resilience and biodiversity). Early examples of the field trace back to experiments with the same plant genus in different environments, the interaction of plants and butterflies, and the behavioral effect of catnip. Chemical ecologists seek to identify the specific molecules (i.e. semiochemicals) that function as signals mediating community or ecosystem processes and to understand the evolution of these signals. The chemicals behind such roles are typically small, readily diffusible organic molecules that act over various distances depending on the environment (i.e. terrestrial or aquatic), but can also include larger molecules and small peptides. In practice, chemical ecology relies on chromatographic techniques, such as thin-layer chromatography, high performance liquid chromatography, and gas chromatography, together with mass spectrometry (MS) and nuclear magnetic resonance (NMR) for determining absolute configuration, to isolate and identify bioactive metabolites. To identify molecules with the sought-after activity, chemical ecologists often make use of bioassay-guided fractionation. Today, chemical ecologists also incorporate genetic and genomic techniques to understand the biosynthetic and signal transduction pathways underlying chemically mediated interactions. Plant, microbe and insect chemical ecology Plant, microbe, and insect chemical ecology focuses on the role of chemical cues and signals in mediating interactions with their abiotic environment (e.g. the ability of some bacteria to reduce metals in the surrounding environment) and biotic environment (e.g. microorganisms, phytophagous insects, and pollinators). Cues allow organisms to monitor interactions with the environment and to adjust accordingly through changes in chemical abundance as a response. Changes in compound abundance allow defensive measures to be enacted, e.g. attractants for predators. Plant-insect interactions The chemical ecology of plant-insect interaction is a significant subfield of chemical ecology. In particular, plants and insects are often involved in a chemical evolutionary arms race. As plants develop chemical defenses to herbivory, insects which feed on them co-evolve to develop immunity to these poisons, and in some cases, repurpose these poisons for their own chemical defense against predators. For example, caterpillars of the monarch butterfly sequester cardenolide toxins from their milkweed host-plants and are able to use them as an anti-predator defense. Sequestration is a strategy of this co-evolution; feeding experiments showed that monarch caterpillars which have not fed on milkweed do not possess toxins similar to those found in caterpillars that have fed on milkweed. Whereas most insects are killed by cardenolides, which are potent inhibitors of the Na+/K+-ATPase, monarchs have evolved resistance to the toxin over their long evolutionary history with milkweeds. Other examples of sequestration include the tobacco hornworm Manduca sexta, which uses nicotine sequestered from tobacco plants in predator defense; and the bella moth, which secretes a quinone-containing froth, obtained from feeding on Crotalaria plants as a caterpillar, to deter predators. 
Chemical ecologists also study chemical interactions involved in indirect defenses of plants, such as the attraction of predators and parasitoids through herbivore-induced volatile organic compounds (VOCs). Plant-microbe interactions Plant interactions with microorganisms are also mediated by chemistry. Both constitutive and induced secondary metabolites (specialized metabolites in modern terminology) are involved in plant defense against pathogens, and chemical signals are also important in the establishment and maintenance of resource mutualisms. For example, both rhizobia and mycorrhizae depend on chemical signals, such as strigolactones and flavonoids exuded from plant roots, in order to find a suitable host. For microbes to gain access to the plant, they must be able to penetrate the layer of wax that forms a hydrophobic barrier on the plant's surface. Many plant-pathogenic microbes secrete enzymes that break down these cuticular waxes. Mutualistic microbes, on the other hand, may be granted access. For example, rhizobia secrete Nod factors that trigger the formation of an infection thread in receptive plants. The rhizobial symbionts can then travel through this infection thread to gain entrance to root cells. Mycorrhizae and other fungal endophytes may also benefit their host plants by producing antibiotics or other secondary/specialized metabolites that ward off harmful fungi, bacteria and herbivores in the soil. Some entomopathogenic fungi can also form endophytic relationships with plants and may even transfer nitrogen directly to plants from insects they consume in the surrounding soil. Plant-plant interactions Allelopathy Allelopathy is a sub-field of chemical ecology which focuses on secondary/specialized metabolites (known as allelochemicals) produced by plants or microorganisms that can inhibit the growth and formation of neighboring plants or microorganisms within the natural community. Many examples of allelopathic competition have been controversial due to the difficulty of positively demonstrating a causal link between allelopathic substances and plant performance under natural conditions, but it is widely accepted that phytochemicals are involved in competitive interactions between plants. One of the clearest examples of allelopathy is the production of juglone by walnut trees, whose strong competitive effects on neighboring plants were recognized in the ancient world as early as 36 BC. Allelopathic compounds have also become of interest in agriculture as an alternative to synthetic herbicides for weed management, e.g. in wheat production. Plant-plant communication Plants communicate with each other through both airborne and below-ground chemical cues. For example, when damaged by an herbivore, many plants emit an altered bouquet of volatile organic compounds (VOCs). Various C6 fatty acid-derived aldehydes and alcohols (sometimes known as green leaf volatiles) are often emitted from damaged leaves, since they are break-down products of plant cell membranes. These compounds (familiar to many as the smell of freshly mown grass) can be perceived by neighboring plants, where they may trigger the induction of plant defenses. It is debated to what extent this communication reflects a history of active selection due to mutual benefit, as opposed to "eavesdropping" on cues unintentionally emitted by neighboring plants. 
Insect-reptile interactions Further information: predator-prey arms race Reptile interactions also contribute to chemical ecology via the bioaccumulation or neutralization of toxic compounds. The diablito poison frog (Oophaga sylvatica), which feeds on leaf-litter arthropods, sequesters the poison cardenolides with no self-harm. Species of dart frogs have evolved in a similar fashion to the insects they consume, via modification of their Na+/K+-ATPase. Again, similar to the insects they prey upon, dart frog physiology has changed to allow the secretion of toxic chemicals, such as the batrachotoxins found on the skin of certain neotropical dendrobatid frogs. Modification of the Na+/K+-ATPase illustrates a co-evolution based on a predator-prey arms race, where each must keep evolving to survive. Another example is the interactions between horned lizards (Phrynosoma spp.) and harvester ants (Pogonomyrmex spp.). Horned lizard evolution has shown that their blood contains a factor that metabolizes toxins produced by harvester ants. The metabolized poison is broken down and used in a specialized blood-squirting defensive mechanism to defend the horned lizard against predators. Insect-mammal interactions Further information: benzoquinone and millipede Mammals such as lemurs and monkeys demonstrate applications of chemicals similar to how humans use chemicals for pest management and medical use. These applications range from preventing internal and external parasites or pathogens, decreasing the likelihood of infection, increasing reproductive function, and reducing inflammation, to social cues and more. Red-fronted lemurs (Eulemur rufifrons) have evolved two pathways, as a preventive measure for avoiding bioaccumulation, allowing the modification of 2-methyl-1,4-benzoquinone and 2-methoxy-3-methyl-1,4-benzoquinone, secretions of Spirostreptidae millipedes shown to inhibit certain bacterial species. Red-fronted lemurs have also been observed rubbing the secretion on their fur, similar to capuchins; this action uses the benzoquinone compounds as a repellent against pests such as ticks and mosquitoes. Marine chemical ecology Defense Many marine organisms use chemical defenses to deter predators. For example, some crustaceans and mesograzers, such as Pseudamphithoides incurvaria, use toxic algae and seaweeds as a shield against predation by covering their bodies in these plants. These plants produce phycotoxins, diterpenes such as pachydictyol-A and dictyol-E, which have been shown to deter predators. Demonstrating a symbiotic relationship of this kind are cyanobacteria and shrimp. The snapping shrimp Alpheus frontalis has been observed utilizing Moorena bouillonii, a cyanobacterium, for shelter and food. M. bouillonii produces compounds that are toxic to other marine organisms and coral, but its relationship with A. frontalis demonstrates the use of M. bouillonii as a deterrent and shelter to protect A. frontalis. Other marine organisms produce chemicals endogenously to defend themselves. For example, the finless sole (Pardachirus marmoratus) produces a toxin that paralyzes the jaws of would-be predators. Many zoanthids produce potent toxins, such as palytoxin, which is one of the most poisonous known substances. Some species of these zoanthids are very brightly colored, which may be indicative of aposematic defense. Another defensive measure that involves chemical ecology and marine ecology is the bobtail squid's light organ. 
The bobtail squid, found in Hawaii, contains a light organ that houses the bacterium Vibrio fischeri; V. fischeri utilizes quorum sensing to control expression of bioluminescence, matched to downwelling light intensity. Matching downwelling light intensity allows the bobtail squid to hide from predators by mimicking moonlight and starlight intensity as it hunts for prey at night. There is also development of blue crab defense, which utilizes sensory organs for detecting chemical changes in the water to alert them to predators. When alerted, blue crabs will hide as a defensive strategy. The urine from predators acts as a selective pressure, so that blue crabs which are more sensitive to chemical change are more likely to survive. Those that survive increase the likelihood that offspring develop the same sensory organs for detection. However, there is a distinct difference between the development of sensory organs or the ability to process or accumulate toxins or bacteria, and a learned or habit-based defensive strategy. Reproduction Many marine organisms use pheromones as chemical cues alerting possible mates that they are ready to reproduce. For example, male sea lampreys attract ovulating females by emitting a bile acid that can be detected many meters downstream. Other processes can be more complex, such as the mating habits of crabs. Because female crabs can only mate during a short period after moulting from their shells, a female crab produces pheromones before she begins to moult in order to attract a mate. Male crabs will detect these pheromones and defend their potential mate until she has finished moulting. However, due to the cannibalistic tendencies of crabs, the female produces an additional pheromone to suppress cannibalistic instincts in her male guardian. These pheromones are very potent—so much so that they can induce male crabs to try to copulate with rocks or sponges that have been coated in pheromone by researchers. Furthermore, compound structure plays a key role; e.g. crab pheromones are specialized to travel in aquatic versus terrestrial environments. Dominance Dominance among crustaceans is also mediated through chemical cues. When crustaceans fight to determine dominance they urinate into the water. Later, if they meet again, both individuals can recognize each other by pheromones contained in their urine, allowing them to avoid a fight if dominance has already been established. When a lobster encounters the urine of another individual, it will act differently according to the perceived status of the urinator (e.g. more submissively when exposed to the urine of a more dominant crab, or more boldly when exposed to the urine of a subdominant individual). When individuals are unable to communicate through urine, fights may be longer and more unpredictable. Applications of chemical ecology Pest control Chemical ecology has been utilized in the development of sustainable pest control strategies. Semiochemicals (especially insect sex pheromones) are widely used in integrated pest management for surveillance, trapping and mating disruption of pest insects. Unlike conventional insecticides, pheromone-based methods of pest control are generally species-specific, non-toxic and extremely potent. In forestry, mass trapping has been used successfully to reduce tree mortality from bark beetle infestations in spruce and pine forests and from palm weevils in palm plantations. 
In Australia, pheromone-based trapping was implemented to offset the use of pesticides, due to the residues left behind in sheep wool. In an aquatic system, a sex pheromone from the invasive sea lamprey has been registered by the United States Environmental Protection Agency for deployment in traps. A strategy has been developed in Kenya to protect cattle from trypanosomiasis spread by the tsetse fly by applying a mixture of repellent odors derived from a non-host animal, the waterbuck. The use of sex pheromones depends on various factors such as concentration, the ability to sense the pheromone, temperature, mixture of the pheromone with other compounds, and the medium in which the pheromone is delivered, e.g. aquatic vs terrestrial. The successful push-pull agricultural pest management system makes use of chemical cues from intercropped plants to sustainably increase agricultural yields. The efficacy of push-pull agriculture relies on multiple forms of chemical communication. Though the push-pull technique was invented as a strategy to control stem-boring moths, such as Chilo partellus, through the manipulation of volatile host-finding cues, it was later discovered that allelopathic substances exuded by the roots of Desmodium spp. also contribute to the suppression of the damaging parasitic weed, Striga. Drug development and biochemistry discoveries A large proportion of commercial drugs (e.g. aspirin, ivermectin, cyclosporin, taxol) are derived from natural products that evolved due to their involvement in ecological interactions. While it has been proposed that the study of natural history could contribute to the discovery of new drug leads, most drugs derived from natural products were not discovered due to prior knowledge of their ecological functions. However, many fundamental biological discoveries have been facilitated by the study of plant toxins. For example, the characterization of the nicotinic acetylcholine receptor, the first neurotransmitter receptor to be identified, ensued from investigations into the mechanisms of action of curare and nicotine. Similarly, the muscarinic acetylcholine receptor takes its name from the fungal toxin muscarine. Aquatic fungi also provide specialized metabolites of interest for antibiotics, e.g. pestalone. Pestalone inhibits the growth of marine bacteria; however, the pathway by which marine fungi produce pestalone was found not to operate when the bacteria are absent. The compound pestalone was derived from a marine fungus located in the Bahamas Islands on the surface of a brown alga, Rosenvingea sp. Biotechnological development focusing on aquatic fungi in the pursuit of new antibiotics was inspired by work on terrestrial fungi, but the crossover is not exact, as the demands on aquatic fungi are not identical to those on their terrestrial counterparts. Some aquatic fungi have shown changes in sphingolipid production under stressors such as acid, resulting in the inhibition of biofilm formation. In contrast, wild-type fungi are known to upregulate azole-resistance proteins such as multidrug resistance protein 1 (MDR1) and the transporters Cdr1 and Cdr2, which act like pumps to remove antifungal drugs. Sphingolipids and sterols make up the majority of the lipid bilayer membrane in fungi, e.g. Candida species, and assist in the formation of biofilms. Understanding this mechanism is utilized in the development of vaccine adjuvants. Biofilm production is initiated through quorum sensing. An example of quorum sensing is provided by the LuxR and LuxI proteins that contribute to bioluminescence in Vibrio fischeri: LuxI produces acyl homoserine lactones (AHLs) that are received by LuxR in neighboring bacteria, and once AHL reaches a specific concentration it triggers expression of the bioluminescence genes. 
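The threshold behavior of this kind of quorum sensing can be caricatured in a few lines of code. The sketch below is purely illustrative and not from the source; the production rate and threshold values are arbitrary placeholders, and real AHL dynamics also involve diffusion and degradation.

```python
def ahl_concentration(cell_density: float, production_rate: float = 1.0) -> float:
    """Toy model: autoinducer (AHL) level scales with the density of
    LuxI-expressing cells in a confined volume (arbitrary units)."""
    return production_rate * cell_density

def lux_genes_expressed(ahl: float, threshold: float = 10.0) -> bool:
    """LuxR switches on the bioluminescence (lux) genes only once AHL
    exceeds a threshold concentration."""
    return ahl >= threshold

# At low density the population stays dark; in a crowded light organ it glows.
for density in (1.0, 5.0, 20.0):
    ahl = ahl_concentration(density)
    print(f"density={density}: AHL={ahl} -> luminescent={lux_genes_expressed(ahl)}")
```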
Some vaccine adjuvants target biofilm formation by aiming to disrupt this communication, utilizing current knowledge of quorum sensing. History of chemical ecology After 1950 In 1959, Adolf Butenandt identified the first intraspecific chemical signal (bombykol) from the silk moth, Bombyx mori, with material obtained by grinding up 500,000 moths. The same year, Karlson and Lüscher proposed the term 'pheromone' to describe this type of signal. Also in 1959, Gottfried S. Fraenkel published his landmark paper, "The Raison d'être of Secondary Plant Substances", arguing that plant secondary/specialized metabolites are not metabolic waste products, but actually evolved to protect plants from consumers. Together, these papers marked the beginning of modern chemical ecology. In 1964, Paul R. Ehrlich and Peter H. Raven coauthored a paper proposing their influential theory of escape and radiate coevolution, which suggested that an evolutionary "arms-race" between plants and insects can explain the extreme diversification of plants and insects. The idea that plant metabolites could not only contribute to the survival of individual plants, but could also influence broad macroevolutionary patterns, would turn out to be highly influential. However, Tibor Jermy questioned the view of an evolutionary arms race between plants and their insect herbivores and proposed that the evolution of phytophagous insects followed and follows that of plants without major evolutionary feedback, i.e. without affecting plant evolution. He coined the term sequential evolution to describe plant-insect macroevolutionary patterns, which emphasizes that selection pressure exerted by insect attack on plants is weak or lacking. In the 1960s and 1970s, a number of plant biologists, ecologists, and entomologists expanded this line of research on the ecological roles of plant secondary/specialized metabolites. During this period, Thomas Eisner and his close collaborator Jerrold Meinwald published a series of seminal papers on chemical defenses in plants and insects. A number of other scientists at Cornell were also working on topics related to chemical ecology during this period, including Paul Feeny, Wendell L. Roelofs, Robert Whittaker and Richard B. Root. In 1968, the first course in chemical ecology was initiated at Cornell. In 1970, Eisner, Whittaker and the ant biologist William L. Brown, Jr. coined the terms allomone (to describe semiochemicals that benefit the emitter, but not the receiver) and kairomone (to describe semiochemicals that benefit the receiver only). Whittaker and Feeny published an influential review paper in Science the following year, summarizing the recent research on the ecological roles of chemical defenses in a wide variety of plants and animals and likely introducing Whittaker's new taxonomy of semiochemicals to a broader scientific audience. Around this time, Lincoln Brower also published a series of important ecological studies on monarch sequestration of cardenolides. 
Brower has been credited with popularizing the term "ecological chemistry", which appeared in the title of a paper he published in Science in 1968 and again the following year in an article he wrote for Scientific American, where the term also appeared on the front cover under an image of a giant bluejay towering over two monarch butterflies. The specialized Journal of Chemical Ecology was established in 1975, and the journal Chemoecology was founded in 1990. In 1984, the International Society of Chemical Ecology was established and in 1996, the Max Planck Institute of Chemical Ecology was founded in Jena, Germany. See also Chemical defense Semiochemical Chemical ecologists May R. Berenbaum Lincoln Brower Thomas Eisner Jerrold Meinwald Wendell L. Roelofs Escape and radiate coevolution References Further reading Putnam, A. R. (1988). "Allelochemicals from Plants as Herbicides". Weed Technology. 2(4): 510–518. External links International Society of Chemical Ecology
Chemical ecology
[ "Chemistry", "Biology" ]
4,420
[ "Biochemistry", "Chemical ecology" ]
619,360
https://en.wikipedia.org/wiki/Food%20taster
A food taster is a person who ingests food that was prepared for someone else, to confirm it is safe to eat. One who tests drinks in this way is known as a cupbearer. The person to whom the food is to be served is usually an important person, such as a monarch or somebody under threat of assassination or harm. Role Food tasters have several functions: The safety of the food may be determined by observing whether or not the food taster subsequently becomes ill. However, food tasting is not effective against slow-acting poisons that take a long time to produce visible symptoms. The food taster may also prepare and serve food, so they can be even more diligent in preventing someone from poisoning the food. In the event the target falls ill or dies, the similar illness or death of the taster provides evidence of deliberate poisoning. Examples In ancient Rome, the duty was often given to a slave (termed the praegustator). Roman Emperor Claudius was allegedly killed by poison in AD 54, even though he had a food taster named Halotus. Tasters were sometimes coerced. Throughout history, presidents and royal families have hired food tasters, or sacrifices, out of fear of being poisoned. Queen Durdhara, the Mauryan empress, ate food that was prepared for her husband and died. Adolf Hitler's food taster Margot Wölk tried the food at 8:00 am every day, and, if she did not fall ill, the food would be sent to Hitler's military headquarters. President Vladimir Putin has also hired a food taster, who is part of his security staff, to protect himself. In recent times, animals such as mice have been used to detect impurities in food produced for humans, such as during the 2008 Olympic Games in Beijing, China. In the United States, a number of recent presidents, including Barack Obama, George W. Bush, Bill Clinton, George H. W. Bush, and Ronald Reagan, have been known to employ food tasters. See also Food poisoning List of poisonings References External links Food services occupations Poisons Safety occupations
Food taster
[ "Environmental_science" ]
435
[ "Poisons", "Toxicology" ]
619,370
https://en.wikipedia.org/wiki/Luciferin
Luciferin is a generic term for the light-emitting compound found in organisms that generate bioluminescence. Luciferins typically undergo an enzyme-catalyzed reaction with molecular oxygen. The resulting transformation, which usually involves breaking off a molecular fragment, produces an excited state intermediate that emits light upon decaying to its ground state. The term may refer to molecules that are substrates for both luciferases and photoproteins. Types Luciferins are a class of small-molecule substrates that react with oxygen in the presence of a luciferase (an enzyme) to release energy in the form of light. It is not known just how many types of luciferins there are, but some of the better-studied compounds are listed below. Because of the chemical diversity of luciferins, there is no clear unifying mechanism of action, except that all require molecular oxygen. The variety of luciferins and luciferases, their diverse reaction mechanisms and the scattered phylogenetic distribution indicate that many of them have arisen independently in the course of evolution. Firefly Firefly luciferin is the luciferin found in many Lampyridae species, such as P. pyralis. It is the substrate of beetle luciferases (EC 1.13.12.7) responsible for the characteristic yellow light emission from fireflies, though it can cross-react to produce light with related enzymes from non-luminous species. The chemistry is unusual, as adenosine triphosphate (ATP) is required for light emission, in addition to molecular oxygen. Snail Latia luciferin is, in terms of chemistry, (E)-2-methyl-4-(2,6,6-trimethyl-1-cyclohexen-1-yl)-1-buten-1-ol formate and is from the freshwater snail Latia neritoides. Bacterial Bacterial luciferin is a two-component system consisting of flavin mononucleotide and a fatty aldehyde, found in bioluminescent bacteria. Coelenterazine Coelenterazine is found in radiolarians, ctenophores, cnidarians, squid, brittle stars, copepods, chaetognaths, fish, and shrimp. It is the prosthetic group in the protein aequorin responsible for the blue light emission. Dinoflagellate Dinoflagellate luciferin is a chlorophyll derivative (i.e. a tetrapyrrole) and is found in some dinoflagellates, which are often responsible for the phenomenon of nighttime glowing waves (historically this was called phosphorescence, but that is a misleading term). A very similar type of luciferin is found in some types of euphausiid shrimp. Vargulin Vargulin is found in certain ostracods and deep-sea fish, specifically Porichthys. Like the compound coelenterazine, it is an imidazopyrazinone and emits primarily blue light in the animals. Fungi Foxfire is the bioluminescence created by some species of fungi present in decaying wood. While there may be multiple different luciferins within the kingdom of fungi, 3-hydroxyhispidin was determined to be the luciferin in the fruiting bodies of several species of fungi, including Neonothopanus nambi, Omphalotus olearius, Omphalotus nidiformis, and Panellus stipticus. Usage in science Luciferin is widely used in science and medicine for in vivo imaging, allowing non-invasive detection of signals in living organisms, and in molecular imaging. The reaction between the luciferin substrate and the enzyme luciferase is catalytic and generates bioluminescence. This reaction and the luminescence it produces are useful for imaging, such as detecting tumors in cancer, and for measuring gene expression. 
References External links Biological pigments Bioluminescence Fluorescent dyes Luciferins
Luciferin
[ "Chemistry", "Biology" ]
815
[ "Luminescence", "Luciferins", "Biochemistry", "Biological pigments", "Bioluminescence", "Pigmentation" ]
619,485
https://en.wikipedia.org/wiki/Bascinet
The bascinet – also bassinet, basinet, or bazineto – was a Medieval European open-faced combat helmet. It evolved from a type of iron or steel skullcap, but had a more pointed apex to the skull, and it extended downwards at the rear and sides to afford protection for the neck. A mail curtain (aventail or camail) was usually attached to the lower edge of the helmet to protect the throat, neck and shoulders. A visor (face guard) was often employed from c. 1330 to protect the exposed face. Early in the fifteenth century, the camail began to be replaced by a plate metal gorget, giving rise to the so-called "great bascinet". Early development The first recorded reference to a bascinet, or bazineto, was in the Italian city of Padua in 1281, when it is described as being worn by infantry. It is believed that the bascinet evolved from a simple iron skullcap, known as the cervelliere, which was worn with a mail coif, as either the sole form of head protection or beneath a great helm. The bascinet is differentiated from the cervelliere by having a higher, pointed skull. By about 1330 the bascinet had been extended lower down the sides and back of the head. Within the next 20 years it had extended to the base of the neck and covered the cheeks. The bascinet appeared quite suddenly in the later 13th century and some authorities see it as being influenced by Byzantine or Middle-Eastern Muslim helmets. The bascinet, without a visor, continued to be worn underneath larger "great helms" (also termed heaumes). Protection for the throat, neck and face Camails or aventails Unlike the cervelliere, which was worn in conjunction with, often underneath, a complete hood of mail called the coif, early bascinets were typically worn with a neck and throat defence of mail that was attached to the lower edge of the helmet itself; this mail "curtain" was called a camail or aventail. The earliest camails were riveted directly to the edge of the helmet, however, beginning in the 1320s a detachable version replaced this type. The detachable aventail was attached to a leather band, which was in turn attached to the lower border of the bascinet by a series of staples called vervelles. Holes in the leather band were passed over the vervelles, and a waxed cord was passed through the holes in the vervelles to secure it. Bretache This illustration shows a bascinet with a type of detachable nasal (nose protector) called the bretache or bretèche made of sheet metal. The bretache was attached to the aventail at the chin, and it fastened to a hook or clamp on the brow of the helmet. According to Boeheim, this type of defence was prevalent in Germany, appearing around 1330 and fading from use around 1370. The bretache was also used in Italy; one of the first representations of it is on the equestrian statue of Cangrande I della Scala, who died in 1329. It is also shown on the tomb of Bernardino dei Baranzoni in the Museo Lapidario Estense in Modena, created c. 1345–50. An advantage of the bretache was that it could be worn under a great helm, but afforded some facial protection when the great helm was taken off. Use of the bretache preceded and overlapped with that of a new type of visor used with the bascinet, the "klappvisor" or "klappvisier". Visored bascinets The open-faced bascinet, even with the mail aventail, still left the exposed face vulnerable. However, from about 1330, the bascinet was often worn with a "face guard" or movable visor. 
The "klappvisor" or klappvisier was a type of visor employed on bascinets from around 1330–1340; this type of visor was hinged at a single point in the centre of the brow of the helmet skull. It was particularly favoured in Germany, but was also used in northern Italy where it is shown in a Crucifixion painted in the chapter hall of Santa Maria Novella in Florence, c. 1367. Its use in Italy seems to have ceased around 1380, but continued in Germany into the 15th century. The klappvisor has been characterised as being intermediate between the bretache nasal and the side pivoting visor. Sources disagree on the nature of the klappvisier. A minority, including De Vries and Smith, class all smaller visors, those that only cover the area of the face left exposed by the aventail, as klappvisiers, regardless of the construction of their hinge mechanism. However, they agree that klappvisiers, by their alternative definition of 'being of small size', preceded the larger forms of visor, which almost exclusively employed the double pivot, found in the latter part of the 14th century. The side-pivot mount, which used two pivots – one on each side of the helmet, is shown in funerary monuments and other pictorial or sculptural sources of the 1340s. One of the early depictions of a doubly pivoted visor on a bascinet is the funerary monument of Sir Hugh Hastings (d. 1347) in St. Mary's Church, Elsing, Norfolk, England. The pivots were connected to the visor by means of hinges to compensate for any lack of parallelism between the pivots. The hinges usually had a removable pin holding them together, this allowed the visor to be completely detached from the helmet, if desired. The side-pivot system was commonly seen in Italian armours. Hounskull Whether of the klappvisor or double-pivot type, the visors of the first half of the 14th century tended to be of a relatively flat profile with little projection from the face. They had eye-slits surrounded by a flange to help deflect weapon points. From around 1380 the visor, by this time considerably larger than earlier forms, was drawn out into a conical point like a muzzle or a beak, and was given the names "hounskull" (from the German hundsgugel – "hound's hood") or "pig-faced" (in modern parlance). The protruding muzzle gave better protection to the face from blows by offering a deflecting surface. It also improved ventilation, as its greater surface area allowed it to have more holes for air to pass through. Rounded visors From about 1410 the visor attached to bascinets lost its pointed hounskull shape, and became progressively more rounded. By 1435 it gave an "ape-like" profile to the helmet; by 1450 it formed a sector in the, by then, almost globular bascinet. Ventilation holes in the visor tended to become larger and more numerous. Later evolution of the helmet Between c. 1390 and 1410 the bascinet had an exaggeratedly tall skull with an acutely pointed profile – sometimes so severe as to have a near-vertical back. Ten years later both the skull of the helmet and the hinged visor started to become less angular and more rounded. Almost globular forms became common by c. 1450. As part of the same process the helmet became more close-fitting, and narrowed to follow the contours of the neck. Bevors and gorgets Around 1350, during the reign of John II, French bascinets began to be fitted with a hinged chin- or jaw-piece (bevor (sense 2), ), upon which the visor would be able to rest. 
The visor and bevor that closed flush with each other thus provided better protection against incoming sword blows. This type of defence augmented the camail rather than replaced it. The bascinet fitted with a camail was relatively heavy and most of the weight was supported directly by the head. Plate gorgets were introduced from c. 1400–1410, which replaced the camail and moved the weight of the throat and neck defences from the head to the shoulders. At the same time a plate covering the cheeks and lower face was introduced also called the bavière (contemporary usage was not precise). This bavière was directly attached by rivets to the skull of the bascinet. The combined skull and bavière could rotate within the upper part of the gorget, which overlapped them. A degree of freedom of movement was retained, but was probably less than had been the case with the mail camail. Great bascinet In the view of Oakeshott the replacement of the camail by a plate gorget gave rise to the form of helmet known as the "great bascinet". Many other scholars consider that the term should be reserved for bascinets where the skull, and baviere – if present, was fixed to the gorget, rendering the whole helmet immobile. Early gorgets were wide, copying the shape of the earlier aventail, however, with the narrowing of the neck opening the gorget plates had to be hinged to allow the helmet to be put on. Early great bascinets had the skull of the helmet riveted to the rear gorget plate, however, some later great bascinets had the skull forged in a single piece with the rear gorget plate. The gorget was often strapped to both the breast and backplate of the cuirass. In this late form the head was relieved of the entire weight of the helmet, which rested on the shoulders; however, the helmet was rendered totally immobile and the head of the wearer had only limited abilities to move inside it. Though very strongly constructed, this type of helmet imposed limitations on the wearer's vision and agility. Historic use Use with the great helm Bascinets, other than great bascinets, could be worn beneath a great helm. However, only those without face protection, or those with the close fitting bretache, could be worn in this manner. The great helm afforded a high degree of protection, but at the cost of very restricted vision and agility. The lighter types of bascinet gave less protection but allowed greater freedom of movement and better vision. The practicality of a man-at-arms being able to take off a great helm during a battle, if he wanted to continue fighting wearing just a bascinet, is unclear. By the mid 14th century the great helm was probably largely relegated to tournament use. However, Henry V of England is reputed to have worn a great helm over a bascinet at the Battle of Agincourt in 1415. He was recorded as receiving a blow to the head during the battle, which damaged his helmet; the double protection afforded by wearing two helmets may have saved his life. Later use By the middle of the 14th century, most knights had discarded the great helm altogether in favor of a fully visored bascinet. The bascinet, both with and without a visor, was the most common helmet worn in Europe during most of the 14th century and the first half of the 15th century, including during the Hundred Years' War. Contemporary illustrations show a majority of knights and men-at-arms wearing one of a few variants of the bascinet helmet. 
Indeed, so ubiquitous was the use of the helmet that "bascinet" became an alternative term for a man-at-arms. Though primarily associated with use by the "knightly" classes and other men-at-arms some infantry also made use of the lighter versions of this helmet. Regions where rich citizens were fielded as infantry, such as Italy, and other lands producing specialised professional infantry such as the English and Welsh longbowman probably saw the greatest use of bascinets by infantrymen. The basic design of the earlier, conical version of the helmet was intended to direct blows from weapons downward and away from the skull and face of the wearer. Later versions of the bascinet, especially the great bascinet, were designed to maximise coverage and therefore protection. In achieving this they sacrificed the mobility and comfort of the wearer; thus, ironically, returning to the situation that the wearers of the cumbersome great helm experienced and that the early bascinets were designed to overcome. It is thought that poorer men-at-arms continued to employ lighter bascinets with mail camails long after the richest had adopted plate gorgets. Decline in use Soon after 1450 the "great bascinet" was rapidly discarded for field use, being replaced by the armet and sallet, which were lighter helmets allowing greater freedom of movement for the wearer. However, a version of the great bascinet, usually with a cage-like visor, remained in use for foot combat in tournaments into the 16th century. Explanatory notes Citations General bibliography Bennett, Matthew (1991). Agincourt 1415: Triumph Against the Odds. Osprey Publishing. DeVries, Kelly and Smith, Robert Douglas (2007). Medieval Weapons: An Illustrated History of Their Impact. ABC-CLIO, Santa Barbara, CA. Gravett, Christopher (2008). Knight: Noble Warrior of England 1200–1600. Osprey Publishing. Gravett, Christopher (2002) English Medieval Knight, 1300–1400. Osprey Publishing. . Gravett, Christopher (1985). German Medieval Armies: 1300–1500. Osprey Publishing. Lucchini, Francesco (2011). "Face, Counterface, Counterfeit: The Lost Silver Visage of the Reliquary of St. Anthony's Jawbone". Published in Meaning in Motion: Semantics of Movement in Medieval Art and Architecture, edited by N. Zchomelidse and G. Freni. Princeton. Miller, Douglas (1979). The Swiss at War 1300–1500. Osprey Publishing. Nicolle, David (1983). Italian Medieval Armies: 1300–1500. Osprey Publishing. Nicolle, David (1999). Arms and Armour of the Crusading Era, 1050–1350: Western Europe and the Crusader States. Greenhill Books. Nicolle, David (July 1999). "Medieval Warfare: The Unfriendly Interface". The Journal of Military History, Vol. 63, No. 3. pp. 579–599. Published by: Society for Military History. Nicolle, David (1996). Knight of Outremer, 1187–1344. Osprey Publishing. Nicolle, David (2000). French Armies of the Hundred Years War. Osprey Publishing. Oakeshott, Ewart (1980). European Weapons and Armour: From the Renaissance to the Industrial Revolution. Lutterworth Press. Rothero, Christopher (1981). The Armies of Agincourt. Osprey Publishing. Singman, J.; and McLean, W. (1999). Daily Life in Chaucer's England. Greenwood Press. . External links Spotlight: The 14th Century Bascinet (myArmoury.com article) Battle of Agincourt from Olivier's Henry V YouTube – Laurence Olivier's film of Henry V. A depiction of Henry V wearing a bascinet with a bavier and a plate gorget, illustrating the mobility of the head and helmet within the gorget. 
It also shows the king's crown within an orle. Medieval helmets Western plate armour Metallic objects
Bascinet
[ "Physics" ]
3,272
[ "Metallic objects", "Physical objects", "Matter" ]
619,562
https://en.wikipedia.org/wiki/Endemism%20in%20the%20Hawaiian%20Islands
Located about 2,300 miles (3,680 km) from the nearest continental shore, the Hawaiian Islands are the most isolated group of islands on the planet. The plant and animal life of the Hawaiian archipelago is the result of early, very infrequent colonizations of arriving species and the slow evolution of those species—in isolation from the rest of the world's flora and fauna—over a period of at least 5 million years. As a consequence, Hawai'i is home to a large number of endemic species. The radiation of species described by Charles Darwin in the Galapagos Islands which was critical to the formulation of his theory of evolution is far exceeded in the more isolated Hawaiian Islands. The relatively short time that the existing main islands of the archipelago have been above the surface of the ocean (less than 10 million years) is only a fraction of time span over which biological colonization and evolution have occurred in the archipelago. High, volcanic islands have existed in the Pacific far longer, extending in a chain to the northwest; these once mountainous islands are now reduced to submerged banks and coral atolls. Midway Atoll, for example, formed as a volcanic island some 28 million years ago. Kure Atoll, a little further to the northwest, is near the Darwin point—defined as waters of a temperature that allows coral reef development to just keep up with isostatic sinking. And extending back in time before Kure, an even older chain of islands spreads northward nearly to the Aleutian Islands; these former islands, all north of the Darwin point, are now completely submerged as the Emperor Seamounts. The islands are well known for the environmental diversity that occurs on high mountains within a trade winds field. On a single island, the climate can differ around the coast from dry tropical (< 20 in or 500 mm annual rainfall) to wet tropical; and up the slopes from tropical rainforest (> 200 in or 5,000 mm per year) through a temperate climate into alpine conditions of cold and dry climate. The rainy climate impacts soil development, which largely determines ground permeability, which affects the distribution of streams, wetlands, and wet places. The distance and remoteness of the Hawaiian archipelago is a biological filter. Seeds or spores attached to a lost migrating bird's feather or an insect falling out of the high winds found a place to survive in the islands and whatever else was needed to reproduce. The narrowing of the gene pool meant that at the very beginning, the population of a colonizing species was a bit different from that of the remote contributing population. This list does not include species extinct in prehistoric times. Island formation Throughout time, the Hawaiian Islands formed linearly from northwest to the southeast. A study was conducted to determine the approximate ages of the Hawaiian Islands using K–Ar dating of the oldest found igneous rocks from each island. Kauai was determined to be about 5.1 million years old, Oahu about 3.7 million years old and the youngest island of Hawaii about 0.43 million years old. By determining the maximum age of the islands, inferences could be made about the maximum possible age of organisms inhabiting the island. The newly formed islands were able to accommodate growing populations, while the new environments were causing high rates of new adaptations. Human arrival Human contact, first by Polynesians and later by Europeans, has had a significant impact. 
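As a rough illustration of how K–Ar ages like those quoted in the Island formation section above are obtained, the sketch below applies the standard potassium–argon age equation. The decay constants used are the conventional published values for potassium-40; the 40Ar*/40K ratios shown are back-calculated for illustration only and are not measurements from the study cited in the article.

import math

# Conventional decay constants for potassium-40 (per year).
LAMBDA_TOTAL = 5.543e-10   # total decay constant of 40K
LAMBDA_EC = 0.581e-10      # partial constant for the branch that produces 40Ar

def k_ar_age_years(ar40_over_k40):
    """Age implied by a measured radiogenic 40Ar*/40K molar ratio."""
    return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_over_k40) / LAMBDA_TOTAL

# Illustrative ratios chosen to reproduce the ages quoted above, not measured data.
for island, ratio in [("Kauai", 2.97e-4), ("Oahu", 2.15e-4), ("Hawaii", 2.5e-5)]:
    print(f"{island}: about {k_ar_age_years(ratio) / 1e6:.2f} million years")

The tiny size of these ratios reflects how little potassium-40 decays over a few million years.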
Both the Polynesians and Europeans cleared native forests and introduced non-indigenous species for agriculture (or by accident), driving many endemic species to extinction. Fossil finds in caves, lava tubes, and sand dunes have revealed an avifauna that once had a native eagle, two raven-size crows, several bird-eating owls, and giant ducks known as moa-nalos. Around 861 species of plants have been introduced to the islands by humans since its discovery by Polynesian settlers, including crops such as taro and breadfruit. Today, many of the remaining endemic species of plants and animals in the Hawaiian Islands are considered endangered, and some critically so. Plant species are particularly at risk: out of a total of 2,690 plant species, 946 are non-indigenous with 800 of the native species listed as endangered. Terrestrial vertebrates Mammals Hawaiian hoary bat (a.k.a. ʻŌpeʻapeʻa) (Lasiurus semotus) - endangered Hawaiian monk seal (a.k.a. ʻIlio-holo-i-ka-uaua) (Neomonachus schauinslandi) - endangered Synemporion keana (a species of vesper bat) - extinct Birds Hawaiian duck (a.k.a. Koloa) (Anas wyvilliana) - endangered Laysan duck (Anas laysanensis) - critically endangered Nene (a.k.a. Hawaiian goose) (Branta sandvicensis) - near threatened Hawaiian petrel (Pterodroma sandwichensis) - endangered Newell's shearwater (a.k.a. Hawaiian shearwater or 'a'o) (Puffinus newelli) - critically endangered Hawaiian hawk (a.k.a. 'Io) (Buteo solitarius) - near threatened Laysan rail (Porzana palmeri) - extinct Hawaiian rail (Porzana sandwichensis) - extinct Hawaiian gallinule (Gallinula chloropus sanvicensis) Hawaiian coot (Fulica alai) - vulnerable Hawaiian stilt (Himantopus himantopus knudseni) Hawaiian black noddy (Anous minutus melanogenys) Pueo (Asio flammeus sandwichensis) - endangered Kauaʻi oʻo (Moho braccatus) - extinct Oʻahu ʻōʻō (Moho apicalis) - extinct Molokaʻi ʻōʻō (Moho bishopi) - extinct Hawaiʻi ʻōʻō (Moho nobilis) - extinct Kioea (Chaetoptila angustipluma) - extinct Kāmaʻo (Myadestes myadestinus) - extinct Puaiohi (Myadestes palmeri) - critically endangered Olomaʻo (Myadestes lanaiensis) - critically endangered/extinct ʻAmaui (Myadestes lanaiensis woahensis) - extinct ʻŌmaʻo (Myadestes obscurus) - vulnerable Millerbird (Acrocephalus familiaris) - critically endangered Hawaiʻi ʻelepaio (Chasiempis sandwichensis) - vulnerable Oʻahu ʻelepaio (Chasiempis ibidis) - endangered Kaua'i ʻelepaio (Chasiempis sclateri) - vulnerable Hawaiian crow (Corvus hawaiiensis) - extinct in the wild Laysan finch (Telespiza cantans) - vulnerable Nihoa finch (Telespiza ultima) - critically endangered Lesser koa finch (Rhodacanthus flaviceps) - extinct Greater koa finch (Rhodacanthus palmeri) - extinct Maui parrotbill (Pseudonestor xanthophrys) - critically endangered ʻŌʻū (Psittirostra psittacea) - critically endangered/extinct Palila (Loxioides bailleui) - critically endangered Lanaʻi hookbill (Dysmorodrepanis munroi) - extinct Kona grosbeak (Chlroidops kona) - extinct Common ʻamakihi (Hemignathus virens) - least concern Oʻahu ʻamakihi (Hemignathus flavus) - vulnerable Kauaʻi ʻamakihi (Hemignathus kauaiensis) - vulnerable Greater ʻamakihi (Hemignathus sagittirostris) - extinct Maui nukupuʻu (Hemignathus affinis) - critically endangered/extinct Kauaʻi nukupuʻu (Hemignathus hanapepe) - critically endangered/extinct Oʻahu nukupuʻu (Hemignathus lucidus) - extinct ʻAkiapolaʻau (Hemignathus munroi) - endangered ʻAnianiau (Magumma parva) - vulnerable Hawaiʻi ʻakialoa (Akialoa obscura) - extinct Kauaʻi ʻakialoa 
(Akialoa stejnegeri) - extinct Maui Nui ʻakialoa (Akialoa lanaiensis)- extinct Oahu ʻakialoa (Akialoa ellisiana) - extinct ʻAkekeʻe (Loxops caeruleirostris) - critically endangered Hawaiʻi ʻakepa (Loxops coccineus) - endangered Maui ʻakepa (Loxops ochraceus) - extinct Oʻahu ʻakepa (Loxops wolstenholmei) - extinct ʻAkikiki (Oreomystis bairdi) - critically endangered Hawaiʻi creeper (Oreomystis mana) - endangered Molokai creeper (Paroreomyza flammea) - extinct Oʻahu ʻalauahio (Paroreomyza maculata) - critically endangered/extinct Maui ʻalauahio (Paroreomyza montana) - endangered Lanaʻi ʻalauahio (Paroreomyza montana montana) - extinct ʻAkohekohe (a.k.a. Crested honeycreeper) (Palmeria dolei) - critically endangered Poʻouli (Melamprosops phaeosoma) - critically endangered/extinct ʻUla-ʻai-hawane (Ciridops anna) - extinct ʻIʻiwi (a.k.a. Scarlet honeycreeper) (Drepanis coccinea) - vulnerable Hawaiʻi mamo (Drepanis pacifica) - extinct Black mamo (Drepanis funerea) - extinct Laysan honeycreeper (Himantione fraithii) - extinct ʻApapane (Himantione sanguinea) - least concern Freshwater fishes None of Hawaii's native fish are entirely restricted to freshwater (all are either anadromous, or also found in brackish and marine water in their adult stage). ʻOʻopu nākea (Awaous stamineus) Āholehole (Kuhlia xenura) ʻOʻopu ʻalamoʻo (Lentipes concolor) - data deficient ʻOʻopu naniha (Stenogobius hawaiiensis) ‘O‘opu ‘akupa (Eleotris sandwicensis) - data deficient ʻOʻopu nōpili (Sicyopterus stimpsoni) - near threatened Terrestrial invertebrates Insects Hyposmocoma (a genus of moths, such as the snail-eating caterpillar) Agrotis (a genus of moths) Drosophila (a genus of flies) Campsicnemus mirabilis (an extinct species of fly) Campsicnemus brevipes (a species of fly) Paralopostega (a genus of moths) Mestolobes (a genus of moths) Hypena (a genus of moths) Orthomecyna (a genus of moths) Helicoverpa (a genus of moths) Scotorythra (a genus of moths) Genophantis (a genus of moths) Tritocleis (a genus of moths) Eurynogaster (a genus of flies) Kamehameha butterfly (a.k.a. Pulelehua) (Vanessa tameamea) Green Hawaiian blue (Udara blackburni) Longhead yellow-faced bee (a.k.a. the Hawaiian yellow-faced bee) (Hylaeus longiceps) Thaumatogryllus (a genus of crickets) Wēkiu bug (Nysius wekiuicola) Drosophila sharpi (a rare species of fly) Koʻolau spurwing long-legged fly (an extinct species of fly Lanai pomace fly (an extinct species of fly) Phyllococcus oahuensis (a species of mealybug) Megalagrion (a genus of damselfly) Clavicoccus (a genus of mealybug) Dryophthorus distinguendus (a species of weevil) Laysan weevil (an extinct species of weevil) Rhyncogonus bryani (an extinct species of weevil) Manduca blackburni (an endangered species of hawkmoth) Thyrocopa (a genus of moths) Caconemobius nori (a species of cricket) Caconemobius howarthi (a species of cricket) Caconemobius schauinslandi (a species of cricket) Caconemobius varius (a species of cricket) Crustaceans Atyoida bisulcata (a freshwater shrimp) Halocaridina (a genus of marine and brackish water shrimp) Hawaiian river shrimp (Macrobrachium grandimanus) Spiders Ariamnes makue (a species of spider) Happy face spider (Theridion grallator) Kauaʻi cave wolf spider (Adelocosa anops) - endangered Orsonwelles, a genus of 13 species, each endemic to a single island Nihoa (a genus of spiders) Lycosa hawaiiensis (a species of spider) Gastropods Gastropods are snails. 
Oahu tree snails (Achatinella) - threatened, several already extinct Auriculella (a genus of land snails) - threatened, several already extinct Amastra (a genus of land snails) - many species extinct Carelia (a genus of land snails) - entire genus extinct Erinna (a genus of freshwater snails) - one vulnerable species, the other possibly extinct Gulickia alexandri (a land snail) - critically endangered Newcombia (a genus of land snails) - threatened, one already extinct Neritina granosa (a freshwater snail) - vulnerable Perdicella (a genus of land snails) - threatened, several already extinct Partulina (a genus of land snails) - threatened, several already extinct Marine animals Marine fishes Cnidarians Finger coral (Porites compressa) Thick finger coral (Porites duerdeni) Brigham's coral (Porites brighami) Molokaʻi cauliflower coral (Pocillopora molokensis) Irregular rice coral (Montipora dilatata) Blue rice coral (Montipora flabellata) Sandpaper or Ringed rice coral (Montipora patula) Verril's lump coral (Psammocora verrilli) Serpentine cup coral (Eguchipsammia serpentina) Grand black coral (Antipathes grandis) Bicolor gorgonian (Acabaria bicolor) Small knob leather coral (Sinularia molokaiensis) Plants Ferns Pendant kihi fern wahine noho mauna Athyrium haleakalae hāpuʻu ʻiʻi or Hawaiian tree fern Pacific lacefern or pauoa Molokai twinsorus fern Microsorum spectrum Sadleiria spp Cibotium glaucum Polypodium pellucidum amaumau fern Sphenomeris chinensis Hawaiʻi silversword Hesperomannia - genus of fern Hualalai hau kuahiwi Kokia drynarioides Apiales Lapalapa (Cheirodendron platyphyllum) ʻŌlapa (Cheirodendron trigynum) Arecales Loulu (Pritchardia fan palms) Asparagales Asparagaceae Golden hala pepe (Dracaena aurea) Lanai hala pepe (Dracaena fernaldii) Waianae Range hala pepe (Dracaena forbesii) Royal hala pepe (Dracaena halapepe) Dracaena halemanuensis Hawai'i hala pepe (Dracaena konaensis) Maui hala pepe (Dracaena rockii) Asteliaceae Astelia argyrocoma Puaʻakuhinia (Astelia menziesiana) Pa'iniu (Astelia waialealae) Orchidaceae Hawai'i jewel orchid (Anoectochilus sandvicensis) Hawai'i widelip orchid (Liparis hawaiensis) Hawai'i bog orchid (Peristylus holochila) Asterales Campanulaceae Alula (Brighamia insignis) - critically endangered Schiedea - genus of plants Lobelia niihauensis - endangered Lobelia oahuensis - critically endangered Lobelia gaudichaudii - critically endangered Lobelia gloria-montis - critically endangered Lobelia kauaensis - critically endangered Lobelia villosa - critically endangered Lobelia dunbarii - critically endangered Lobelia grayana critically endangered Lobelia hillebrandii - critically endangered Lobelia hypoleuca - endangered Lobelia monostachya - critically endangered Lobelia remyi - endangered Lobelia yuccoides - critically endangered Clermontia pyrularia - critically endangered Cyanea konahuanuiensis - critically endangered Cyanea platyphylla - critically endangered Cyanea superba - extinct in the wild Cyanea truncata - critically endangered Clermontia peleana - critically endangered Brighamia insignis - critically endangered, possibly extinct in the wild Brighamia rockii - critically endangered Trematolobelia grandifolia - critically endangered Trematolobelia kauaiensis - critically endangered Trematolobelia macrostachys- critically endangered Trematolobelia singularis - critically endangered Clermontia calophylla - endangered Clermontia drepanomorpha - endangered Clermontia grandiflora - extinct Clermontia hawaiiensis Clermontia hanaulaensis - endangered 
Clermontia kakea Clermontia kohalae Clermontia lindseyana - endangered Clermontia micrantha Clermontia montis-loa Clermontia multiflora Clermontia oblongifolia - endangered Clermontia pallida Clermontia parviflora Clermontia persicifolia Clermontia samuelii - endangered Clermontia arborescens Clermontia clermontioides Clermontia fauriei Clermontia peleana - endangered Clermontia pyrularia - endangered Clermontia tuberculata - endangered Clermontia waimeae - endangered Asteraceae Greensword (Argyroxiphium grayanum) Hawaii silversword (Argyroxiphium sandwicense) ʻEke silversword (Argyroxiphium caliginis) Mauna Loa silversword (Argyroxiphium kauense) Argyroxiphium virescens Hawaiian iliau (Wilkesia gymnoxiphium) Dwarf iliau (Wilkesia hobdyi) Tree dubautia (Dubautia arborea) Keaau Valley dubautia (Dubautia herbstobatae) Bog dubautia (Dubautia imbricata) Kalalau rim dubautia (Dubautia kenwoodii) Small-headed dubautia (Dubautia microcephala) Wahiawa bog dubautia (Dubautia pauciflorula) Plantainleaf dubautia (Dubautia plantaginea) Net-veined dubautia (Dubautia reticulata) Wahiawa dubautia (Dubautia syndetica) Waiʻaleʻale dubautia (Dubautia waialealae) Koholapehu (Dubautia latifolia) Dubautia kalalauensis Helodeaster erici Helodeaster helenae Helodeaster maviensis Brassicales Capparis sandwichiana Caryophyllales Nototrichium divaricatum Nototrichium humile Nototrichium sandwicense Cornales Kanawao (Broussaisia arguta) Cucurbitales Hillebrandia sandwicensis Fabales Acacia koaia - vulnerable Māmane (Sophora chrysophylla) Gentianales Na'u (Gardenia brighamii) - critically endangered Pua ʻala (Brighamia rockii) - critically endangered Malvales Abutilon eremitopetalum Abutilon menziesii Abutilon sandwicense Gossypium tomentosum Hibiscadelphus - endemic genus Hibiscadelphus bombycinus Hibiscadelphus crucibracteatus Hibiscadelphus distans Hibiscadelphus giffardianus Hibiscadelphus hualalaiensis Hibiscadelphus ×puakuahiwi Hibiscadelphus wilderianus Hibiscadelphus woodii Hibiscus arnottianus Yellow hibiscus (Hibiscus brackenridgei) - endangered Hibiscus clayii Hibiscus hannerae Hibiscus immaculatus Hibiscus kahilii Hibiscus kokio Hibiscus punaluuensis Hibiscus saintjohnianus Hibiscus waimeae Kokia - endemic genus Kokia cookei Kokia drynarioides Kokia kauaiensis Waltheria pyrolifolia Myrtales ʻŌhiʻa lehua (Metrosideros polymorpha) Lehua mamo (Metrosideros macropus) Lehua papa (Metrosideros rugosa) Piperales Peperomia cookiana Rosales ʻĀkala (Rubus hawaiensis) ʻĀkalakala (Rubus macraei) Māmaki (Pipturus albidus) Fungi Pholiota peleae Rhodocollybia laulaha Mycena marasmielloides Hygrophoraceae Hygrocybe Glowing like the sun Hygrocybe lamalama Slippery like a fish Hygrocybe pakelo Pink rose in the mist or rain forest Hygrocybe noelokelani Hygrocybe hapuuae See also Canoe plants Endemic birds of Hawaii Hawaiian lobelioids List of fishes of the Coral Sea List of fish of Hawaii List of extinct animals of the Hawaiian Islands List of Hawaii birds List of invasive plant species in Hawaii List of animal species introduced to the Hawaiian Islands Peripatric speciation on the Hawaiian archipelago References Further reading External links Flora of the Hawaiian Islands from the Smithsonian Institution Hawaii Natural history of Hawaii Insular ecology
Endemism in the Hawaiian Islands
[ "Biology" ]
4,683
[ "Endemism", "Biodiversity" ]
619,602
https://en.wikipedia.org/wiki/Genitourinary%20system
The genitourinary system, or urogenital system, comprises the sex organs of the reproductive system and the organs of the urinary system. These are grouped together because of their proximity to each other, their common embryological origin and the use of common pathways. Because of this, the systems are sometimes imaged together. In placental mammals (including humans), the male urethra passes through the penis and opens at its end, while the female urethra and vagina empty through the vulva. The term "apparatus urogenitalis" was used in Nomina Anatomica (under splanchnologia) but is not used in the current Terminologia Anatomica. Development The urinary and reproductive organs are developed from the intermediate mesoderm. The permanent organs of the adult are preceded by a set of structures that are purely embryonic and that, with the exception of the ducts, disappear almost entirely before the end of fetal life. These embryonic structures are on either side: the pronephros, the mesonephros and the metanephros of the kidney, and the Wolffian and Müllerian ducts of the sex organ. The pronephros disappears very early; the structural elements of the mesonephros mostly degenerate, but the gonad is developed in their place, with which the Wolffian duct remains as the duct in males, and the Müllerian as that of the female. Some of the tubules of the mesonephros form part of the permanent kidney. Structures Urethra Female Urethra The urethra of an adult human female is 3–4 cm long. The female urethra extends from the bladder neck to the external urethral orifice and lies behind the symphysis pubis. The urethral wall is composed of an inner epithelial lining, a sub-mucosa layer containing the vascular supply, a thin fascial layer, and two layers of smooth muscle. Male Urethra The urethra of an adult human male is 18–20 cm long. It has a diameter of 8–9 mm. The male urethra is divided into two sections. Disorders Disorders of the genitourinary system include a range of conditions, from those that are asymptomatic to those that manifest an array of signs and symptoms. Causes of these disorders include congenital anomalies, infectious diseases, trauma, and conditions that secondarily involve the urinary structure. To gain access to the body, pathogens can penetrate mucous membranes lining the genitourinary tract. Malformations Urogenital malformations include: Hypospadias Epispadias Labial fusion Varicocele As a medical specialty, genitourinary pathology is the subspecialty of surgical pathology which deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the urinary tract, male genital tract and testes. However, medical disorders of the kidneys are generally within the expertise of renal pathologists. Genitourinary pathologists generally work closely with urologic surgeons. References External links
Genitourinary system
[ "Biology" ]
656
[ "Organ systems", "Genitourinary system" ]
619,604
https://en.wikipedia.org/wiki/Glyconeogenesis
Glyconeogenesis is the synthesis of glycogen without using glucose or other carbohydrates, instead using substances like proteins and fats. One example is the conversion of lactic acid to glycogen in the liver. References Glucose
Glyconeogenesis
[ "Chemistry" ]
55
[ "Biochemistry stubs", "Medicinal chemistry stubs", "Medicinal chemistry" ]
619,625
https://en.wikipedia.org/wiki/Lyudmila%20Chernykh
Lyudmila Ivanovna Chernykh (June 13, 1935, Shuya, Ivanovo Oblast – July 28, 2017) was a Ukrainian-Russian-Soviet astronomer, wife and colleague of Nikolai Stepanovich Chernykh, and a prolific discoverer of minor planets. Professional career In 1959 she graduated from Irkutsk State Pedagogical Institute (now Pedagogical Institute of Irkutsk State University). Between 1959 and 1963 she worked in the Time and Frequency Laboratory of the All-Union Research Institute of Physico-Technical and Radiotechnical Measurements in Irkutsk, where she did astrometrical observations for the Time Service. Between 1964 and 1998 she was a scientific worker at the Institute of Theoretical Astronomy of the USSR Academy of Sciences (Russian Academy of Sciences since 1991), working at the observation base of the institute at the Crimean Astrophysical Observatory (CrAO) in Nauchnyy settlement on the Crimean peninsula. In 1998 she was promoted to senior scientific worker at CrAO. The Minor Planet Center (MPC) credits her with the discovery of 267 numbered minor planets, which she made at CrAO between 1966 and 1992. She made several of these discoveries in collaboration with her husband and with Tamara Smirnova. Honors The asteroid 2325 Chernykh, discovered in 1979 by Czech astronomer Antonín Mrkos, was named in her and her husband's honour. The official naming citation was published by the MPC on 1 June 1981. List of discovered minor planets Two of her notable discoveries are 2127 Tanya, named after Russian child diarist Tanya Savicheva, and 2212 Hephaistos, a near-Earth object of the Apollo group of asteroids. References External links Людмила Ивановна Черных Parajanov Asteroid discovered by L. Chernykh 1935 births 2017 deaths Discoverers of asteroids People from Shuya Russian women scientists Soviet astronomers Women astronomers Ukrainian astronomers
Lyudmila Chernykh
[ "Astronomy" ]
405
[ "Women astronomers", "Astronomers" ]
619,632
https://en.wikipedia.org/wiki/Transfection
Transfection is the process of deliberately introducing naked or purified nucleic acids into eukaryotic cells. It may also refer to other methods and cell types, although other terms are often preferred: "transformation" is typically used to describe non-viral DNA transfer in bacteria and non-animal eukaryotic cells, including plant cells. In animal cells, transfection is the preferred term, as the term "transformation" is also used to refer to a cell's progression to a cancerous state (carcinogenesis). Transduction is often used to describe virus-mediated gene transfer into prokaryotic cells. The word transfection is a portmanteau of the prefix trans- and the word "infection." Genetic material (such as supercoiled plasmid DNA or siRNA constructs), may be transfected. Transfection of animal cells typically involves opening transient pores or "holes" in the cell membrane to allow the uptake of material. Transfection can be carried out using calcium phosphate (i.e. tricalcium phosphate), by electroporation, by cell squeezing, or by mixing a cationic lipid with the material to produce liposomes that fuse with the cell membrane and deposit their cargo inside. Transfection can result in unexpected morphologies and abnormalities in target cells. Terminology The meaning of the term has evolved. The original meaning of transfection was "infection by transformation", i.e., introduction of genetic material, DNA or RNA, from a prokaryote-infecting virus or bacteriophage into cells, resulting in an infection. For work with bacterial and archaeal cells transfection retains its original meaning as a special case of transformation. Because the term transformation had another sense in animal cell biology (a genetic change allowing long-term propagation in culture, or acquisition of properties typical of cancer cells), the term transfection acquired, for animal cells, its present meaning of a change in cell properties caused by introduction of DNA. Methods There are various methods of introducing foreign DNA into a eukaryotic cell: some rely on physical treatment (electroporation, cell squeezing, nanoparticles, magnetofection); others rely on chemical materials or biological particles (viruses) that are used as carriers. There are many different methods of gene delivery developed for various types of cells and tissues, from bacterial to mammalian. Generally, the methods can be divided into three categories: physical, chemical, and biological. Physical methods include electroporation, microinjection, gene gun, impalefection, hydrostatic pressure, continuous infusion, and sonication. Chemicals include methods such as lipofection, which is a lipid-mediated DNA-transfection process utilizing liposome vectors. It can also include the use of polymeric gene carriers (polyplexes). Biological transfection is typically mediated by viruses, utilizing the ability of a virus to inject its DNA inside a host cell. A gene that is intended for delivery is packaged into a replication-deficient viral particle. Viruses used to date include retrovirus, lentivirus, adenovirus, adeno-associated virus, and herpes simplex virus. Physical methods Physical methods are the conceptually simplest, using some physical means to force the transfected material into the target cell's nucleus. The most widely used physical method is electroporation, where short electrical pulses disrupt the cell membrane, allowing the transfected nucleic acids to enter the cell. 
Other physical methods use different means to poke holes in the cell membrane: sonoporation uses high-intensity ultrasound (attributed mainly to the cavitation of gas bubbles interacting with nearby cell membranes), while optical transfection uses a highly focused laser to form a ~1 μm diameter hole. Several methods use tools that force the nucleic acid into the cell, namely: microinjection of nucleic acid with a fine needle; biolistic particle delivery, in which nucleic acid is attached to heavy metal particles (usually gold) and propelled into the cells at high speed; and magnetofection, where nucleic acids are attached to magnetic iron oxide particles and driven into the target cells by magnets. Hydrodynamic delivery is a method used in mice and rats, in which nucleic acids can be delivered to the liver by injecting a relatively large volume into the blood in less than 10 seconds; nearly all of the DNA is expressed in the liver by this procedure. Chemical methods Chemical-based transfection can be divided into several kinds: cyclodextrin, polymers, liposomes, or nanoparticles (with or without chemical or viral functionalization; see below). One of the cheapest methods uses calcium phosphate, originally discovered by F. L. Graham and A. J. van der Eb in 1973. HEPES-buffered saline solution (HeBS) containing phosphate ions is combined with a calcium chloride solution containing the DNA to be transfected. When the two are combined, a fine precipitate of the positively charged calcium and the negatively charged phosphate will form, binding the DNA to be transfected on its surface. The suspension of the precipitate is then added to the cells to be transfected (usually a cell culture grown in a monolayer). By a process not entirely understood, the cells take up some of the precipitate, and with it, the DNA. This process has been a preferred method of identifying many oncogenes. Another method is the use of cationic polymers such as DEAE-dextran or polyethylenimine (PEI). The negatively charged DNA binds to the polycation and the complex is taken up by the cell via endocytosis. Lipofection (or liposome transfection) is a technique used to inject genetic material into a cell by means of liposomes, which are vesicles that can easily merge with the cell membrane since they are both made of a phospholipid bilayer. Lipofection generally uses a positively charged (cationic) lipid (cationic liposomes or mixtures) to form an aggregate with the negatively charged (anionic) genetic material. This transfection technology performs the same tasks as other biochemical procedures utilizing polymers, DEAE-dextran, calcium phosphate, and electroporation. The efficiency of lipofection can be improved by treating transfected cells with a mild heat shock. Fugene is a series of widely used proprietary non-liposomal transfection reagents capable of directly transfecting a wide variety of cells with high efficiency and low toxicity. Dendrimers are a class of highly branched molecules based on various building blocks and synthesized through a convergent or a divergent method. These dendrimers bind the nucleic acids to form dendriplexes that then penetrate the cells. Viral methods DNA can also be introduced into cells using viruses as a carrier. In such cases, the technique is called transduction, and the cells are said to be transduced. Adenoviral vectors can be useful for viral transfection methods because they can transfer genes into a wide variety of human cells and have high transfer rates. 
Lentiviral vectors are also helpful due to their ability to transduce cells not currently undergoing mitosis. Protoplast fusion is a technique in which transformed bacterial cells are treated with lysozyme in order to remove the cell wall. Following this, fusogenic agents (e.g., Sendai virus, PEG, electroporation) are used in order to fuse the protoplast carrying the gene of interest with the target recipient cell. A major disadvantage of this method is that bacterial components are non-specifically introduced into the target cell as well. Stable and transient transfection Stable and transient transfection differ in their long term effects on a cell; a stably transfected cell will continuously express transfected DNA and pass it on to daughter cells, while a transiently transfected cell will express transfected DNA for a short amount of time and not pass it on to daughter cells. For some applications of transfection, it is sufficient if the transfected genetic material is only transiently expressed. Since the DNA introduced in the transfection process is usually not integrated into the nuclear genome, the foreign DNA will be diluted through mitosis or degraded. Cell lines expressing the Epstein–Barr virus (EBV) nuclear antigen 1 (EBNA1) or the SV40 large-T antigen allow episomal amplification of plasmids containing the viral EBV (293E) or SV40 (293T) origins of replication, greatly reducing the rate of dilution. If it is desired that the transfected gene actually remain in the genome of the cell and its daughter cells, a stable transfection must occur. To accomplish this, a marker gene is co-transfected, which gives the cell some selectable advantage, such as resistance towards a certain toxin. Some (very few) of the transfected cells will, by chance, have integrated the foreign genetic material into their genome. If the toxin is then added to the cell culture, only those few cells with the marker gene integrated into their genomes will be able to proliferate, while other cells will die. After applying this selective stress (selection pressure) for some time, only the cells with a stable transfection remain and can be cultivated further. Common agents for selecting stable transfection are: Geneticin, or G418, neutralized by the product of the neomycin resistance gene Puromycin Zeocin Hygromycin B Blasticidin S RNA transfection RNA can also be transfected into cells to transiently express its coded protein, or to study RNA decay kinetics. RNA transfection is often used in primary cells that do not divide. siRNAs can also be transfected to achieve RNA silencing (i.e. loss of RNA and protein from the targeted gene). This has become a major application in research to achieve "knock-down" of proteins of interests (e.g. Endothelin-1) with potential applications in gene therapy. Limitation of the silencing approach are the toxicity of the transfection for cells and potential "off-target" effects on the expression of other genes/proteins. RNA can be purified from cells after lysis or synthesized from free nucleotides either chemically, or enzymatically using an RNA polymerase to transcribe a DNA template. As with DNA, RNA can be delivered to cells by a variety of means including microinjection, electroporation, and lipid-mediated transfection. If the RNA encodes a protein, transfected cells may translate the RNA into the encoded protein. If the RNA is a regulatory RNA (such as a miRNA), the RNA may cause other changes in the cell (such as RNAi-mediated knockdown). 
Encapsulating the RNA molecule in lipid nanoparticles was a breakthrough for producing viable RNA vaccines, solving a number of key technical barriers in delivering the RNA molecule into the human cell. RNA molecules shorter than about 25nt (nucleotides) largely evade detection by the innate immune system, which is triggered by longer RNA molecules. Most cells of the body express proteins of the innate immune system, and upon exposure to exogenous long RNA molecules, these proteins initiate signaling cascades that result in inflammation. This inflammation hypersensitizes the exposed cell and nearby cells to subsequent exposure. As a result, while a cell can be repeatedly transfected with short RNA with few non-specific effects, repeatedly transfecting cells with even a small amount of long RNA can cause cell death unless measures are taken to suppress or evade the innate immune system (see "Long-RNA transfection" below). Short-RNA transfection is routinely used in biological research to knock down the expression of a protein of interest (using siRNA) or to express or block the activity of a miRNA (using short RNA that acts independently of the cell's RNAi machinery, and therefore is not referred to as siRNA). While DNA-based vectors (viruses, plasmids) that encode a short RNA molecule can also be used, short-RNA transfection does not risk modification of the cell's DNA, a characteristic that has led to the development of short RNA as a new class of macromolecular drugs. Long-RNA transfection is the process of deliberately introducing RNA molecules longer than about 25nt into living cells. A distinction is made between short- and long-RNA transfection because exogenous long RNA molecules elicit an innate immune response in cells that can cause a variety of nonspecific effects including translation block, cell-cycle arrest, and apoptosis. Endogenous vs. exogenous long RNA The innate immune system has evolved to protect against infection by detecting pathogen-associated molecular patterns (PAMPs), and triggering a complex set of responses collectively known as "inflammation". Many cells express specific pattern recognition receptors (PRRs) for exogenous RNA including toll-like receptor 3,7,8 (TLR3, TLR7, TLR8), the RNA helicase RIG1 (RARRES3), protein kinase R (PKR, a.k.a. EIF2AK2), members of the oligoadenylate synthetase family of proteins (OAS1, OAS2, OAS3), and others. All of these proteins can specifically bind to exogenous RNA molecules and trigger an immune response. The specific chemical, structural or other characteristics of long RNA molecules that are required for recognition by PRRs remain largely unknown despite intense study. At any given time, a typical mammalian cell may contain several hundred thousand mRNA and other, regulatory long RNA molecules. How cells distinguish exogenous long RNA from the large amount of endogenous long RNA is an important open question in cell biology. Several reports suggest that phosphorylation of the 5'-end of a long RNA molecule can influence its immunogenicity, and specifically that 5'-triphosphate RNA, which can be produced during viral infection, is more immunogenic than 5'-diphosphate RNA, 5'-monophosphate RNA or RNA containing no 5' phosphate. However, in vitro-transcribed (ivT) long RNA containing a 7-methylguanosine cap (present in eukaryotic mRNA) is also highly immunogenic despite having no 5' phosphate, suggesting that characteristics other than 5'-phosphorylation can influence the immunogenicity of an RNA molecule. 
Eukaryotic mRNA contains chemically modified nucleotides such as N6-methyladenosine, 5-methylcytidine, and 2'-O-methylated nucleotides. Although only a very small number of these modified nucleotides are present in a typical mRNA molecule, they may help prevent mRNA from activating the innate immune system by disrupting secondary structure that would resemble double-stranded RNA (dsRNA), a type of RNA thought to be present in cells only during viral infection. The immunogenicity of long RNA has been used to study both innate and adaptive immunity. Repeated long-RNA transfection Inhibiting only three proteins, interferon-β, STAT2, and EIF2AK2 is sufficient to rescue human fibroblasts from the cell death caused by frequent transfection with long, protein-encoding RNA. Inhibiting interferon signaling disrupts the positive-feedback loop that normally hypersensitizes cells exposed to exogenous long RNA. Researchers have recently used this technique to express reprogramming proteins in primary human fibroblasts. See also Gene targeting Minicircle Protofection Transformation Transduction Transgene Vector (molecular biology) Viral vector References Further reading External links Biology Research Resource — Articles and Forums about Transfection Research in optical transfection at the University of St Andrews The 10th US-Japan Symposium on Drug Delivery Systems Molecular biology Gene delivery Applied genetics Biotechnology
Transfection
[ "Chemistry", "Biology" ]
3,393
[ "Genetics techniques", "Biotechnology", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry", "Gene delivery" ]
619,739
https://en.wikipedia.org/wiki/Nitrile
In organic chemistry, a nitrile is any organic compound that has a −C≡N functional group. The name of the compound is composed of a base, which includes the carbon of the −C≡N group, suffixed with "nitrile", so for example CH3CH2CN is called "propionitrile" (or propanenitrile). The prefix cyano- is used interchangeably with the term nitrile in industrial literature. Nitriles are found in many useful compounds, including methyl cyanoacrylate, used in super glue, and nitrile rubber, a nitrile-containing polymer used in latex-free laboratory and medical gloves. Nitrile rubber is also widely used as automotive and other seals since it is resistant to fuels and oils. Organic compounds containing multiple nitrile groups are known as cyanocarbons. Inorganic compounds containing the −C≡N group are not called nitriles, but cyanides instead. Though both nitriles and cyanides can be derived from cyanide salts, most nitriles are not nearly as toxic. Structure and basic properties The N−C−C geometry is linear in nitriles, reflecting the sp hybridization of the triply bonded carbon. The C−N distance is short at 1.16 Å, consistent with a triple bond. Nitriles are polar, as indicated by high dipole moments. As liquids, they have high relative permittivities, often in the 30s. History The first compound of the homologous series of nitriles, hydrogen cyanide (the nitrile of formic acid), was first synthesized by C. W. Scheele in 1782. In 1811 J. L. Gay-Lussac was able to prepare the very toxic and volatile pure acid. Around 1832 benzonitrile, the nitrile of benzoic acid, was prepared by Friedrich Wöhler and Justus von Liebig, but due to the minimal yield of the synthesis neither physical nor chemical properties were determined, nor was a structure suggested. In 1834 Théophile-Jules Pelouze synthesized propionitrile, suggesting it to be an ether of propionic alcohol and hydrocyanic acid. The synthesis of benzonitrile by Hermann Fehling in 1844 by heating ammonium benzoate was the first method yielding enough of the substance for chemical research. Fehling determined the structure by comparing his results to the already known synthesis of hydrogen cyanide by heating ammonium formate. He coined the name "nitrile" for the newfound substance, which became the name for this group of compounds. Synthesis Industrially, the main methods for producing nitriles are ammoxidation and hydrocyanation. Both routes are green in the sense that they do not generate stoichiometric amounts of salts. Ammoxidation In ammoxidation, a hydrocarbon is partially oxidized in the presence of ammonia. This conversion is practiced on a large scale for acrylonitrile: CH3CH=CH2 + 3/2 O2 + NH3 → CH2=CHCN + 3 H2O In the production of acrylonitrile, a side product is acetonitrile. On an industrial scale, several derivatives of benzonitrile, phthalonitrile, as well as isobutyronitrile are prepared by ammoxidation. The process is catalysed by metal oxides and is assumed to proceed via the imine. Hydrocyanation Hydrocyanation is an industrial method for producing nitriles from hydrogen cyanide and alkenes. The process requires homogeneous catalysts. An example of hydrocyanation is the production of adiponitrile, a precursor to nylon-6,6, from 1,3-butadiene: CH2=CH−CH=CH2 + 2 HCN → NC(CH2)4CN From organic halides and cyanide salts Two salt metathesis reactions are popular for laboratory-scale reactions. In the Kolbe nitrile synthesis, alkyl halides undergo nucleophilic aliphatic substitution with alkali metal cyanides. Aryl nitriles are prepared in the Rosenmund-von Braun synthesis. 
In general, metal cyanides combine with alkyl halides to give a mixture of the nitrile and the isonitrile, although appropriate choice of counterion and temperature can minimize the latter. An alkyl sulfate obviates the problem entirely, particularly in nonaqueous conditions (the Pelouze synthesis). Cyanohydrins The cyanohydrins are a special class of nitriles. Classically they result from the addition of alkali metal cyanides to aldehydes in the cyanohydrin reaction. Because of the polarity of the organic carbonyl, this reaction requires no catalyst, unlike the hydrocyanation of alkenes. O-Silyl cyanohydrins are generated by the addition of trimethylsilyl cyanide in the presence of a catalyst (silylcyanation). Cyanohydrins are also prepared by transcyanohydrin reactions starting, for example, with acetone cyanohydrin as a source of HCN. Dehydration of amides Nitriles can be prepared by the dehydration of primary amides. Common reagents for this include phosphorus pentoxide (P2O5) and thionyl chloride (SOCl2). In a related dehydration, secondary amides give nitriles by the von Braun amide degradation. In this case, one C-N bond is cleaved. Oxidation of amines Numerous traditional methods exist for nitrile preparation by amine oxidation. In addition, several selective methods have been developed in the last decades for electrochemical processes. From aldehydes and oximes The conversion of aldehydes to nitriles via aldoximes is a popular laboratory route. Aldehydes react readily with hydroxylamine salts, sometimes at temperatures as low as ambient, to give aldoximes. These can be dehydrated to nitriles by simple heating, although a wide range of reagents may assist with this, including triethylamine/sulfur dioxide, zeolites, or sulfuryl chloride. The related hydroxylamine-O-sulfonic acid reacts similarly. In specialised cases the Van Leusen reaction can be used. Biocatalysts such as aliphatic aldoxime dehydratase are also effective. Sandmeyer reaction Aromatic nitriles are often prepared in the laboratory from anilines via diazonium compounds. This is the Sandmeyer reaction. It requires transition metal cyanides. Other methods A commercial source for the cyanide group is diethylaluminum cyanide, which can be prepared from triethylaluminium and HCN. It has been used in nucleophilic addition to ketones. For an example of its use see: Kuwajima Taxol total synthesis Cyanide ions facilitate the coupling of dibromides. Reaction of α,α′-dibromoadipic acid with sodium cyanide in ethanol yields the cyano cyclobutane. Aromatic nitriles can be prepared from base hydrolysis of trichloromethyl aryl ketimines in the Houben-Fischer synthesis. Nitriles can be obtained from primary amines via oxidation. Common methods include the use of potassium persulfate, trichloroisocyanuric acid, or anodic electrosynthesis. α-Amino acids form nitriles and carbon dioxide via various means of oxidative decarboxylation. Henry Drysdale Dakin discovered this oxidation in 1916. From aryl carboxylic acids (Letts nitrile synthesis) Reactions Nitrile groups in organic compounds can undergo a variety of reactions depending on the reactants or conditions. A nitrile group can be hydrolyzed, reduced, or ejected from a molecule as a cyanide ion. Hydrolysis The hydrolysis of nitriles RCN proceeds in two distinct steps under acid or base treatment to first give carboxamides RC(O)NH2 and then carboxylic acids RCO2H. The hydrolysis of nitriles to carboxylic acids is efficient. 
In acid or base, the balanced equations are as follows: RCN + 2 H2O + HCl → RCO2H + NH4Cl (in acid) and RCN + H2O + NaOH → RCO2Na + NH3 (in base). Strictly speaking, these reactions are mediated (as opposed to catalyzed) by acid or base, since one equivalent of the acid or base is consumed to form the ammonium or carboxylate salt, respectively. Kinetic studies show that the second-order rate constant for hydroxide-ion catalyzed hydrolysis of acetonitrile to acetamide is 1.6 M−1 s−1, which is slower than the hydrolysis of the amide to the carboxylate (7.4 M−1 s−1). Thus, the base hydrolysis route will afford the carboxylate (or the amide contaminated with the carboxylate); a numerical sketch of this point is given below. On the other hand, the acid-catalyzed reaction requires careful control of the temperature and of the ratio of reagents in order to avoid the formation of polymers, which is promoted by the exothermic character of the hydrolysis. The classical procedure to convert a nitrile to the corresponding primary amide calls for adding the nitrile to cold concentrated sulfuric acid. The further conversion to the carboxylic acid is disfavored by the low temperature and low concentration of water. Two families of enzymes catalyze the hydrolysis of nitriles. Nitrilases hydrolyze nitriles to carboxylic acids: RCN + 2 H2O → RCO2H + NH3. Nitrile hydratases are metalloenzymes that hydrolyze nitriles to amides. These enzymes are used commercially to produce acrylamide. The "anhydrous hydration" of nitriles to amides has been demonstrated using an oxime as water source. Reduction Nitriles are susceptible to hydrogenation over diverse metal catalysts. The reaction can afford either the primary amine (RCH2NH2) or the tertiary amine ((RCH2)3N), depending on conditions. In conventional organic reductions, the nitrile is reduced by treatment with lithium aluminium hydride to the amine. Reduction to the imine followed by hydrolysis to the aldehyde takes place in the Stephen aldehyde synthesis, which uses stannous chloride in acid. Deprotonation Alkyl nitriles are sufficiently acidic to undergo deprotonation of the C-H bond adjacent to the CN group. Strong bases are required, such as lithium diisopropylamide and butyl lithium. The product is referred to as a nitrile anion. These carbanions alkylate a wide variety of electrophiles. Key to the exceptional nucleophilicity is the small steric demand of the C≡N unit combined with its inductive stabilization. These features make nitriles ideal for creating new carbon-carbon bonds in sterically demanding environments. Nucleophiles The carbon center of a nitrile is electrophilic, hence it is susceptible to nucleophilic addition reactions: with an organozinc compound in the Blaise reaction with alcohols in the Pinner reaction. with amines, e.g. the reaction of the amine sarcosine with cyanamide yields creatine with arenes to form ketones in the Houben–Hoesch reaction via an imine intermediate. with Grignard reagents to form primary ketimines in the Moureau-Mignonac ketimine synthesis. While not a classical Grignard reaction, it may be considered one under broader modern definitions. Miscellaneous methods and compounds In reductive decyanation the nitrile group is replaced by a proton. Decyanations can be accomplished by dissolving metal reduction (e.g. HMPA and potassium metal in tert-butanol) or by fusion of a nitrile in KOH. Similarly, α-aminonitriles can be decyanated with other reducing agents such as lithium aluminium hydride. 
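The claim in the Hydrolysis section above, that base treatment affords the carboxylate rather than stopping at the amide, follows directly from the relative sizes of the two quoted rate constants. The following Python sketch is illustrative only: it treats both hydroxide-mediated steps as pseudo-first-order, uses the rate constants exactly as quoted above (only their ratio matters for the conclusion), and the time unit is arbitrary.

```python
# Pseudo-first-order sketch of nitrile -> amide -> carboxylate under excess
# hydroxide, using the ratio of the second-order rate constants quoted above
# (1.6 for the nitrile step, 7.4 for the amide step). The time unit is
# arbitrary; only the ratio k1/k2 matters for the conclusion.
import math

k1, k2 = 1.6, 7.4          # effective first-order constants (arbitrary time unit)
a0 = 1.0                   # initial nitrile concentration (normalized)

def concentrations(t):
    """Analytic solution of the consecutive first-order scheme A -> B -> C."""
    a = a0 * math.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    c = a0 - a - b
    return a, b, c

# Maximum transient amide fraction occurs at t_max = ln(k2/k1) / (k2 - k1).
t_max = math.log(k2 / k1) / (k2 - k1)
print(f"peak amide fraction: {concentrations(t_max)[1]:.2f}")   # ~0.14

# Late in the reaction essentially everything is carboxylate.
print("composition at t = 5: nitrile %.3f, amide %.3f, carboxylate %.3f"
      % concentrations(5.0))
```

Because the amide is consumed roughly 4.6 times faster than it forms under these assumptions, it peaks near 14% of the initial nitrile and is then converted onward, so the isolated product of base hydrolysis is the carboxylate, possibly contaminated with residual amide.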
In the so-called Franchimont Reaction (developed by the Dutch chemist Antoine Paul Nicolas Franchimont (1844-1919) in 1872), an α-cyanocarboxylic acid heated in acid hydrolyzes and decarboxylates to a dimer. Nitriles self-react in the presence of base in the Thorpe reaction in a nucleophilic addition. In organometallic chemistry nitriles are known to add to alkynes in carbocyanation. Complexation Nitriles are precursors to transition metal nitrile complexes, which are reagents and catalysts. Examples include tetrakis(acetonitrile)copper(I) hexafluorophosphate ([Cu(MeCN)4]PF6) and bis(benzonitrile)palladium dichloride (PdCl2(PhCN)2). Nitrile derivatives Organic cyanamides Cyanamides are N-cyano compounds with general structure R2N−C≡N and related to the parent cyanamide. Nitrile oxides Nitrile oxides have the chemical formula RCNO. Their general structure is R−C≡N+−O−. The R stands for any group: typically organyl (e.g., acetonitrile oxide, CH3CNO), hydrogen in the case of fulminic acid (HCNO), or halogen (e.g., chlorine fulminate, ClCNO). Nitrile oxides are quite different from nitriles: they are highly reactive 1,3-dipoles, and cannot be synthesized by direct oxidation of nitriles. Instead, they can be synthesised by nitroalkane dehydration, oxime dehydrogenation, or halooxime elimination in base. They are used in 1,3-dipolar cycloadditions, such as to isoxazoles. They undergo type 1 dyotropic rearrangement to isocyanates. The heavier nitrile sulfides are extremely reactive and rare, but temporarily form during the thermolysis of oxathiazolones. They react similarly to nitrile oxides. Occurrence and applications Nitriles occur naturally in a diverse set of plant and animal sources. Over 120 naturally occurring nitriles have been isolated from terrestrial and marine sources. Nitriles are commonly encountered in fruit pits, especially almonds, and during cooking of Brassica crops (such as cabbage, Brussels sprouts, and cauliflower), which release nitriles through hydrolysis. Mandelonitrile, a cyanohydrin produced by ingesting almonds or some fruit pits, releases hydrogen cyanide and is responsible for the toxicity of cyanogenic glycosides. Over 30 nitrile-containing pharmaceuticals are currently marketed for a diverse variety of medicinal indications with more than 20 additional nitrile-containing leads in clinical development. The types of pharmaceuticals containing nitriles are diverse, from vildagliptin, an antidiabetic drug, to anastrozole, which is the gold standard in treating breast cancer. In many instances the nitrile mimics functionality present in substrates for enzymes, whereas in other cases the nitrile increases water solubility or decreases susceptibility to oxidative metabolism in the liver. The nitrile functional group is found in several drugs. See also Protonated nitriles: Nitrilium Deprotonated nitriles: Nitrile anion Cyanocarbon Nitrile ylide References External links Functional groups
Nitrile
[ "Chemistry" ]
3,241
[ "Nitriles", "Functional groups" ]
619,795
https://en.wikipedia.org/wiki/History%20of%20the%20periodic%20table
The periodic table is an arrangement of the chemical elements, structured by their atomic number, electron configuration and recurring chemical properties. In the basic form, elements are presented in order of increasing atomic number, in the reading sequence. Then, rows and columns are created by starting new rows and inserting blank cells, so that rows (periods) and columns (groups) show elements with recurring properties (called periodicity). For example, all elements in group (column) 18 are noble gases that are largely—though not completely—unreactive. The history of the periodic table reflects over two centuries of growth in the understanding of the chemical and physical properties of the elements, with major contributions made by Antoine-Laurent de Lavoisier, Johann Wolfgang Döbereiner, John Newlands, Julius Lothar Meyer, Dmitri Mendeleev, Glenn T. Seaborg, and others. Early history Around 330 BCE, the Greek philosopher Aristotle proposed that everything is made up of a mixture of one or more roots, an idea originally suggested by the Sicilian philosopher Empedocles. The four roots, which the Athenian philosopher Plato called elements, were earth, water, air and fire. Similar ideas existed in other ancient traditions, such as Indian philosophy with five elements: earth, water, fire, air and aether collectively called 'pañca bhūta'. Of the chemical elements shown on the periodic table, nine – carbon, sulfur, iron, copper, silver, tin, gold, mercury, and lead – have been known since antiquity, as they are found in their native form and are relatively simple to mine with primitive tools. Four more elements were known in the age of alchemy: zinc, arsenic, antimony, and bismuth. Platinum was known to pre-Columbian South Americans, but knowledge of it did not reach Europe until the 16th century. First classification The history of the periodic table is also a history of the discovery of the chemical elements. The first person in recorded history to discover a new element was Hennig Brand, a bankrupt German merchant. Brand tried to discover the philosopher's stone—a mythical object that was supposed to turn inexpensive base metals into gold. In 1669, or later, his experiments with distilled human urine resulted in the production of a glowing white substance, which he called "cold fire" (kaltes Feuer). He kept his discovery secret until 1680, when Anglo-Irish chemist Robert Boyle rediscovered phosphorus and published his findings. The discovery of phosphorus helped to raise the question of what it meant for a substance (any given variety of matter) to be an element, in a world where versions of atomic theory were only speculative and later understandings of the nature of substances were only beginning to become possible. In 1661, Boyle defined elements as "those primitive and simple Bodies of which the mixt ones are said to be composed, and into which they are ultimately resolved." In 1718, Étienne François Geoffroy's Affinity Table made use of several aspects — (1) tabular grouping and (2) correlation with chemical affinity — that would later be reprised. In 1789, French chemist Antoine Lavoisier wrote Traité Élémentaire de Chimie (Elementary Treatise of Chemistry), which is considered to be the first modern textbook about chemistry. Lavoisier defined an element as a substance whose smallest units cannot be broken down into a simpler substance. 
Lavoisier's book contained a list of "simple substances" that Lavoisier believed could not be broken down further, which included oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc and sulfur, which formed the basis for the modern list of elements. Lavoisier's list also included "light" and "caloric", which at the time were believed to be material substances. He classified these substances into metals and nonmetals. While many leading chemists refused to believe Lavoisier's new revelations, the Elementary Treatise was written well enough to convince the younger generation. However, Lavoisier's descriptions of his elements lack completeness, as he only classified them as metals and non-metals. In 1808–10, British natural philosopher John Dalton published a method by which to arrive at provisional atomic weights for the elements known in his day, from stoichiometric measurements and reasonable inferences. Dalton's atomic theory was adopted by many chemists during the 1810s and 1820s. In 1815, British physician and chemist William Prout noticed that atomic weights seemed to be multiples of that of hydrogen. In 1817, German physicist Johann Wolfgang Döbereiner began to formulate one of the earliest attempts to classify the elements. In 1829, he found that he could form some of the elements into groups of three, with the members of each group having related properties. He termed these groups triads. Definition of Triad law "Chemically analogous elements arranged in increasing order of their atomic weights formed well marked groups of three called Triads in which the atomic weight of the middle element was found to be generally the arithmetic mean of the atomic weight of the other two elements in the triad. chlorine, bromine, and iodine calcium, strontium, and barium sulfur, selenium, and tellurium lithium, sodium, and potassium" All those attempts to sort elements by atomic weights were inhibited by the inaccurate determination of weights, and not just slightly: carbon, oxygen and many other elements were believed to be half their actual masses (cf. the illustration by Dalton above), because only monatomic gases were believed to exist. Even though Amedeo Avogadro and, independently of him, André-Marie Ampère, proposed the solution in the form of diatomic molecules and Avogadro's law already in the 1810s, it was not until after Stanislao Cannizzaro's publications in late 1850s when the theory began to be widely considered. In 1860, the modern scientific consensus emerged at the first international chemical conference, the Karlsruhe Congress, and a revised list of elements and atomic masses was adopted. It helped spur creation of more extensive systems. The first such system emerged in two years. Comprehensive formalizations French geologist Alexandre-Émile Béguyer de Chancourtois noticed that the elements, when ordered by their atomic weights, displayed similar properties at regular intervals. In 1862, he devised a three-dimensional chart, named the "telluric helix", after the element tellurium, which fell near the center of his diagram. With the elements arranged in a spiral on a cylinder by order of increasing atomic weight, de Chancourtois saw that elements with similar properties lined up vertically. The original paper from Chancourtois in Comptes rendus de l'Académie des Sciences did not include a chart and used geological rather than chemical terms. In 1863, he extended his work by including a chart and adding ions and compounds. The next attempt was made in 1864. 
British chemist John Newlands presented in Chemical News a classification of the 62 known elements. Newlands noticed recurring trends in physical properties of the elements at recurring intervals of multiples of eight in order of mass number; based on this observation, he produced a classification of these elements into eight groups. Each group displayed a similar progression; Newlands likened these progressions to the progression of notes within a musical scale. Newlands's table left no gaps for possible future elements, and in some cases had two elements at the same position in the same octave. Newlands's table was ignored or ridiculed by some of his contemporaries. The Chemical Society refused to publish his work. The president of the Society, William Odling, defended the Society's decision by saying that such "theoretical" topics might be controversial; there was even harsher opposition from within the Society, suggesting the elements could have been just as well listed alphabetically. Later that year, Odling suggested a table of his own but failed to get recognition following his role in opposing Newlands's table. German chemist Lothar Meyer also noted the sequences of similar chemical and physical properties repeated at periodic intervals. According to him, if the atomic weights were plotted as ordinates (i.e. vertically) and the atomic volumes as abscissas (i.e. horizontally)—the curve obtained is a series of maximums and minimums—the most electropositive elements would appear at the peaks of the curve in the order of their atomic weights. In 1864, a book of his was published; it contained an early version of the periodic table containing 28 elements, and classified elements into six families by their valence—for the first time, elements had been grouped according to their valence. Works on organizing the elements by atomic weight had until then been stymied by inaccurate measurements of the atomic weights. In 1868, he revised his table, but this revision was published as a draft only after his death. In a paper dated December 1869 which appeared early in 1870, Meyer published a new periodic table of 55 elements, in which the series of periods are ended by an element of the alkaline earth metal group. The paper also included a line chart of relative atomic volumes, which illustrated periodic relationships of physical characteristics of the elements, and which assisted Meyer in deciding where elements should appear in his periodic table. By this time he had already seen the publication of Mendeleev's first periodic table, but his work appears to have been largely independent. In 1869, Russian chemist Dmitri Mendeleev arranged 63 elements by increasing atomic weight in several columns, noting recurring chemical properties across them. It is sometimes said that he played "chemical solitaire" on long train journeys, using cards containing the symbols, atomic weights, and chemical properties of the known elements. Another possibility is that he was inspired in part by the periodicity of the Sanskrit alphabet, which was pointed out to him by his friend and linguist Otto von Böhtlingk. Mendeleev used the trends he saw to suggest that atomic weights of some elements were incorrect, and accordingly changed their placements: for instance, he figured there was no place for a trivalent beryllium with the atomic weight of 14 in his work, and he cut both the atomic weight and valency of beryllium by a third, suggesting it was a divalent element with the atomic weight of 9.4. 
Mendeleev widely distributed printed broadsheets of the table to various chemists in Russia and abroad. Mendeleev argued in 1869 there were seven types of highest oxides. Mendeleev continued to improve his ordering; in 1870, it gained a tabular shape, and each column was given its own highest oxide, and in 1871, he further developed it and formulated what he termed the "law of periodicity". Some changes also occurred with new revisions, with some elements changing positions. Priority dispute and recognition Mendeleev's predictions and inability to incorporate the rare-earth metals Even as Mendeleev corrected positions of some elements, he thought that some relationships that he could find in his grand scheme of periodicity could not be found because some elements were still undiscovered, and that the properties of such undiscovered elements could be deduced from their expected relationships with other elements. In 1870, he first tried to characterize the yet undiscovered elements, and he gave detailed predictions for three elements, which he termed eka-boron, eka-aluminium, and eka-silicium; he also more briefly noted a few other expectations. It has been proposed that the prefixes eka, dvi, and tri, Sanskrit for one, two, and three, respectively, are a tribute to Pāṇini and other ancient Sanskrit grammarians for their invention of a periodic alphabet. In 1871, Mendeleev expanded his predictions further. Compared to the rest of the work, Mendeleev's 1869 list misplaces seven then known elements: indium, thorium, and five rare-earth metals: yttrium, cerium, lanthanum, erbium, and didymium. The last two were later found to be mixtures of two different elements; ignoring those would allow him to restore the logic of increasing atomic weight. These elements (all thought to be divalent at the time) puzzled Mendeleev as they did not show a regular increase in valency despite their seemingly consequential atomic weights. Mendeleev grouped them together, thinking of them as of a particular kind of series. In early 1870, he decided that the weights for these elements must be wrong and that the rare-earth metals should be trivalent (which accordingly increased their predicted atomic weights by half). He measured the heat capacity of indium, uranium, and cerium to demonstrate their higher assumed valency (which was soon confirmed by Prussian chemist Robert Bunsen). Mendeleev treated the change by assessing each element to an individual place in his system of the elements rather than continuing to treat them as a series. Mendeleev noticed that there was a significant difference in atomic mass between cerium and tantalum with no element between them; his consideration was that between them, there was a row of yet undiscovered elements, which would display similar properties to those elements which were to be found above and below them: for instance, an eka-molybdenum would behave as a heavier homolog of molybdenum and a lighter homolog of wolfram (the name under which Mendeleev knew tungsten). This row would begin with a trivalent lanthanum, a tetravalent cerium, and a pentavalent didymium. However, the higher valency for didymium had not been established, and Mendeleev tried to do so himself. Having had no success in that, he abandoned his attempts to incorporate the rare-earth metals in late 1871 and embarked on his grand idea of luminiferous ether. 
His idea was carried on by Austro-Hungarian chemist Bohuslav Brauner, who sought to find a place in the periodic table for the rare-earth metals; Mendeleev later referred to him as "one of the true consolidators of the periodic law". In addition to the predictions of scandium, gallium, and germanium that were quickly realized, Mendeleev's 1871 table left many more spaces for undiscovered elements, though he did not provide detailed predictions of their properties. In total, he predicted eighteen elements, though only half corresponded to elements that were later discovered. Priority of discovery None of the proposals were accepted immediately, and many contemporary chemists found them too abstract to have any meaningful value. Of those chemists who proposed their categorizations, Mendeleev strove to back his work and promote his vision of periodicity, Meyer did not promote his work very actively, and Newlands did not make a single attempt to gain recognition abroad. Both Mendeleev and Meyer created their respective tables for their pedagogical needs; the difference between their tables is well explained by the fact that the two chemists sought to use a formalized system to solve different problems. Mendeleev's intent was to aid the composition of his textbook, Foundations of Chemistry, whereas Meyer was rather concerned with presentation of theories. Mendeleev's predictions emerged outside of the pedagogical scope in the realm of journal science, while Meyer made no predictions at all and explicitly stated that his table and the textbook that contained it, Modern Theories, should not be used for prediction, in order to make the point to his students that they should not make too many purely theoretical projections. Mendeleev and Meyer differed in temperament, at least when it came to promotion of their respective works. The boldness of Mendeleev's predictions was noted by some contemporary chemists, however skeptical they may have been. Meyer referred to Mendeleev's "boldness" in an edition of Modern Theories, whereas Mendeleev mocked Meyer's reluctance to make predictions in an edition of Foundations of Chemistry. Recognition of Mendeleev's table Eventually, the periodic table was appreciated for its descriptive power and for finally systematizing the relationship between the elements, although such appreciation was not universal. In 1881, Mendeleev and Meyer had an argument via an exchange of articles in the British journal Chemical News over priority of the periodic table, which included an article from Mendeleev, one from Meyer, one critiquing the notion of periodicity, and many more. In 1882, the Royal Society in London awarded the Davy Medal to both Mendeleev and Meyer for their work to classify the elements; although two of Mendeleev's predicted elements had been discovered by then, Mendeleev's predictions were not at all mentioned in the prize rationale. Mendeleev's eka-aluminium was discovered in 1875 and became known as gallium; eka-boron and eka-silicium were discovered in 1879 and 1886, respectively, and were named scandium and germanium. Mendeleev was even able to correct some initial measurements with his predictions, including the first prediction of gallium, which matched eka-aluminium fairly closely but had a different density. 
Mendeleev advised the discoverer, French chemist Paul-Émile Lecoq de Boisbaudran, to measure the density again; de Boisbaudran was initially skeptical (not least because he thought Mendeleev was trying to take credit from him) but eventually admitted the correctness of the prediction. Mendeleev contacted all three discoverers; all three noted the close similarity of their discovered elements with Mendeleev's predictions, with the last of them, German chemist Clemens Winkler, admitting this suggestion was not first made by Mendeleev or himself after the correspondence with him, but by a different person, German chemist Hieronymous Theodor Richter. Some contemporary chemists were not convinced by these discoveries, noting the dissimilarities between the new elements and the predictions or claiming those similarities that did exist were coincidental. However, success of Mendeleev's predictions helped spread the word about his periodic table. Later, chemists used the successes of these Mendeleev's predictions to justify his table. By 1890, Mendeleev's periodic table had been universally recognized as a piece of basic chemical knowledge. Apart from his own correct predictions, a number of aspects may have contributed to this, such as the correct accommodation of many elements whose atomic weights were thought to have wrong values but were later corrected. The debate on the position of the rare-earth metals helped spur the discussion about the table as well. In 1889, Mendeleev noted at the Faraday Lecture to the Royal Institution in London that he had not expected to live long enough "to mention their discovery to the Chemical Society of Great Britain as a confirmation of the exactitude and generality of the periodic law". Inert gases and ether Inert gases British chemist Henry Cavendish, the discoverer of hydrogen in 1766, discovered that air is composed of more gases than nitrogen and oxygen. He recorded these findings in 1784 and 1785; among them, he found a then-unidentified gas less reactive than nitrogen. Helium was first reported in 1868; the report was based on the new technique of spectroscopy; some spectral lines in light emitted by the Sun did not match those of any of the known elements. Mendeleev was not convinced by this finding since variance of temperature led to change of intensity of spectral lines and their location on the spectrum. This opinion was held by some other scientists of the day, some of whom believed the spectral lines were due to a particular state of hydrogen existing in the Sun's atmosphere. Others believed the spectral lines could belong to an element that occurred on the Sun but not on Earth; some believed it was yet to be found on Earth. In 1894, British chemist William Ramsay and British physicist Lord Rayleigh isolated argon from air and determined that it was a new element. Argon, however, did not engage in any chemical reactions and was—highly unusually for a gas—monatomic; it did not fit into the periodic law and thus challenged the very notion of it. Not all scientists immediately accepted this report; Mendeleev's original response was that argon was a triatomic form of nitrogen rather than an element of its own. 
While the notion of a possibility of a group between that of halogens and that of alkali metals had existed (some scientists believed that several atomic weight values between halogens and alkali metals were missing, especially since places in this half of group VIII remained vacant), argon did not easily match the position between chlorine and potassium because its atomic weight exceeded those of both chlorine and potassium. Other explanations were proposed; for example, Ramsay supposed argon could be a mixture of different gases. For a while, Ramsay believed argon could be a mixture of three gases of similar atomic weights; this triad would resemble the triad of iron, cobalt, and nickel, and be similarly placed in group VIII. Certain that shorter periods contain triads of gases at their ends, Ramsay suggested in 1898 the existence of a gas between helium and argon with an atomic weight of 20; after its discovery later that year (it was named neon), Ramsay continued to interpret it as a member of a horizontal triad at the end of that period. In 1896, Ramsay tested a report of American chemist William Francis Hillebrand, who found a stream of an unreactive gas from a sample of uraninite. Wishing to prove it was nitrogen, Ramsay analyzed a different uranium mineral, cleveite, and found a new element, which he named krypton. This finding was corrected by British chemist William Crookes, who matched its spectrum to that of the Sun's helium. Following this discovery, Ramsay, using fractional distillation to separate the components air, discovered several more such gases in 1898: metargon, krypton, neon, and xenon; detailed spectroscopic analysis of the first of these demonstrated it was argon contaminated by a carbon-based impurity. Ramsay was initially skeptical about the existence of gases heavier than argon, and the discovery of krypton and xenon came as a surprise to him; however, Ramsay accepted his own discovery, and the five newly discovered inert gases (now noble gases) were placed in a single column in the periodic table. Although Mendeleev's table predicted several undiscovered elements, it did not predict the existence of such inert gases, and Mendeleev originally rejected those findings as well. Changes to the periodic table Although the sequence of atomic weights suggested that inert gases should be located between halogens and alkali metals, and there were suggestions to put them into group VIII coming from as early as 1895, such placement contradicted one of Mendeleev's basic considerations, that of the highest oxides. Inert gases did not form any oxides, and no other compounds at all, and as such, their placement in a group where elements should form tetroxides was seen as merely auxiliary and not natural; Mendeleev doubted inclusion of those elements in group VIII. Later developments, particularly by British scientists, focused on correspondence of inert gases with halogens to their left and alkali metals to their right. In 1898, when only helium, argon, and krypton were definitively known, Crookes suggested these elements be placed in a single column between the hydrogen group and the fluorine group. In 1900, at the Prussian Academy of Sciences, Ramsay and Mendeleev discussed the new inert gases and their location in the periodic table; Ramsay proposed that these elements be put in a new group between halogens and alkali metals, to which Mendeleev agreed. 
Ramsay published an article after his discussions with Mendeleev; the tables in it featured halogens to the left of inert gases and alkali metals to the right. Two weeks before that discussion, Belgian botanist Léo Errera had proposed to the Royal Academy of Science, Letters and Fine Arts of Belgium to put those elements in a new group 0. In 1902, Mendeleev wrote that those elements should be put in a new group 0; he said this idea was consistent with what Ramsay suggested to him and referred to Errera as to the first person to suggest the idea. Mendeleev himself added these elements to the table as group 0 in 1902, without disturbing the basic concept of the periodic table. In 1905, Swiss chemist Alfred Werner resolved the dead zone of Mendeleev's table. He determined that the rare-earth elements (lanthanides), 13 of which were known, lay within that gap. Although Mendeleev knew of lanthanum, cerium, and erbium, they were previously unaccounted for in the table because their total number and exact order were not known; Mendeleev still could not fit them in his table by 1901. This was in part a consequence of their similar chemistry and the imprecise determination of their atomic masses. Combined with the lack of a known group of similar elements, this rendered the placement of the lanthanides in the periodic table difficult. This discovery led to a restructuring of the table and the first appearance of the 32-column form. Ether By 1904, Mendeleev's table rearranged several elements, and included the noble gases along with most other newly discovered elements. It still had the dead zone, and a row zero was added above hydrogen and helium to include coronium and the ether, which were widely believed to be elements at the time. Although the Michelson–Morley experiment in 1887 cast doubt on the possibility of a luminiferous ether as a space-filling medium, physicists set constraints for its properties. Mendeleev believed it to be a very light gas, with an atomic weight several orders of magnitude smaller than that of hydrogen. He also postulated that it would rarely interact with other elements, similar to the noble gases of his group zero, and instead permeate substances at a velocity of per second. Mendeleev was not satisfied with the lack of understanding of the nature of this periodicity; this would only be possible through the understanding of the composition of the atom. However, Mendeleev firmly believed that future would only develop the notion rather than challenge it and reaffirmed his belief in writing in 1902. Atomic theory and isotopes Radioactivity and isotopes In 1907 it was discovered that thorium and radiothorium, products of radioactive decay, were physically different but chemically identical; this led Frederick Soddy to propose in 1910 that they were the same element but with different atomic weights. Soddy later proposed to call these elements with complete chemical identity "isotopes". The problem of placing isotopes in the periodic table had arisen beginning in 1900 when four radioactive elements were known: radium, actinium, thorium, and uranium. These radioactive elements (termed "radioelements") were accordingly placed at the bottom of the periodic table, as they were known to have greater atomic weights than stable elements, although their exact order was not known. Researchers believed there were still more radioactive elements yet to be discovered, and during the next decade, the decay chains of thorium and uranium were extensively studied. 
Many new radioactive substances were found, including the noble gas radon, and their chemical properties were investigated. By 1912, almost 50 different radioactive substances had been found in the decay chains of thorium and uranium. American chemist Bertram Boltwood proposed several decay chains linking these radioelements between uranium and lead. These were thought at the time to be new chemical elements, substantially increasing the number of known "elements" and leading to speculations that their discoveries would undermine the long-established concept of the periodic table. For example, there was not enough room between lead and uranium to accommodate these discoveries, even assuming that some discoveries were duplicates or incorrect identifications. It was also believed that radioactive decay violated one of the central principles of the periodic table, namely that chemical elements could not undergo transmutations and always had unique identities. Soddy and Kazimierz Fajans, who had been following these developments, published in 1913 that although these substances emitted different radiation, many of these substances were identical in their chemical characteristics, and so shared the same place in the periodic table. They became known as isotopes, from the Greek isos topos ("same place"). Austrian chemist Friedrich Paneth cited a difference between "real elements" (elements) and "simple substances" (isotopes), also determining that the existence of different isotopes was mostly irrelevant in determining chemical properties. Following British physicist Charles Glover Barkla's discovery of characteristic X-rays emitted from metals in 1906, British physicist Henry Moseley considered a possible correlation between X-ray emissions and physical properties of elements. Moseley, along with Charles Galton Darwin, Niels Bohr, and George de Hevesy, proposed that the nuclear charge (Z) might be mathematically related to physical properties. The significance of these atomic properties was determined in the Geiger–Marsden experiments, in which the atomic nucleus and its charge were discovered, conducted between 1908 and 1913. Rutherford model and atomic number In 1913, amateur Dutch physicist Antonius van den Broek was the first to propose that the atomic number (nuclear charge) determined the placement of elements in the periodic table. He correctly determined the atomic number of all elements up to atomic number 50 (tin), though he made several errors with heavier elements. However, Van den Broek did not have any method to experimentally verify the atomic numbers of elements; thus, they were still believed to be a consequence of atomic weight, which remained in use in ordering elements. Moseley was determined to test Van den Broek's hypothesis. After a year of investigating the characteristic X-ray spectral lines of various elements, he found a relationship between the X-ray wavelength of an element and its atomic number. With this, Moseley obtained the first accurate measurements of atomic numbers and determined an absolute sequence to the elements, allowing him to restructure the periodic table. Moseley's research immediately resolved discrepancies between atomic weight and chemical properties, where sequencing strictly by atomic weight would result in groups with inconsistent chemical properties. 
For example, his measurements of X-ray wavelengths enabled him to correctly place argon (Z = 18) before potassium (Z = 19), cobalt (Z = 27) before nickel (Z = 28), as well as tellurium (Z = 52) before iodine (Z = 53), in line with periodic trends. The determination of atomic numbers also clarified the order of chemically similar rare-earth elements; it was also used to confirm that Georges Urbain's claimed discovery of a new rare-earth element (celtium) was invalid, earning Moseley acclaim for this technique. Swedish physicist Karl Siegbahn continued Moseley's work for elements heavier than gold (Z = 79), and found that the heaviest known element at the time, uranium, had atomic number 92. In determining the largest identified atomic number, gaps in the atomic number sequence were conclusively identified where an atomic number had no known corresponding element; the gaps occurred at atomic numbers 43 (technetium), 61 (promethium), 72 (hafnium), 75 (rhenium), 85 (astatine), and 87 (francium). Electron shell and quantum mechanics In 1888, Swedish physicist Johannes Rydberg, working from the 1885 Balmer formula, noticed that the atomic numbers of the noble gases were equal to doubled sums of squares of simple numbers: 2 = 2·1², 10 = 2(1² + 2²), 18 = 2(1² + 2² + 2²), 36 = 2(1² + 2² + 2² + 3²), 54 = 2(1² + 2² + 2² + 3² + 3²), 86 = 2(1² + 2² + 2² + 3² + 3² + 4²) (a pattern reproduced numerically below). This finding was accepted as an explanation of the fixed lengths of periods and led to repositioning of the noble gases from the left edge of the table, in group 0, to the right, in group VIII. Unwillingness of the noble gases to engage in chemical reaction was explained by the assumed stability of closed noble gas electron configurations; from this notion emerged the octet rule, originally referred to as Abegg's rule of 1904. Among the notable works that established the importance of the periodicity of eight were the valence bond theory, published in 1916 by American chemist Gilbert N. Lewis, and the octet theory of chemical bonding, published in 1919 by American chemist Irving Langmuir. The chemists' approach during the period of the Old Quantum Theory (1913 to 1925) was incorporated into the understanding of the electron shells and orbitals under current quantum mechanics. A real pioneer who gave us the foundation for our current model of electrons is Irving Langmuir. In his 1919 paper, he postulated the existence of "cells", which we now call orbitals, which could each only contain two electrons, and these were arranged in "equidistant layers" which we now call shells. He made an exception for the first shell to only contain two electrons. These postulates were introduced on the basis of Rydberg's rule, which Niels Bohr had used not in chemistry, but in physics, to apply to the orbits of electrons around the nucleus. In the Langmuir paper, he introduced the rule as 2N² where N was a positive integer. The chemist Charles Rugeley Bury made the next major step toward our modern theory in 1921, by suggesting that eight and eighteen electrons in a shell form stable configurations. Bury's scheme was built upon that of earlier chemists and was a chemical model. Bury proposed that the electron configurations in transitional elements depended upon the valency electrons in their outer shell. In some early papers, the model was called the "Bohr-Bury Atom". He introduced the word transition to describe the elements now known as transition metals or transition elements. 
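Rydberg's doubled-sum-of-squares observation and Langmuir's 2N² rule quoted above are easy to verify numerically. The short Python sketch below is illustrative only; the sequence of shell numbers 1, 2, 2, 3, 3, 4 is taken directly from the pattern given in the text.

```python
# Reproduce the noble-gas atomic numbers from the doubled-sum-of-squares
# pattern quoted above: Z = 2 * (1^2 + 2^2 + 2^2 + 3^2 + ...), where the
# squared terms follow the sequence 1, 2, 2, 3, 3, 4.
shell_numbers = [1, 2, 2, 3, 3, 4]                      # N for each successive closed shell
shell_capacities = [2 * n**2 for n in shell_numbers]    # Langmuir's 2N^2 rule

noble_gas_z = []
running_total = 0
for capacity in shell_capacities:
    running_total += capacity
    noble_gas_z.append(running_total)

print(shell_capacities)   # [2, 8, 8, 18, 18, 32]
print(noble_gas_z)        # [2, 10, 18, 36, 54, 86] -> He, Ne, Ar, Kr, Xe, Rn
```

The individual shell capacities come out as 2, 8, 8, 18, 18 and 32, and their running totals reproduce the noble-gas atomic numbers 2, 10, 18, 36, 54 and 86, which are exactly the period lengths stacked on top of one another.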
In the 1910s and 1920s, pioneering research into quantum mechanics led to new developments in atomic theory and small changes to the periodic table. In the 19th century, Mendeleev had already asserted that there was a fixed periodicity of eight, and expected a mathematical correlation between atomic number and chemical properties. The Bohr model was developed beginning 1913, and championed the idea of electron configurations that determine chemical properties. Bohr proposed that elements in the same group behaved similarly because they have similar electron configurations, and that noble gases had filled valence shells; this forms the basis of the modern octet rule. Bohr's study of spectroscopy and chemistry was not usual among theoretical atomic physicists. Even Rutherford told Bohr that he was struggling "to form an idea of how you arrive at your conclusions". This is because none of the quantum mechanical equations describe the number of electrons per shell and orbital. Bohr acknowledged that he was influenced by the work of Walther Kossel, who in 1916 was the first to establish an important connection between the quantum atom and the periodic table. He noticed that the difference between the atomic numbers 2, 10, 18 of the first three noble gases, helium, neon, argon, was 8, and argued that the electrons in such atoms orbited in "closed shells". The first contained only 2 electrons, the second and third, 8 each. Bohr's research then led Austrian physicist Wolfgang Pauli to investigate the length of periods in the periodic table in 1924. Pauli demonstrated that this was not the case. Instead, the Pauli exclusion principle was developed, not upon a mathematical basis, but upon the previous developments in alignment with chemistry. This rule states that no electrons can coexist in the same quantum state, and showed, in conjunction with empirical observations, the existence of four quantum numbers and the consequence on the order of shell filling. This determines the order in which electron shells are filled and explains the periodicity of the periodic table. British chemist Charles Bury is credited with the first use of the term transition metal in 1921 to refer to elements between the main-group elements of groups II and III. He explained the chemical properties of transition elements as a consequence of the filling of an inner subshell rather than the valence shell. This proposition, based upon the work of American chemist Gilbert N. Lewis, suggested the appearance of the d subshell in period 4 and the f subshell in period 6, lengthening the periods from 8 to 18 and then 18 to 32 elements, thus explaining the position of the lanthanides in the periodic table. Proton and neutron The discovery of proton and neutron demonstrated that an atom was divisible; this rendered Lavoisier's definition of a chemical element obsolete. A chemical element is defined today as a species of atoms with a consistent number of protons and that number is now known to be precisely the atomic number of an element. The discovery also explained the mechanism of several types of radioactive decay, such as alpha decay. Eventually, it was proposed that protons and neutrons were made of even smaller particles called quarks; their discovery explained the transmutation of neutrons into protons in beta decay. From short form into long form (into -A and -B groups) Circa 1925, the periodic table changed by shifting some Reihen (series) to the right, into an extra set of columns (groups). 
The original groups I–VII were repeated, distinguished by adding "A" and "B". Group VIII (with three columns) remained sole. Thus, Reihen 4 and 5 were shifted, and together formed new period 4 with groups IA–VIIA, VIII, IB–VIIB. Later expansions and the end of the periodic table Actinides As early as 1913, Bohr's research on electronic structure led physicists such as Johannes Rydberg to extrapolate the properties of undiscovered elements heavier than uranium. Many agreed that the next noble gas after radon would most likely have the atomic number 118, from which it followed that the transition series in the seventh period should resemble those in the sixth. Although it was thought that these transition series would include a series analogous to the rare-earth elements, characterized by filling of the 5f shell, it was unknown where this series began. Predictions ranged from atomic number 90 (thorium) to 99, many of which proposed a beginning beyond the known elements (at or beyond atomic number 93). The elements from actinium to uranium were instead believed to form part of a fourth series of transition metals because of their high oxidation states; accordingly, they were placed in groups 3 through 6. In 1940, neptunium and plutonium were the first transuranic elements to be discovered; they were placed in sequence beneath rhenium and osmium, respectively. However, preliminary investigations of their chemistry suggested a greater similarity to uranium than to lighter transition metals, challenging their placement in the periodic table. During his Manhattan Project research in 1943, American chemist Glenn T. Seaborg experienced unexpected difficulties in isolating the elements americium and curium, as they were believed to be part of a fourth series of transition metals. Seaborg wondered if these elements belonged to a different series, which would explain why their chemical properties, in particular the instability of higher oxidation states, were different from predictions. In 1945, against the advice of colleagues, he proposed a significant change to Mendeleev's table: the actinide series. Seaborg's actinide concept of heavy element electronic structure proposed that the actinides form an inner transition series analogous to the rare-earth series of lanthanide elements—they would comprise the second row of the f-block (the 5f series), in which the lanthanides formed the 4f series. This facilitated chemical identification of americium and curium, and further experiments corroborated Seaborg's hypothesis; a spectroscopic study at the Los Alamos National Laboratory by a group led by American physicist Edwin McMillan indicated that 5f orbitals, rather than 6d orbitals, were indeed being filled. However, these studies could not unambiguously determine the first element with 5f electrons and therefore the first element in the actinide series; it was thus also referred to as the "thoride" or "uranide" series until it was later found that the series began with actinium. In light of these observations and an apparent explanation for the chemistry of transuranic elements, and despite fear among his colleagues that it was a radical idea that would ruin his reputation, Seaborg nevertheless submitted it to Chemical & Engineering News and it gained widespread acceptance; new periodic tables thus placed the actinides below the lanthanides. Following its acceptance, the actinide concept proved pivotal in the groundwork for discoveries of heavier elements, such as berkelium in 1949. 
It also supported experimental results for a trend towards +3 oxidation states in the elements beyond americium—a trend observed in the analogous 4f series. Relativistic effects and expansions beyond period 7 Seaborg's subsequent elaborations of the actinide concept theorized a series of superheavy elements in a transactinide series comprising elements from 104 to 121 and a superactinide series of elements from 122 to 153. He proposed an extended periodic table with an additional period of 50 elements (thus reaching element 168); this eighth period was derived from an extrapolation of the Aufbau principle and placed elements 121 to 138 in a g-block, in which a new g subshell would be filled. Seaborg's model, however, did not take into account relativistic effects resulting from high atomic number and electron orbital speed. Burkhard Fricke in 1971 and Pekka Pyykkö in 2010 used computer modeling to calculate the positions of elements up to Z = 172, and found that the positions of several elements were different from those predicted by Seaborg. Although models from Pyykkö and Fricke generally place element 172 as the next noble gas, there is no clear consensus on the electron configurations of elements beyond 120 and thus their placement in an extended periodic table. It is now thought that because of relativistic effects, such an extension will feature elements that break the periodicity in known elements, thus posing another hurdle to future periodic table constructs. The discovery of tennessine in 2010 filled the last remaining gap in the seventh period. Any newly discovered elements will thus be placed in an eighth period. Despite the completion of the seventh period, experimental chemistry of some transactinides has been shown to be inconsistent with the periodic law. In the 1990s, Ken Czerwinski at University of California, Berkeley observed similarities between rutherfordium and plutonium and between dubnium and protactinium, rather than a clear continuation of periodicity in groups 4 and 5. More recent experiments on copernicium and flerovium have yielded inconsistent results, some of which suggest that these elements behave more like the noble gas radon rather than mercury and lead, their respective congeners. As such, the chemistry of many superheavy elements has yet to be well characterized, and it remains unclear whether the periodic law can still be used to extrapolate the properties of undiscovered elements. See also History of chemistry Periodic systems of small molecules The Mystery of Matter: Search for the Elements (PBS film) Discovery of chemical elements Types of periodic tables Notes References Sources . Republished from . Republished from Further reading . Republished from External links Development of the periodic table (part of a collection of pages that explores the periodic table and the elements) by the Royal Society of Chemistry Dr. Eric Scerri's web page, which contains interviews, lectures and articles on various aspects of the periodic system, including the history of the periodic table. The Internet Database of Periodic Tables – a large collection of periodic tables and periodic system formulations. History of Mendeleev periodic table of elements as a data visualization at Stack Exchange History of chemistry Periodic table Periodic table
History of the periodic table
[ "Chemistry" ]
9,135
[ "Periodic table" ]
619,926
https://en.wikipedia.org/wiki/Gravitational%20collapse
Gravitational collapse is the contraction of an astronomical object due to the influence of its own gravity, which tends to draw matter inward toward the center of gravity. Gravitational collapse is a fundamental mechanism for structure formation in the universe. Over time an initial, relatively smooth distribution of matter, after sufficient accretion, may collapse to form pockets of higher density, such as stars or black holes. Star formation involves a gradual gravitational collapse of interstellar medium into clumps of molecular clouds and potential protostars. The compression caused by the collapse raises the temperature until thermonuclear fusion occurs at the center of the star, at which point the collapse gradually comes to a halt as the outward thermal pressure balances the gravitational forces. The star then exists in a state of dynamic equilibrium. During the star's evolution a star might collapse again and reach several new states of equilibrium. Star formation An interstellar cloud of gas will remain in hydrostatic equilibrium as long as the kinetic energy of the gas pressure is in balance with the potential energy of the internal gravitational force. Mathematically this is expressed using the virial theorem, which states that to maintain equilibrium, the gravitational potential energy must equal twice the internal thermal energy. If a pocket of gas is massive enough that the gas pressure is insufficient to support it, the cloud will undergo gravitational collapse. The critical mass above which a cloud will undergo such collapse is called the Jeans mass. This mass depends on the temperature and density of the cloud but is typically thousands to tens of thousands of solar masses. Stellar remnants At what is called the star's death (when a star has burned out its fuel supply), it will undergo a contraction that can be halted only if it reaches a new state of equilibrium. Depending on the mass during its lifetime, these stellar remnants can take one of three forms: White dwarfs, in which gravity is opposed by electron degeneracy pressure Neutron stars, in which gravity is opposed by neutron degeneracy pressure and short-range repulsive neutron–neutron interactions mediated by the strong force Black hole, in which there is no force strong enough to resist gravitational collapse White dwarf The collapse of the stellar core to a white dwarf takes place over tens of thousands of years, while the star blows off its outer envelope to form a planetary nebula. If it has a companion star, a white dwarf-sized object can accrete matter from the companion star. Before it reaches the Chandrasekhar limit (about one and a half times the mass of the Sun, at which point gravitational collapse would start again), the increasing density and temperature within a carbon-oxygen white dwarf initiate a new round of nuclear fusion, which is not regulated because the star's weight is supported by degeneracy rather than thermal pressure, allowing the temperature to rise exponentially. The resulting runaway carbon detonation completely blows the star apart in a type Ia supernova. Neutron star Neutron stars are formed by the gravitational collapse of the cores of larger stars. They are the remnant of supernova types Ib, Ic, and II. Neutron stars are expected to have a skin or "atmosphere" of normal matter on the order of a millimeter thick, underneath which they are composed almost entirely of closely packed neutrons called neutron matter with a slight dusting of free electrons and protons mixed in. 
This degenerate neutron matter has a density of about . The appearance of stars composed of exotic matter and their internal layered structure is unclear since any proposed equation of state of degenerate matter is highly speculative. Other forms of hypothetical degenerate matter may be possible, and the resulting quark stars, strange stars (a type of quark star), and preon stars, if they exist, would, for the most part, be indistinguishable from a neutron star: In most cases, the exotic matter would be hidden under a crust of "ordinary" degenerate neutrons. Black holes According to Einstein's theory, for even larger stars, above the Landau–Oppenheimer–Volkoff limit, also known as the Tolman–Oppenheimer–Volkoff limit (roughly double the mass of the Sun) no known form of cold matter can provide the force needed to oppose gravity in a new dynamical equilibrium. Hence, the collapse continues with nothing to stop it. Once a body collapses to within its Schwarzschild radius it forms what is called a black hole, meaning a spacetime region from which not even light can escape. It follows from general relativity and the theorem of Roger Penrose that the subsequent formation of some kind of singularity is inevitable. Nevertheless, according to Penrose's cosmic censorship hypothesis, the singularity will be confined within the event horizon bounding the black hole, so the spacetime region outside will still have a well-behaved geometry, with strong but finite curvature, that is expected to evolve towards a rather simple form describable by the historic Schwarzschild metric in the spherical limit and by the more recently discovered Kerr metric if angular momentum is present. If the precursor has a magnetic field, it is dispelled during the collapse, as black holes are thought to have no magnetic field of their own. On the other hand, the nature of the kind of singularity to be expected inside a black hole remains rather controversial. According to theories based on quantum mechanics, at a later stage, the collapsing object will reach the maximum possible energy density for a certain volume of space or the Planck density (as there is nothing that can stop it). This is the point at which it has been hypothesized that the known laws of gravity cease to be valid. There are competing theories as to what occurs at this point. For example loop quantum gravity predicts that a Planck star would form. Regardless, it is argued that gravitational collapse ceases at that stage and a singularity, therefore, does not form. Theoretical minimum radius for a star The radii of larger mass neutron stars (about 2.8 solar mass) are estimated to be about 12 km, or approximately 2 times their equivalent Schwarzschild radius. It might be thought that a sufficiently massive neutron star could exist within its Schwarzschild radius (1.0 SR) and appear like a black hole without having all the mass compressed to a singularity at the center; however, this is probably incorrect. Within the event horizon, the matter would have to move outward faster than the speed of light in order to remain stable and avoid collapsing to the center. No physical force, therefore, can prevent a star smaller than 1.0 SR from collapsing to a singularity (at least within the currently accepted framework of general relativity; this does not hold for the Einstein–Yang–Mills–Dirac system). A model for the nonspherical collapse in general relativity with the emission of matter and gravitational waves has been presented. 
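Two of the quantities discussed in this article, the Jeans mass and the Schwarzschild radius, can be evaluated directly from their defining formulas. The Python sketch below is illustrative only: the cloud temperature, particle density and mean molecular weight are assumed values chosen to represent a diffuse atomic cloud, and the Jeans-mass prefactor follows one common textbook convention (others differ by factors of order unity).

```python
# Order-of-magnitude illustrations of two quantities discussed above: the Jeans
# mass of a gas cloud and the Schwarzschild radius of a collapsed object.
# The cloud parameters (T, n, mu) are assumed, illustrative values.
import math

G     = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
K_B   = 1.380649e-23    # Boltzmann constant, J/K
M_H   = 1.6726e-27      # mass of hydrogen atom, kg
C     = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg

def jeans_mass(T, n, mu=1.3):
    """Jeans mass (kg) for temperature T [K] and particle density n [m^-3]."""
    rho = mu * M_H * n                                  # mass density
    a = (5.0 * K_B * T / (G * mu * M_H)) ** 1.5
    b = math.sqrt(3.0 / (4.0 * math.pi * rho))
    return a * b

def schwarzschild_radius(mass):
    """Schwarzschild radius (m) for a given mass (kg)."""
    return 2.0 * G * mass / C**2

# A diffuse atomic cloud: T ~ 100 K, n ~ 1 particle per cm^3 (assumed values).
mj = jeans_mass(T=100.0, n=1.0e6)
print(f"Jeans mass: {mj / M_SUN:.0f} solar masses")                  # ~5e4 solar masses

# Schwarzschild radius of one solar mass.
print(f"R_s(1 M_sun): {schwarzschild_radius(M_SUN) / 1e3:.2f} km")   # ~2.95 km
```

With these assumptions the Jeans mass comes out in the tens of thousands of solar masses, consistent with the range quoted in the Star formation section, while the Schwarzschild radius of one solar mass is about 3 km.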
See also Big Crunch Gravitational compression Stellar evolution Thermal runaway References Bibliography Black holes Effects of gravity Neutron stars White dwarfs Star formation
Gravitational collapse
[ "Physics", "Astronomy" ]
1,422
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
619,938
https://en.wikipedia.org/wiki/Mevalonate%20pathway
The mevalonate pathway, also known as the isoprenoid pathway or HMG-CoA reductase pathway is an essential metabolic pathway present in eukaryotes, archaea, and some bacteria. The pathway produces two five-carbon building blocks called isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP), which are used to make isoprenoids, a diverse class of over 30,000 biomolecules such as cholesterol, vitamin K, coenzyme Q10, and all steroid hormones. The mevalonate pathway begins with acetyl-CoA and ends with the production of IPP and DMAPP. It is best known as the target of statins, a class of cholesterol lowering drugs. Statins inhibit HMG-CoA reductase within the mevalonate pathway. Upper mevalonate pathway The mevalonate pathway of eukaryotes, archaea, and eubacteria all begin the same way. The sole carbon feed stock of the pathway is acetyl-CoA. The first step condenses two acetyl-CoA molecules to yield acetoacetyl-CoA. This is followed by a second condensation to form HMG-CoA (3-hydroxy-3- methyl-glutaryl-CoA). Reduction of HMG-CoA yields (R)-mevalonate. These first 3 enzymatic steps are called the upper mevalonate pathway. Lower mevalonate pathway The lower mevalonate pathway which converts (R)-mevalonate into IPP and DMAPP has 3 variants. In eukaryotes, mevalonate is phosphorylated twice in the 5-OH position, then decarboxylated to yield IPP. In some archaea such as Haloferax volcanii, mevalonate is phosphorylated once in the 5-OH position, decarboxylated to yield isopentenyl phosphate (IP), and finally phosphorylated again to yield IPP (Archaeal Mevalonate Pathway I). A third mevalonate pathway variant found in Thermoplasma acidophilum, phosphorylates mevalonate at the 3-OH position followed by phosphorylation at the 5-OH position. The resulting metabolite, mevalonate-3,5-bisphosphate, is decarboxylated to IP, and finally phosphorylated to yield IPP (Archaeal Mevalonate Pathway II). Regulation and feedback Several key enzymes can be activated through DNA transcriptional regulation on activation of SREBP (sterol regulatory element-binding protein-1 and -2). This intracellular sensor detects low cholesterol levels and stimulates endogenous production by the HMG-CoA reductase pathway, as well as increasing lipoprotein uptake by up-regulating the LDL-receptor. Regulation of this pathway is also achieved by controlling the rate of translation of the mRNA, degradation of reductase and phosphorylation. Pharmacology A number of drugs target the mevalonate pathway: Statins (used to decrease cholesterol levels); Bisphosphonates (used to treat various bone-degenerative diseases such as osteoporosis) Diseases A number of diseases affect the mevalonate pathway: Mevalonate Kinase Deficiency Mevalonic Aciduria Hyperimmunoglobulinemia D Syndrome (HIDS). Alternative pathway Plants, most bacteria, and some protozoa such as malaria parasites have the ability to produce isoprenoids using an alternative pathway called the methylerythritol phosphate (MEP) or non-mevalonate pathway. The output of both the mevalonate pathway and the MEP pathway are the same, IPP and DMAPP, however the enzymatic reactions to convert acetyl-CoA into IPP are entirely different. Interaction between the two metabolic pathways can be studied by using 13C-glucose isotopomers. In higher plants, the MEP pathway operates in plastids while the mevalonate pathway operates in the cytosol. Examples of bacteria that contain the MEP pathway include Escherichia coli and pathogens such as Mycobacterium tuberculosis. 
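As the lower-pathway description above shows, the three variants differ only in the order of the phosphorylation and decarboxylation steps applied to (R)-mevalonate. The sketch below restates that ordering as a simple data structure; it is an informal summary for illustration, with abbreviated intermediate names, not an authoritative biochemical model.

```python
# Informal summary of the three lower mevalonate pathway variants described above.
# Intermediate names are abbreviated; this is an illustrative data structure only.
LOWER_MEVALONATE_VARIANTS = {
    "eukaryotic": [
        "(R)-mevalonate",
        "mevalonate 5-phosphate",      # phosphorylation at the 5-OH position
        "mevalonate 5-pyrophosphate",  # second phosphorylation at the 5-OH position
        "IPP",                         # decarboxylation
    ],
    "archaeal I (e.g. Haloferax volcanii)": [
        "(R)-mevalonate",
        "mevalonate 5-phosphate",      # phosphorylation at the 5-OH position
        "isopentenyl phosphate (IP)",  # decarboxylation
        "IPP",                         # final phosphorylation
    ],
    "archaeal II (Thermoplasma acidophilum)": [
        "(R)-mevalonate",
        "mevalonate 3-phosphate",          # phosphorylation at the 3-OH position
        "mevalonate 3,5-bisphosphate",     # phosphorylation at the 5-OH position
        "isopentenyl phosphate (IP)",      # decarboxylation
        "IPP",                             # final phosphorylation
    ],
}

for variant, steps in LOWER_MEVALONATE_VARIANTS.items():
    print(variant, "->", " -> ".join(steps))
```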
Enzymatic reactions References External links Rensselaer Polytechnic Institute page on cholesterol synthesis (including regulation) Metabolic pathways
Mevalonate pathway
[ "Chemistry" ]
946
[ "Metabolic pathways", "Metabolism" ]
619,960
https://en.wikipedia.org/wiki/Eupnea
In the mammalian respiratory system, eupnea is normal, good, healthy and unlabored breathing, sometimes known as quiet breathing or a resting respiratory rate. In eupnea, expiration employs only the elastic recoil of the lungs. Eupnea is the unaffected natural breathing in all mammals, including humans. Eupnea does not require any volitional effort whatsoever, but occurs whenever a mammal is in a natural state of relaxation, i.e. when there is no clear and present danger in their environment and without substantial exertion. When a mammal perceives potential danger or is under exertion, eupnea stops, and a much more limited and labored form of breathing—shallow breathing—occurs. Eupnea is an efficient and effective form of breathing, which balances between maximizing air intake, and minimizing muscular effort. During eupnea, neural output to respiratory muscles is highly regular and stable, with rhythmic bursts of activity during inspiration only to the diaphragm and external intercostal muscles. Etymology and pronunciation The word eupnea uses combining forms of eu- + -pnea, from Greek , from , "well" + , "breath". See pronunciation information at dyspnea. See also List of terms of lung size and activity Respiratory rate Dyspnea Tachypnea Bradypnea Apnea Footnotes References Respiratory system
Eupnea
[ "Biology" ]
296
[ "Organ systems", "Respiratory system" ]
619,980
https://en.wikipedia.org/wiki/Thermocline
A thermocline (also known as the thermal layer or the metalimnion in lakes) is a distinct layer based on temperature within a large body of fluid (e.g. water, as in an ocean or lake; or air, e.g. an atmosphere) with a high gradient of distinct temperature differences associated with depth. In the ocean, the thermocline divides the upper mixed layer from the calm deep water below. Depending largely on season, latitude, and turbulent mixing by wind, thermoclines may be a semi-permanent feature of the body of water in which they occur, or they may form temporarily in response to phenomena such as the radiative heating/cooling of surface water during the day/night. Factors that affect the depth and thickness of a thermocline include seasonal weather variations, latitude, and local environmental conditions, such as tides and currents. Oceans Most of the heat energy of the sunlight that strikes the Earth is absorbed in the first few centimeters at the ocean's surface, which heats during the day and cools at night as heat energy is lost to space by radiation. Waves mix the water near the surface layer and distribute heat to deeper water such that the temperature may be relatively uniform in the upper , depending on wave strength and the existence of surface turbulence caused by currents. Below this mixed layer, the temperature remains relatively stable over day/night cycles. The temperature of the deep ocean drops gradually with depth. As saline water does not freeze until it reaches (colder as depth and pressure increase) the temperature well below the surface is usually not far from zero degrees. The thermocline varies in depth. It is semi-permanent in the tropics, variable in temperate regions and shallow to nonexistent in the polar regions, where the water column is cold from the surface to the bottom. A layer of sea ice will act as an insulation blanket. The first accurate global measurements were made during the oceanographic expedition of HMS Challenger. In the open ocean, the thermocline is characterized by a negative sound speed gradient, making the thermocline important in submarine warfare because it can reflect active sonar and other acoustic signals. This stems from a discontinuity in the acoustic impedance of water created by the sudden change in density. In scuba diving, a thermocline where water drops in temperature by a few degrees Celsius quite suddenly can sometimes be observed between two bodies of water, for example where colder upwelling water runs into a surface layer of warmer water. It gives the water an appearance of wrinkled glass, the kind often used in bathroom windows to obscure the view, and is caused by the altered refractive index of the cold or warm water column. These same schlieren can be observed when hot air rises off the tarmac at airports or desert roads and is the cause of mirages. Thermocline seasonality The thermocline in the ocean can vary in depth and strength seasonally. This is particularly noticeable in mid-latitudes with a thicker mixed layer in the winter and thinner mixed layer in summer. The cooler winter temperatures cause the thermocline to drop to further depths and warmed summer temperatures bring the thermocline back to the upper layer. In areas around the tropics and subtropics, the thermocline may become even thinner in the summer than in other locations. At higher latitudes, around the poles, there is more of a seasonal thermocline than a permanent one with warmer surface waters. This is where there is a dichothermal layer instead. 
In the Northern hemisphere, the maximum temperatures at the surface occur through August and September and minimum temperatures occur through February and March with total heat content being lowest in March. This is when the seasonal thermocline starts to build back up after being broken down through the colder months. A permanent thermocline is one that is not affected by season and lies below the yearly mixed layer maximum depth. Other water bodies Thermoclines can also be observed in lakes. In colder climates, this leads to a phenomenon called stratification. During the summer, warm water, which is less dense, will sit on top of colder, denser, deeper water with a thermocline separating them. The warm layer is called the epilimnion and the cold layer is called the hypolimnion. Because the warm water is exposed to the sun during the day, a stable system exists and very little mixing of warm water and cold water occurs, particularly in calm weather. One result of this stability is that as the summer wears on, there is less and less oxygen below the thermocline as the water below the thermocline never circulates to the surface and organisms in the water deplete the available oxygen. As winter approaches, the temperature of the surface water will drop as nighttime cooling dominates heat transfer. A point is reached where the density of the cooling surface water becomes greater than the density of the deep water and overturning begins as the dense surface water moves down under the influence of gravity. This process is aided by wind or any other process (currents for example) that agitates the water. This effect also occurs in Arctic and Antarctic waters, bringing water to the surface which, although low in oxygen, is higher in nutrients than the original surface water. This enriching of surface nutrients may produce blooms of phytoplankton, making these areas productive. As the temperature continues to drop, the water on the surface may get cold enough to freeze and the lake/ocean begins to ice over. A new thermocline develops where the densest water () sinks to the bottom, and the less dense water (water that is approaching the freezing point) rises to the top. Once this new stratification establishes itself, it lasts until the water warms enough for the 'spring turnover,' which occurs after the ice melts and the surface water temperature rises to 4 °C. During this transition, a thermal bar may develop. Waves can occur on the thermocline, causing the depth of the thermocline as measured at a single location to oscillate (usually as a form of seiche). Alternately, the waves may be induced by flow over a raised bottom, producing a thermocline wave which does not change with time, but varies in depth as one moves into or against the flow. Atmosphere The thermal boundary between the troposphere (lower atmosphere) and the stratosphere (upper atmosphere) is a thermocline. Temperature generally decreases with altitude, but the heat from the day's exposure to sun is released at night, which can create a warm region at ground with colder air above. This is known as an inversion (a further example of a thermocline). At sunrise, the sun's energy warms the ground, causing the warming air to rise, thus destabilizing and eventually reversing the inversion layer. This phenomenon was first applied to the field of noise pollution study in the 1960s, contributing to the design of urban highways and noise barriers. 
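Because a thermocline is by definition the depth range where temperature changes most rapidly, a simple way to locate it in a measured profile is to find the depth of the largest vertical temperature gradient. The sketch below illustrates this idea; the function name and the sample profile are hypothetical values invented for the example, not measurements from any particular dataset.

```python
# Locate the thermocline as the depth of steepest temperature change.
# The profile values below are invented for illustration only.
def thermocline_depth(depths_m, temps_c):
    """Return the midpoint depth and magnitude of the largest vertical
    temperature gradient (degrees C per metre) in a vertical profile."""
    best_depth, best_gradient = None, 0.0
    for i in range(len(depths_m) - 1):
        dz = depths_m[i + 1] - depths_m[i]
        gradient = abs(temps_c[i + 1] - temps_c[i]) / dz
        if gradient > best_gradient:
            best_gradient = gradient
            best_depth = 0.5 * (depths_m[i] + depths_m[i + 1])
    return best_depth, best_gradient

# Example: warm mixed layer near the surface, a sharp drop around 30-40 m,
# and cold deep water below.
depths = [0, 10, 20, 30, 40, 60, 100, 200]
temps = [24.0, 23.9, 23.8, 22.0, 15.0, 10.0, 7.0, 5.0]
print(thermocline_depth(depths, temps))   # about 35 m in this made-up profile
```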
See also Thin layers (oceanography) References Anti-submarine warfare Oceanography Physical oceanography Aquatic ecology Fisheries science Limnology
Thermocline
[ "Physics", "Biology", "Environmental_science" ]
1,490
[ "Hydrology", "Applied and interdisciplinary physics", "Oceanography", "Ecosystems", "Physical oceanography", "Aquatic ecology" ]
619,984
https://en.wikipedia.org/wiki/Surface%20layer
The surface layer is the layer of a turbulent fluid most affected by interaction with a solid surface or the surface separating a gas and a liquid where the characteristics of the turbulence depend on distance from the interface. Surface layers are characterized by large normal gradients of tangential velocity and large concentration gradients of any substances (temperature, moisture, sediments et cetera) transported to or from the interface. The term boundary layer is used in meteorology and physical oceanography. The atmospheric surface layer is the lowest part of the atmospheric boundary layer (typically the bottom 10% where the log wind profile is valid). The ocean has two surface layers: the benthic, found immediately above the sea floor, and the marine surface layer, at the air-sea interface. Mathematical formulation A simple model of the surface layer can be derived by first examining the turbulent momentum flux through a surface. Using Reynolds decomposition to express the horizontal flow in the $x$ direction as the sum of a slowly varying component, $\overline{u}$, and a turbulent component, $u'$, and the vertical flow, $w$, in an analogous fashion, we can express the flux of turbulent momentum through a surface, $u_*^2$, as the time-averaged magnitude of vertical turbulent transport of horizontal turbulent momentum, $u'w'$: $u_*^2 = \left|\overline{u'w'}\right|$. If the flow is homogeneous within the region, we can set the product of the vertical gradient of the mean horizontal flow and the eddy viscosity coefficient $K_m$ equal to $u_*^2$: $K_m \frac{\partial \overline{u}}{\partial z} = u_*^2$, where $K_m$ is defined in terms of Prandtl's mixing length hypothesis: $K_m = \overline{\xi'^2}\left|\frac{\partial \overline{u}}{\partial z}\right|$, where $\xi'$ is the mixing length. We can then express $u_*^2$ as: $u_*^2 = \overline{\xi'^2}\left|\frac{\partial \overline{u}}{\partial z}\right|\frac{\partial \overline{u}}{\partial z}$. Assumptions about the mixing length The size of a turbulent eddy near the surface is constrained by its proximity to the surface; turbulent eddies centered near the surface cannot be as large as those centered further from the surface. From this consideration, and in neutral conditions, it is reasonable to assume that the mixing length is proportional to the eddy's depth in the surface: $\sqrt{\overline{\xi'^2}} = k z$, where $z$ is the depth and $k$ is known as the von Kármán constant. Thus the gradient can be integrated to solve for $\overline{u}$: $\overline{u} = \frac{u_*}{k}\ln\frac{z}{z_0}$, where $z_0$ is a constant of integration known as the roughness length. So, we see that the mean flow in the surface layer has a logarithmic relationship with depth. In non-neutral conditions the mixing length is also affected by buoyancy forces and Monin-Obukhov similarity theory is required to describe the horizontal-wind profile. Surface layer in oceanography The surface layer is studied in oceanography, as both the wind stress and action of surface waves can cause turbulent mixing necessary for the formation of a surface layer. The world's oceans are made up of many different water masses. Each has particular temperature and salinity characteristics as a result of the location in which it formed. Once formed at a particular source, a water mass will travel some distance via large-scale ocean circulation. Typically, the flow of water in the ocean is described as turbulent (i.e. it doesn't follow straight lines). Water masses can travel across the ocean as turbulent eddies, or parcels of water, usually along constant density (isopycnic) surfaces where the expenditure of energy is smallest. When these turbulent eddies of different water masses interact, they will mix together. With enough mixing, some stable equilibrium is reached and a mixed layer is formed. Turbulent eddies can also be produced from wind stress by the atmosphere on the ocean.
This kind of interaction and mixing through buoyancy at the surface of the ocean also plays a role in the formation of a surface mixed layer. Discrepancies with traditional theory The logarithmic flow profile has long been observed in the ocean, but recent, highly sensitive measurements reveal a sublayer within the surface layer in which turbulent eddies are enhanced by the action of surface waves. It is becoming clear that the surface layer of the ocean is only poorly modeled as being up against the "wall" of the air-sea interaction. Observations of turbulence in Lake Ontario reveal under wave-breaking conditions the traditional theory significantly underestimates the production of turbulent kinetic energy within the surface layer. Diurnal cycle The depth of the surface mixed layer is affected by solar insolation and thus is related to the diurnal cycle. After nighttime convection over the ocean, the turbulent surface layer is found to completely decay and restratify. The decay is caused by the decrease in solar insolation, divergence of turbulent flux and relaxation of lateral gradients. During the nighttime, the surface ocean cools because the atmospheric circulation is reduced due to the change in heat with the setting of the sun each day. Cooler water is less buoyant and will sink. This buoyancy effect causes water masses to be transported to lower depths even lower those reached during daytime. During the following daytime, water at depth is restratified or un-mixed because of the warming of the sea surface and buoyancy driving the warmed water upward. The entire cycle will be repeated and the water will be mixed during the following nighttime. In general, the surface mixed layer only occupies the first 100 meters of the ocean but can reach 150 m in the end of winter. The diurnal cycle does not change the depth of the mixed layer significantly relative to the seasonal cycle which produces much larger changes in sea surface temperature and buoyancy. With several vertical profiles, one can estimate the depth of the mixed layer by assigning a set temperature or density difference in water between surface and deep ocean observations – this is known as the “threshold method”. However, this diurnal cycle does not have the same effect in midlatitudes as it does at tropical latitudes. Tropical regions are less likely than midlatitude regions to have a mixed layer dependent on diurnal temperature changes. One study explored diurnal variability of the mixed layer depth in the Western Equatorial Pacific Ocean. Results suggested no appreciable change in the mixed layer depth with the time of day. The significant precipitation in this tropical area would lead to further stratification of the mixed layer. Another study which instead focused on the Central Equatorial Pacific Ocean found a tendency for increased depths of the mixed layer during nighttime. The extratropical or midlatitude mixed layer was shown in one study to be more affected by diurnal variability than the results of the two tropical ocean studies. Over a 15-day study period in Australia, the diurnal mixed layer cycle repeated in a consistent manner with decaying turbulence throughout the day. See also Boundary layer Mixed layer Density Salinity Sea surface microlayer References Boundary layer meteorology Oceanography
Surface layer
[ "Physics", "Environmental_science" ]
1,320
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
620,083
https://en.wikipedia.org/wiki/Sensitivity%20analysis
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs. This involves estimating sensitivity indices that quantify the influence of an input or group of inputs on the output. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem. Motivation A mathematical model (for example in biology, climate change, economics, renewable energy, agronomy...) can be highly complex, and as a result, its relationships between inputs and outputs may be faultily understood. In such cases, the model can be viewed as a black box, i.e. the output is an "opaque" function of its inputs. Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, errors in input data, parameter estimation and approximation procedure, absence of information and poor or partial understanding of the driving forces and mechanisms, choice of underlying hypothesis of model, and so on. This uncertainty limits our confidence in the reliability of the model's response or output. Further, models may have to cope with the natural intrinsic variability of the system (aleatory), such as the occurrence of stochastic events. In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance and can be useful to determine the impact of a uncertain variable for a range of purposes, including: Testing the robustness of the results of a model or system in the presence of uncertainty. Increased understanding of the relationships between input and output variables in a system or model. Uncertainty reduction, through the identification of model input that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness. Searching for errors in the model (by encountering unexpected relationships between inputs and outputs). Model simplification – fixing model input that has no effect on the output, or identifying and removing redundant parts of the model structure. Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive). Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering). For calibration of models with large number of parameters, by focusing on the sensitive parameters. To identify important connections between observations, model inputs, and predictions or forecasts, leading to the development of better models. Mathematical formulation and vocabulary The object of study for sensitivity analysis is a function , (called "mathematical model" or "programming code"), viewed as a black box, with the -dimensional input vector and the output , presented as following: The variability in input parameters have an impact on the output . While uncertainty analysis aims to describe the distribution of the output (providing its statistics, moments, pdf, cdf,...), sensitivity analysis aims to measure and quantify the impact of each input or a group of inputs on the variability of the output (by calculating the corresponding sensitivity indices). 
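In one conventional notation, chosen here purely for illustration since the symbols themselves are an assumption, the black-box setting just described can be summarised as follows.

```latex
% One conventional way to write the black-box model studied in sensitivity
% analysis (the choice of symbols is illustrative).
\[
  Y = f(X_1, X_2, \ldots, X_p),
\]
% where X_1, ..., X_p are the uncertain inputs and Y is the scalar output.
% Uncertainty analysis characterises the distribution of Y; sensitivity
% analysis apportions the variability of Y (for example its variance,
% Var(Y)) among the individual inputs X_i and groups of inputs.
```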
Figure 1 provides a schematic representation of this statement. Challenges, settings and related issues Taking into account uncertainty arising from different sources, whether in the context of uncertainty analysis or sensitivity analysis (for calculating sensitivity indices), requires multiple samples of the uncertain parameters and, consequently, running the model (evaluating the -function) multiple times. Depending on the complexity of the model there are many challenges that may be encountered during model evaluation. Therefore, the choice of method of sensitivity analysis is typically dictated by a number of problem constraints, settings or challenges. Some of the most common are: Computational expense: Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach. This can be a significant problem when: Time-consuming models are very often encountered when complex models are involved. A single run of the model takes a significant amount of time (minutes, hours or longer). The use of statistical model (meta-model, data-driven model) including HDMR to approximate the -function is one way of reducing the computation costs. The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. Therefore, screening methods can be useful for dimension reduction. Another way to tackle the curse of dimensionality is to use sampling based on low discrepancy sequences. Correlated inputs: Most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. Correlations between inputs must then be taken into account in the analysis. Nonlinearity: Some sensitivity analysis approaches, such as those based on linear regression, can inaccurately measure sensitivity when the model response is nonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate. Multiple or functional outputs: Generally introduced for single-output codes, sensitivity analysis extends to cases where the output is a vector or function. When outputs are correlated, it does not preclude the possibility of performing different sensitivity analyses for each output of interest. However, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret. Stochastic code: A code is said to be stochastic when, for several evaluations of the code with the same inputs, different outputs are obtained (as opposed to a deterministic code when, for several evaluations of the code with the same inputs, the same output is always obtained). In this case, it is necessary to separate the variability of the output due to the variability of the inputs from that due to stochasticity. Data-driven approach: Sometimes it is not possible to evaluate the code at all desired points, either because the code is confidential or because the experiment is not reproducible. The code output is only available for a given set of points, and it can be difficult to perform a sensitivity analysis on a limited set of data. We then build a statistical model (meta-model, data-driven model) from the available data (that we use for training) to approximate the code (the -function). 
To address the various constraints and challenges, a number of methods for sensitivity analysis have been proposed in the literature, which we will examine in the next section. Sensitivity analysis methods There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example) variance decompositions, partial derivatives or elementary effects. In general, however, most procedures adhere to the following outline: Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note that this can be difficult and many methods exist to elicit uncertainty distributions from subjective data. Identify the model output to be analysed (the target of interest should ideally have a direct relation to the problem tackled by the model). Run the model a number of times using some design of experiments, dictated by the method of choice and the input uncertainty. Using the resulting model outputs, calculate the sensitivity measures of interest. In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis. The various types of "core methods" (discussed below) are distinguished by the various sensitivity measures which are calculated. These categories can somehow overlap. Alternative ways of obtaining these measures, under the constraints of the problem, can be given. In addition, an engineering view of the methods that takes into account the four important sensitivity analysis parameters has also been proposed. Visual analysis The first intuitive approach (especially useful in less complex cases) is to analyze the relationship between each input and the output using scatter plots, and observe the behavior of these pairs. The diagrams give an initial idea of the correlation and which input has an impact on the output. Figure 2 shows an example where two inputs, and are highly correlated with the output. One-at-a-time (OAT) One of the simplest and most common approaches is that of changing one-factor-at-a-time (OAT), to see what effect this produces on the output. OAT customarily involves moving one input variable, keeping others at their baseline (nominal) values, then, returning the variable to its nominal value, then repeating for each of the other inputs in the same way. Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears a logical approach as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed to their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer program crashes, more likely when several input factors are changed simultaneously. OAT is frequently preferred by modelers because of practical reasons. In case of model failure under OAT analysis the modeler immediately knows which is the input factor responsible for the failure. Despite its simplicity however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. 
This means that the OAT approach cannot detect the presence of interactions between input variables and is unsuitable for nonlinear models. The proportion of input space which remains unexplored with an OAT approach grows superexponentially with the number of inputs. For example, a 3-variable parameter space which is explored one-at-a-time is equivalent to taking points along the x, y, and z axes of a cube centered at the origin. The convex hull bounding all these points is an octahedron which has a volume only 1/6th of the total parameter space. More generally, the convex hull of the axes of a hyperrectangle forms a hyperoctahedron which has a volume fraction of . With 5 inputs, the explored space already drops to less than 1% of the total parameter space. And even this is an overestimate, since the off-axis volume is not actually being sampled at all. Compare this to random sampling of the space, where the convex hull approaches the entire volume as more points are added. While the sparsity of OAT is theoretically not a concern for linear models, true linearity is rare in nature. Morris Named after statistician Max D. Morris this method is suitable for screening systems with many parameters. This is also known as method of elementary effects because it combines repeated steps along the various parametric axes. Derivative-based local methods Local derivative-based methods involve taking the partial derivative of the output with respect to an input factor : where the subscript x0 indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling and Automated Differentiation are methods which allow to compute all partial derivatives at a cost at most 4-6 times of that for evaluating the original function. Similar to OAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time. It is possible to select similar samples from derivative-based sensitivity through Neural Networks and perform uncertainty quantification. One advantage of the local methods is that it is possible to make a matrix to represent all the sensitivities in a system, thus providing an overview that cannot be achieved with global methods if there is a large number of input and output variables. Regression analysis Regression analysis, in the context of sensitivity analysis, involves fitting a linear regression to the model response and using standardized regression coefficients as direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because otherwise it is difficult to interpret the standardised coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if the coefficient of determination is large. The advantages of regression analysis are that it is simple and has a low computational cost. Variance-based methods Variance-based methods are a class of probabilistic approaches which quantify the input and output uncertainties as random variables, represented via their probability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input. 
This amount is quantified and calculated using Sobol indices: they represent the proportion of variance explained by an input or group of inputs. For an input $X_i$, the first-order Sobol index is defined as $S_i = \frac{\operatorname{Var}_{X_i}\left(E_{X_{\sim i}}[Y \mid X_i]\right)}{\operatorname{Var}(Y)}$, where $\operatorname{Var}$ and $E$ denote the variance and expected value operators respectively, and $X_{\sim i}$ denotes the set of all input variables except $X_i$. This expression essentially measures the contribution of $X_i$ alone to the uncertainty (variance) in $Y$ (averaged over variations in other variables), and is known as the first-order sensitivity index or main effect index. Importantly, the first-order sensitivity index of $X_i$ does not measure the uncertainty caused by interactions $X_i$ has with other variables. A further measure, known as the total effect index $S_{Ti}$, gives the total variance in $Y$ caused by $X_i$ and its interactions with any of the other input variables. The total effect index is given as $S_{Ti} = \frac{E_{X_{\sim i}}\left(\operatorname{Var}_{X_i}(Y \mid X_{\sim i})\right)}{\operatorname{Var}(Y)} = 1 - \frac{\operatorname{Var}_{X_{\sim i}}\left(E_{X_i}[Y \mid X_{\sim i}]\right)}{\operatorname{Var}(Y)}$. Variance-based methods allow full exploration of the input space, accounting for interactions, and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use of Monte Carlo methods, but since this can involve many thousands of model runs, other methods (such as metamodels) can be used to reduce computational expense when necessary. Moment-independent methods Moment-independent methods extend variance-based techniques by considering the probability density or cumulative distribution function of the model output $Y$. Thus, they do not refer to any particular moment of $Y$, whence the name. The moment-independent sensitivity measure of $X_i$, here denoted by $\xi_i$, can be defined through an equation similar to variance-based indices, replacing the conditional expectation with a distance: $\xi_i = E_{X_i}\left[d\left(P_Y, P_{Y \mid X_i}\right)\right]$, where $d$ is a statistical distance [metric or divergence] between probability measures, and $P_Y$ and $P_{Y \mid X_i}$ are the marginal and conditional probability measures of $Y$. If $d$ is a distance, the moment-independent global sensitivity measure satisfies zero-independence. This is a relevant statistical property also known as Renyi's postulate D. The class of moment-independent sensitivity measures includes indicators such as the $\delta$-importance measure, the new correlation coefficient of Chatterjee, the Wasserstein correlation of Wiesel and the kernel-based sensitivity measures of Barr and Rabitz. Another measure for global sensitivity analysis, in the category of moment-independent approaches, is the PAWN index. Variogram analysis of response surfaces (VARS) One of the major shortcomings of the previous sensitivity analysis methods is that none of them considers the spatially ordered structure of the response surface/output of the model in the parameter space. By utilizing the concepts of directional variograms and covariograms, variogram analysis of response surfaces (VARS) addresses this weakness through recognizing a spatially continuous correlation structure in the values of $Y$, and hence also in the values of its partial derivatives $\partial Y / \partial x_i$. Basically, the higher the variability the more heterogeneous is the response surface along a particular direction/parameter, at a specific perturbation scale. Accordingly, in the VARS framework, the values of directional variograms for a given perturbation scale can be considered as a comprehensive illustration of sensitivity information, through linking variogram analysis to both direction and perturbation scale concepts. As a result, the VARS framework accounts for the fact that sensitivity is a scale-dependent concept, and thus overcomes the scale issue of traditional sensitivity analysis methods.
More importantly, VARS is able to provide relatively stable and statistically robust estimates of parameter sensitivity with much lower computational cost than other strategies (about two orders of magnitude more efficient). Noteworthy, it has been shown that there is a theoretical link between the VARS framework and the variance-based and derivative-based approaches. Fourier amplitude sensitivity test (FAST) The Fourier amplitude sensitivity test (FAST) uses the Fourier series to represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings. Shapley effects Shapley effects rely on Shapley values and represent the average marginal contribution of a given factors across all possible combinations of factors. These value are related to Sobol’s indices as their value falls between the first order Sobol’ effect and the total order effect. Chaos polynomials The principle is to project the function of interest onto a basis of orthogonal polynomials. The Sobol indices are then expressed analytically in terms of the coefficients of this decomposition. Complementary research approaches for time-consuming simulations A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to computational expense). Generally, these methods focus on efficiently (by creating a metamodel of the costly function to be evaluated and/or by “ wisely ” sampling the factor space) calculating variance-based measures of sensitivity. Metamodels Metamodels (also known as emulators, surrogate models or response surfaces) are data-modeling/machine learning approaches that involve building a relatively simple mathematical function, known as an metamodels, that approximates the input/output behavior of the model itself. In other words, it is the concept of "modeling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs . By running the model at a number of points in the input space, it may be possible to fit a much simpler metamodels , such that to within an acceptable margin of error. Then, sensitivity measures can be calculated from the metamodel (either with Monte Carlo or analytically), which will have a negligible additional computational cost. Importantly, the number of model runs required to fit the metamodel can be orders of magnitude less than the number of runs required to directly estimate the sensitivity measures from the model. Clearly, the crux of an metamodel approach is to find an (metamodel) that is a sufficiently close approximation to the model . This requires the following steps, Sampling (running) the model at a number of points in its input space. This requires a sample design. Selecting a type of emulator (mathematical function) to use. "Training" the metamodel using the sample data from the model – this generally involves adjusting the metamodel parameters until the metamodel mimics the true model as well as possible. Sampling the model can often be done with low-discrepancy sequences, such as the Sobol sequence – due to mathematician Ilya M. Sobol or Latin hypercube sampling, although random designs can also be used, at the loss of some efficiency. 
The selection of the metamodel type and the training are intrinsically linked since the training method will be dependent on the class of metamodel. Some types of metamodels that have been used successfully for sensitivity analysis include: Gaussian processes (also known as kriging), where any combination of output points is assumed to be distributed as a multivariate Gaussian distribution. Recently, "treed" Gaussian processes have been used to deal with heteroscedastic and discontinuous responses. Random forests, in which a large number of decision trees are trained, and the result averaged. Gradient boosting, where a succession of simple regressions are used to weight data points to sequentially reduce error. Polynomial chaos expansions, which use orthogonal polynomials to approximate the response surface. Smoothing splines, normally used in conjunction with high-dimensional model representation (HDMR) truncations (see below). Discrete Bayesian networks, in conjunction with canonical models such as noisy models. Noisy models exploit information on the conditional independence between variables to significantly reduce dimensionality. The use of an emulator introduces a machine learning problem, which can be difficult if the response of the model is highly nonlinear. In all cases, it is useful to check the accuracy of the emulator, for example using cross-validation. High-dimensional model representations (HDMR) A high-dimensional model representation (HDMR) (the term is due to H. Rabitz) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second or third-order and above). The terms in the truncated series can then each be approximated by e.g. polynomials or splines (REFS) and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models with higher dimensionality than full-order emulators. Monte Carlo filtering Sensitivity analysis via Monte Carlo filtering is also a sampling-based approach, whose objective is to identify regions in the space of the input factors corresponding to particular values (e.g., high or low) of the output. Related concepts Sensitivity analysis is closely related with uncertainty analysis; while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify what source of uncertainty weighs more on the study's conclusions. The problem setting in sensitivity analysis also has strong similarities with the field of design of experiments. In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments. Sensitivity auditing It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. 
In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive inter-alia from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes more or less implicit assumptions, which could be political (e.g. which group needs to be protected) all the way to technical (e.g. which variable can be treated as a constant). In order to take these concerns into due consideration the instruments of SA have been extended to provide an assessment of the entire knowledge and model generating process. This approach has been called 'sensitivity auditing'. It takes inspiration from NUSAP, a method used to qualify the worth of quantitative information with the generation of `Pedigrees' of numbers. Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated to the evidence, will be the subject of partisan interests. Sensitivity auditing is recommended in the European Commission guidelines for impact assessment, as well as in the report Science Advice for Policy by European Academies. Pitfalls and difficulties Some common difficulties in sensitivity analysis include: Assumptions vs. inferences: In uncertainty and sensitivity analysis there is a crucial trade off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer: " I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful." Note Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be suppressed lest outputs become indeterminate. Not enough information to build probability distributions for the inputs: Probability distributions can be constructed from expert elicitation, although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis. Unclear purpose of the analysis: Different statistical tests and measures are applied to the problem and different factors rankings are obtained. The test should instead be tailored to the purpose of the analysis, e.g. one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output. 
Too many model outputs are considered: This may be acceptable for the quality assurance of sub-models but should be avoided when presenting the results of the overall analysis. Piecewise sensitivity: This is when one performs sensitivity analysis on one sub-model at a time. This approach is non conservative as it might overlook interactions among factors in different sub-models (Type II error). SA in international context The importance of understanding and managing uncertainty in model results has inspired many scientists from different research centers all over the world to take a close interest in this subject. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission (see e.g. the guidelines for impact assessment), the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and US Environmental Protection Agency's modeling guidelines. Specific applications of sensitivity analysis The following pages discuss sensitivity analyses in relation to specific applications: Environmental sciences Business (Corporate) finance Epidemiology Multi-criteria decision making Model calibration See also Causality Elementary effects method Experimental uncertainty analysis Fourier amplitude sensitivity testing Info-gap decision theory Interval FEM Perturbation analysis Probabilistic design Probability bounds analysis Robustification ROC curve Uncertainty quantification Variance-based sensitivity analysis Multiverse analysis Feature selection References Further reading Borgonovo, E. (2017). Sensitivity Analysis: An Introduction for the Management Scientist. International Series in Management Science and Operations Research, Springer New York. Pilkey, O. H. and L. Pilkey-Jarvis (2007), Useless Arithmetic. Why Environmental Scientists Can't Predict the Future. New York: Columbia University Press. Santner, T. J.; Williams, B. J.; Notz, W.I. (2003) Design and Analysis of Computer Experiments; Springer-Verlag. Haug, Edward J.; Choi, Kyung K.; Komkov, Vadim (1986) Design sensitivity analysis of structural systems. Mathematics in Science and Engineering, 177. Academic Press, Inc., Orlando, FL. Hall, C. A. S. and Day, J. W. (1977). Ecosystem Modeling in Theory and Practice: An Introduction with Case Histories. John Wiley & Sons, New York, NY. isbn=978-0-471-34165-9. External links Web site with material from SAMO conference series (1995-2025) Simulation Business intelligence terms Mathematical modeling Mathematical and quantitative methods (economics)
Sensitivity analysis
[ "Mathematics" ]
5,858
[ "Applied mathematics", "Mathematical modeling" ]
620,110
https://en.wikipedia.org/wiki/Raggare
Raggare is a subculture found mostly in Sweden and parts of Norway and Finland, and to a lesser extent in Denmark, Germany, and Austria. Raggare are related to the American greaser and rockabilly subcultures and are known for their love of hot rod cars and 1950s American pop culture. Loosely translated into English, the term is roughly equivalent to the American "greaser", English "rocker", and Australian "Bodgie" and "Widgie" culture; all share a common passion for mid-20th-century American cars, rockabilly-based music and related fashion (blue-collar in origin, consisting of the likes of white T-shirts, loose fitting denim trousers with rolled cuffs, and canvas top sneakers such as Keds or Converse Chucks, or low-topped boots of an industrial nature). While the raggare movement has its roots in late 1950s youth counterculture, today it is associated mainly with middle-aged men who enjoy meeting and showing off their retro American cars. However, the subculture retains its rural and small-town roots as well as its blue collar and low brow feel. The original phenomenon unleashed moral panic but the contemporary raggare subculture tends to be met with amusement or mild disapproval by mainstream society. Description Influences The Raggare subculture's influences are American popular culture of the 1950s, such as the movies Rebel Without a Cause with James Dean, and American Graffiti. Cars Cars are an important part of the subculture, especially V8-powered cars and other large cars from the United States. Statistically, the most common raggare car () is the 1960s Pontiac Bonneville. They are plentiful, classic, relatively cheap, and have a huge backseat so the Raggare can pile in all of their friends. Raggare have been described as closely related to the hot rod culture, but while hotrodders in the US have to do extensive modifications to their cars to stand out, raggare can use stock US cars and still stand out compared to the more sober Swedish cars. Some raggare also drive European cars from the 1950s, 1960s and the 1970s. According to an estimate by one Swedish car restorer, there are more restored 1950s American cars in Sweden than in the entire United States and although only two 1958 Cadillac convertibles were sold in Sweden there are now 200 of them in Sweden. Between 4000 and 5000 classic US cars were at one point imported to Sweden each year. The latest generation of raggare, the so-called pilsnerraggare such as the club Mattsvart who was the subject of the 2019 documentary "Raggarjävlar" ("Greaser scum") do not show much interest in restoring vintage cars, instead opting for driving around in trashed old US cars, drinking alcohol and playing loud music, not necessarily the rockabilly and classic rock traditionally preferred by raggare. Fashion The clothes and hairstyle are that of 1950s rockabilly. Blue jeans, cowboy boots, white T-shirts, sometimes with print (also used to store a pack of cigarettes by folding the sleeve), leather or denim jacket. The hair is styled using Brylcreem or some other pomade. Symbols The display of the battle flag of the Confederate States is popular in the subculture, as followers view it as a symbol of rebellion and American culture. They do not view it as a symbol of slavery or racism. History Formation of the raggare culture was aided by Sweden staying neutral during World War II and untouched by the war. 
As a result, Sweden's infrastructure remained intact and export economy boomed, which made it possible for the working-class Swedish youth to buy cars, in contrast to most of Europe, which needed to be rebuilt. When raggare first appeared in the 1950s, they caused a moral panic with concerns about the use of alcohol, violence, high-speed driving, and having sex in the back seat. Raggare gangs were seen as a serious problem. The film Raggare! covered the issue in 1959. One especially infamous raggare gang was Stockholm-based "Road Devils", formed in the late 1950s by Bosse "Gamen" Sandberg (1939-1994), which was very heavily publicized in the press. The name of the gang originated from a 1957 movie Hot Rod Rumble, which featured a gang by the same name. Later, raggare often got into fights with hippies and punks, something described in the punk rock song "Raggare Is a Bunch of Motherfuckers" by Rude Kids (and later re-recorded by Turbonegro). When The Sex Pistols played in Sweden on 28 July 1977, a group of raggare waited outside and cornered some young girls who came out from the show. The girls had safety pins through their cheeks, and the raggare ripped them out of their faces. The band was upstairs drinking beer when they heard about it. Sid Vicious wanted to go down and fight, and someone else suggested they should get the limousine and run them over. In the end, the gig promoter called the police. The Hjo band Reklamation was forced to cancel a gig after threats from raggare. Also, Rude Kids was forced to cancel a sold-out gig as the police didn't have the manpower to offer protection against raggare. When Rude Kids played in Stockholm the police had to bring in seven police cars to stop the raggare. When The Stranglers played in Sweden, their followers were caught making Molotov cocktails, and the police intervened after a fight broke out. In 1996, the Swedish post office issued a stamp featuring raggare. Public image Because of their mostly rural roots, retro-aesthetics and attitude towards sex, raggare are often depicted as poorly educated and financially unsuccessful. A famous example is the 1990s TV series, "Ronny and Ragge", a pair of stereotypical raggare who cruise around in a beat-up Ford Taunus. There are several periodic gatherings for raggare around Sweden. The Power Big Meet is the most famous, and is also one of the biggest American car meets in the world. In the media and other popular culture In 1975, then glam rocker Magnus Uggla made the song "Raggarna", which was a tribute to the culture. When performing live in late 1970s and early 1980s, raggare threw rocks and tried to thrash the arenas in which Uggla performed, accusing him of being a punk rocker due to his success with the more punk-oriented albums he released in the late 1970s. Eddie Meduza have performed songs like "Punkjävlar" ("Punk Bastards"), or "Ragga runt," a tribute to the Raggare subculture. Rude Kids made a song about raggare (later re-recorded by Turbonegro) called "Raggare Is a Bunch of Motherfuckers", as an answer to "punkjävlar" by Eddie Meduza. The large number of punk songs about raggare shows the conflict between the two subcultures. The 1959 film Raggare! was about raggare and the moral panic of the time. The TV series Ronny and Ragge is about two raggare who cruise around in a beat-up 1976 - 1994 Ford Taunus. Onkel Kånkel made a song about raggare behaviour during cruising called "Åka femtitalsbil" (later covered by Charta 77). The early Swedish punk band P.F. 
Commando has issued a song called "Raggare" on their 1978 Svenne Pop 7-inch EP.
Raggargänget (1962), with Ernst-Hugo Järegård and Sigge Fürst.
Massproduktion published a compilation album titled Vägra Raggarna Bensin – Punk Från Provinserna.
On 1 May 1979 about 100 punks formed their own parade down Kungsgatan under the slogan "Vägra raggarna bensin" ("Refuse the raggare gasoline").
Nadja's brothers "Roffe", "Ragge" and Reinhold, in Bert.
Tjenare Kungen (2005).
In Welcome to Sweden, Bengt is a raggare, and delighted to meet his niece's American boyfriend because of it.
Raggarjävlar (2019) is a documentary about the new generation of raggare in the club Mattsvart from Köping.
See also
Car subcultures like Kustom Kulture, and more generally the Import scene
Biker subcultures like the rockers in the UK, the Bōsōzoku in Japan, and Chicanos in the US
Stereotypes like the Harry in Norway, Gopniks in eastern Europe and rednecks in the US
Youth cultures, like Nozem in the Netherlands, and Teddy Boys in the UK
Americanization, relating to the assimilation of American and Canadian cultures into a pan-European soil
Politically inspired subcultures, like Neo-Confederates, Nazi chic, and Skinheads
References
External links
Cultural Imperialism or Hyper-Americanization – Swedish raggare and Chicano Lowriders – article by Scott Holmquist
Raggare video guide
Raggare – @The Movie
Photos from Power Big Meet by Frank Aschberg
Stadin Raggarit – Finnish Raggare site, over 6000 pictures and many articles (in Finnish), etc.
Punktjafs: Raggare – music and period articles about punks and raggare
Culture of Sweden Swedish youth culture Transport culture Musical subcultures Working class in Europe 1950s cars Counterculture Counterculture of the 1950s Counterculture of the 1960s Counterculture of the 1970s
Raggare
[ "Physics" ]
2,001
[ "Physical systems", "Transport", "Transport culture" ]
620,229
https://en.wikipedia.org/wiki/SOS%20response
The SOS response is a global response to DNA damage in which the cell cycle is arrested and DNA repair and mutagenesis are induced. The system involves the RecA protein (Rad51 in eukaryotes). The RecA protein, stimulated by single-stranded DNA, is involved in the inactivation of the repressor (LexA) of SOS response genes thereby inducing the response. It is an error-prone repair system that contributes significantly to DNA changes observed in a wide range of species. Discovery The SOS response was articulated by Evelyn Witkin. Later, by characterizing the phenotypes of mutagenised E. coli, she and post doctoral student Miroslav Radman detailed the SOS response to UV radiation in bacteria. The SOS response to DNA damage was a seminal discovery because it was the first coordinated stress response to be elucidated. Mechanism During normal growth, the SOS genes are negatively regulated by LexA repressor protein dimers. Under normal conditions, LexA binds to a 20-bp consensus sequence (the SOS box) in the operator region for those genes. Some of these SOS genes are expressed at certain levels even in the repressed state, according to the affinity of LexA for their SOS box. Activation of the SOS genes occurs after DNA damage by the accumulation of single stranded (ssDNA) regions generated at replication forks, where DNA polymerase is blocked. RecA forms a filament around these ssDNA regions in an ATP-dependent fashion, and becomes activated. The activated form of RecA interacts with the LexA repressor to facilitate the LexA repressor's self-cleavage from the operator. Once the pool of LexA decreases, repression of the SOS genes goes down according to the level of LexA affinity for the SOS boxes. Operators that bind LexA weakly are the first to be fully expressed. In this way LexA can sequentially activate different mechanisms of repair. Genes having a weak SOS box (such as lexA, recA, uvrA, uvrB, and uvrD) are fully induced in response to even weak SOS-inducing treatments. Thus the first SOS repair mechanism to be induced is nucleotide excision repair (NER), whose aim is to fix DNA damage without commitment to a full-fledged SOS response. If, however, NER does not suffice to fix the damage, the LexA concentration is further reduced, so the expression of genes with stronger LexA boxes (such as sulA, umuD, umuC – these are expressed late) is induced. SulA stops cell division by binding to FtsZ, the initiating protein in this process. This causes filamentation, and the induction of UmuDC-dependent mutagenic repair. As a result of these properties, some genes may be partially induced in response to even endogenous levels of DNA damage, while other genes appear to be induced only when high or persistent DNA damage is present in the cell. Antibiotic resistance Research has shown that the SOS response system can lead to mutations which can lead to resistance to antibiotics. The increased rate of mutation during the SOS response is caused by three low-fidelity DNA polymerases: Pol II, Pol IV and Pol V. Researchers are now targeting these proteins with the aim of creating drugs that prevent SOS repair. By doing so, the time needed for pathogenic bacteria to evolve antibiotic resistance could be extended, thus improving the long term viability of some antibiotic drugs. As well as genetic resistance the SOS response can also promote phenotypic resistance. Here, the genome is preserved whilst other non-genetic factors are altered to enable the bacteria to survive. 
The SOS dependent tisB-istR toxin-antitoxin system has, for example, been linked to DNA damage-dependent persister cell induction. Genotoxicity testing In Escherichia coli, different classes of DNA-damaging agents can initiate the SOS response, as described above. Taking advantage of an operon fusion placing the lac operon (responsible for producing beta-galactosidase, a protein which degrades lactose) under the control of an SOS-related protein, a simple colorimetric assay for genotoxicity is possible. A lactose analog is added to the bacteria, which is then degraded by beta-galactosidase, thereby producing a colored compound which can be measured quantitatively through spectrophotometry. The degree of color development is an indirect measure of the beta-galactosidase produced, which itself is directly related to the amount of DNA damage. The E. coli are further modified in order to have a number of mutations including a uvrA mutation which renders the strain deficient in excision repair, increasing the response to certain DNA-damaging agents, as well as an rfa mutation, which renders the bacteria lipopolysaccharide-deficient, allowing better diffusion of certain chemicals into the cell in order to induce the SOS response. Commercial kits which measures the primary response of the E. coli cell to genetic damage are available and may be highly correlated with the Ames Test for certain materials. Cyanobacteria Cyanobacteria, the only prokaryotes capable of oxygen evolving photosynthesis, are major producers of the Earth’s oxygenic atmosphere. The marine cyanobacteria Prochlorococcus and Synechococcus appear to have an E. coli like SOS system for repair of DNA, since they encode genes homologous to key E. coli SOS genes such as lexA and sulA. Additional images See also Induction of lysis in lambda phage References External links 1975 in science DNA repair
SOS response
[ "Biology" ]
1,197
[ "Molecular genetics", "DNA repair", "Cellular processes" ]
620,330
https://en.wikipedia.org/wiki/Polyphasic%20sleep
Polyphasic sleep is the practice of sleeping during multiple periods over the course of 24 hours, in contrast to monophasic sleep, which is one period of sleep within 24 hours. Biphasic (or diphasic, bifurcated, or bimodal) sleep refers to two periods, while polyphasic usually means more than two. Segmented sleep and divided sleep may refer to polyphasic or biphasic sleep, but may also refer to interrupted sleep, where the sleep has one or several shorter periods of wakefulness, as was the norm for night sleep in pre-industrial societies. A common form of biphasic or polyphasic sleep includes a nap, which is a short period of sleep, typically taken between the hours of 9 am and 9 pm as an adjunct to the usual nocturnal sleep period. Napping behavior during daytime hours is the simplest form of polyphasic sleep, especially when the naps are taken on a daily basis. The term polyphasic sleep was first used in the early 20th century by psychologist J. S. Szymanski, who observed daily fluctuations in activity patterns. It does not imply any particular sleep schedule. The circadian rhythm disorder known as irregular sleep-wake syndrome is an example of polyphasic sleep in humans. Polyphasic sleep is common in many animals, and is believed to be the ancestral sleep state for mammals, although simians are monophasic. The term polyphasic sleep is also used by an online community that experiments with alternative sleeping schedules in an attempt to increase productivity. There is no scientific evidence that this practice is effective or beneficial. Biphasic sleep Biphasic sleep (also referred to as segmented sleep, or bimodal sleep) is a pattern of sleep which is divided into two segments, or phases, in a 24-hour period. Single nap (siesta) One classic cultural example of a biphasic sleep pattern is the practice of siesta, which is a nap taken in the early afternoon, often after the midday meal. Such a period of sleep is a common tradition in some countries, particularly those where the weather is warm. The siesta is historically common throughout the Mediterranean and Southern Europe. It is the traditional daytime sleep of China, India, South Africa, Italy, Greece, Spain and, through Spanish influence, the Philippines and many Hispanic American countries. In modern times, fewer Spaniards take a daily siesta, ostensibly due to more demanding work schedules. Historical "first sleep" and "second sleep" A separate biphasic sleep pattern is sometimes described as segmented sleep, involved sleeping in two phases, separated by about an hour of wakefulness. This pattern was common in preindustrial societies, and it was most common to sleep early ("first sleep"), wake around midnight, and return to bed later ("second sleep"). Along with a nap in the day, it has been argued that this is the natural pattern of human sleep in long winter nights. A case has been made that maintaining such a sleep pattern may be important in regulating stress. Historian A. Roger Ekirch has argued that before the Industrial Revolution, interrupted sleep was dominant in Western civilization. Ekirch asserts that the intervening period of wakefulness was used to pray and reflect, and to interpret dreams, which were more vivid at that hour than upon waking in the morning. This was also a favorite time for scholars and poets to write uninterrupted, whereas still others visited neighbors, engaged in sexual activity, or committed petty crime. 
He draws evidence from more than 500 references to a segmented sleeping pattern in documents from the ancient, medieval, and modern world. Other historians, such as Craig Koslofsky, have endorsed Ekirch's analysis. Ekirch suggests that it is due to the modern use of electric lighting that most modern humans do not practice interrupted sleep. Some have proposed that a sleep pattern based on the historical biphasic sleep schedule might be beneficial for stress or improve health markers. Ekirch does not advocate for this, stating that the current uninterrupted sleep pattern is the ideal schedule for the modern world. Experimental schedules Everyman schedule The Everyman schedule involves sleeping 3 hours during the night ("core sleep"), and taking three 20-minute naps during the day. This totals 4 hours of sleep in a 24-hour period. Uberman schedule The Uberman sleep schedule consists of a 30-minute nap every four hours, totaling 3 hours of sleep in a 24-hour period. Other variations of this sleep pattern involve 8 naps throughout the day, or 20-minute sleep intervals as opposed to 30 minutes. Dymaxion schedule Buckminster Fuller described a regimen consisting of 30-minute naps every six hours. The short article about Fuller's nap schedule in Time in 1943, which referred to the schedule as "intermittent sleeping", says that he maintained it for two years, and notes that "he had to quit because his schedule conflicted with that of his business associates, who insisted on sleeping like other men." This schedule is likely the most extreme type of polyphasic sleep schedule, totaling only two hours of sleep in a 24-hour period. In extreme situations In crises and other extreme conditions, people may not be able to achieve the recommended seven to nine hours of sleep per day. Systematic napping may be considered necessary in such situations. Claudio Stampi, as a result of his interest in long-distance solo boat racing, has studied the systematic timing of short naps as a means of ensuring optimal performance in situations where extreme sleep deprivation is inevitable, but he does not advocate ultrashort napping as a lifestyle. Scientific American Frontiers (PBS) has reported on Stampi's 49-day experiment where a young man napped for a total of three hours per day. It purportedly shows that all stages of sleep were included. Stampi has written about his research in his book Why We Nap: Evolution, Chronobiology, and Functions of Polyphasic and Ultrashort Sleep (1992). In 1989 he published results of a field study in the journal Work & Stress, concluding that "polyphasic sleep strategies improve prolonged sustained performance" under continuous work situations. In addition, other long-distance solo sailors have documented their techniques for maximizing wake time on the open seas. One account documents the process by which a solo sailor broke his sleep into between six and seven naps per day. The naps would not be placed equiphasically, instead occurring more densely during night hours. The U.S. military has studied fatigue countermeasures. An Air Force report states: Similarly, the Canadian Marine pilots in their trainer's handbook report that: NASA, in cooperation with the National Space Biomedical Research Institute, has funded research on napping. Despite NASA recommendations that astronauts sleep eight hours a day when in space, they usually have trouble sleeping eight hours at a stretch, so the agency needs to know about the optimal length, timing and effect of naps. 
Professor David Dinges of the University of Pennsylvania School of Medicine led research in a laboratory setting on sleep schedules which combined various amounts of "anchor sleep", ranging from about four to eight hours in length, with no nap or daily naps of up to 2.5 hours. Longer naps were found to be better, with some cognitive functions benefiting more from napping than others. Vigilance and basic alertness benefited the least while working memory benefited greatly. Naps in the individual subjects' biological daytime worked well, but naps in their nighttime were followed by much greater sleep inertia lasting up to an hour. The Italian Air Force (Aeronautica Militare Italiana) also conducted experiments for their pilots. In schedules involving night shifts and fragmentation of duty periods through the entire day, a sort of polyphasic sleeping schedule was studied. Subjects were to perform two hours of activity followed by four hours of rest (sleep allowed), this was repeated four times throughout the 24-hour day. Subjects adopted a schedule of sleeping only during the final three rest periods in linearly increasing duration. The AMI published findings that "total sleep time was substantially reduced as compared to the usual 7–8 hour monophasic nocturnal sleep" while "maintaining good levels of vigilance as shown by the virtual absence of EEG microsleeps." EEG microsleeps are measurable and usually unnoticeable bursts of sleep in the brain while a subject appears to be awake. Nocturnal sleepers who sleep poorly may be heavily bombarded with microsleeps during waking hours, limiting focus and attention. Physiology The brain exhibits high levels of the pituitary hormone prolactin during the period of nighttime wakefulness, which may contribute to the feeling of peace that many people associate with it. In his 1992 study "In short photoperiods, human sleep is biphasic", Thomas Wehr had seven healthy men confined to a room for fourteen hours of darkness daily for a month. At first the participants slept for about eleven hours, presumably making up for their sleep debt. After this the subjects began to sleep much as people in pre-industrial times were claimed to have done. They would sleep for about four hours, wake up for two to three hours, then go back to bed for another four hours. They also took about two hours to fall asleep. Polyphasic sleep can be caused by irregular sleep-wake syndrome, a rare circadian rhythm sleep disorder which is usually caused by neurological abnormality, head injury or dementia. Much more common examples are the sleep of human infants and of many animals. Elderly humans often have disturbed sleep, including polyphasic sleep. In their 2006 paper "The Nature of Spontaneous Sleep Across Adulthood", Campbell and Murphy studied sleep timing and quality in young, middle-aged, and older adults. They found that, in free-running conditions, the average duration of major nighttime sleep was significantly longer in young adults than in the other groups. The paper states further: See also Tahajjud Tikkun Chatzot Watchkeeping References Further reading External links Polyphasic Sleep Wiki Sleep Circadian rhythm
Polyphasic sleep
[ "Biology" ]
2,085
[ "Behavior", "Sleep", "Circadian rhythm" ]
620,353
https://en.wikipedia.org/wiki/RecBCD
Exodeoxyribonuclease V (EC 3.1.11.5, RecBCD, Exonuclease V, Escherichia coli exonuclease V, E. coli exonuclease V, gene recBC endoenzyme, RecBC deoxyribonuclease, gene recBC DNase, gene recBCD enzymes) is an enzyme of E. coli that initiates recombinational repair from potentially lethal double strand breaks in DNA which may result from ionizing radiation, replication errors, endonucleases, oxidative damage, and a host of other factors. The RecBCD enzyme is both a helicase that unwinds, or separates the strands of DNA, and a nuclease that makes single-stranded nicks in DNA. It catalyses exonucleolytic cleavage (in the presence of ATP) in either 5′- to 3′- or 3′- to 5′-direction to yield 5′-phosphooligonucleotides. Structure The enzyme complex is composed of three different subunits called RecB, RecC, and RecD and hence the complex is named RecBCD (Figure 1). Before the discovery of the recD gene, the enzyme was known as “RecBC.” Each subunit is encoded by a separate gene: Function Both the RecD and RecB subunits are helicases, i.e., energy-dependent molecular motors that unwind DNA (or RNA in the case of other proteins). The RecB subunit in addition has a nuclease function. Finally, RecBCD enzyme (perhaps the RecC subunit) recognizes a specific sequence in DNA, 5'-GCTGGTGG-3', known as Chi (sometimes designated with the Greek letter χ). RecBCD is unusual amongst helicases because it has two helicases that travel with different rates and because it can recognize and be altered by the Chi DNA sequence. RecBCD avidly binds an end of linear double-stranded (ds) DNA. The RecD helicase travels on the strand with a 5' end at which the enzyme initiates unwinding, and RecB on the strand with a 3' end. RecB is slower than RecD, so that a single-stranded (ss) DNA loop accumulates ahead of RecB (Figure 2). This produces DNA structures with two ss tails (a shorter 3’ ended tail and a longer 5’ ended tail) and one ss loop (on the 3' ended strand) observed by electron microscopy. The ss tails can anneal to produce a second ss loop complementary to the first one; such twin-loop structures were initially referred to as “rabbit ears.” Mechanism of action During unwinding the nuclease in RecB can act in different ways depending on the reaction conditions, notably the ratio of the concentrations of Mg2+ ions and ATP. (1) If ATP is in excess, the enzyme simply nicks the strand with Chi (the strand with the initial 3' end) (Figure 2). Unwinding continues and produces a 3' ss tail with Chi near its terminus. This tail can be bound by RecA protein, which promotes strand exchange with an intact homologous DNA duplex. When RecBCD reaches the end of the DNA, all three subunits disassemble and the enzyme remains inactive for an hour or more; a RecBCD molecule that acted at Chi does not attack another DNA molecule. (2) If Mg2+ ions are in excess, RecBCD cleaves both DNA strands endonucleolytically, although the 5' tail is cleaved less often (Figure 3). When RecBCD encounters a Chi site on the 3' ended strand, unwinding pauses and digestion of the 3' tail is reduced. When RecBCD resumes unwinding, it now cleaves the opposite strand (i.e., the 5' tail) and loads RecA protein onto the 3’-ended strand. After completing reaction on one DNA molecule, the enzyme quickly attacks a second DNA, on which the same reactions occur as on the first DNA. 
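Because Chi is a fixed eight-base motif (5'-GCTGGTGG-3', as given above), candidate sites on a sequence can be listed with a few lines of code. The Python sketch below is only an illustration using a made-up DNA fragment; it reports occurrences of the motif on each strand and does not model RecBCD's orientation-dependent recognition or its enzymology.

CHI = "GCTGGTGG"   # Chi site recognized by RecBCD, as stated above

def reverse_complement(seq):
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

def find_chi_sites(seq, motif=CHI):
    """Return 0-based start positions of the motif on the given strand."""
    positions, start = [], seq.find(motif)
    while start != -1:
        positions.append(start)
        start = seq.find(motif, start + 1)
    return positions

dna = "ATGCGCTGGTGGTTACCGGCTGGTGGAAGCTT"   # hypothetical fragment
print("Chi on given strand:", find_chi_sites(dna))
print("Chi on complementary strand:", find_chi_sites(reverse_complement(dna)))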
Although neither reaction has been verified by analysis of intracellular DNA, due to the transient nature of reaction intermediates, genetic evidence indicates that the first reaction more nearly mimics that in cells. For example, the activity of Chi is influenced by nucleotides to its 3' side, both in cells and in reactions with ATP in excess but not with Mg2+ in excess [PMIDs 27401752, 27330137]. RecBCD mutants lacking detectable exonuclease activity retain high Chi hotspot activity in cells and nicking at Chi outside cells. A Chi site on one DNA molecule in cells reduces or eliminates Chi activity on another DNA, perhaps reflecting the Chi-dependent disassembly of RecBCD observed in vitro under conditions of excess ATP and nicking of DNA at Chi. Under both reaction conditions, the 3' strand remains intact downstream of Chi. The RecA protein is then actively loaded onto the 3' tail by RecBCD. At some undetermined point RecBCD dissociates from the DNA, although RecBCD can unwind at least 60 kb of DNA without falling off. RecA initiates exchange of the DNA strand to which it is bound with the identical, or nearly identical, strand in an intact DNA duplex; this strand exchange generates a joint DNA molecule, such as a D-loop (Figure 2). The joint DNA molecule is thought to be resolved either by replication primed by the invading 3’ ended strand containing Chi or by cleavage of the D-loop and formation of a Holliday junction. The Holliday junction can be resolved into linear DNA by the RuvABC complex or dissociated by the RecG protein. Each of these events can generate intact DNA with new combinations of genetic markers by which the parental DNAs may differ. This process, homologous recombination, completes the repair of the double-stranded DNA break. RecD1 and RecD2 RecD enzymes are divided into two groups RecD1 (known as RecD) and RecD2. Many organisms have a recD gene even though the other members of a recBCD complex, i. e. rec B and recC, are not present. For instance, the bacterium Deinococcus radiodurans, that has an extraordinary DNA repair capability, is an example of an organism that does not possess a recB or recC gene, and yet does have a recD gene. In the bacterium Escherichia coli, RecD protein is part of the well studied RecBCD complex that is necessary for recombinational DNA repair (as described above). In the bacterium Bacillus subtilis, RecD2 protein has a role as a modulator of replication restart and also a modulator of the RecA recombinase. RecD2 may inhibit unwanted recombination events when replication forks are stalled, and also may have a role in displacing RecA protein from recombination intermediates in order to permit advance of the replication fork. Applications RecBCD is a model enzyme for the use of single molecule fluorescence as an experimental technique used to better understand the function of protein-DNA interactions. The enzyme is also useful in removing linear DNA, either single- or double-stranded, from preparations of circular double-stranded DNA, since it requires a DNA end for activity. References External links EC 3.1.11 Escherichia coli genes DNA repair
RecBCD
[ "Biology" ]
1,606
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
620,355
https://en.wikipedia.org/wiki/RuvABC
RuvABC is a complex of three proteins that mediate branch migration and resolve the Holliday junction created during homologous recombination in bacteria. As such, RuvABC is critical to bacterial DNA repair. RuvA and RuvB bind to the four strand DNA structure formed in the Holliday junction intermediate, and migrate the strands through each other, using a putative spooling mechanism. The RuvAB complex can carry out DNA helicase activity, which helps unwind the duplex DNA. The binding of the RuvC protein to the RuvAB complex is thought to cleave the DNA strands, thereby resolving the Holliday junction. Protein complex The RuvABC is a complex of three proteins that resolve the Holliday junction formed during bacterial homologous recombination. In Escherichia coli bacteria, DNA replication forks stall at least once per cell cycle, so that DNA replication must be restarted if the cell is to survive. Replication restart is a multi-step process in E. coli that requires the sequential action of several proteins. When the progress of the replication fork is impeded the proteins single-stranded binding protein SSB and RecG helicase along with the RuvABC complex are required for rescue. The resolution of Holliday junctions that accumulate following replication on damaged DNA templates in E. coli requires the RuvABC complex. RuvA RuvA (Holliday junction branch migration complex subunit RuvA) is a DNA-binding protein that binds Holliday junctions with high affinity. The structure of the complex has been variously elucidated through X-ray crystallography and EM data, and suggest that the complex consists of either one or two RuvA tetramers, with charge lined grooves through which the incoming DNA is channelled. The structure also showed the presence of so-called 'acidic pins' in the centre of the tetramer, which serve to separate the DNA duplexes. Its crystal structure has been solved at 1.9A. RuvB RuvB (Holliday junction branch migration complex subunit RuvB) is an ATPase that is only active in the presence of DNA and compared to RuvA, RuvB has a low affinity for DNA. The RuvB proteins are thought to form hexameric rings on the exit points of the newly formed DNA duplexes, and it is proposed that they 'spool' the emerging DNA through the RuvA tetramer. RuvC RuvC (Crossover junction endodeoxyribonuclease RuvC) is the resolvase, which cleaves the Holliday junction. RuvC proteins have been shown to form dimers in solution and its structure has been solved at 2.5A. It is thought to bind either on the open, DNA exposed face of a single RuvA tetramer, or to replace one of the two tetramers. Binding is proposed to be mediated by an unstructured loop on RuvC, which becomes structured on binding RuvA. RuvC can be bound to the complex in either orientation, therefore resolving Holliday junctions in either a horizontal or vertical manner. See also RecBCD References Further reading Eggleston AK, Mitchell AH, and West SC (1997). “In Vitro Reconstitution of the Late Steps of Genetic Recombination in E. coli”. Cell. 89: 607–617. External links Proteins
RuvABC
[ "Chemistry" ]
711
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
620,362
https://en.wikipedia.org/wiki/Cointegrate
A cointegrate is the intermediate molecule formed when donor DNA and target DNA become covalently joined during the formation of a Holliday junction. Transposable elements are DNA sequences that can change their position within the genome, sometimes creating or reversing mutations. A number of bacterial transposons, especially those related to Tn3 and Tn552, encode two recombinases that participate in their transposition to other DNA. The initial step that mediates this process involves the formation of a cointegrate. The cointegrate is formed between the transposon-containing donor DNA and the target molecule. In this transpositional intermediate, the donor and target DNAs are joined together by copies of the duplicated transposon, one copy occurring at each donor–target junction. References DNA
Cointegrate
[ "Chemistry", "Biology" ]
160
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
620,529
https://en.wikipedia.org/wiki/Grader
A grader, also commonly referred to as a road grader, motor grader, or simply blade, is a form of heavy equipment with a long blade used to create a flat surface during grading. Although the earliest models were towed behind horses, and later tractors, most modern graders are self-propelled and thus technically "motor graders". Typical graders have three axles, with the steering wheels in front, followed by the grading blade or mouldboard, then a cab and engine atop tandem rear axles. Some graders also have front-wheel drives for improved performance. Some graders have optional rear attachments, such as a ripper, scarifier, or compactor. A blade forward of the front axle may also be added. For snowplowing and some dirt grading operations, a main blade extension can also be mounted. Capacities range from a blade width of 2.50 to 7.30 m (8 to 24  ft) and engines from 93–373 kW (125–500 hp). Certain graders can operate multiple attachments, or be designed for specialized tasks like underground mining. Function In civil engineering "rough grading" is performed by heavy equipment such as wheel tractor-scrapers and bulldozers. Graders are used to "finish grade", with the angle, tilt (or pitch), and height of their blade capable of being adjusted to a high level of precision. Graders are commonly used in the construction and maintenance of dirt and gravel roads. In constructing paved roads, they prepare a wide flat base course for the final road surface. Graders are also used to set native soil or gravel foundation pads to finish grade before the construction of large buildings. Graders can produce canted surfaces for drainage or safety. They may be used to produce drainage ditches with shallow V-shaped cross-sections on either side of highways. Steering is performed via a steering wheel, or a joystick capable of controlling both the angle and cant of the front wheels. Many models also allow frame articulation between the front and rear axles, which allows a smaller turning radius in addition to allowing the operator to adjust the articulation angle to aid in the efficiency of moving material. Other implement functions are typically hydraulically powered and can be directly controlled by levers, or by joystick inputs or electronic switches controlling electrohydraulic servo valves. Graders are also outfitted with modern digital grade control technologies, such as those manufactured by Topcon Positioning Systems, Inc., Trimble Navigation, Leica Geosystems, or Mikrofyn. These may combine both laser and GPS guidance to establish precise grade control and (potentially) "stateless" construction. Manufacturers such as John Deere have also begun to integrate these technologies during construction. History Early graders were drawn by humans and draft animals. The Fresno Scraper is a machine pulled by horses used for constructing canals and ditches in sandy soil. The design of the Fresno Scraper forms the basis of most modern earthmoving scrapers, having the ability to scrape and move a quantity of soil, and also to discharge it at a controlled depth, thus quadrupling the volume which could be handled manually. The Fresno scraper was invented in 1883 by James Porteous. Working with farmers in Fresno, California, he had recognised the dependence of the Central San Joaquin Valley on irrigation, and the need for a more efficient means of constructing canals and ditches in the sandy soil. 
In perfecting the design of his machine, Porteous made several revisions on his own and also traded ideas with William Deidrick, Frank Dusy, and Abijah McCall, who invented and held patents on similar scrapers. The era of motorization by traction engines, steam tractors, motor trucks, and tractors saw such towed graders grow in size and productivity. The first self-propelled grader was made in 1920 by the Russell Grader Manufacturing Company, which called it the Russell Motor Hi-Way Patrol. These early graders were created by adding the grader blade as an attachment to a generalist tractor unit. After purchasing the company in 1928, Caterpillar went on to truly integrate the tractor and grader into one design—at the same time replacing crawler tracks with wheels to yield the first rubber-tire self-propelled grader, the Caterpillar Auto Patrol, released in 1931. Regional uses In addition to their use in road construction, graders may also be used to perform roughly equivalent work. In some locales such as Northern Europe, Canada, and places in the United States, graders are often used in municipal and residential snow removal. In scrubland and grassland areas of Australia and Africa, graders are often an essential piece of equipment on ranches, large farms, and plantations to make dirt tracks where the absence of rocks and trees means bulldozers are not required. Manufacturers Case Construction Equipment Caterpillar Inc. Deere & Company Galion Iron Works HEPCO Komatsu Limited LiuGong Construction Machinery, LLC. Mitsubishi Heavy Industries New Holland Construction Sany SDLG Terex Corporation Volvo XCMG See also King road drag Land grading References External links Video with technical development from graders photos/videos from different types of grader works A Road-Scraper That Cuts Through Snow, Popular Science monthly, February 1919, page 26, Scanned by Google Books: https://books.google.com/books?id=7igDAAAAMBAJ&pg=PA26 http://www.wisegeek.com/what-are-road-graders.htm Construction equipment Engineering vehicles Heavy equipment Road construction Snow removal American inventions
Grader
[ "Engineering" ]
1,136
[ "Construction equipment", "Construction", "Road construction", "Engineering vehicles", "Industrial machinery" ]
620,604
https://en.wikipedia.org/wiki/Text%20mode
Text mode is a computer display mode in which content is internally represented on a computer screen in terms of characters rather than individual pixels. Typically, the screen consists of a uniform rectangular grid of character cells, each of which contains one of the characters of a character set; at the same time, contrasted to graphics mode or other kinds of computer graphics modes. Text mode applications communicate with the user by using command-line interfaces and text user interfaces. Many character sets used in text mode applications also contain a limited set of predefined semi-graphical characters usable for drawing boxes and other rudimentary graphics, which can be used to highlight the content or to simulate widget or control interface objects found in GUI programs. A typical example is the IBM code page 437 character set. An important characteristic of text mode programs is that they assume monospaced fonts, where every character has the same width on screen, which allows them to easily maintain the vertical alignment when displaying semi-graphical characters. This was an analogy of early mechanical printers which had fixed pitch. This way, the output seen on the screen could be sent directly to the printer maintaining the same format. Depending on the environment, the screen buffer can be directly addressable. Programs that display output on remote video terminals must issue special control sequences to manipulate the screen buffer. The most popular standards for such control sequences are ANSI and VT100. Programs accessing the screen buffer through control sequences may lose synchronization with the actual display so that many text mode programs have a redisplay everything command, often associated with the key combination. History Text mode video rendering came to prominence in the early 1970s, when video-oriented text terminals started to replace teleprinters in the interactive use of computers. Benefits The advantages of text modes as compared to graphics modes include lower memory consumption and faster screen manipulation. At the time text terminals were beginning to replace teleprinters in the 1970s, the extremely high cost of random-access memory in that period made it exorbitantly expensive to install enough memory for a computer to simultaneously store the current value of every pixel on a screen, to form what would now be called a framebuffer. Early framebuffers were standalone devices which cost tens of thousands of dollars, in addition to the expense of the advanced high-resolution displays to which they were connected. For applications that required simple line graphics but for which the expense of a framebuffer could not be justified, vector displays were a popular workaround. But there were many computer applications (e.g., data entry into a database) for which all that was required was the ability to render ordinary text in a quick and cost-effective fashion to a cathode-ray tube. Text mode avoids the problem of expensive memory by having dedicated display hardware re-render each line of text from characters into pixels with each scan of the screen by the cathode ray. In turn, the display hardware needs only enough memory to store the pixels equivalent to one line of text (or even less) at a time. 
Thus, the computer's screen buffer only stores and knows about the underlying text characters (hence the name "text mode") and the only location where the actual pixels representing those characters exist as a single unified image is the screen itself, as viewed by the user (thanks to the phenomenon of persistence of vision). For example, a screen buffer sufficient to hold a standard grid of 80 by 25 characters requires at least 2,000 bytes. Assuming a monochrome display, 8 bits per byte, and a standard size of 8 times 8 bits for each character, a framebuffer large enough to hold every pixel on the resulting screen would require at least 128,000 bits, 16,000 bytes, or just under 16 kilobytes. By the standards of modern computers, these may seem like trivial amounts of memory, but to put them in context, the original Apple II was released in 1977 with only four kilobytes of memory and a price of $1,300 in U.S. dollars (at a time when the minimum wage in the United States was only $2.30 per hour). Furthermore, from a business perspective, the business case for text terminals made no sense unless they could be produced and operated more cheaply than the paper-hungry teleprinters they were supposed to replace. Another advantage of text mode is that it has relatively low bandwidth requirements in remote terminal use. Thus, a text mode remote terminal can necessarily update the screen much faster than a graphics mode remote terminal linked to the same amount of bandwidth (and in turn will seem more responsive), since the remote server may only need to transmit a few dozen bytes for each screen update in text mode, as opposed to complex raster graphics remote procedure calls that may require the transmission and rendering of entire bitmaps. User-defined characters The border between text mode and graphical programs can sometimes be fuzzy, especially on the PC's VGA hardware, because many later text mode programs tried to push the model to the extreme by playing with the video controller. For example, they redefined the character set in order to create custom semi-graphical characters, or even created the appearance of a graphical mouse pointer by redefining the appearance of the characters over which the mouse pointer was shown at a given time. Text mode rendering with user-defined characters has also been useful for 2D computer and video games because the game screen can be manipulated much faster than with pixel-oriented rendering. Technical basis A video controller implementing a text mode usually uses two distinct areas of memory. Character memory or a pattern table contains a raster font in use, where each character is represented by a dot matrix (a matrix of bits), so the character memory could be considered as a three-dimensional bit array. Display matrix (a text buffer, screen buffer, or nametable) tracks which character is in each cell. In the simple case the display matrix can be just a matrix of code points (so named character pointer table), but it usually stores for each character position not only a code, but also attributes. In the case of raster scan output, which is the most common for computer monitors, the corresponding video signal is made by the character generator, a special electronic unit similar to devices with the same name used in video technology. The video controller has two registers: scan line counter and dot counter, serving as coordinates in the screen dot matrix. 
Each of them must be divided by corresponding glyph size to obtain an index in the display matrix; the remainder is an index in glyph matrix. If glyph size equals to 2n, then it is possible just to use n low bits of a binary register as an index in glyph matrix, and the rest of bits as an index in the display matrix — see the scheme. The character memory resides in a read-only memory in some systems. Other systems allow the use of RAM for this purpose, making it possible to redefine the typeface and even the character set for application-specific purposes. The use of RAM-based characters also facilitates some special techniques, such as the implementation of a pixel-graphics frame buffer by reserving some characters for a bitmap and writing pixels directly to their corresponding character memory. In some historical graphics chips, including the TMS9918, the MOS Technology VIC, and the Game Boy graphics hardware, this was actually the canonical way of doing pixel graphics. Text modes often assign attributes to the displayed characters. For example, the VT100 terminal allows each character to be underlined, brightened, blinking or inverse. Color-supporting devices usually allow the color of each character, and often the background color as well, to be selected from a limited palette of colors. These attributes can either coexist with the character indices or use a different memory area called color memory or attribute memory. Some text mode implementations also have the concept of line attributes. For example, the VT100-compatible line of text terminals supports the doubling of the width and height of the characters on individual text lines. PC common text modes Depending on the graphics adapter used, a variety of text modes are available on IBM PC–compatible computers. They are listed on the table below: MDA text could be emphasized with bright, underline, reverse and blinking attributes. Video cards in general are backward compatible, i.e. EGA supports all MDA and CGA modes, VGA supports MDA, CGA and EGA modes. By far the most common text mode used in DOS environments, and initial Windows consoles, is the default 80 columns by 25 rows, or 80×25, with 16 colors. This mode was available on practically all IBM and compatible personal computers. Several programs, such as terminal emulators, used only 80×24 for the main display and reserved the bottom row for a status bar. Two other VGA text modes, 80×43 and 80×50, exist but were very rarely used. The 40-column text modes were never very popular outside games and other applications designed for compatibility with television monitors, and were used only for demonstration purposes or with very old hardware. Character sizes and graphical resolutions for the extended VESA-compatible Super VGA text modes are manufacturer-dependent. Also on these display adapters, available colors can be halved from 16 to 8 when a second customized character set is employed (giving a total repertoire of 512 —instead the common 256— different graphic characters simultaneously displayed on the screen). Some cards (e.g. S3) supported custom very large text modes, like 100×37 or even 160×120. In Linux systems, a program called SVGATextMode is often used with SVGA cards to set up very large console text modes, such as for use with split-screen terminal multiplexers. 
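As a rough Python sketch of the indexing described under Technical basis above: the scan-line and dot counters are split into a display-matrix index and a position within the glyph, either by integer division or, for power-of-two glyph sizes, by taking the low-order bits. Two further helpers reproduce the 80×25 buffer arithmetic from the Benefits section and the two-byte character-plus-attribute cell layout commonly described for PC colour text modes. The glyph size, names, and layout details here are illustrative assumptions, not taken from any particular video adapter.

COLS, ROWS = 80, 25
GLYPH_W, GLYPH_H = 8, 8   # power-of-two glyph dimensions, matching the article's example

def locate(scan_line, dot):
    # General form: integer division gives the cell in the display matrix,
    # the remainder gives the row/column inside the glyph.
    text_row, glyph_row = divmod(scan_line, GLYPH_H)
    text_col, glyph_col = divmod(dot, GLYPH_W)
    return (text_row, text_col), (glyph_row, glyph_col)

def locate_bits(scan_line, dot):
    # Power-of-two shortcut: the low n bits index into the glyph and the
    # remaining high bits index into the display matrix.
    text_row, glyph_row = scan_line >> 3, scan_line & 0x7   # GLYPH_H = 2**3
    text_col, glyph_col = dot >> 3, dot & 0x7               # GLYPH_W = 2**3
    return (text_row, text_col), (glyph_row, glyph_col)

def buffer_bytes():
    # One byte per character cell, versus a monochrome framebuffer for the
    # same 80x25 screen rendered with 8x8 glyphs (640x200 pixels).
    text = COLS * ROWS                                   # 2,000 bytes
    framebuffer = COLS * GLYPH_W * ROWS * GLYPH_H // 8   # 16,000 bytes
    return text, framebuffer

def cell_offset(row, col):
    # Common PC colour text modes store each cell as two bytes: the character
    # code followed by an attribute byte (foreground/background colours).
    return (row * COLS + col) * 2

assert locate(37, 133) == locate_bits(37, 133) == ((4, 16), (5, 5))
assert buffer_bytes() == (2000, 16000)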
Modern usage Many modern programs with a graphical interface simulate the display style of text mode programs, notably when it is important to preserve the vertical alignment of text, e.g., during computer programming. There exist also software components to emulate text mode, such as terminal emulators or command line consoles. In Microsoft Windows, the Win32 console usually opens in emulated, graphical window mode. It can be switched to full screen, true text mode and vice versa by pressing the Alt and Enter keys together. This is no longer supported by the WDDM display drivers introduced with Windows Vista. Linux virtual consoles operate in text mode. Most Linux distributions support several virtual console screens, accessed by pressing Ctrl, Alt and a function key together. The AAlib open source library provides programs and routines that specialize in translating standard image and video files, such as PNG and WMV, and displaying them as a collection of ASCII characters. This enables a rudimentary viewing of graphics files on text mode systems, and on text mode web browsers such as Lynx. See also Text-based user interface Teletext Text semigraphics ASCII art Twin Hardware code page VGA text mode VGA-compatible text mode details Line-oriented printer Characters per line References External links High-Resolution console on Linux Further reading (NB. For example: Signetics 2513 MOS ROM.) Display technology
Text mode
[ "Engineering" ]
2,295
[ "Electronic engineering", "Display technology" ]
620,616
https://en.wikipedia.org/wiki/Catechol-O-methyltransferase
Catechol-O-methyltransferase (COMT; ) is one of several enzymes that degrade catecholamines (neurotransmitters such as dopamine, epinephrine, and norepinephrine), catecholestrogens, and various drugs and substances having a catechol structure. In humans, catechol-O-methyltransferase protein is encoded by the COMT gene. Two isoforms of COMT are produced: the soluble short form (S-COMT) and the membrane bound long form (MB-COMT). As the regulation of catecholamines is impaired in a number of medical conditions, several pharmaceutical drugs target COMT to alter its activity and therefore the availability of catecholamines. COMT was first discovered by the biochemist Julius Axelrod in 1957. Function Catechol-O-methyltransferase is involved in the inactivation of the catecholamine neurotransmitters (dopamine, epinephrine, and norepinephrine). The enzyme introduces a methyl group to the catecholamine, which is donated by S-adenosyl methionine (SAM). Any compound having a catechol structure, like catecholestrogens and catechol-containing flavonoids, are substrates of COMT. Levodopa, a precursor of catecholamines, is an important substrate of COMT. COMT inhibitors, like entacapone, save levodopa from COMT and prolong the action of levodopa. Entacapone is a widely used adjunct drug of levodopa therapy. When given with an inhibitor of dopa decarboxylase (carbidopa or benserazide), levodopa is optimally saved. This "triple therapy" is becoming a standard in the treatment of Parkinson's disease. Specific reactions catalyzed by COMT include: L-DOPA (levodopa) → 3-O-methyldopa Dopamine → 3-methoxytyramine DOPAC → HVA (homovanillic acid) Norepinephrine → normetanephrine Epinephrine → metanephrine Dihydroxyphenylethylene glycol (DOPEG) → methoxyhydroxyphenylglycol (MOPEG) 3,4-Dihydroxymandelic acid (DOMA) → vanillylmandelic acid (VMA) In the brain, COMT-dependent dopamine degradation is of particular importance in brain regions with low expression of the presynaptic dopamine transporter (DAT), such as the prefrontal cortex (In the PFC, dopamine is also removed by presynaptic norepinephrine transporters (NET) and degraded by monoamine oxidase.). Controversy exists about the predominance and orientation of membrane bound COMT in the CNS, that is, whether this COMT process is active intracellularly in postsynaptic neurons and glia, or oriented outward on the membrane, acting extracellularly on synaptic and extrasynaptic dopamine. Soluble COMT can also be found extracellularly, although extracellular COMT plays a less significant role in the CNS than it does peripherally. Despite its importance in neurons, COMT is actually primarily expressed in the liver. Genetics in humans The COMT protein is coded by the gene COMT. The gene is associated with allelic variants. The best-studied is Val158Met. Others are rs737865 and rs165599 that have been studied, e.g., for association with personality traits, response to antidepressant medications, and psychosis risk associated with Alzheimer's disease. COMT has been studied as a potential gene in the pathogenesis of schizophrenia; however meta-analyses find no association between the risk of schizophrenia and a number of polymorphisms, including Val158Met. Val158Met polymorphism A functional single-nucleotide polymorphism (a common normal variant) of the gene for catechol-O-methyltransferase results in a valine to methionine mutation at position 158 (Val158Met) rs4680. 
In vitro, the homozygous Val variant metabolizes dopamine at up to four times the rate of its methionine counterpart. However, in vivo the Met variant is overexpressed in the brain, resulting in a 40% decrease (rather than 75% decrease) in functional enzyme activity. The lower rates of catabolism for the Met allele results in higher synaptic dopamine levels following neurotransmitter release, ultimately increasing dopaminergic stimulation of the postsynaptic neuron. Given the preferential role of COMT in prefrontal dopamine degradation, the Val158Met polymorphism is thought to exert its effects on cognition by modulating dopamine signaling in the frontal lobes. The gene variant has been shown to affect cognitive tasks broadly related to executive function, such as set shifting, response inhibition, abstract thought, and the acquisition of rules or task structure. Comparable effects on similar cognitive tasks, the frontal lobes, and the neurotransmitter dopamine have also all been linked to schizophrenia. It has been proposed that an inherited variant of COMT is one of the genetic factors that may predispose someone to developing schizophrenia later in life. A more recent study cast doubt on the proposed connection between this gene and any alleged casual effect of cannabis on schizophrenia development. A non-synonymous single-nucleotide polymorphism rs4680 was found to be associated with depressed factor of Positive and Negative Syndrome Scale(PANSS) and efficiency of emotion in schizophrenia subjects. It is increasingly recognised that allelic variation at the COMT gene are also relevant for emotional processing, as they seem to influence the interaction between prefrontal and limbic regions. Research conducted at the Section of Neurobiology of Psychosis, Institute of Psychiatry, King's College London has demonstrated an effect of COMT both in patients with bipolar disorder and in their relatives, but these findings have not been replicated so far. The COMT Val158Met polymorphism also has a pleiotropic effect on emotional processing. Furthermore, the polymorphism has been shown to affect ratings of subjective well-being. When 621 women were measured with experience sample monitoring, which is similar to mood assessment as response to beeping watch, the met/met form confers double the subjective mental sensation of well-being from a wide variety of daily events. The ability to experience reward increased with the number of Met alleles. Also, the effect of different genotype was greater for events that were felt as more pleasant. The effect size of genotypic moderation was quite large: Subjects with the Val/Val genotype generated almost similar amounts of subjective well-being from a 'very pleasant event' as Met/Met subjects did from a 'bit pleasant event'. Genetic variation with functional impact on cortical dopamine tone has a strong influence on reward experience in the flow of daily life. In one study participants with the met/met phenotype described an increase of positive affect twice as high in amplitude as participants with the Val/Val phenotype following very pleasant or pleasant events. 
One review found that those with Val/Val tended to be more extroverted, more novelty-seeking, and less neurotic than those with the Met/Met allele Temporomandibular joint dysfunction Temporomandibular joint dysfunction (TMD) does not appear to be a classic genetic disorder, however variations in the gene that codes for COMT have been suggested to be responsible for inheritance of a predisposition to develop TMD during life. Nomenclature COMT is the name given to the gene that codes for this enzyme. The O in the name stands for oxygen, not for ortho. COMT inhibitors COMT inhibitors include entacapone, tolcapone, opicapone, and nitecapone. All except nitecapone are used in the treatment of Parkinson's disease. Risk of liver toxicity and related digestive disorders restricts the use of tolcapone. See also Dopamine Schizophrenia O-methyltransferase Additional images References Further reading External links EC 2.1.1 O-methylated natural phenols metabolism O-methylation
Catechol-O-methyltransferase
[ "Chemistry" ]
1,791
[ "O-methylation", "Methylation" ]
620,665
https://en.wikipedia.org/wiki/Oakum
Oakum is a preparation of tarred fibers used to seal gaps. Its traditional application was in shipbuilding for caulking or packing the joints of timbers in wooden vessels and the deck planking of iron and steel ships. Oakum was also used in plumbing for sealing joints in cast iron pipe, and in log cabins for chinking. In shipbuilding it was forced into the seams using a hammer and a caulking iron, then sealed into place with hot pitch. It is also referenced frequently as a medical supply for medieval surgeons, often used alongside bandages for sealing wounds. History The word oakum derives from Middle English , from Old English , from (separative and perfective prefix) + (akin to Old English , "comb")—literally "off-combings". Oakum was at one time recycled from old tarry ropes and cordage, which were painstakingly unravelled and reduced to fibre, termed "picking". The task of picking and preparation was a common occupation in prisons and workhouses, where the young or the old and infirm were put to work picking oakum if they were unsuited for heavier labour. Sailors undergoing naval punishment were also frequently sentenced to pick oakum, with each man made to pick of oakum a day. The work was tedious, slow and taxing on the worker's thumbs and fingers. In 1862, girls under 16 at Tothill Fields Bridewell had to pick a day, and boys under 16 had to pick . Over the age of 16, girls and boys had to pick per day respectively. The oakum was sold for £4 10s () per hundredweight (). At Coldbath Fields Prison, the men's counterpart to Tothill Fields, prisoners had to pick per day unless sentenced to hard labour, in which case they had to pick between of oakum per day. In modern times, the fibrous material used in oakum comes from virgin hemp or jute. In plumbing and marine applications, the fibers are impregnated with tar or a tar-like substance, traditionally pine tar (also called "Stockholm tar"), an amber-coloured pitch made from pine sap. Tar-like petroleum by-products can also be used for modern oakum. "White oakum" is made from untarred material, and was chiefly used as packing between brick and masonry in homes and building construction prior to World War II, as its breathability allows moisture to continue to wick and transfer through the material. Plumbing Oakum can be used to seal cast iron pipe drains. After setting the pipes together, workers pack oakum into the joints, then pour molten lead into the joint to create a permanent "lead and oakum" seal. The oakum swells and seals the joint, the tar in the oakum prevents rot, and the lead keeps the joint physically tight. Oakum present in older cast iron bell/spigot joints may also contain asbestos, requiring special methods for removal. Today, modern methods, such as rubber seals (for example, gaskets or o-rings), are more common. Cultural references In Herman Melville's novella Benito Cereno, crew members of a slave ship spend their idle hours picking oakum. Charles Dickens's novel Oliver Twist mentions the extraction of oakum by orphaned children in the workhouse. The oakum extracted is for use on navy ships, and the instructor says that the children are serving the country. The Innocents Abroad, a travel book by Mark Twain, also mentions in chapter 37 a "Baker's Boy/Famine Breeder" who eats soap and oakum, but prefers oakum, which makes his breath foul and teeth stuck up with tar. Jack London, in his book The People of the Abyss (1903), mentions picking oakum in the workhouses of London. 
Robert Jordan, in Winter's Heart, alludes to picking oakum as a punishment among the Sea Folk. Joshua Slocum, in Sailing Alone Around the World, describes caulking his ship, the Spray, with oakum. Guy de Chauliac, in The Major Surgery of Guy de Chauliac, frequently cites oakum as a medical supply in his treatments of wounds. Bernard Cornwell, in his Sharpe novels, refers to Richard Sharpe's childhood in a workhouse, picking fibres from old rope. References Hemp products Jute Shipbuilding Surgery
Oakum
[ "Engineering" ]
904
[ "Shipbuilding", "Marine engineering" ]
1,605,743
https://en.wikipedia.org/wiki/Fundamental%20pair%20of%20periods
In mathematics, a fundamental pair of periods is an ordered pair of complex numbers that defines a lattice in the complex plane. This type of lattice is the underlying object with which elliptic functions and modular forms are defined. Definition A fundamental pair of periods is a pair of complex numbers such that their ratio is not real. If considered as vectors in , the two are linearly independent. The lattice generated by and is This lattice is also sometimes denoted as to make clear that it depends on and It is also sometimes denoted by or or simply by The two generators and are called the lattice basis. The parallelogram with vertices is called the fundamental parallelogram. While a fundamental pair generates a lattice, a lattice does not have any unique fundamental pair; in fact, an infinite number of fundamental pairs correspond to the same lattice. Algebraic properties A number of properties, listed below, can be seen. Equivalence Two pairs of complex numbers and are called equivalent if they generate the same lattice: that is, if No interior points The fundamental parallelogram contains no further lattice points in its interior or boundary. Conversely, any pair of lattice points with this property constitute a fundamental pair, and furthermore, they generate the same lattice. Modular symmetry Two pairs and are equivalent if and only if there exists a matrix with integer entries and and determinant such that that is, so that This matrix belongs to the modular group This equivalence of lattices can be thought of as underlying many of the properties of elliptic functions (especially the Weierstrass elliptic function) and modular forms. Topological properties The abelian group maps the complex plane into the fundamental parallelogram. That is, every point can be written as for integers with a point in the fundamental parallelogram. Since this mapping identifies opposite sides of the parallelogram as being the same, the fundamental parallelogram has the topology of a torus. Equivalently, one says that the quotient manifold is a torus. Fundamental region Define to be the half-period ratio. Then the lattice basis can always be chosen so that lies in a special region, called the fundamental domain. Alternately, there always exists an element of the projective special linear group that maps a lattice basis to another basis so that lies in the fundamental domain. The fundamental domain is given by the set which is composed of a set plus a part of the boundary of where is the upper half-plane. The fundamental domain is then built by adding the boundary on the left plus half the arc on the bottom: Three cases pertain: If and , then there are exactly two lattice bases with the same in the fundamental region: and If , then four lattice bases have the same the above two , and , If , then there are six lattice bases with the same , , and their negatives. In the closure of the fundamental domain: and See also A number of alternative notations for the lattice and for the fundamental pair exist, and are often used in its place. See, for example, the articles on the nome, elliptic modulus, quarter period and half-period ratio. Elliptic curve Modular form Eisenstein series References Tom M. Apostol, Modular functions and Dirichlet Series in Number Theory (1990), Springer-Verlag, New York. (See chapters 1 and 2.) Jurgen Jost, Compact Riemann Surfaces (2002), Springer-Verlag, New York. (See chapter 2.) Riemann surfaces Modular forms Elliptic functions Lattice points
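In symbols, the definitions above can be restated briefly as follows; this is a short LaTeX summary for reference, using the conventional (but here assumed) notation $\omega_1, \omega_2$ for the pair and $\tau$ for the half-period ratio.

```latex
% Standard conventions assumed: \omega_1, \omega_2 for the fundamental pair, \tau for their ratio.
A fundamental pair of periods is a pair $(\omega_1,\omega_2)$ of complex numbers with
$\operatorname{Im}(\omega_2/\omega_1) \neq 0$. The lattice it generates is
\[
  \Lambda = \{\, m\omega_1 + n\omega_2 : m,n \in \mathbb{Z} \,\},
\]
and the fundamental parallelogram has vertices $0$, $\omega_1$, $\omega_1+\omega_2$, $\omega_2$.
Two pairs generate the same lattice exactly when they are related by an integer matrix of
determinant $\pm 1$:
\[
  \begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}
   = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
     \begin{pmatrix} \omega_1 \\ \omega_2 \end{pmatrix},
  \qquad a,b,c,d \in \mathbb{Z},\; ad-bc = \pm 1 .
\]
With $\tau = \omega_2/\omega_1$ normalised so that $\operatorname{Im}\tau > 0$, a basis can
always be chosen with $\tau$ in the usual fundamental domain
$\{\, \tau \in \mathbb{H} : |\tau| \ge 1,\ |\operatorname{Re}\tau| \le \tfrac12 \,\}$
of the modular group acting on the upper half-plane $\mathbb{H}$.
```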
Fundamental pair of periods
[ "Mathematics" ]
701
[ "Modular forms", "Lattice points", "Number theory" ]
1,605,807
https://en.wikipedia.org/wiki/Stackelberg%20competition
The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first and then the follower firms move sequentially (hence, it is sometimes described as the "leader-follower game"). It is named after the German economist Heinrich Freiherr von Stackelberg who published Marktform und Gleichgewicht [Market Structure and Equilibrium] in 1934, which described the model. In game theory terms, the players of this game are a leader and a follower and they compete on quantity. The Stackelberg leader is sometimes referred to as the Market Leader. There are some further constraints upon the sustaining of a Stackelberg equilibrium. The leader must know ex ante that the follower observes its action. The follower must have no means of committing to a future non-Stackelberg leader's action and the leader must know this. Indeed, if the 'follower' could commit to a Stackelberg leader action and the 'leader' knew this, the leader's best response would be to play a Stackelberg follower action. Firms may engage in Stackelberg competition if one has some sort of advantage enabling it to move first. More generally, the leader must have commitment power. Moving observably first is the most obvious means of commitment: once the leader has made its move, it cannot undo it—it is committed to that action. Moving first may be possible if the leader was the incumbent monopoly of the industry and the follower is a new entrant. Holding excess capacity is another means of commitment. Subgame perfect Nash equilibrium The Stackelberg model can be solved to find the subgame perfect Nash equilibrium or equilibria (SPNE), i.e. the strategy profile that serves best each player, given the strategies of the other player and that entails every player playing in a Nash equilibrium in every subgame. In very general terms, let the price function for the (duopoly) industry be ; price is simply a function of total (industry) output, so is where the subscript represents the leader and represents the follower. Suppose firm has the cost structure . The model is solved by backward induction. The leader considers what the best response of the follower is, i.e. how it will respond once it has observed the quantity of the leader. The leader then picks a quantity that maximises its payoff, anticipating the predicted response of the follower. The follower actually observes this and in equilibrium picks the expected quantity as a response. To calculate the SPNE, the best response functions of the follower must first be calculated (calculation moves 'backwards' because of backward induction). The profit of firm (the follower) is revenue minus cost. Revenue is the product of price and quantity and cost is given by the firm's cost structure, so profit is: . The best response is to find the value of that maximises given , i.e. given the output of the leader (firm ), the output that maximises the follower's profit is found. Hence, the maximum of with respect to is to be found. First differentiate with respect to : Setting this to zero for maximisation: The values of that satisfy this equation are the best responses. Now the best response function of the leader is considered. This function is calculated by considering the follower's output as a function of the leader's output, as just computed. The profit of firm (the leader) is , where is the follower's quantity as a function of the leader's quantity, namely the function calculated above. The best response is to find the value of that maximises given , i.e. 
given the best response function of the follower (firm ), the output that maximises the leader's profit is found. Hence, the maximum of with respect to is to be found. First, differentiate with respect to : Setting this to zero for maximisation: Examples The following example is very general. It assumes a generalised linear demand structure and imposes some restrictions on cost structures for simplicity's sake so the problem can be resolved. and for ease of computation. The follower's profit is: The maximisation problem resolves to (from the general case): Consider the leader's problem: Substituting for from the follower's problem: The maximisation problem resolves to (from the general case): Now solving for yields , the leader's optimal action: This is the leader's best response to the reaction of the follower in equilibrium. The follower's actual can now be found by feeding this into its reaction function calculated earlier: The Nash equilibria are all . It is clear (if marginal costs are assumed to be zero – i.e. cost is essentially ignored) that the leader has a significant advantage. Intuitively, if the leader was no better off than the follower, it would simply adopt a Cournot competition strategy. Plugging the follower's quantity , back into the leader's best response function will not yield . This is because once leader has committed to an output and observed the followers it always wants to reduce its output ex-post. However its inability to do so is what allows it to receive higher profits than under Cournot. Economic analysis An extensive-form representation is often used to analyze the Stackelberg leader-follower model. Also referred to as a “decision tree”, the model shows the combination of outputs and payoffs both firms have in the Stackelberg game. The image on the left depicts in extensive form a Stackelberg game. The payoffs are shown on the right. This example is fairly simple. There is a basic cost structure involving only marginal cost (there is no fixed cost). The demand function is linear and price elasticity of demand is 1. However, it illustrates the leader's advantage. The follower wants to choose to maximise its payoff . Taking the first order derivative and equating it to zero (for maximisation) yields as the maximum value of . The leader wants to choose to maximise its payoff . However, in equilibrium, it knows the follower will choose as above. So in fact the leader wants to maximise its payoff (by substituting for the follower's best response function). By differentiation, the maximum payoff is given by . Feeding this into the follower's best response function yields . Suppose marginal costs were equal for the firms (so the leader has no market advantage other than first move) and in particular . The leader would produce 2000 and the follower would produce 1000. This would give the leader a profit (payoff) of two million and the follower a profit of one million. Simply by moving first, the leader has accrued twice the profit of the follower. However, Cournot profits here are 1.78 million apiece (strictly, apiece), so the leader has not gained much, but the follower has lost. However, this is example-specific. There may be cases where a Stackelberg leader has huge gains beyond Cournot profit that approach monopoly profits (for example, if the leader also had a large cost structure advantage, perhaps due to a better production function). 
There may also be cases where the follower actually enjoys higher profits than the leader, but only because it, say, has much lower costs. This behaviour consistently work on duopoly markets even if the firms are asymmetrical. Credible and non-credible threats by the follower If, after the leader had selected its equilibrium quantity, the follower deviated from the equilibrium and chose some non-optimal quantity it would not only hurt itself, but it could also hurt the leader. If the follower chose a much larger quantity than its best response, the market price would lower and the leader's profits would be stung, perhaps below Cournot level profits. In this case, the follower could announce to the leader before the game starts that unless the leader chooses a Cournot equilibrium quantity, the follower will choose a deviant quantity that will hit the leader's profits. After all, the quantity chosen by the leader in equilibrium is only optimal if the follower also plays in equilibrium. The leader is, however, in no danger. Once the leader has chosen its equilibrium quantity, it would be irrational for the follower to deviate because it too would be hurt. Once the leader has chosen, the follower is better off by playing on the equilibrium path. Hence, such a threat by the follower would not be credible. However, in an (indefinitely) repeated Stackelberg game, the follower might adopt a punishment strategy where it threatens to punish the leader in the next period unless it chooses a non-optimal strategy in the current period. This threat may be credible because it could be rational for the follower to punish in the next period so that the leader chooses Cournot quantities thereafter. Stackelberg compared with Cournot The Stackelberg and Cournot models are similar because in both competition is on quantity. However, as seen, the first move gives the leader in Stackelberg a crucial advantage. There is also the important assumption of perfect information in the Stackelberg game: the follower must observe the quantity chosen by the leader, otherwise the game reduces to Cournot. With imperfect information, the threats described above can be credible. If the follower cannot observe the leader's move, it is no longer irrational for the follower to choose, say, a Cournot level of quantity (in fact, that is the equilibrium action). However, it must be that there is imperfect information and the follower is unable to observe the leader's move because it is irrational for the follower not to observe if it can once the leader has moved. If it can observe, it will so that it can make the optimal decision. Any threat by the follower claiming that it will not observe even if it can is as uncredible as those above. This is an example of too much information hurting a player. In Cournot competition, it is the simultaneity of the game (the imperfection of knowledge) that results in neither player (ceteris paribus) being at a disadvantage. Game-theoretic considerations As mentioned, imperfect information in a leadership game reduces to Cournot competition. However, some Cournot strategy profiles are sustained as Nash equilibria but can be eliminated as incredible threats (as described above) by applying the solution concept of subgame perfection. Indeed, it is the very thing that makes a Cournot strategy profile a Nash equilibrium in a Stackelberg game that prevents it from being subgame perfect. Consider a Stackelberg game (i.e. 
one which fulfills the requirements described above for sustaining a Stackelberg equilibrium) in which, for some reason, the leader believes that whatever action it takes, the follower will choose a Cournot quantity (perhaps the leader believes that the follower is irrational). If the leader played a Stackelberg action, (it believes) that the follower will play Cournot. Hence it is non-optimal for the leader to play Stackelberg. In fact, its best response (by the definition of Cournot equilibrium) is to play Cournot quantity. Once it has done this, the best response of the follower is to play Cournot. Consider the following strategy profiles: the leader plays Cournot; the follower plays Cournot if the leader plays Cournot and the follower plays Stackelberg if the leader plays Stackelberg and if the leader plays something else, the follower plays an arbitrary strategy (hence this actually describes several profiles). This profile is a Nash equilibrium. As argued above, on the equilibrium path play is a best response to a best response. However, playing Cournot would not have been the best response of the leader were it that the follower would play Stackelberg if it (the leader) played Stackelberg. In this case, the best response of the leader would be to play Stackelberg. Hence, what makes this profile (or rather, these profiles) a Nash equilibrium (or rather, Nash equilibria) is the fact that the follower would play non-Stackelberg if the leader were to play Stackelberg. However, this very fact (that the follower would play non-Stackelberg if the leader were to play Stackelberg) means that this profile is not a Nash equilibrium of the subgame starting when the leader has already played Stackelberg (a subgame off the equilibrium path). If the leader has already played Stackelberg, the best response of the follower is to play Stackelberg (and therefore it is the only action that yields a Nash equilibrium in this subgame). Hence the strategy profile – which is Cournot – is not subgame perfect. Comparison with other oligopoly models In comparison with other oligopoly models, The aggregate Stackelberg output is greater than the aggregate Cournot output, but less than the aggregate Bertrand output. The Stackelberg price is lower than the Cournot price, but greater than the Bertrand price. The Stackelberg consumer surplus is greater than the Cournot consumer surplus, but lower than the Bertrand consumer surplus. The aggregate Stackelberg output is greater than pure monopoly or cartel, but less than the perfectly competitive output. The Stackelberg price is lower than the pure monopoly or cartel price, but greater than the perfectly competitive price. Applications The Stackelberg concept has been extended to dynamic Stackelberg games. With the addition of time as a dimension, phenomena not found in static games were discovered, such as violation of the principle of optimality by the leader. In recent years, Stackelberg games have been applied in the security domain. In this context, the defender (leader) designs a strategy to protect a resource, such that the resource remains safe irrespective of the strategy adopted by the attacker (follower). Stackelberg differential games are also used to model supply chains and marketing channels. Other applications of Stackelberg games include heterogeneous networks, genetic privacy, robotics, autonomous driving, electrical grids, and integrated energy systems. 
See also Economic theory Cournot competition Bertrand competition Extensive form game Industrial organization Mathematical programming with equilibrium constraints References H. von Stackelberg, Market Structure and Equilibrium: 1st Edition Translation into English, Bazin, Urch & Hill, Springer 2011, XIV, 134 p. Fudenberg, D. and Tirole, J. (1993) Game Theory, MIT Press. (see Chapter 3, sect 1) Gibbons, R. (1992) A primer in game theory, Harvester-Wheatsheaf. (see Chapter 2, section 1B) Osborne, M.J. and Rubinstein, A. (1994) A Course in Game Theory, MIT Press (see pp. 97–98) Oligopoly Theory Made Simple, Chapter 6 of Surfing Economics by Huw Dixon. Eponyms in economics Game theory Non-cooperative games Competition (economics) Oligopoly
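To make the backward induction concrete, the following Python sketch solves the linear-demand case numerically. It is illustrative only: it assumes inverse demand P(Q) = a − Q and a common constant marginal cost c, and the values a = 5000 and c = 1000 are assumptions chosen so that the resulting payoffs reproduce the two-million/one-million figures quoted in the example above.

```python
# Illustrative sketch (not taken from the article): backward induction for the
# linear Stackelberg duopoly with inverse demand P(Q) = a - Q and marginal cost c.

def follower_best_response(q1, a, c):
    """Maximise (a - q1 - q2 - c) * q2 over q2; the first-order condition gives q2."""
    return max((a - c - q1) / 2, 0.0)

def stackelberg(a, c):
    """Leader maximises (a - q1 - q2(q1) - c) * q1, anticipating the follower's reply."""
    q1 = (a - c) / 2                        # leader's optimal commitment
    q2 = follower_best_response(q1, a, c)   # follower's reply, (a - c)/4
    price = a - q1 - q2
    return q1, q2, (price - c) * q1, (price - c) * q2

def cournot(a, c):
    """Simultaneous-move benchmark: symmetric Nash quantities of (a - c)/3 each."""
    q = (a - c) / 3
    price = a - 2 * q
    return q, q, (price - c) * q, (price - c) * q

if __name__ == "__main__":
    a, c = 5000.0, 1000.0   # assumed demand intercept and marginal cost
    print("Stackelberg:", stackelberg(a, c))  # (2000, 1000, 2,000,000, 1,000,000)
    print("Cournot:    ", cournot(a, c))      # (1333.3, 1333.3, ~1,777,778 each)
```

Running the script shows the first-mover advantage directly: the leader earns twice the follower's profit, while both Cournot firms earn about 1.78 million each.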
Stackelberg competition
[ "Mathematics" ]
3,093
[ "Game theory", "Non-cooperative games" ]
1,606,040
https://en.wikipedia.org/wiki/Fecal%20impaction
A fecal impaction or an impacted bowel is a solid, immobile bulk of feces that can develop in the rectum as a result of chronic constipation (a related term is fecal loading which refers to a large volume of stool in the rectum of any consistency). Fecal impaction is a common result of neurogenic bowel dysfunction and causes immense discomfort and pain. Its treatment includes laxatives, enemas, and pulsed irrigation evacuation (PIE) as well as digital removal. It is not a condition that resolves without direct treatment. Signs and symptoms Symptoms of a fecal impaction include the following: Chronic constipation Fecal incontinence-- paradoxical overflow diarrhea (encopresis) as a result of liquid stool passing around the obstruction Abdominal pain and bloating Loss of appetite Complications may include necrosis and ulcers of the rectal tissue, which if untreated can cause death. Causes There are many possible causes; these include a long period of physical inactivity, failure to consume adequate dietary fiber, dehydration, and deliberate retention of fecal matter. Opioids such as fentanyl, buprenorphine, methadone, codeine, oxycodone, hydrocodone, morphine, and hydromorphone as well as certain sedatives that reduce intestinal movement may cause fecal matter to become too large, hard and/or dry to expel. Specific conditions, such as irritable bowel syndrome, certain neurological disorders, paralytic ileus, gastroparesis, diabetes, enlarged prostate gland, distended colon, an ingested foreign object, inflammatory bowel diseases such as Crohn's disease and colitis, and autoimmune diseases such as amyloidosis, celiac disease, lupus, and scleroderma can cause a fecal impaction. Hypothyroidism can also cause chronic constipation because of sluggish, slower, or weaker colon contractions. Iron supplements or increased blood calcium levels are also potential causes. Spinal cord injury is a common cause of constipation, due to ileus. Diagnosis Prevention Reducing or replacing opiates, adequate intake of water, dietary fiber, and exercise. Treatment The treatment of fecal impaction requires both the remedy of the impaction and treatment to prevent recurrences. Decreased motility of the colon results in dry, hard stools that in the case of fecal impaction become compacted into a large, hard mass of stool that cannot be expelled from the rectum. Various methods of treatment attempt to remove the impaction by softening the stool, lubricating the stool, or breaking it into pieces small enough for removal. Enemas and osmotic laxatives can be used to soften the stool by increasing the water content until the stool is soft enough to be expelled. Osmotic laxatives such as magnesium citrate work within minutes to eight hours for onset of action, and even then they may not be sufficient to expel the stool. Osmotic laxatives can cause cramping and even severe pain as the patient's attempts to evacuate the contents of the rectum are blocked by the fecal mass. Polyethylene glycol (PEG 3350) may be used to increase the water content of the stool without cramping. This may take 24 to 48 hours, however, and it is not well suited to cases where the impaction needs to be removed immediately due to risk of complications or severe pain. Enemas (such as hyperosmotic saline) and suppositories (such as glycerine suppositories) work by increasing water content and stimulating peristalsis to aid in expulsion, and both work much more quickly than oral laxatives. 
Because enemas work in 2–15 minutes, they do not allow sufficient time for a large fecal mass to soften. Even if the enema is successful at dislodging the impacted stool, the impacted stool may remain too large to be expelled through the anal canal. Mineral oil enemas can assist by lubricating the stool for easier passage. In cases where enemas fail to remove the impaction, polyethylene glycol can be used to attempt to soften the mass over 24–48 hours, or if immediate removal of the mass is needed, manual disimpaction may be used. Manual disimpaction may be performed by lubricating the anus and using one gloved finger with a scoop-like motion to break up the fecal mass. Most often manual disimpaction is performed without general anaesthesia, although sedation may be used. In more involved procedures, general anaesthesia may be used, although the use of general anaesthesia increases the risk of damage to the anal sphincter. If all other treatments fail, surgery may be necessary. Another treatment method makes use of an enema and manual disimpaction via pulsed irrigation evacuation (PIE). By using pulsating water to enter into the colon to soften and break down the dense mass, PIE treats fecal impaction. Research shows that pulsed irrigation evacuation with the PIE MED device is successful in all tested patients in studies, making pulsed irrigation evacuation the most effective and reliable form of fecal impaction treatment. Individuals who have had one fecal impaction are at high risk of future impactions. Therefore, preventive treatment should be instituted in patients following the removal of the mass. Increasing dietary fiber, increasing fluid intake, exercising daily, and attempting regularly to defecate every morning after eating should be promoted in all patients. Often underlying medical conditions cause fecal impactions; these conditions should be treated to reduce the risk of future impactions. Many types of medications (most notably opioid pain medications, such as codeine) reduce motility of the colon, increasing the likelihood of fecal impactions. If possible, alternate medications should be prescribed that avoid the side effect of constipation. Given that all opioids can cause constipation, it is recommended that any patient placed on opioid pain medications be given medications to prevent constipation before it occurs. Daily medications can also be used to promote normal motility of the colon and soften stools. Daily use of laxatives or enemas should be avoided by most individuals as it can cause the loss of normal colon motility. However, for patients with chronic complications, daily medication under the direction of a physician may be needed. Polyethylene glycol 3350 can be taken daily to soften the stools without the significant risk of adverse effects that are common with other laxatives. In particular, stimulant laxatives should not be used frequently because they can cause dependence in which an individual loses normal colon function and is unable to defecate without taking a laxative. Frequent use of osmotic laxatives should be avoided as well as they can cause electrolyte imbalances. Fecaloma A fecaloma is a more extreme form of fecal impaction, giving the accumulation an appearance of a tumor. A fecaloma can develop as the fecal matter gradually stagnates and accumulates in the intestine and increases in volume until the intestine becomes deformed. It may occur in chronic obstruction of stool transit, as in megacolon and chronic constipation. 
Some diseases, such as Chagas disease, Hirschsprung's disease and others damage the autonomic nervous system in the colon's mucosa (Auerbach's plexus) and may cause extremely large or "giant" fecalomas, which must be surgically removed (disimpaction). Rarely, a fecalith will form around a hairball (Trichobezoar), or other absorbent or desiccant core. It can be diagnosed by: CT scan Projectional radiography Ultrasound Distal or sigmoid, fecalomas can often be disimpacted digitally or by a catheter which carries a flow of disimpaction fluid (water or other solvent or lubricant). Surgical intervention in the form of sigmoid colectomy or proctocolectomy and ileostomy may be required only when all conservative measures of evacuation fail. Attempts at removal can have severe and even lethal effects, such as the rupture of the colon wall by catheter or an acute angle of the fecaloma (stercoral perforation), followed by sepsis. It may also lead to stercoral perforation, a condition characterized by bowel perforation due to pressure necrosis from a fecal mass or fecaloma. See also Aerosol impaction Dental impaction Impaction (animals) References Further reading Feces Acute pain Constipation Rectal diseases
Fecal impaction
[ "Biology" ]
1,879
[ "Excretion", "Feces", "Animal waste products" ]
1,606,195
https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman%20algorithm
The Smith–Waterman algorithm performs local sequence alignment; that is, for determining similar regions between two strings of nucleic acid sequences or protein sequences. Instead of looking at the entire sequence, the Smith–Waterman algorithm compares segments of all possible lengths and optimizes the similarity measure. The algorithm was first proposed by Temple F. Smith and Michael S. Waterman in 1981. Like the Needleman–Wunsch algorithm, of which it is a variation, Smith–Waterman is a dynamic programming algorithm. As such, it has the desirable property that it is guaranteed to find the optimal local alignment with respect to the scoring system being used (which includes the substitution matrix and the gap-scoring scheme). The main difference to the Needleman–Wunsch algorithm is that negative scoring matrix cells are set to zero. Traceback procedure starts at the highest scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest scoring local alignment. Because of its cubic time complexity, it often cannot be practically applied to large-scale problems and is replaced in favor of computationally more efficient alternatives such as (Gotoh, 1982), (Altschul and Erickson, 1986), and (Myers and Miller, 1988). History In 1970, Saul B. Needleman and Christian D. Wunsch proposed a heuristic homology algorithm for sequence alignment, also referred to as the Needleman–Wunsch algorithm. It is a global alignment algorithm that requires calculation steps ( and are the lengths of the two sequences being aligned). It uses the iterative calculation of a matrix for the purpose of showing global alignment. In the following decade, Sankoff, Reichert, Beyer and others formulated alternative heuristic algorithms for analyzing gene sequences. Sellers introduced a system for measuring sequence distances. In 1976, Waterman et al. added the concept of gaps into the original measurement system. In 1981, Smith and Waterman published their Smith–Waterman algorithm for calculating local alignment. The Smith–Waterman algorithm is fairly demanding of time: To align two sequences of lengths and , time is required. Gotoh and Altschul optimized the algorithm to steps. The space complexity was optimized by Myers and Miller from to (linear), where is the length of the shorter sequence, for the case where only one of the many possible optimal alignments is desired. Chowdhury, Le, and Ramachandran later optimized the cache performance of the algorithm while keeping the space usage linear in the total length of the input sequences. Motivation In recent years, genome projects conducted on a variety of organisms generated massive amounts of sequence data for genes and proteins, which requires computational analysis. Sequence alignment shows the relations between genes or between proteins, leading to a better understanding of their homology and functionality. Sequence alignment can also reveal conserved domains and motifs. One motivation for local alignment is the difficulty of obtaining correct alignments in regions of low similarity between distantly related biological sequences, because mutations have added too much 'noise' over evolutionary time to allow for a meaningful comparison of those regions. Local alignment avoids such regions altogether and focuses on those with a positive score, i.e. those with an evolutionarily conserved signal of similarity. A prerequisite for local alignment is a negative expectation score. 
The expectation score is defined as the average score that the scoring system (substitution matrix and gap penalties) would yield for a random sequence. Another motivation for using local alignments is that there is a reliable statistical model (developed by Karlin and Altschul) for optimal local alignments. The alignment of unrelated sequences tends to produce optimal local alignment scores which follow an extreme value distribution. This property allows programs to produce an expectation value for the optimal local alignment of two sequences, which is a measure of how often two unrelated sequences would produce an optimal local alignment whose score is greater than or equal to the observed score. Very low expectation values indicate that the two sequences in question might be homologous, meaning they might share a common ancestor. Algorithm Let and be the sequences to be aligned, where and are the lengths of and respectively. Determine the substitution matrix and the gap penalty scheme. - Similarity score of the elements that constituted the two sequences - The penalty of a gap that has length Construct a scoring matrix and initialize its first row and first column. The size of the scoring matrix is . The matrix uses 0-based indexing. Fill the scoring matrix using the equation below. where is the score of aligning and , is the score if is at the end of a gap of length , is the score if is at the end of a gap of length , means there is no similarity up to and . Traceback. Starting at the highest score in the scoring matrix and ending at a matrix cell that has a score of 0, traceback based on the source of each score recursively to generate the best local alignment. Explanation Smith–Waterman algorithm aligns two sequences by matches/mismatches (also known as substitutions), insertions, and deletions. Both insertions and deletions are the operations that introduce gaps, which are represented by dashes. The Smith–Waterman algorithm has several steps: Determine the substitution matrix and the gap penalty scheme. A substitution matrix assigns each pair of bases or amino acids a score for match or mismatch. Usually matches get positive scores, whereas mismatches get relatively lower scores. A gap penalty function determines the score cost for opening or extending gaps. It is suggested that users choose the appropriate scoring system based on the goals. In addition, it is also a good practice to try different combinations of substitution matrices and gap penalties. Initialize the scoring matrix. The dimensions of the scoring matrix are 1+length of each sequence respectively. All the elements of the first row and the first column are set to 0. The extra first row and first column make it possible to align one sequence to another at any position, and setting them to 0 makes the terminal gap free from penalty. Scoring. Score each element from left to right, top to bottom in the matrix, considering the outcomes of substitutions (diagonal scores) or adding gaps (horizontal and vertical scores). If none of the scores are positive, this element gets a 0. Otherwise the highest score is used and the source of that score is recorded. Traceback. Starting at the element with the highest score, traceback based on the source of each score recursively, until 0 is encountered. The segments that have the highest similarity score based on the given scoring system is generated in this process. 
To obtain the second best local alignment, apply the traceback process starting at the second highest score outside the trace of the best alignment. Comparison with the Needleman–Wunsch algorithm The Smith–Waterman algorithm finds the segments in two sequences that have similarities while the Needleman–Wunsch algorithm aligns two complete sequences. Therefore, they serve different purposes. Both algorithms use the concepts of a substitution matrix, a gap penalty function, a scoring matrix, and a traceback process. Three main differences are: One of the most important distinctions is that no negative score is assigned in the scoring system of the Smith–Waterman algorithm, which enables local alignment. When any element has a score lower than zero, it means that the sequences up to this position have no similarities; this element will then be set to zero to eliminate influence from previous alignment. In this way, calculation can continue to find alignment in any position afterwards. The initial scoring matrix of Smith–Waterman algorithm enables the alignment of any segment of one sequence to an arbitrary position in the other sequence. In Needleman–Wunsch algorithm, however, end gap penalty also needs to be considered in order to align the full sequences. Substitution matrix Each base substitution or amino acid substitution is assigned a score. In general, matches are assigned positive scores, and mismatches are assigned relatively lower scores. Take DNA sequence as an example. If matches get +1, mismatches get -1, then the substitution matrix is: This substitution matrix can be described as: Different base substitutions or amino acid substitutions can have different scores. The substitution matrix of amino acids is usually more complicated than that of the bases. See PAM, BLOSUM. Gap penalty Gap penalty designates scores for insertion or deletion. A simple gap penalty strategy is to use fixed score for each gap. In biology, however, the score needs to be counted differently for practical reasons. On one hand, partial similarity between two sequences is a common phenomenon; on the other hand, a single gene mutation event can result in insertion of a single long gap. Therefore, connected gaps forming a long gap usually is more favored than multiple scattered, short gaps. In order to take this difference into consideration, the concepts of gap opening and gap extension have been added to the scoring system. The gap opening score is usually higher than the gap extension score. For instance, the default parameters in EMBOSS Water are: gap opening = 10, gap extension = 0.5. Here we discuss two common strategies for gap penalty. See Gap penalty for more strategies. Let be the gap penalty function for a gap of length : Linear A linear gap penalty has the same scores for opening and extending a gap: , where is the cost of a single gap. The gap penalty is directly proportional to the gap length. When linear gap penalty is used, the Smith–Waterman algorithm can be simplified to: The simplified algorithm uses steps. When an element is being scored, only the gap penalties from the elements that are directly adjacent to this element need to be considered. Affine An affine gap penalty considers gap opening and extension separately: , where is the gap opening penalty, and is the gap extension penalty. For example, the penalty for a gap of length 2 is . An arbitrary gap penalty was used in the original Smith–Waterman algorithm paper. It uses steps, therefore is quite demanding of time. 
Gotoh optimized the steps for an affine gap penalty to , but the optimized algorithm only attempts to find one optimal alignment, and the optimal alignment is not guaranteed to be found. Altschul modified Gotoh's algorithm to find all optimal alignments while maintaining the computational complexity. Later, Myers and Miller pointed out that Gotoh and Altschul's algorithm can be further modified based on the method that was published by Hirschberg in 1975, and applied this method. Myers and Miller's algorithm can align two sequences using space, with being the length of the shorter sequence. Chowdhury, Le, and Ramachandran later showed how to run Gotoh's algorithm cache-efficiently in linear space using a different recursive divide-and-conquer strategy than the one used by Hirschberg. The resulting algorithm runs faster than Myers and Miller's algorithm in practice due to its superior cache performance. Gap penalty example Take the alignment of sequences and as an example. When linear gap penalty function is used, the result is (Alignments performed by EMBOSS Water. Substitution matrix is DNAfull (similarity score: +5 for matching characters otherwise -4). Gap opening and extension are 0.0 and 1.0 respectively): When affine gap penalty is used, the result is (Gap opening and extension are 5.0 and 1.0 respectively): This example shows that an affine gap penalty can help avoid scattered small gaps. Scoring matrix The function of the scoring matrix is to conduct one-to-one comparisons between all components in two sequences and record the optimal alignment results. The scoring process reflects the concept of dynamic programming. The final optimal alignment is found by iteratively expanding the growing optimal alignment. In other words, the current optimal alignment is generated by deciding which path (match/mismatch or inserting gap) gives the highest score from the previous optimal alignment. The size of the matrix is the length of one sequence plus 1 by the length of the other sequence plus 1. The additional first row and first column serve the purpose of aligning one sequence to any positions in the other sequence. Both the first line and the first column are set to 0 so that end gap is not penalized. The initial scoring matrix is: Example Take the alignment of DNA sequences and as an example. Use the following scheme: Substitution matrix: Gap penalty: (a linear gap penalty of ) Initialize and fill the scoring matrix, shown as below. This figure shows the scoring process of the first three elements. The yellow color indicates the bases that are being considered. The red color indicates the highest possible score for the cell being scored. The finished scoring matrix is shown below on the left. The blue color shows the highest score. An element can receive score from more than one element, each will form a different path if this element is traced back. In case of multiple highest scores, traceback should be done starting with each highest score. The traceback process is shown below on the right. The best local alignment is generated in the reverse direction. The alignment result is: Implementation An implementation of the Smith–Waterman Algorithm, SSEARCH, is available in the FASTA sequence analysis package from UVA FASTA Downloads. 
This implementation includes Altivec accelerated code for PowerPC G4 and G5 processors that speeds up comparisons 10–20-fold, using a modification of the Wozniak, 1997 approach, and an SSE2 vectorization developed by Farrar making optimal protein sequence database searches quite practical. A library, SSW, extends Farrar's implementation to return alignment information in addition to the optimal Smith–Waterman score. Accelerated versions FPGA Cray demonstrated acceleration of the Smith–Waterman algorithm using a reconfigurable computing platform based on FPGA chips, with results showing up to 28x speed-up over standard microprocessor-based solutions. Another FPGA-based version of the Smith–Waterman algorithm shows FPGA (Virtex-4) speedups up to 100x over a 2.2 GHz Opteron processor. The TimeLogic DeCypher and CodeQuest systems also accelerate Smith–Waterman and Framesearch using PCIe FPGA cards. A 2011 Master's thesis includes an analysis of FPGA-based Smith–Waterman acceleration. In a 2016 publication OpenCL code compiled with Xilinx SDAccel accelerates genome sequencing, beats CPU/GPU performance/W by 12-21x, a very efficient implementation was presented. Using one PCIe FPGA card equipped with a Xilinx Virtex-7 2000T FPGA, the performance per Watt level was better than CPU/GPU by 12-21x. GPU Lawrence Livermore National Laboratory and the United States (US) Department of Energy's Joint Genome Institute implemented an accelerated version of Smith–Waterman local sequence alignment searches using graphics processing units (GPUs) with preliminary results showing a 2x speed-up over software implementations. A similar method has already been implemented in the Biofacet software since 1997, with the same speed-up factor. Several GPU implementations of the algorithm in NVIDIA's CUDA C platform are also available. When compared to the best known CPU implementation (using SIMD instructions on the x86 architecture), by Farrar, the performance tests of this solution using a single NVidia GeForce 8800 GTX card show a slight increase in performance for smaller sequences, but a slight decrease in performance for larger ones. However, the same tests running on dual NVidia GeForce 8800 GTX cards are almost twice as fast as the Farrar implementation for all sequence sizes tested. A newer GPU CUDA implementation of SW is now available that is faster than previous versions and also removes limitations on query lengths. See CUDASW++. Eleven different SW implementations on CUDA have been reported, three of which report speedups of 30X. Finally, other GPU-accelerated implementations of the Smith-Waterman can be found in NVIDIA Parabricks, NVIDIA's software suite for genome analysis. SIMD In 2000, a fast implementation of the Smith–Waterman algorithm using the single instruction, multiple data (SIMD) technology available in Intel Pentium MMX processors and similar technology was described in a publication by Rognes and Seeberg. In contrast to the Wozniak (1997) approach, the new implementation was based on vectors parallel with the query sequence, not diagonal vectors. The company Sencel Bioinformatics has applied for a patent covering this approach. Sencel is developing the software further and provides executables for academic use free of charge. A SSE2 vectorization of the algorithm (Farrar, 2007) is now available providing an 8-16-fold speedup on Intel/AMD processors with SSE2 extensions. When running on Intel processor using the Core microarchitecture the SSE2 implementation achieves a 20-fold increase. 
Farrar's SSE2 implementation is available as the SSEARCH program in the FASTA sequence comparison package. The SSEARCH is included in the European Bioinformatics Institute's suite of similarity searching programs. Danish bioinformatics company CLC bio has achieved speed-ups of close to 200 over standard software implementations with SSE2 on an Intel 2.17 GHz Core 2 Duo CPU, according to a publicly available white paper. Accelerated version of the Smith–Waterman algorithm, on Intel and Advanced Micro Devices (AMD) based Linux servers, is supported by the GenCore 6 package, offered by Biocceleration. Performance benchmarks of this software package show up to 10 fold speed acceleration relative to standard software implementation on the same processor. Currently the only company in bioinformatics to offer both SSE and FPGA solutions accelerating Smith–Waterman, CLC bio has achieved speed-ups of more than 110 over standard software implementations with CLC Bioinformatics Cube. The fastest implementation of the algorithm on CPUs with SSSE3 can be found the SWIPE software (Rognes, 2011), which is available under the GNU Affero General Public License. In parallel, this software compares residues from sixteen different database sequences to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. It is faster than BLAST when using the BLOSUM50 matrix. An implementation of Smith–Waterman named diagonalsw, in C and C++, uses SIMD instruction sets (SSE4.1 for the x86 platform and AltiVec for the PowerPC platform). It is released under an open-source MIT License. Cell Broadband Engine In 2008, Farrar described a port of the Striped Smith–Waterman to the Cell Broadband Engine and reported speeds of 32 and 12 GCUPS on an IBM QS20 blade and a Sony PlayStation 3, respectively. Limitations Fast expansion of genetic data challenges speed of current DNA sequence alignment algorithms. Essential needs for an efficient and accurate method for DNA variant discovery demand innovative approaches for parallel processing in real time. See also Bioinformatics Sequence alignment Sequence mining Needleman–Wunsch algorithm Levenshtein distance BLAST FASTA References External links JAligner — an open source Java implementation of the Smith–Waterman algorithm B.A.B.A. — an applet (with source) which visually explains the algorithm FASTA/SSEARCH — services page at the EBI UGENE Smith–Waterman plugin — an open source SSEARCH compatible implementation of the algorithm with graphical interface written in C++ OPAL — an SIMD C/C++ library for massive optimal sequence alignment diagonalsw — an open-source C/C++ implementation with SIMD instruction sets (notably SSE4.1) under the MIT license SSW — an open-source C++ library providing an API to an SIMD implementation of the Smith–Waterman algorithm under the MIT license melodic sequence alignment — a javascript implementation for melodic sequence alignment DRAGMAP A C++ port of the Illumina DRAGEN FPGA implementation Bioinformatics algorithms Computational phylogenetics Sequence alignment algorithms Dynamic programming
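As a concrete illustration of the scoring recurrence and traceback described above, here is a minimal pure-Python sketch using a linear gap penalty. The scoring parameters (match +3, mismatch −3, gap −2) and the demonstration sequences are assumptions chosen for illustration, not the values of the article's own worked example, and the sketch makes no attempt at the optimisations (Gotoh, SIMD, FPGA, GPU) surveyed above.

```python
# Minimal Smith-Waterman sketch with a linear gap penalty (illustrative assumptions).

def smith_waterman(a, b, match=3, mismatch=-3, gap=-2):
    """Return the best local alignment score and the two aligned substrings."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]    # scoring matrix; first row/column stay 0
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = H[i - 1][j] + gap            # gap inserted in sequence b
            left = H[i][j - 1] + gap          # gap inserted in sequence a
            H[i][j] = max(0, diag, up, left)  # negative scores are reset to zero
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)

    # Traceback: walk back from the highest-scoring cell until a zero cell is reached.
    out_a, out_b = [], []
    i, j = best_pos
    while i > 0 and j > 0 and H[i][j] > 0:
        s = match if a[i - 1] == b[j - 1] else mismatch
        if H[i][j] == H[i - 1][j - 1] + s:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif H[i][j] == H[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return best, ''.join(reversed(out_a)), ''.join(reversed(out_b))

if __name__ == "__main__":
    # Assumed demo sequences; expected output is something like (13, 'GTT-AC', 'GTTGAC').
    print(smith_waterman("TGTTACGG", "GGTTGACTA"))
```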
Smith–Waterman algorithm
[ "Biology" ]
4,190
[ "Genetics techniques", "Computational phylogenetics", "Bioinformatics algorithms", "Bioinformatics", "Phylogenetics" ]
1,606,222
https://en.wikipedia.org/wiki/Dow%20process%20%28bromine%29
The Dow process is the electrolytic method of bromine extraction from brine, and was Herbert Henry Dow's second revolutionary process for generating bromine commercially. This process was patented in 1891. In the original invention, bromide-containing brines are treated with sulfuric acid and bleaching powder to oxidize bromide to bromine, which remains dissolved in the water. The aqueous solution is dripped onto burlap, and air is blown through causing bromine to volatilize. Bromine is trapped with iron turnings to give a solution of ferric bromide. Treatment with more iron metal converted the ferric bromide to ferrous bromide via comproportionation. Where desired, free bromine may be obtained by thermal decomposition of ferrous bromide. Before Dow entered the bromine business, brine was evaporated by heating with wood scraps and then crystallized sodium chloride was removed. An oxidizing agent was added, and bromine was formed in the solution. Then bromine was distilled. This was a very complicated and costly process. References Chemical processes
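The chemistry described in the prose can be summarised by balanced equations. The following is a hedged reconstruction for illustration only — the article itself gives no equations, and the assumption here is that acidified bleaching powder acts through liberated chlorine (hypochlorous acid would serve equivalently as the oxidant).

```latex
% Assumed overall chemistry; not quoted from the patent or the article.
\[
  \mathrm{Ca(OCl)Cl + H_2SO_4 \;\longrightarrow\; CaSO_4 + Cl_2 + H_2O}
  \qquad \text{(chlorine liberated from bleaching powder)}
\]
\[
  \mathrm{Cl_2 + 2\,Br^- \;\longrightarrow\; Br_2 + 2\,Cl^-}
  \qquad \text{(bromide oxidised to bromine in the brine)}
\]
\[
  \mathrm{3\,Br_2 + 2\,Fe \;\longrightarrow\; 2\,FeBr_3},
  \qquad
  \mathrm{Fe + 2\,FeBr_3 \;\longrightarrow\; 3\,FeBr_2}
  \qquad \text{(trapping on iron turnings, then comproportionation)}
\]
```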
Dow process (bromine)
[ "Chemistry" ]
236
[ "Chemical process engineering", "Chemical process stubs", "Chemical processes", "nan" ]
1,606,245
https://en.wikipedia.org/wiki/Rapid%20plant%20movement
Rapid plant movement encompasses movement in plant structures occurring over a very short period, usually under one second. For example, the Venus flytrap closes its trap in about 100 milliseconds. The traps of Utricularia are much faster, closing in about 0.5 milliseconds. The dogwood bunchberry's flower opens its petals and fires pollen in less than 0.5 milliseconds. The record is currently held by the white mulberry tree, with flower movement taking 25 microseconds, as pollen is catapulted from the stamens at velocities in excess of half the speed of sound—near the theoretical physical limits for movements in plants. These rapid plant movements differ from the more common, but much slower "growth-movements" of plants, called tropisms. Tropisms encompass movements that lead to physical, permanent alterations of the plant while rapid plant movements are usually reversible or occur over a shorter span of time. A variety of mechanisms are employed by plants in order to achieve these fast movements. Extremely fast movements such as the explosive spore dispersal techniques of Sphagnum mosses may involve increasing internal pressure via dehydration, causing a sudden propulsion of spores up or through the rapid opening of the "flower" opening triggered by insect pollination. Fast movement can also be demonstrated in predatory plants, where the mechanical stimulation of insect movement creates an electrical action potential and a release of elastic energy within the plant tissues. This release can be seen in the closing of a Venus flytrap, the curling of sundew leaves, and in the trapdoor action and suction of bladderworts. Slower movement, such as the folding of Mimosa pudica leaves, may depend on reversible, but drastic or uneven changes in water pressure in the plant tissues This process is controlled by the fluctuation of ions in and out of the cell, and the osmotic response of water to the ion flux. In 1880 Charles Darwin published The Power of Movement in Plants, his second-to-last work before his death. Plants that capture and consume prey Venus flytrap (Dionaea muscipula) Waterwheel plant (Aldrovanda vesiculosa) Bladderwort (Utricularia) Certain varieties of sundew (Drosera) Plants that move leaves and leaflets Plants that are able to rapidly move their leaves or their leaflets in response to mechanical stimulation such as touch (thigmonasty): Aeschynomene: Large leaf sensitive plant (Aeschynomene fluitans) Aeschynomene americana Aeschynomene deightonii Starfruit (Averrhoa carambola) Biophytum: Biophytum abyssinicum Biophytum helenae Biophytum petersianum Biophytum reinwardtii Biophytum sensitivum Chamaecrista: Partridge pea (Chamaecrista fasciculata) Sensitive partridge pea (Chamaecrista nictitans) Chamaecrista mimosoides L. Mimosa: Giant false sensitive plant (Mimosa diplotricha) Catclaw brier (Mimosa nuttallii) Giant sensitive plant (Mimosa pigra) Mimosa polyantha Mimosa polycarpa var. spegazzinii Mimosa polydactyla Sensitive plant (Mimosa pudica) Roemer sensitive briar (Mimosa roemeriana) Eastern sensitive plant, sensitive briar (Mimosa rupertiana) Mimosa uruguensis Neptunia: Yellow neptunia (Neptunia lutea) Sensitive neptunia (Neptunia oleracea) Neptunia plena Neptunia gracili Senna alata Plants that move their leaves or leaflets at speeds rapid enough to be perceivable with the naked eye: Telegraph plant (Codariocalyx motorius) Plants that spread seeds or pollen by rapid movement Squirting cucumber (Ecballium elaterium) Cardamine hirsuta and other Cardamine spp. 
have seed pods which explode when touched. Impatiens (Impatiens) Sandbox tree Triggerplant (all Stylidium species) Canadian dwarf cornel (aka dogwood bunchberry, Cornus canadensis) White mulberry (Morus alba) Orchids (all genus Catasetum) Dwarf mistletoe (Arceuthobium) Witch-hazel (Hamamelis) Some Fabaceae have beans that twist as they dry out, putting tension on the seam, which at some point will split suddenly and violently, flinging the seeds metres from the maternal plant. Marantaceae Minnieroot (Ruellia tuberosa) Peyote (Lophophora williamsii) stamens move in response to touch See also Kinesis (biology) Nastic movements Plant perception (physiology) Taxis Thigmonasty Tropism Plant bioacoustics References Plant physiology Plant intelligence
Rapid plant movement
[ "Biology" ]
1,058
[ "Plant physiology", "Plant intelligence", "Plants" ]
1,606,301
https://en.wikipedia.org/wiki/Hunter%20process
The Hunter process was the first industrial process to produce pure metallic titanium. It was invented in 1910 by Matthew A. Hunter, a chemist born in New Zealand who worked in the United States. The process involves reducing titanium tetrachloride (TiCl4) with sodium (Na) in a batch reactor with an inert atmosphere at a temperature of 1,000 °C. Diluted hydrochloric acid is then used to leach the salt from the product. TiCl4(g) + 4 Na(l) → 4 NaCl(l) + Ti(s) Prior to the Hunter process, all efforts to produce Ti metal afforded highly impure material, often titanium nitride (which resembles a metal). The Hunter process was used until 1993, when it was replaced by the more economical Kroll process, which was developed in the 1940s. In the Kroll process, TiCl4 is reduced by magnesium instead of sodium. Both methods share the same initial step, obtaining TiCl4 from ore by chlorination and carbothermic reduction of the oxygen. The Kroll process is now the most commonly used titanium smelting process. The Hunter process was conducted in either one or two steps. If a single step was used, the reaction equation is as above. Because of the large amount of heat generated by the reduction using sodium compared to using magnesium, and the difficulty in controlling the vapor pressure of liquid sodium, a two-step process may instead be used. The two-step process consisted of reducing TiCl4 to TiCl2 with half the stoichiometric amount of sodium required to reduce TiCl4 to Ti. Next, the TiCl2 in molten sodium chloride is transferred to a different container with the additional sodium required to form Ti. The two-step process proceeded according to the following two reactions: TiCl4(g) + 2Na(l) → TiCl2(l, in NaCl) + 2NaCl(l) TiCl2(l, in NaCl) + 2Na(l) → Ti(s) + 2NaCl(l) The titanium produced by the Hunter process is less contaminated by iron and other elements and adheres to the reduction container walls less than in the Kroll process. The titanium produced by the Hunter process is in the form of powder called sponge fines. This form is useful as a raw material in powder metallurgy. The main limiting factor for the usefulness of the Hunter process is the difficulty of separating the produced NaCl from the titanium. The vapor pressure of NaCl produced in the Hunter process is lower than the vapor pressure of MgCl2 produced by the Kroll process. Thus it is difficult to separate the NaCl from the titanium using distillation in an efficient manner. Therefore, the NaCl is removed by leaching in an aqueous solution. Recovering the byproduct (NaCl) from this aqueous solution is a process that requires additional energy. These issues motivated the discontinuation of the Hunter process in industry in 1993. Research into sodium reduction continues to this day due to the superior form and purity of the metal deposit produced when compared with the Kroll process. References Industrial processes Titanium processes
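For a sense of scale, the following Python sketch works through the stoichiometry of the one-step reduction and, for comparison, the Kroll reduction. It is a back-of-the-envelope illustration only; the rounded atomic masses are standard values and the figures are not process-engineering data.

```python
# Back-of-the-envelope stoichiometry for TiCl4 + 4 Na -> Ti + 4 NaCl (illustrative sketch).

M = {"Ti": 47.87, "Na": 22.99, "Mg": 24.31, "Cl": 35.45}   # rounded atomic masses, g/mol

def hunter_per_kg_ti():
    """Mass of sodium consumed and NaCl produced per kilogram of titanium."""
    na = 4 * M["Na"] / M["Ti"]                   # ~1.92 kg Na per kg Ti
    nacl = 4 * (M["Na"] + M["Cl"]) / M["Ti"]     # ~4.88 kg NaCl per kg Ti
    return na, nacl

def kroll_per_kg_ti():
    """For comparison: TiCl4 + 2 Mg -> Ti + 2 MgCl2."""
    mg = 2 * M["Mg"] / M["Ti"]                           # ~1.02 kg Mg per kg Ti
    mgcl2 = 2 * (M["Mg"] + 2 * M["Cl"]) / M["Ti"]        # ~3.98 kg MgCl2 per kg Ti
    return mg, mgcl2

if __name__ == "__main__":
    print("Hunter: %.2f kg Na, %.2f kg NaCl per kg Ti" % hunter_per_kg_ti())
    print("Kroll:  %.2f kg Mg, %.2f kg MgCl2 per kg Ti" % kroll_per_kg_ti())
```

The several kilograms of salt produced per kilogram of metal underline why the difficulty of removing NaCl, discussed above, weighed so heavily against the process.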
Hunter process
[ "Chemistry" ]
657
[ "Titanium processes", "Metallurgical processes" ]
1,606,353
https://en.wikipedia.org/wiki/Kroll%20process
The Kroll process is a pyrometallurgical industrial process used to produce metallic titanium from titanium tetrachloride. As of 2001, William Justin Kroll's process had replaced the Hunter process for almost all commercial production. Process In the Kroll process, titanium tetrachloride is reduced by liquid magnesium to give titanium metal: TiCl4 + 2 Mg → Ti + 2 MgCl2 (at about 825 °C) The reduction is conducted at 800–850 °C in a stainless steel retort. Complications result from partial reduction of the TiCl4, giving the lower chlorides TiCl2 and TiCl3. The MgCl2 can be further refined back to magnesium. Appurtenant processes The resulting porous metallic titanium sponge is purified by leaching or vacuum distillation. The sponge is crushed and pressed before it is melted in a consumable carbon electrode vacuum arc furnace, "backfilled with pure gettered argon of a pressure high enough to avoid a glow discharge". The melted ingot is allowed to solidify under vacuum. It is often remelted to remove inclusions and ensure uniformity. These melting steps add to the cost of the product. Titanium is about six times as expensive as stainless steel: Potter noted in 2023 that "Titanium is just fundamentally difficult and expensive to deal with. Turning titanium ingots into bars and sheets is a challenge due to titanium’s reactivity: it readily absorbs impurities, requiring “frequent surface removal and trimming to eliminate surface defects” which are “costly and involve significant yield loss.”" The appurtenant processes that turn Kroll's sponge into useful metal have "changed little since the 1950s." History and subsequent developments Many methods had been applied to the production of titanium metal, beginning with a report in 1887 by Nilsen and Pettersen using sodium, which was optimized into the commercial Hunter process. In this process (which ceased to be commercial in the 1990s) TiCl4 is reduced to the metal by sodium. In the 1920s Anton Eduard van Arkel working for Philips NV had described the thermal decomposition of titanium tetraiodide to give highly pure titanium. Titanium tetrachloride was found to reduce with hydrogen at high temperatures to give hydrides that can be thermally processed to the pure metal. With these three ideas as background, Kroll in Luxembourg developed both new reductants and new apparatus for the reduction of titanium tetrachloride. Its high reactivity toward trace amounts of water and other metal oxides presented challenges. Significant success came with the use of calcium as a reductant, but the resulting mixture still contained significant oxide impurities. Major success, using magnesium at 1000 °C in a molybdenum-clad reactor, was reported by Kroll to the Electrochemical Society in Ottawa. Kroll's titanium was highly ductile, reflecting its high purity. The Kroll process displaced the Hunter process and continues to be the dominant technology for the production of titanium metal, as well as driving the majority of the world's production of magnesium metal. After moving to the United States, Kroll further developed the method for the production of zirconium at the Albany Research Center. See also Chloride process References Further reading P. Kar, Mathematical modeling of phase change electrodes with application to the FFC process, PhD thesis; UC, Berkeley, 2007. 
External links Titanium: Kroll Method: YouTube video uploaded by Innovations in Manufacturing at Oak Ridge National Laboratory Industrial processes Chemical processes Zirconium Titanium processes Metallurgical processes Materials science 20th-century inventions Luxembourgish inventions
Kroll process
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
770
[ "Applied and interdisciplinary physics", "Metallurgical processes", "Metallurgy", "Materials science", "Titanium processes", "Chemical processes", "nan", "Chemical process engineering" ]
1,606,866
https://en.wikipedia.org/wiki/Pitch%20Lake
The Pitch Lake is the largest natural deposit of bitumen in the world, estimated to contain 10 million tons. It is located in La Brea in southwest Trinidad, within the Siparia Regional Corporation. The lake covers about 0.405 square kilometres (100 acres) and is reported to be 76.2 metres (250 feet) deep. Pitch Lake is a popular tourist attraction, including a small museum, from where official tour guides can escort people across the lake. The lake is mined for asphalt by Lake Asphalt of Trinidad and Tobago. History The Pitch Lake has fascinated explorers and scientists, attracting tourists since its re-discovery by Sir Walter Raleigh in his expedition there in 1595. Raleigh himself found immediate use for the asphalt to caulk his ship. He referred to the pitch as "most excellent... It melteth not with the sun as the pitch of Norway". Raleigh was informed of the lake’s location by the native Amerindians, who had their own story about the origin of the lake. The story goes that the indigenous people were celebrating a victory over a rival tribe when they got carried away in their celebration. They proceeded to cook and eat the sacred hummingbird which they believed possessed the souls of their ancestors. According to legend, their winged god punished them by opening the earth and conjuring the pitch lake to swallow the entire village, and the lake became a permanent stain and a reminder of their sins. The local villagers believe this legend due to the many Amerindian artifacts and a cranium that have been discovered, preserved, in the pitch. In the 1840s, Abraham Pineo Gesner first obtained kerosene from a sample of Pitch Lake bitumen. In 1887, Amzi Barber, an American businessman known as "The Asphalt King", secured a 42-year monopoly concession from the British Government for the Pitch Lake for his company, Barber Asphalt Paving Company. It was from this source that many of the first asphalt roads of New York City, Washington D.C., and other Eastern U.S. cities were paved. Geology The origin of the Pitch Lake is related to deep faults in connection with subduction under the Caribbean Plate associated with the Barbados Arc. The lake has not been studied extensively, but it is believed that the lake is at the intersection of two faults, which allows oil from a deep deposit to be forced up. The lighter elements in the oil evaporate under the hot tropical sun, leaving behind the heavier asphalt. Bacterial action on the asphalt at low pressures creates petroleum in asphalt. Researchers have indicated that extremophiles inhabit the asphalt lake in populations ranging between 10^6 and 10^7 cells/gram. The Pitch Lake is one of several natural asphalt lakes in the world, including La Brea Tar Pits (Los Angeles), the McKittrick Tar Pits (McKittrick) and the Carpinteria Tar Pits (Carpinteria) in the U.S. state of California, and Lake Guanoco in the Republic of Venezuela. The regional geology of southern Trinidad consists of a trend of ridges, anticlines with shale diapiric cores, and sedimentary volcanoes. According to Woodside, "host muds and/or shales become over pressured and under compacted in relation to the surrounding sediments...mud or shale diapirs or mud volcanoes result because of the unstable semi-fluid nature of the methane-charged, undercompacted shales/muds." The mud volcanoes are aligned along east-northeast parallel trends. Woodside goes on to say, "The Asphalt Lake at Brighton represents a different kind of sedimentary volcanism in which gas and oil are acting on asphalt mixed with clay. 
This asphalt lake cuts across Miocene/Pliocene formations overlying a complicated thrust structure." The first wells were drilled into Pitch Lake oil seeps in 1866. Kerosene was distilled from the pitch in the lake from 1860 to 1865. The Guayaguayare No. 3 well was drilled in 1903, but the first commercial well was drilled at the west end of the lake in 1903. Oil was then discovered in Point Fortin-Perrylands area, and in 1911, the Tabaquite Field was discovered. The Forest Reserve Field was discovered in 1914 and the Penal Field in 1941. The first offshore well was drilled in 1954 at Soldado. Microbiology Evidence of an active microbiological ecosystem in Pitch Lake has been reported. The microbial diversity was found to be unique when compared to microbial communities analyzed at other hydrocarbon-rich environments, including La Brea tar pits in California, and an oil well and a mud volcano in Trinidad and Tobago. Archaeal and bacterial communities co-exist, with novel species having been discovered from Pitch Lake samples. Researchers have also observed novel fungal life forms which can grow on the available asphaltenes as a sole carbon and energy source. The microbiological activity is accompanied by a stronger evolution of gas consisting principally of methane with a considerable proportion of carbon dioxide, and which also contains hydrogen sulphide. See also Notable tar pits List of tar pits Asphalt volcano References External links The Wonderland of Trinidad, by Barber Asphalt Company—a Project Gutenberg eBook Asphalt lakes Landforms of Trinidad and Tobago
Pitch Lake
[ "Chemistry" ]
1,063
[ "Asphalt", "Asphalt lakes" ]
1,606,893
https://en.wikipedia.org/wiki/Semiconductor%20Research%20Corporation
Semiconductor Research Corporation (SRC), commonly known as SRC, is a high-technology research consortium active in the semiconductor industry. It is a leading semiconductor research consortium. Todd Younkin is the incumbent president and chief executive officer of the company. The consortium comprises more than twenty-five companies and government agencies with more than a hundred universities under contract performing research. History SRC was founded in 1982 by Semiconductor Industry Association as a consortium to fund research and development by semiconductor companies. In the past, it has funded university research projects in hardware and software co-design, new architectures, circuit design, transistors, memories, interconnects, and materials and has sponsored over 15,000 Bachelors, Masters, and Ph.D. students. Research SRC has funded research in areas such as automotive, advanced memory technologies, logic and processing, advanced packaging, edge intelligence, and communications. Programs Global Research Collaboration Program It is an industry-led international research program with eight sub-topics including artificial intelligence hardware; analog mixed-signal circuits; computer-aided design and test; environment safety and health; hardware security; logic and memory devices; nanomanufacturing materials and processes; and packaging. DARPA Partnerships JUMP 2.0 The JUMP 2.0 program is a research initiative that aims to further the development of information and communications technologies (ICT) in the United States. The program is structured into seven thematic centers, each focusing on high-risk, high-reward research projects. The primary areas of interest for JUMP 2.0 include the development of advanced artificial intelligence (AI) systems and architectures, the improvement of communication technologies for ICT systems, and the enhancement of sensing capabilities with embedded intelligence for rapid action generation. Additionally, the program investigates distributed computing systems and architectures within an energy-efficient compute and accelerator fabric, as well as innovations in memory devices and storage arrays for intelligent memory systems. JUMP 2.0 also explores advancements in electric and photonic interconnect fabrics, advanced packaging, and novel materials and devices for digital and analog applications. In collaboration with the National Science Foundation's Research Experiences for Undergraduates program, JUMP 2.0 supports undergraduate research in the field of semiconductors. To date, six sites have been established to provide research experiences for undergraduate students in this area. Joint University Microelectronics Program Joint University Microelectronics Program (JUMP) was a research program that ran from 2018 to 2022. JUMP focused on energy-efficient electronics, including actuation and sensing, signal processing, computing, and intelligent storage. STARnet STARnet was a collaborative university research program that ran from 2013 to 2017, focusing on state-of-the-art technology developments for microelectronics research and development. This program allocated at least $40 million annually to basic research funding. Focus Center Research Program The Focus Center Research Program (FCRP) began in 1998 and spanned multiple phases until its end in 2013. 
The research within the program was primarily concentrated on materials, structures, and devices, as well as circuits, systems, and software to develop new methods for device fabrication and integration for deeply-scaled transistors and architectures for high-performance mixed-signal circuits to meet military requirements. Industry guidance Semiconductor Research Corporation (SRC) published the Microelectronics and Advanced Packaging Technologies (MAPT) Roadmap in 2023. The technology consortium was selected by the Advanced Manufacturing Office of the National Institute of Standards and Technology (NIST), which is part of the U.S. Department of Commerce, to develop this roadmap with an emphasis on emerging MAPT technologies. The MAPT Roadmap was developed through a collaborative effort involving researchers from different organizations spanning industry, academia, and government. It outlines critical research priorities for the semiconductor industry and provides recommendations based on a comprehensive analysis of challenges, promising technologies, key findings, trends, and the necessity for foundational capabilities within the semiconductor research and development (R&D) ecosystem. In 2021, SRC and the Semiconductor Industry Association (SIA) published the Decadal Plan for Semiconductors. The plan calls for an additional $3.4 billion in federal research and development funding to address challenges and maintain the industry's technological advancement in areas such as smart sensing, memory and storage, communications, security, and energy efficiency. Recognition In 2005, SRC received the National Medal of Technology and Innovation awarded by the president of the United States for their collaborative high-tech university research and for creating the concept and methodology, named the International Technology Roadmap for Semiconductors. In 2015, SRC was inducted into Georgia Tech's Hill Society for sponsoring $103 million in research grants, contracts, and fellowships since 1983. References Information technology companies of the United States National Medal of Technology recipients Semiconductor technology 1982 establishments in the United States
Semiconductor Research Corporation
[ "Materials_science" ]
986
[ "Semiconductor technology", "Microtechnology" ]
1,607,028
https://en.wikipedia.org/wiki/Aqua%20vitae
Aqua vitae (Latin for "water of life") or aqua vita is an archaic name for a concentrated aqueous solution of ethanol. These terms could also be applied to weak ethanol without rectification. Usage was widespread during the Middle Ages and the Renaissance, although its origin is likely much earlier. This Latin term appears in a wide array of dialectal forms throughout all lands and people conquered by ancient Rome. The term is a generic name for all types of distillates, and eventually came to refer specifically to distillates of alcoholic beverages (liquors). Aqua vitae was typically prepared by distilling wine and in English texts was also called ardent spirits, spirit of wine, or spirits of wine, a name that could be applied to brandy that had been repeatedly distilled. The term was used by the 14th-century alchemist John of Rupescissa, who believed the then newly discovered substance of ethanol to be an imperishable and life-giving "fifth essence" or quintessence, and who extensively studied its medical properties. Aqua vitae was often an etymological source of terms applied to important locally produced distilled spirits. Examples include whisky (from the Gaelic uisce beatha), eau de vie in France, acquavite in Italy, akvavit in Scandinavia, okowita in Poland, оковита (okovyta) in Ukraine, акавіта (akavita) in Belarus, and яковита (yakovita) in southern Russian dialects. See also Alchemy Aqua fortis Aqua regia History of ethanol Vodka References External links "Aqua vitae" definition from TheFreeDictionary.com Distilled drinks Alchemical substances
Aqua vitae
[ "Chemistry" ]
374
[ "Distillation", "Alchemical substances", "Distilled drinks" ]
1,607,137
https://en.wikipedia.org/wiki/Alpha%20Columbae
Alpha Columbae or α Columbae, officially named Phact (), is a third magnitude star in the southern constellation of Columba. It has an apparent visual magnitude of 2.6, making it the brightest member of Columba. Based upon parallax measurements made during the Hipparcos mission, Alpha Columbae is located at a distance of around . Nomenclature α Columbae, Latinized to Alpha Columbae, is the star's Bayer designation. The traditional name of Phact (also rendered Phad, Phaet, Phakt) derives from the Arabic فاختة fākhitah 'ring dove'. It was originally applied to the constellation Cygnus and later transferred to this star. The etymology of its name hadāri (unknown meaning) has also been suggested. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Phact for this star. In Chinese, (), meaning Grandfather, refers to an asterism consisting of α Columbae and ε Columbae. Consequently, α Columbae itself is known as (, .). From this Chinese name, the name Chang Jin has appeared Properties This is believed to be a solitary star, although it has a faint optical companion at an angular separation of 13.5 arcseconds, making it a double star. The stellar classification of Alpha Columbae is B9Ve, matching a B-type main-sequence star. The spectrum shows it to be a Be star surrounded by a hot gaseous disk, which is generating emission lines because of hydrogen recombination. Like most if not all such stars, it is rotating rapidly with a projected rotational velocity of . The azimuthal equatorial velocity may be . It is a suspected Gamma Cassiopeiae type (GCAS) variable star, with its apparent magnitude varying from 2.62m to 2.66m. References B-type main-sequence stars Be stars Columba (constellation) Columbae, Alpha Durchmusterung objects 037795 026634 1956 Phact
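As a brief aside (not part of the original article), the parallax-to-distance relation that underlies Hipparcos measurements like the one cited above is simply d[pc] = 1/p[arcsec]; the parallax value used below is an illustrative placeholder, not a figure taken from the article:

```python
# Parallax-to-distance conversion of the kind used with Hipparcos measurements.
# The 12 mas parallax below is an illustrative placeholder, not a value from the article.
def distance_from_parallax(parallax_mas: float):
    """Return (parsecs, light-years) for a parallax in milliarcseconds: d[pc] = 1 / p[arcsec]."""
    parsecs = 1000.0 / parallax_mas
    return parsecs, parsecs * 3.2616          # 1 pc ≈ 3.2616 light-years

pc, ly = distance_from_parallax(12.0)
print(f"≈ {pc:.0f} pc, or about {ly:.0f} light-years")
```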
Alpha Columbae
[ "Astronomy" ]
481
[ "Columba (constellation)", "Constellations" ]
1,607,154
https://en.wikipedia.org/wiki/Proton%20conductor
A proton conductor is an electrolyte, typically a solid electrolyte, in which H+ ions are the primary charge carriers. Composition Acid solutions exhibit proton conductivity, while pure proton conductors are usually dry solids. Typical materials are polymers or ceramics. Typically, the pores in practical materials are small such that protons dominate direct current and transport of cations or bulk solvent is prevented. Water ice is a common example of a pure proton conductor, albeit a relatively poor one. A special form of water ice, superionic water, has been shown to conduct much more efficiently than normal water ice. Solid-phase proton conduction was first suggested by Alfred Rene Jean Paul Ubbelohde and S. E. Rogers in 1950, although electrolyte proton currents have been recognized since 1806. Proton conduction has also been observed in a new type of proton conductor for fuel cells – protic organic ionic plastic crystals (POIPCs), such as 1,2,4-triazolium perfluorobutanesulfonate and imidazolium methanesulfonate. In particular, a high ionic conductivity of 10 mS/cm is reached at 185 °C in the plastic phase of imidazolium methanesulfonate. When in the form of thin membranes, proton conductors are an essential part of small, inexpensive fuel cells. The polymer Nafion is a typical proton conductor in fuel cells. A jelly-like substance similar to Nafion residing in the ampullae of Lorenzini of sharks has proton conductivity only slightly lower than Nafion. High proton conductivity has been reported among alkaline-earth cerates and zirconate-based perovskite materials such as acceptor-doped SrCeO3, BaCeO3 and BaZrO3. Relatively high proton conductivity has also been found in rare-earth ortho-niobates and ortho-tantalates as well as rare-earth tungstates. References Electrochemistry
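For illustration only (not from the article), the area-specific resistance that a proton-conducting membrane contributes in a fuel cell follows directly from its conductivity and thickness; the 10 mS/cm value is the plastic-crystal figure quoted above, while the 50 μm thickness is an assumed, purely illustrative number:

```python
# Area-specific resistance (ASR) of a proton-conducting membrane: ASR = thickness / conductivity.
# 10 mS/cm is the plastic-crystal conductivity quoted in the article;
# the 50 micrometre thickness is an assumed, illustrative figure.
def membrane_asr(conductivity_s_per_cm: float, thickness_cm: float) -> float:
    """Return the area-specific resistance in ohm*cm^2."""
    return thickness_cm / conductivity_s_per_cm

asr = membrane_asr(conductivity_s_per_cm=0.010, thickness_cm=50e-4)   # 50 um membrane
print(f"ASR ≈ {asr:.2f} ohm*cm^2")                                     # ≈ 0.5 ohm*cm^2 here
```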
Proton conductor
[ "Chemistry" ]
415
[ "Electrochemistry", "Physical chemistry stubs", "Electrochemistry stubs" ]
1,607,203
https://en.wikipedia.org/wiki/FM%20broadcasting
FM broadcasting is a method of radio broadcasting that uses frequency modulation (FM) of the radio broadcast carrier wave. Invented in 1933 by American engineer Edwin Armstrong, wide-band FM is used worldwide to transmit high-fidelity sound over broadcast radio. FM broadcasting offers higher fidelity—more accurate reproduction of the original program sound—than other broadcasting techniques, such as AM broadcasting. It is also less susceptible to common forms of interference, having less static and popping sounds than are often heard on AM. Therefore, FM is used for most broadcasts of music and general audio (in the audio spectrum). FM radio stations use the very high frequency range of radio frequencies. Broadcast bands Throughout the world, the FM broadcast band falls within the VHF part of the radio spectrum. Usually 87.5 to 108.0 MHz is used, or some portion of it, with few exceptions: In the former Soviet republics, and some former Eastern Bloc countries, the older 65.8–74 MHz band is also used. Assigned frequencies are at intervals of 30 kHz. This band, sometimes referred to as the OIRT band, is slowly being phased out. Where the OIRT band is used, the 87.5–108.0 MHz band is referred to as the CCIR band. In Japan, the band 76–95 MHz is used. In Brazil, until the late 2010s, FM broadcast stations only used the 88–108 MHz band, but with the phasing out of analog television, the 76-88 MHz band (old band channels 5 and 6 in VHF television) are allocated for old local MW stations which have moved to FM in agreement with ANATEL. The frequency of an FM broadcast station (more strictly its assigned nominal center frequency) is usually a multiple of 100 kHz. In most of South Korea, the Americas, the Philippines, and the Caribbean, only odd multiples are used. Some other countries follow this plan because of the import of vehicles, principally from the United States, with radios that can only tune to these frequencies. In some parts of Europe, Greenland, and Africa, only even multiples are used. In the United Kingdom, both odd and even are used. In Italy, multiples of 50 kHz are used. In most countries the maximum permitted frequency error of the unmodulated carrier is specified, which typically should be within 2 kHz of the assigned frequency. There are other unusual and obsolete FM broadcasting standards in some countries, with non-standard spacings of 1, 10, 30, 74, 500, and 300 kHz. To minimise inter-channel interference, stations operating from the same or nearby transmitter sites tend to keep to at least a 500 kHz frequency separation even when closer frequency spacing is technically permitted. The ITU publishes Protection Ratio graphs, which give the minimum spacing between frequencies based on their relative strengths. Only broadcast stations with large enough geographic separations between their coverage areas can operate on the same or close frequencies. Technology Modulation Frequency modulation or FM is a form of modulation which conveys information by varying the frequency of a carrier wave; the older amplitude modulation or AM varies the amplitude of the carrier, with its frequency remaining constant. With FM, frequency deviation from the assigned carrier frequency at any instant is directly proportional to the amplitude of the (audio) input signal, determining the instantaneous frequency of the transmitted signal. 
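As an illustrative sketch (not part of the original article), the modulation just described — an instantaneous frequency equal to the carrier frequency plus a term proportional to the message amplitude — can be written in a few lines of NumPy; all numeric values below are arbitrary example choices:

```python
import numpy as np

# Toy FM modulator: instantaneous frequency = carrier + deviation * message amplitude.
fs = 1_000_000                                   # sample rate, Hz (illustrative)
fc = 100_000                                     # "carrier" frequency, scaled down for the demo
peak_deviation = 75_000                          # Hz of deviation for a full-scale message

t = np.arange(0, 0.01, 1 / fs)
message = np.sin(2 * np.pi * 1_000 * t)          # 1 kHz test tone, amplitude 1
instantaneous_freq = fc + peak_deviation * message
phase = 2 * np.pi * np.cumsum(instantaneous_freq) / fs
fm_signal = np.cos(phase)                        # constant-envelope FM waveform
```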
Because transmitted FM signals use significantly more bandwidth than AM signals, this form of modulation is commonly used with the higher (VHF or UHF) frequencies used by TV, the FM broadcast band, and land mobile radio systems. The maximum frequency deviation of the carrier is usually specified and regulated by the licensing authorities in each country. For a stereo broadcast, the maximum permitted carrier deviation is invariably ±75 kHz, although a little higher is permitted in the United States when SCA systems are used. For a monophonic broadcast, again the most common permitted maximum deviation is ±75 kHz. However, some countries specify a lower value for monophonic broadcasts, such as ±50 kHz. Bandwidth The bandwidth of an FM transmission is given by the Carson bandwidth rule which is the sum of twice the maximum deviation and twice the maximum modulating frequency. For a transmission that includes RDS this would be 2 × (75 kHz + 60 kHz) = 270 kHz. This is also known as the necessary bandwidth. Pre-emphasis and de-emphasis Random noise has a triangular spectral distribution in an FM system, with the effect that noise occurs predominantly at the higher audio frequencies within the baseband. This can be offset, to a limited extent, by boosting the high frequencies before transmission and reducing them by a corresponding amount in the receiver. Reducing the high audio frequencies in the receiver also reduces the high-frequency noise. These processes of boosting and then reducing certain frequencies are known as pre-emphasis and de-emphasis, respectively. The amount of pre-emphasis and de-emphasis used is defined by the time constant of a simple RC filter circuit. In most of the world a 50 μs time constant is used. In the Americas and South Korea, 75 μs is used. This applies to both mono and stereo transmissions. For stereo, pre-emphasis is applied to the left and right channels before multiplexing. The use of pre-emphasis becomes a problem because many forms of contemporary music contain more high-frequency energy than the musical styles which prevailed at the birth of FM broadcasting. Pre-emphasizing these high-frequency sounds would cause excessive deviation of the FM carrier. Modulation control (limiter) devices are used to prevent this. Systems more modern than FM broadcasting tend to use either programme-dependent variable pre-emphasis (e.g., dbx in the BTSC TV sound system) or none at all. Pre-emphasis and de-emphasis were used in the earliest days of FM broadcasting. According to a BBC report from 1946, 100 μs was originally considered in the US, but 75 μs was subsequently adopted. Stereo FM Long before FM stereo transmission was considered, FM multiplexing of other types of audio-level information was experimented with. Edwin Armstrong, who invented FM, was the first to experiment with multiplexing, at his experimental 41 MHz station W2XDG located on the 85th floor of the Empire State Building in New York City. These FM multiplex transmissions started in November 1934 and consisted of the main channel audio program and three subcarriers: a fax program, a synchronizing signal for the fax program and a telegraph order channel. These original FM multiplex subcarriers were amplitude modulated. Two musical programs, consisting of both the Red and Blue Network program feeds of the NBC Radio Network, were simultaneously transmitted using the same system of subcarrier modulation as part of a studio-to-transmitter link system. 
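Returning briefly to the figures quoted earlier in this section, the sketch below (not from the article) evaluates the Carson-rule bandwidth and the corner frequencies implied by the two pre-emphasis time constants; the ±75 kHz deviation and the 50 μs/75 μs constants come from the text above, while the 60 kHz top modulating frequency for a multiplex carrying RDS is an assumption:

```python
import math

def carson_bandwidth(peak_deviation_hz: float, max_modulating_hz: float) -> float:
    """Carson's rule: occupied bandwidth ≈ 2 * (peak deviation + highest modulating frequency)."""
    return 2 * (peak_deviation_hz + max_modulating_hz)

# Broadcast FM with RDS: ±75 kHz deviation; ~60 kHz is an assumed top modulating
# frequency for a multiplex that includes the 57 kHz RDS subcarrier.
print(carson_bandwidth(75e3, 60e3))           # 270000.0 Hz

def preemphasis_corner_hz(time_constant_s: float) -> float:
    """Corner (3 dB) frequency of a single-pole pre-emphasis network, f = 1 / (2*pi*tau)."""
    return 1 / (2 * math.pi * time_constant_s)

print(round(preemphasis_corner_hz(50e-6)))    # ≈ 3183 Hz (50 us, most of the world)
print(round(preemphasis_corner_hz(75e-6)))    # ≈ 2122 Hz (75 us, Americas and South Korea)
```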
In April 1935, the AM subcarriers were replaced by FM subcarriers, with much improved results. The first FM subcarrier transmissions emanating from Major Armstrong's experimental station KE2XCC at Alpine, New Jersey, occurred in 1948. These transmissions consisted of two-channel audio programs, binaural audio programs and a fax program. The original subcarrier frequency used at KE2XCC was 27.5 kHz. The IF bandwidth was ±5 kHz, as the only goal at the time was to relay AM radio-quality audio. This transmission system used 75 μs audio pre-emphasis like the main monaural audio and subsequently the multiplexed stereo audio. In the late 1950s, several systems to add stereo to FM radio were considered by the FCC. Included were systems from 14 proponents including Crosby, Halstead, Electrical and Musical Industries, Ltd (EMI), Zenith, and General Electric. The individual systems were evaluated for their strengths and weaknesses during field tests in Uniontown, Pennsylvania, using KDKA-FM in Pittsburgh as the originating station. The Crosby system was rejected by the FCC because it was incompatible with existing subsidiary communications authorization (SCA) services which used various subcarrier frequencies including 41 and 67 kHz. Many revenue-starved FM stations used SCAs for "storecasting" and other non-broadcast purposes. The Halstead system was rejected due to lack of high frequency stereo separation and reduction in the main channel signal-to-noise ratio. The GE and Zenith systems, so similar that they were considered theoretically identical, were formally approved by the FCC in April 1961 as the standard stereo FM broadcasting method in the United States and later adopted by most other countries. It is important that stereo broadcasts be compatible with mono receivers. For this reason, the left (L) and right (R) channels are algebraically encoded into sum (L+R) and difference (L−R) signals. A mono receiver will use just the L+R signal so the listener will hear both channels through the single loudspeaker. A stereo receiver will add the difference signal to the sum signal to recover the left channel, and subtract the difference signal from the sum to recover the right channel. The (L+R) signal is limited to 30 Hz to 15 kHz to protect a 19 kHz pilot signal. The (L−R) signal, which is also limited to 15 kHz, is amplitude modulated onto a 38 kHz double-sideband suppressed-carrier (DSB-SC) signal, thus occupying 23 kHz to 53 kHz. A 19 kHz ± 2 Hz pilot tone, at exactly half the 38 kHz sub-carrier frequency and with a precise phase relationship to it, as defined by the formula below, is also generated. The pilot is transmitted at 8–10% of overall modulation level and used by the receiver to identify a stereo transmission and to regenerate the 38 kHz sub-carrier with the correct phase. The composite stereo multiplex signal contains the Main Channel (L+R), the pilot tone, and the (L−R) difference signal. This composite signal, along with any other sub-carriers, modulates the FM transmitter. The terms composite, multiplex and even MPX are used interchangeably to describe this signal. The instantaneous deviation of the transmitter carrier frequency due to the stereo audio and pilot tone (at 10% modulation) is Δf(t) = 75 kHz × [0.9 × ((A + B)/2 + ((A − B)/2) × sin(4π f_p t)) + 0.1 × sin(2π f_p t)], where A and B are the pre-emphasized left and right audio signals and f_p = 19 kHz is the frequency of the pilot tone. Slight variations in the peak deviation may occur in the presence of other subcarriers or because of local regulations. 
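The composite baseband just described can be sketched in a few lines of NumPy. This is an illustrative encoder only (not from the article): signal levels and the pilot fraction are approximate, pre-emphasis is omitted, and the function name is invented for the example:

```python
import numpy as np

def stereo_multiplex(left, right, fs, pilot_level=0.09):
    """Compose an FM-stereo baseband: (L+R)/2, a 19 kHz pilot, and (L-R)/2 as DSB-SC on 38 kHz.

    `left` and `right` are pre-emphasized audio arrays sampled at `fs` Hz.  The pilot level
    (~9% here) and overall scaling are illustrative; a real exciter scales the composite so
    that peak carrier deviation stays within the permitted ±75 kHz.
    """
    t = np.arange(len(left)) / fs
    f_pilot = 19_000.0                                   # pilot tone, half the 38 kHz subcarrier
    pilot = pilot_level * np.sin(2 * np.pi * f_pilot * t)
    subcarrier = np.sin(2 * np.pi * 2 * f_pilot * t)     # 38 kHz, phase-locked to the pilot
    main = (left + right) / 2                            # mono-compatible sum signal
    diff = (left - right) / 2                            # difference signal
    return (1 - pilot_level) * (main + diff * subcarrier) + pilot
```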
Another way to look at the resulting signal is that it alternates between left and right at 38 kHz, with the phase determined by the 19 kHz pilot signal. Most stereo encoders use this switching technique to generate the 38 kHz subcarrier, but practical encoder designs need to incorporate circuitry to deal with the switching harmonics. Converting the multiplex signal back into left and right audio signals is performed by a decoder, built into stereo receivers. Again, the decoder can use a switching technique to recover the left and right channels. In addition, for a given RF level at the receiver, the signal-to-noise ratio and multipath distortion for the stereo signal will be worse than for the mono receiver. For this reason many stereo FM receivers include a stereo/mono switch to allow listening in mono when reception conditions are less than ideal, and most car radios are arranged to reduce the separation as the signal-to-noise ratio worsens, eventually going to mono while still indicating a stereo signal is received. As with monaural transmission, it is normal practice to apply pre-emphasis to the left and right channels before encoding and to apply de-emphasis at the receiver after decoding. In the U.S. around 2010, using single-sideband modulation for the stereo subcarrier was proposed. It was theorized to be more spectrum-efficient and to produce a 4 dB s/n improvement at the receiver, and it was claimed that multipath distortion would be reduced as well. A handful of radio stations around the country broadcast stereo in this way, under FCC experimental authority. It may not be compatible with very old receivers, but it is claimed that no difference can be heard with most newer receivers. At present, the FCC rules do not allow this mode of stereo operation. Quadraphonic FM In 1969, Louis Dorren invented the Quadraplex system of single station, discrete, compatible four-channel FM broadcasting. There are two additional subcarriers in the Quadraplex system, supplementing the single one used in standard stereo FM. The baseband layout is as follows: 50 Hz to 15 kHz main channel (sum of all 4 channels) (LF+LR+RF+RR) signal, for mono FM listening compatibility. 23 to 53 kHz (sine quadrature subcarrier) (LF+LR) − (RF+RR) left minus right difference signal. This signal's modulation in algebraic sum and difference with the main channel is used for 2 channel stereo listener compatibility. 23 to 53 kHz (cosine quadrature 38 kHz subcarrier) (LF+RR) − (LR+RF) Diagonal difference. This signal's modulation in algebraic sum and difference with the main channel and all the other subcarriers is used for the Quadraphonic listener. 61 to 91 kHz (sine quadrature 76 kHz subcarrier) (LF+RF) − (LR+RR) Front-back difference. This signal's modulation in algebraic sum and difference with the main channel and all the other subcarriers is also used for the Quadraphonic listener. 105 kHz SCA subcarrier, phase-locked to 19 kHz pilot, for reading services for the blind, background music, etc. The normal stereo signal can be considered as switching between left and right channels at 38 kHz, appropriately band-limited. The quadraphonic signal can be considered as cycling through LF, LR, RF, RR, at 76 kHz. Early efforts to transmit discrete four-channel quadraphonic music required the use of two FM stations; one transmitting the front audio channels, the other the rear channels. 
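A matching receive-side sketch for the decoding behaviour described above follows. It is illustrative only and makes several simplifying assumptions: the 38 kHz subcarrier is synthesized directly instead of being regenerated from the pilot with a PLL or doubler, filtering is done by crude FFT masking, and de-emphasis is omitted:

```python
import numpy as np

def lowpass_15k(x, fs):
    """Crude 15 kHz low-pass via FFT masking - adequate for an illustration only."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spectrum[freqs > 15_000] = 0
    return np.fft.irfft(spectrum, len(x))

def stereo_decode(mpx, fs):
    """Recover left/right audio from a composite FM-stereo baseband (illustrative sketch)."""
    t = np.arange(len(mpx)) / fs
    subcarrier = np.sin(2 * np.pi * 38_000 * t)        # ideal, already-locked 38 kHz reference
    sum_sig = lowpass_15k(mpx, fs)                      # (L+R)/2; the 19 kHz pilot is filtered out
    diff_sig = lowpass_15k(mpx * subcarrier * 2, fs)    # coherent DSB-SC demodulation -> (L-R)/2
    return sum_sig + diff_sig, sum_sig - diff_sig       # left, right
```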
A breakthrough came in 1970 when KIOI (K-101) in San Francisco successfully transmitted true quadraphonic sound from a single FM station using the Quadraplex system under Special Temporary Authority from the FCC. Following this experiment, a long-term test period was proposed that would permit one FM station in each of the top 25 U.S. radio markets to transmit in Quadraplex. The test results hopefully would prove to the FCC that the system was compatible with existing two-channel stereo transmission and reception and that it did not interfere with adjacent stations. There were several variations on this system submitted by GE, Zenith, RCA, and Denon for testing and consideration during the National Quadraphonic Radio Committee field trials for the FCC. The original Dorren Quadraplex System outperformed all the others and was chosen as the national standard for Quadraphonic FM broadcasting in the United States. The first commercial FM station to broadcast quadraphonic program content was WIQB (now called WWWW-FM) in Ann Arbor/Saline, Michigan under the guidance of Chief Engineer Brian Jeffrey Brown. Noise reduction Various attempts to add analog noise reduction to FM broadcasting were carried out in the 1970s and 1980s: A commercially unsuccessful noise reduction system used with FM radio in some countries during the late 1970s, Dolby FM was similar to Dolby B but used a modified 25 μs pre-emphasis time constant and a frequency selective companding arrangement to reduce noise. The pre-emphasis change compensates for the excess treble response that otherwise would make listening difficult for those without Dolby decoders. A similar system named High Com FM was tested in Germany between July 1979 and December 1981 by IRT. It was based on the Telefunken High Com broadband compander system, but was never introduced commercially in FM broadcasting. Yet another system was the CX-based noise reduction system FMX implemented in some radio broadcasting stations in the United States in the 1980s. Other subcarrier services FM broadcasting has included subsidiary communications authorization (SCA) services capability since its inception, as it was seen as another service which licensees could use to create additional income. Use of SCAs was particularly popular in the US, but much less so elsewhere. Uses for such subcarriers include radio reading services for the blind, which became common and remain so, private data transmission services (for example sending stock market information to stockbrokers or stolen credit card number denial lists to stores,) subscription commercial-free background music services for shops, paging ("beeper") services, alternative-language programming, and providing a program feed for AM transmitters of AM/FM stations. SCA subcarriers are typically 67 kHz and 92 kHz. Initially the users of SCA services were private analog audio channels which could be used internally or leased, for example Muzak-type services. There were experiments with quadraphonic sound. If a station does not broadcast in stereo, everything from 23 kHz on up can be used for other services. The guard band around 19 kHz (±4 kHz) must still be maintained, so as not to trigger stereo decoders on receivers. If there is stereo, there will typically be a guard band between the upper limit of the DSBSC stereo signal (53 kHz) and the lower limit of any other subcarrier. Digital data services are also available. 
A 57 kHz subcarrier (phase locked to the third harmonic of the stereo pilot tone) is used to carry a low-bandwidth digital Radio Data System signal, providing extra features such as station name, alternative frequency (AF), traffic data for satellite navigation systems and radio text (RT). This narrowband signal runs at only 1,187.5 bits per second, thus is only suitable for text. A few proprietary systems are used for private communications. A variant of RDS is the North American RBDS. In Germany the analog ARI system was used prior to RDS to alert motorists that traffic announcements were broadcast (without disturbing other listeners). Plans to use ARI for other European countries led to the development of RDS as a more powerful system. RDS is designed to be capable of use alongside ARI despite using identical subcarrier frequencies. In the United States and Canada, digital radio services are deployed within the FM band rather than using Eureka 147 or the Japanese standard ISDB. This in-band on-channel approach, as do all digital radio techniques, makes use of advanced compressed audio. The proprietary iBiquity system, branded as HD Radio, is authorized for "hybrid" mode operation, wherein both the conventional analog FM carrier and digital sideband subcarriers are transmitted. Transmission power The output power of an FM broadcasting transmitter is one of the parameters that governs how far a transmission will cover. The other important parameters are the height of the transmitting antenna and the antenna gain. Transmitter powers should be carefully chosen so that the required area is covered without causing interference to other stations further away. Practical transmitter powers range from a few milliwatts to 80 kW. As transmitter powers increase above a few kilowatts, the operating costs become high and only viable for large stations. The efficiency of larger transmitters is now better than 70% (AC power in to RF power out) for FM-only transmission. This compares to 50% before high efficiency switch-mode power supplies and LDMOS amplifiers were used. Efficiency drops dramatically if any digital HD Radio service is added. Reception distance VHF radio waves usually do not travel far beyond the visual horizon, so reception distances for FM stations are typically limited to . They can also be blocked by hills and to a lesser extent by buildings. Individuals with more-sensitive receivers or specialized antenna systems, or who are located in areas with more favorable topography, may be able to receive useful FM broadcast signals at considerably greater distances. The knife edge effect can permit reception where there is no direct line of sight between broadcaster and receiver. The reception can vary considerably depending on the position. One example is the Učka mountain range, which makes constant reception of Italian signals from Veneto and Marche possible in a good portion of Rijeka, Croatia, despite the distance being over 200 km (125 miles). Other radio propagation effects such as tropospheric ducting and Sporadic E can occasionally allow distant stations to be intermittently received over very large distances (hundreds of miles), but cannot be relied on for commercial broadcast purposes. Good reception across the country is one of the main advantages over DAB/+ radio. 
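As a rough, hedged illustration of the line-of-sight limit mentioned above (the formula is a standard engineering rule of thumb and is not given in the article), the distance to the radio horizon from a given antenna height can be estimated as follows:

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate VHF radio horizon using the common 4/3-earth-radius rule of thumb:
    d [km] ≈ 4.12 * sqrt(h [m]).  An engineering approximation, not an exact limit."""
    return 4.12 * math.sqrt(antenna_height_m)

for h in (30, 150, 300):                     # illustrative mast heights in metres
    print(f"{h:>4} m mast -> ~{radio_horizon_km(h):.0f} km to the radio horizon")
```

For a full transmitter-to-receiver path, the horizons of both antennas add, which is why tall masts on high ground extend coverage so markedly.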
This is still less than the range of AM radio waves, which because of their lower frequencies can travel as ground waves or reflect off the ionosphere, so AM radio stations can be received at hundreds (sometimes thousands) of miles. This is a property of the carrier wave's typical frequency (and power), not its mode of modulation. The range of FM transmission is related to the transmitter's RF power, the antenna gain, and antenna height. Interference from other stations is also a factor in some places. In the U.S, the FCC publishes curves that aid in calculation of this maximum distance as a function of signal strength at the receiving location. Computer modelling is more commonly used for this around the world. Many FM stations, especially those located in severe multipath areas, use extra audio compression/processing to keep essential sound above the background noise for listeners, often at the expense of overall perceived sound quality. In such instances, however, this technique is often surprisingly effective in increasing the station's useful range. History Americas Brazil The first radio station to broadcast in FM in Brazil was Rádio Imprensa, which began broadcasting in Rio de Janeiro in 1955, on the 102.1 MHz frequency, founded by businesswoman Anna Khoury. Due to the high import costs of FM radio receivers, transmissions were carried out in circuit closed to businesses and stores, which played ambient music offered by radio. Until 1976, Rádio Imprensa was the only station operating in FM in Brazil. From the second half of the 1970s onwards, FM radio stations began to become popular in Brazil, causing AM radio to gradually lose popularity. In 2021, the Brazilian Ministry of Communications expanded the FM radio band from 87.5-108.0 MHz to 76.1-108.0 MHz to enable the migration of AM radio stations in Brazilian capitals and large cities. United States FM broadcasting began in the late 1930s, when it was initiated by a handful of early pioneer experimental stations, including W1XOJ/W43B/WGTR (shut down in 1953) and W1XTG/WSRS, both transmitting from Paxton, Massachusetts (now listed as Worcester, Massachusetts); W1XSL/W1XPW/W65H/WDRC-FM/WFMQ/WHCN, Meriden, Connecticut; and W2XMN, KE2XCC, and WFMN, Alpine, New Jersey (owned by Edwin Armstrong himself, closed down upon Armstrong's death in 1954). Also of note were General Electric stations W2XDA Schenectady and W2XOY New Scotland, New York—two experimental FM transmitters on 48.5 MHz—which signed on in 1939. The two began regular programming, as W2XOY, on November 20, 1940. Over the next few years this station operated under the call signs W57A, W87A and WGFM, and moved to 99.5 MHz when the FM band was relocated to the 88–108 MHz portion of the radio spectrum. General Electric sold the station in the 1980s. Today this station is WRVE. Other pioneers included W2XQR/W59NY/WQXQ/WQXR-FM, New York; W47NV/WSM-FM Nashville, Tennessee (signed off in 1951); W1XER/W39B/WMNE, with studios in Boston and later Portland, Maine, but whose transmitter was atop the highest mountain in the northeast United States, Mount Washington, New Hampshire (shut down in 1948); and W9XAO/W55M/WTMJ-FM Milwaukee, Wisconsin (went off air in 1950). A commercial FM broadcasting band was formally established in the United States as of January 1, 1941, with the first fifteen construction permits announced on October 31, 1940. 
These stations primarily simulcast their AM sister stations, in addition to broadcasting lush orchestral music for stores and offices, classical music to an upmarket listenership in urban areas, and educational programming. On June 27, 1945 the FCC announced the reassignment of the FM band to 90 channels from 88–106 MHz (which was soon expanded to 100 channels from 88–108 MHz). This shift, which the AM-broadcaster RCA had pushed for, made all the Armstrong-era FM receivers useless and delayed the expansion of FM. In 1961 WEFM (in the Chicago area) and WGFM (in Schenectady, New York) were reported as the first stereo stations. By the late 1960s, FM had been adopted for broadcast of stereo "A.O.R.—'Album Oriented Rock' Format", but it was not until 1978 that listenership to FM stations exceeded that of AM stations in North America. In most of the 70s FM was seen as highbrow radio associated with educational programming and classical music, which changed during the 1980s and 1990s when Top 40 music stations and later even country music stations largely abandoned AM for FM. Today AM is mainly the preserve of talk radio, news, sports, religious programming, ethnic (minority language) broadcasting and some types of minority interest music. This shift has transformed AM into the "alternative band" that FM once was. (Some AM stations have begun to simulcast on, or switch to, FM signals to attract younger listeners and aid reception problems in buildings, during thunderstorms, and near high-voltage wires. Some of these stations now emphasize their presence on the FM band.) Europe The medium wave band (known as the AM band because most stations using it employ amplitude modulation) was overcrowded in western Europe, leading to interference problems and, as a result, many MW frequencies are suitable only for speech broadcasting. Belgium, the Netherlands, Denmark and particularly Germany were among the first countries to adopt FM on a widespread scale. Among the reasons for this were: The medium wave band in Western Europe became overcrowded after World War II, mainly due to the best available medium wave frequencies used at high power levels by the Allied Occupation Forces, both for broadcasting entertainment to their troops and for broadcasting Cold War propaganda across the Iron Curtain. After World War II, broadcasting frequencies were reorganized and reallocated by delegates of the victorious countries in the Copenhagen Frequency Plan. German broadcasters were left with only two remaining AM frequencies and were forced to look to FM for expansion. Public service broadcasters in Ireland and Australia were far slower at adopting FM radio than those in either North America or continental Europe. Netherlands Hans Idzerda operated a broadcasting station, PCGG, at The Hague from 1919 to 1924, which employed narrow-band FM transmissions. United Kingdom In the United Kingdom the BBC conducted tests during the 1940s, then began FM broadcasting in 1955, with three national networks: the Light Programme, Third Programme and Home Service. These three networks used the sub-band 88.0–94.6 MHz. The sub-band 94.6–97.6 MHz was later used for BBC and local commercial services. Experimental stereo broadcasts started in the London area in January 1958. However, only when commercial broadcasting was introduced to the UK in 1973 did the use of FM pick up in Britain. 
With the gradual clearance of other users (notably Public Services such as police, fire and ambulance) and the extension of the FM band to 108.0 MHz between 1980 and 1995, FM expanded rapidly throughout the British Isles and effectively took over from LW and MW as the delivery platform of choice for fixed and portable domestic and vehicle-based receivers. In addition, Ofcom (previously the Radio Authority) in the UK issues on demand Restricted Service Licences on FM and also on AM (MW) for short-term local-coverage broadcasting which is open to anyone who does not carry a prohibition and can put up the appropriate licensing and royalty fees. In 2010 around 450 such licences were issued. When the BBC's radio networks were renamed Radio 2, Radio 3 and Radio 4 respectively in 1967 to coincide with the launch of Radio 1, the new station was the only one of the main four to not have an FM frequency allocated, which was the case for 21 years. Instead, Radio 1 shared airtime with Radio 2 FM, on Saturday afternoons, Sunday evenings, weekday evenings (10pm to midnight) and Bank Holidays, eventually having its own FM frequency starting in London in October 1987 on 104.8 MHz from Crystal Palace. Eventually in 1987, a frequency range of 97.6-99.8 MHz was allocated once police mobile radio transmitters were moved from band II, starting in London before being nationally completed by 1989. Radio 1 in London moved from its previous frequency to 98.8 MHz transmitted from the BBC's Wrotham site in Kent. Following this the BBC Radio 1 FM frequencies were rolled out to the rest of the UK. Italy Italy adopted FM broadcast widely in the early 1970s, but first experiments made by RAI dated back to 1950, when the "movement for free radio", developed by so-called "pirates", forced the recognition of free speech rights also through the use of "free radio media such as Broadcast transmitters", and took the case to the Constitutional Court of Italy. The court finally decided in favor of Free Radio. Just weeks after the court's final decision there was an "FM radio boom" involving small private radio stations across the country. By the mid-1970s, every city in Italy had a crowded FM radio spectrum. Greece Greece was another European country where the FM radio spectrum was used at first by the so-called "pirates" (both in Athens and Thessaloniki, the two major Greek cities) in the mid-1970s, before any national stations had started broadcasting on it; there were many AM (MW) stations in use for the purpose. No later than the end of 1977, the national public service broadcasting company EIRT (later also known as ERT) placed in service its first FM transmitter in the capital, Athens. By the end of the 1970s, most of Greek territory was covered by three National FM programs, and every city had many FM "pirates" as well. The adaptation of the FM band for privately owned commercial radio stations came far later, in 1987. Australia FM broadcasting started in Australian capital cities in 1947 on an "experimental" basis, using an ABC national network feed, consisting largely of classical music and Parliament, as a programme source. It had a very small audience and was shut down in 1961 ostensibly to clear the television band: TV channel 5 (102.250 video carrier) if allocated would fall within the VHF FM band (98–108 MHz). The official policy on FM at the time was to eventually introduce it on another band, which would have required FM tuners custom-built for Australia. 
This policy was finally reversed and FM broadcasting was reopened in 1975 using the VHF band, after the few encroaching TV stations had been moved. Subsequently, it developed steadily until in the 1980s many AM stations transferred to FM due to its superior sound quality and lower operating costs. Today, as elsewhere in the developed world, most urban Australian broadcasting is on FM, although AM talk stations are still very popular. Regional broadcasters still commonly operate AM stations due to the additional range the broadcasting method offers. Some stations in major regional centres simulcast on AM and FM bands. Digital radio using the DAB+ standard has been rolled out to capital cities. New Zealand Like Australia, New Zealand adopted the FM format relatively late. As was the case with privately owned AM radio in the late 1960s, it took a spate of 'pirate' broadcasters to persuade a control-oriented, technology-averse government to allow FM to be introduced after at least five years of consumer campaigning starting in the mid-1970s, particularly in Auckland. An experimental FM station, FM 90.7, was broadcast in Whakatāne in early 1982. Later that year, Victoria University of Wellington's Radio Active began full-time FM transmissions. Commercial FM licences were finally approved in 1983, with Auckland-based 91FM and 89FM being the first to take up the offer. Broadcasting was deregulated in 1989. Like many other countries in Africa and Asia that drive on the left, New Zealand imports vehicles from Japan. The standard radios in these vehicles operate on 76-to-90 MHz, which is not compatible with the 88-to-108 MHz range. Imported cars with Japanese radios can have FM expanders installed which down-convert the higher frequencies above 90 MHz. New Zealand has no indigenous car manufacturers. Trinidad and Tobago Trinidad and Tobago's first FM Radio station was 95.1FM, now rebranded as 951 Remix, which was launched in March 1976 by the TBC Radio Network. Turkey In Turkey, FM broadcasting began in the late 1960s, carrying several shows from the One television network which was transferred from the AM frequency (also known as MW in Turkey). In subsequent years, more MW stations were slowly transferred to FM, and by the end of the 1970s, most radio stations that were previously on MW had been moved to FM, though many talk, news and sport, but mostly religious stations, still remain on MW. Other countries Most other countries implemented FM broadcasting through the 1960s and expanded their use of FM through the 1990s. Because it takes a large number of FM transmitting stations to cover a geographically large country, particularly where there are terrain difficulties, FM is more suited to local broadcasting than for national networks. In such countries, particularly where there are economic or infrastructural problems, "rolling out" a national FM broadcast network to reach the majority of the population can be a slow and expensive process. Despite this, mostly in east European countries, national FM broadcast networks were established in the late 1960s and 1970s. In all Soviet-dependent countries except GDR, the OIRT band was used. First restricted to 68–73 MHz with 100 kHz channel spacing, then in the 1970s eventually expanded to 65.84–74.00 MHz with 30 kHz channel spacing. The use of FM for domestic radio encouraged listeners to acquire cheap FM-only receivers and so reduced the number able to listen to longer-range AM foreign broadcasters. 
Similar considerations led to domestic radio in South Africa switching to FM in the 1960s. ITU Conferences about FM The frequencies available for FM were decided by some important conferences of ITU. The milestone of those conferences is the Stockholm agreement of 1961 among 38 countries. A 1984 conference in Geneva made some modifications to the original Stockholm agreement particularly in the frequency range above 100 MHz. FM broadcasting switch-off In 2017, Norway became the first country to completely switch to Digital audio broadcasting, the exception being some local stations remaining on FM until 2022, and might be extended to 2031. The switchover to DAB+ meant that especially rural areas obtained a far more diverse radio content compared to the FM-only period; several new radio stations had started transmissions on DAB+ in the years before the FM switch-off. Switzerland is in the process of becoming the second country to switch from FM to DAB+. Public broadcaster SRG SSR shut down its entire FM infrastructure on 31 December 2024, citing low usage of FM (estimated to be 10% of the audience and falling) from widespread adoption of DAB+ and the cost of maintaining two broadcast infrastructures in parallel. Private broadcasters are undertaking a gradual shutdown of their FM transmitters to be completed by 31 December 2026. Small-scale use of the FM broadcast band Consumer use of FM transmitters In some countries, small-scale (Part 15 in United States terms) transmitters are available that can transmit a signal from an audio device (usually an MP3 player or similar) to a standard FM radio receiver; such devices range from small units built to carry audio to a car radio with no audio-in capability (often formerly provided by special adapters for audio cassette decks, which are no longer common on car radio designs) up to full-sized, near-professional-grade broadcasting systems that can be used to transmit audio throughout a property, including systems that synchronize holiday decorative lighting with music. Most such units transmit in full stereo, though some models designed for beginner hobbyists might not. Similar transmitters are often included in satellite radio receivers and some toys. Legality of these devices varies by country. The U.S. Federal Communications Commission and Industry Canada allow them. Starting on 1 October 2006, these devices became legal in most countries in the European Union. Devices made to the harmonized European specification became legal in the UK on 8 December 2006. The FM broadcast band is also used by some inexpensive wireless microphones sold as toys for karaoke or similar purposes, allowing the user to use an FM radio as an output rather than a dedicated amplifier and speaker. Professional-grade wireless microphones generally use bands in the UHF region so they can run on dedicated equipment without broadcast interference. Some wireless headphones transmit in the FM broadcast band, with the headphones tunable to only a subset of the broadcast band. Higher-quality wireless headphones use infrared transmission or UHF ISM bands such as 315 MHz, 863 MHz, 915 MHz, or 2.4 GHz instead of the FM broadcast band. Assistive listening Some assistive listening devices are based on FM radio, mostly using the 72.1 to 75.8 MHz band. Aside from the assisted listening receivers, only certain kinds of FM receivers can tune to this band. 
Microbroadcasting Low-power transmitters such as those mentioned above are also sometimes used for neighborhood or campus radio stations, though campus radio stations are often run over carrier current. This is generally considered a form of microbroadcasting. As a general rule, enforcement towards low-power FM stations is stricter than with AM stations, due to problems such as the capture effect, and as a result, FM microbroadcasters generally do not reach as far as their AM competitors. Clandestine use of FM transmitters FM transmitters have been used to construct miniature wireless microphones for espionage and surveillance purposes (covert listening devices or so-called "bugs"); the advantage to using the FM broadcast band for such operations is that the receiving equipment would not be considered particularly suspect. Common practice is to tune the bug's transmitter off the ends of the broadcast band, into what in the United States would be TV channel 6 (<87.9 MHz) or aviation navigation frequencies (>107.9 MHz); most FM radios with analog tuners have sufficient overcoverage to pick up these slightly-beyond-outermost frequencies, although many digitally tuned radios have not. Constructing a "bug" is a common early project for electronics hobbyists, and project kits to do so are available from a wide variety of sources. The devices constructed, however, are often too large and poorly shielded for use in clandestine activity. In addition, much pirate radio activity is broadcast in the FM range, because of the band's greater clarity and listenership, the smaller size and lower cost of equipment. See also FM broadcasting in Australia FM broadcasting in Canada FM broadcasting in Egypt FM broadcasting in India FM broadcasting in Japan FM broadcasting in New Zealand FM broadcasting in Pakistan FM broadcasting in the UK FM broadcasting in the United States Ripping music from FM broadcasts RDS (Radio Data System) List of FM radio stations in Bangalore Lists of radio stations in North America List of campus radio stations List of college radio stations in the United States Lists of radio stations in Ghana References External links Related technical content Compatible Four Channel FM System Introduction to FM MPX Frequency Modulation (FM) Tutorial Stereo Multiplexing for Dummies Graphs that show waveforms at different points in the FM Multiplex process Factbook list of stations worldwide Invention History – The Father of FM Audio Engineering Society FM Broadcast and TV Broadcast Aural Subcarriers – Clifton Laboratories Radio communications Broadcast engineering
FM broadcasting
[ "Engineering" ]
8,213
[ "Broadcast engineering", "Electronic engineering", "Telecommunications engineering", "Radio communications" ]
1,607,621
https://en.wikipedia.org/wiki/Methylcyclopentadienyl%20manganese%20tricarbonyl
Methylcyclopentadienyl manganese tricarbonyl (MMT or MCMT) is an organomanganese compound with the formula (C5H4CH3)Mn(CO)3. Initially marketed as a supplement for use in leaded gasoline, MMT was later used in unleaded gasoline to increase the octane rating. Following the implementation of the Clean Air Act (United States) (CAA) in 1970, MMT continued to be used alongside tetraethyl lead (TEL) in the US as leaded gasoline was phased out (prior to TEL finally being banned from US gasoline in 1995), and was also used in unleaded gasoline until 1977. Ethyl Corporation obtained a waiver from the U.S. EPA (Environmental Protection Agency) in 1995, which allows the use of MMT in US unleaded gasoline (not including reformulated gasoline) at a treat rate equivalent to 8.3 mg Mn/L (manganese per liter). MMT has been used in Canadian gasoline since 1976 (and in numerous other countries for many years) at a concentration up to 8.3 mg Mn/L (though the importation and interprovincial trade of gasoline containing MMT was restricted briefly during the period 1997–1998) and was introduced into Australia in 2000. It has been sold under the tradenames HiTEC 3000, Cestoburn and Ecotane. MMT is also used in China. History of usage in the United States Although initially marketed in 1958 as a smoke suppressant for gas turbines, MMT was further developed as an octane enhancer in 1974. When the United States Environmental Protection Agency (EPA) ordered the phase out of TEL in gasoline in 1973, new fuel additives were sought. TEL has been used in certain countries as an additive to increase the octane rating of automotive gasoline but has been phased out in all countries since July 2021. In 1977, the US Congress amended the CAA to require advance approval by the EPA for the continued use of fuel additives such as MMT, ethanol, ethyl tert-butyl ether (ETBE), etc. The new CAA amendment required a "waiver" to allow use of fuel additives made of any elements other than carbon, hydrogen, oxygen (within certain limits) and nitrogen. To obtain a waiver, the applicant was required to demonstrate that the fuel additive would not lead to a failure of vehicle emission control systems. Ethyl Corporation applied to the US EPA for a waiver for MMT in both 1978 and 1981; in both cases the applications were denied because of stated concerns that MMT might damage catalytic converters and increase hydrocarbon emissions. In 1988, Ethyl began a new series of discussions with the EPA to determine a program for developing the necessary data to support a waiver application. In 1990, Ethyl filed its third waiver application prompting an extensive four-year review process. In 1993, the U.S. EPA determined that use of MMT at 8.3 mg Mn/L would not cause, or contribute to, vehicle emission control system failures. Despite that finding, the EPA ultimately denied the waiver request in 1994 due to uncertainty related to health concerns regarding manganese emissions from the use of MMT. As a result of this ruling, Ethyl initiated a legal action claiming that the EPA had exceeded its authority by denying the waiver on these grounds. This was upheld by the US Court of Appeals and EPA subsequently granted a waiver which allows the use of MMT in US unleaded gasoline (not including reformulated gasoline) at a treat rate equivalent to 8.3 mg Mn/L. Implementation of this alternative to TEL has been controversial. Manganese compounds have, in general, very low toxicity, but their combustion products still irreversibly foul catalytic converters. 
Opposition from automobile manufacturers and some areas of the scientific community has reportedly prompted oil companies to stop voluntarily the usage of MMT in some of their countries of operation. MMT is currently manufactured in the U.S. by the Afton Chemical Corporation, a subsidiary of New Market Corporation. It is also produced and marketed as Cestoburn by Cestoil Chemical Inc. in Canada. Structure and synthesis MMT is manufactured by reduction of bis(methylcyclopentadienyl) manganese using triethylaluminium. The reduction is conducted under an atmosphere of carbon monoxide. The reaction is exothermic, and without proper cooling, can lead to catastrophic thermal runaway. MMT is a so-called half-sandwich complex, or more specifically a "piano-stool" complex (since the three CO ligands are like the legs of a stool). The manganese atom in MMT is coordinated with three carbonyl groups as well as to all five main carbon atoms of the methylcyclopentadienyl ring. These hydrophobic organic ligands make MMT highly lipophilic. A variety of related complexes are known, including ferrocene, which has also been used as an additive to gasoline. Many derivatives of MMT are known. Safety The human and environmental health impacts that may result from the use of MMT will be a function of exposure to either: (1) MMT in its original, unchanged, chemical form and/or (2) manganese combustion products emitted from vehicles operating on gasoline containing MMT as an octane improver. Pre-combustion storage and handling The general public has minimal direct exposure to MMT. As stated by the US EPA in their risk assessment on MMT, "except for accidental or occupational contacts, exposure to MMT itself was not thought likely to pose a significant risk to the general population." Similarly, the Australian National Industrial Chemicals Notification and Assessment Scheme (NICNAS) stated that "[m]inimal public exposure to MMT is likely as a result of spills and splashes of LRP [lead replacement petrol] and aftermarket additives". The MMT dossier registered in the European Chemical Agency's webpage indicates that before combustion in gasoline, MMT is classified as an acute toxicant by the oral, dermal, and inhalation routes of exposure under the European Union's Classification, Labeling and Packaging Regulation (EC/1272/2008), implementing the Global Harmonized System (GHS) of Classification and Labeling. The US ATSDR (Agency for Toxic Substances and Disease Registry) notes that MMT is very unstable in light and degrades to a mixture of less harmful substances and inorganic manganese in less than 2 minutes. Therefore, human exposure to MMT prior to combustion in gasoline would not likely occur at significant levels. The US OSHA (Occupational Health and Safety Administration) has not established a permissible exposure limit specifically for MMT. However, OSHA has set a permissible exposure limit at a ceiling of 5 mg/m3 for manganese and its compounds, while the National Institute for Occupational Safety and Health recommends workers not be exposed to more than 0.2 mg/m3, over an eight-hour time-weighted average. In Europe, the MMT DNELs (Derived No Effect Level) for workers by the inhalation and dermal routes of exposure are 0.6 mg/m3 and 0.11 mg/kg-day, respectively. The MMT DNELs for the general population by the inhalation and dermal routes of exposure are 0.11 mg/m3 and 0.062 mg/kg-day, respectively. 
Combustion products In 1994 (reaffirmed in 1998, 2001 and 2010), Health Canada concluded that "airborne manganese resulting from the combustion of MMT in gasoline powered vehicles is not entering the Canadian environment in quantities or under conditions that may constitute a health risk" and confirmed they were taking no action with respect to MMT. Similarly, the 2003 NICNAS report states that the airborne concentrations of manganese as a result of car emissions from vehicles using fuel containing MMT poses no health hazard. The assessment conducted by NICNAS asserts that "[m]anganese, the principle degradation by-product from combustion of MMT, is naturally occurring and ubiquitous in the environment. It is an essential nutrient of plants and animals. Environmental exposure to Mn compounds will mostly arise through the gaseous phase. Eventually, these will deposit to land and waters. The emission of Mn into the environment from use of fuels containing MMT is unlikely to develop to levels of concern and therefore poses a low risk for terrestrial or aquatic environments." Additional health studies, overseen by the US EPA, were conducted to explain the transport of manganese in the body. In studies published from 2007 through 2011, no significant health effects are anticipated from the use of MMT in gasoline. Overall combined risk assessment Based on the low potential for the release of concentrated MMT (before its combustion in gasoline) under normal storage and use, as well as its rapid photo-degradation properties, it has been concluded in multiple technical and global regulatory assessments that significant impacts to human health or the environment from MMT use are not anticipated. NICNAS concluded that there is "low occupational risk associated with MMT" both "for workers involved in formulating and distributing LRP or aftermarket fuel additives and those involved in automotive maintenance". Further, they also concluded that there is a "low risk" to the public from the use of MMT. Significant human or environmental exposures associated with manganese compounds (manganese phosphate, manganese sulfate and manganese dioxide) from the combustion of MMT are not expected. In Health Canada's risk assessment on the health implications of the manganese combustion products of MMT, it was concluded that manganese exposures from MMT use are unlikely to pose a risk to health for any sub-group of the population. NICNAS similarly concluded that chronic Mn exposures (from all sources combined) are unlikely to be significantly changed by the use of MMT as a fuel additive. In 2013, a risk assessment on MMT was developed by ARCADIS Consulting and verified by an independent panel, according to the methodology provided by the European Commission in compliance with the requirements of the European Fuel Quality Directive (2009/30/EC). The conclusions of a risk assessment are that "for MMT and its transformation products, when MMT is used as a fuel additive in petrol, no significant human health or environmental concerns related to exposure to either MMT or its transformation [combustion] products (manganese phosphate, manganese sulfate and manganese tetroxide) were identified at use at levels up to 18 mg Mn/L. Depending on the regional needs and the vehicle emission control technology available, an MMT treat rate in the range of 8.3 mg Mn/L to 18 mg Mn/L is scientifically justified and may deliver both environmental and economic benefits without significant adverse effects." 
T2 Laboratories explosion and fire On December 19, 2007 an explosion and fire occurred in the production of MMT in Florida, which killed four people and injured fourteen. See also List of gasoline additives References Organomanganese compounds Carbonyl complexes Antiknock agents Cyclopentadienyl complexes Half sandwich compounds
Methylcyclopentadienyl manganese tricarbonyl
[ "Chemistry" ]
2,284
[ "Half sandwich compounds", "Organometallic chemistry", "Cyclopentadienyl complexes" ]
1,607,622
https://en.wikipedia.org/wiki/History%20of%20ecology
Ecology is a new science and is considered an important branch of biological science, having only become prominent during the second half of the 20th century. Ecological thought is derivative of established currents in philosophy, particularly from ethics and politics. Its history stems all the way back to the 4th century BC. One of the first ecologists whose writings survive may have been Aristotle or perhaps his student, Theophrastus, both of whom had interest in many species of animals and plants. Theophrastus described interrelationships between animals and their environment as early as the 4th century BC. Ecology developed substantially in the 18th and 19th centuries. It began with Carl Linnaeus and his work with the economy of nature. Soon after came Alexander von Humboldt and his work with botanical geography. Alfred Russel Wallace and Karl Möbius then contributed the notion of biocoenosis. Eugenius Warming's work with ecological plant geography led to the founding of ecology as a discipline. Charles Darwin's work also contributed to the science of ecology, and Darwin is often credited with progressing the discipline more than anyone else in its young history. Ecological thought expanded even more in the early 20th century. Major contributions included Eduard Suess' and Vladimir Vernadsky's work with the biosphere, Arthur Tansley's ecosystem, Charles Elton's Animal Ecology, and Henry Cowles's ecological succession. Ecology influenced the social sciences and humanities. Human ecology began in the early 20th century and recognized humans as an ecological factor. Later, James Lovelock advanced views of Earth as a macro-organism with the Gaia hypothesis. Conservation stemmed from the science of ecology. Important figures and movements include Shelford and the ESA, the National Environmental Policy Act, George Perkins Marsh, Theodore Roosevelt, Stephen A. Forbes, and post-Dust Bowl conservation. Later in the 20th century, world governments collaborated on man's effects on the biosphere and Earth's environment. The history of ecology is intertwined with the history of conservation and restoration efforts. 18th and 19th century Ecological murmurs Arcadian and Imperial Ecology In the early eighteenth century, preceding Carl Linnaeus, two rival schools of thought dominated the growing scientific discipline of ecology. First, Gilbert White, a "parson-naturalist", is attributed with developing and endorsing the view of Arcadian ecology. Arcadian ecology advocates for a "simple, humble life for man" and a harmonious relationship between humans and nature. Opposing the Arcadian view is Francis Bacon's ideology, "imperial ecology". Imperialists work "to establish through the exercise of reason and by hard work, man's dominance over nature". Imperial ecologists also believe that man should become a dominant figure over nature and all other organisms, as "once enjoyed in the Garden of Eden". Both views continued their rivalry through the early eighteenth century until Carl Linnaeus's support of imperialism; and in a short time, due to Linnaeus's popularity, imperial ecology became the dominant view within the discipline. Carl Linnaeus and Systema Naturae Carl Linnaeus, a Swedish naturalist, is well known for his work with taxonomy, but his ideas helped to lay the groundwork for modern ecology. He developed a two-part naming system for classifying plants and animals. Binomial nomenclature was used to classify, describe, and name different genera and species. 
The compiled editions of Systema Naturae developed and popularized the naming system for plants and animals in modern biology. Reid suggests "Linnaeus can fairly be regarded as the originator of systematic and ecological studies in biodiversity," due to his naming and classifying of thousands of plant and animal species. Linnaeus also influenced the foundations of Darwinian evolution; he believed that there could be change in or between different species within fixed genera. Linnaeus was also one of the first naturalists to place men in the same category as primates. Botanical geography and Alexander von Humboldt Throughout the 18th and the beginning of the 19th century, the great maritime powers such as Britain, Spain, and Portugal launched many world exploratory expeditions to develop maritime commerce with other countries, and to discover new natural resources, as well as to catalog them. At the beginning of the 18th century, about twenty thousand plant species were known, versus forty thousand at the beginning of the 19th century, and about 300,000 today. These expeditions were joined by many scientists, including botanists, such as the German explorer Alexander von Humboldt. Humboldt is often considered a father of ecology. He was the first to take on the study of the relationship between organisms and their environment. He described the existing relationships between observed plant species and climate, and described vegetation zones using latitude and altitude, a discipline now known as geobotany. Von Humboldt was accompanied on his expedition by the botanist Aimé Bonpland. In 1856, the Park Grass Experiment was established at the Rothamsted Experimental Station to test the effect of fertilizers and manures on hay yields. This is the longest-running field experiment in the world. The notion of biocoenosis: Wallace and Möbius Alfred Russel Wallace, contemporary and colleague of Darwin, was the first to propose a "geography" of animal species. Several authors recognized at the time that species were not independent of each other, and grouped them into plant species, animal species, and later into communities of living beings or biocoenosis. The first use of this term is usually attributed to Karl Möbius in 1877, but already in 1825, the French naturalist Adolphe Dureau de la Malle used the term societé about an assemblage of plant individuals of different species. Warming and the foundation of ecology as a discipline While Darwin recognized the role of competition as one among many selective forces, Eugen Warming devised a new discipline that took abiotic factors, that is, drought, fire, salt, cold, etc., as seriously as biotic factors in the assembly of biotic communities. Biogeography before Warming was largely of a descriptive nature – faunistic or floristic. Warming's aim was, through the study of organism (plant) morphology and anatomy, i.e. adaptation, to explain why a species occurred under a certain set of environmental conditions. Moreover, the goal of the new discipline was to explain why species occupying similar habitats, experiencing similar hazards, would solve problems in similar ways, despite often being of widely different phylogenetic descent. Based on his personal observations in Brazilian cerrado, in Denmark, Norwegian Finnmark and Greenland, Warming gave the first university course in ecological plant geography. Based on his lectures, he wrote the book 'Plantesamfund', which was immediately translated into German, Polish and Russian, and later into English as 'Oecology of Plants'. 
Through its German edition, the book had an immense effect on British and North American scientists like Arthur Tansley, Henry Chandler Cowles and Frederic Clements. Malthusian influence Thomas Robert Malthus was an influential writer on the subject of population and population limits in the early 19th century. His works were very important in shaping the ways in which Darwin saw the world worked. Malthus wrote: In An Essay on the Principle of Population Malthus argues for the reining in of rising population through 2 checks: Positive and Preventive checks. The first raising death rates, the later lowers birthing rates. Malthus also brings forth the idea that the world population will move past the sustainable number of people. This form of thought still continues to influences debates on birth and marriage rates to this theory brought forth by Malthus. The essay had a major influence on Charles Darwin and helped him to theories his theory of Natural Selection. This struggle proposed by Malthusian thought not only influenced the ecological work of Charles Darwin, but helped bring about an economic theory of world of ecology. Darwinism and the science of ecology It is often held that the roots of scientific ecology may be traced back to Darwin. This contention may look convincing at first glance inasmuch as On the Origin of Species is full of observations and proposed mechanisms that clearly fit within the boundaries of modern ecology (e.g. the cat-to-clover chain – an ecological cascade) and because the term ecology was coined in 1866 by a strong proponent of Darwinism, Ernst Haeckel. However, Darwin never used the word in his writings after this year, not even in his most "ecological" writings such as the foreword to the English edition of Hermann Müller's The Fertilization of Flowers (1883) or in his own treatise of earthworms and mull formation in forest soils (The formation of vegetable mould through the action of worms, 1881). Moreover, the pioneers founding ecology as a scientific discipline, such as Eugen Warming, A. F. W. Schimper, Gaston Bonnier, F.A. Forel, S.A. Forbes and Karl Möbius, made almost no reference to Darwin's ideas in their works. This was clearly not out of ignorance or because the works of Darwin were not widespread. Some such as S.A.Forbes studying intricate food webs asked questions as yet unanswered about the instability of food chains that might persist if dominant competitors were not adapted to have self-constraint. Others focused on the dominant themes at the beginning, concern with the relationship between organism morphology and physiology on one side and environment on the other, mainly abiotic environment, hence environmental selection. Darwin's concept of natural selection on the other hand focused primarily on competition. The mechanisms other than competition that he described, primarily the divergence of character which can reduce competition and his statement that "struggle" as he used it was metaphorical and thus included environmental selection, were given less emphasis in the Origin than competition. Despite most portrayals of Darwin conveying him as a non-aggressive recluse who let others fight his battles, Darwin remained all his life a man nearly obsessed with the ideas of competition, struggle and conquest – with all forms of human contact as confrontation. 
Although there is nothing incorrect in the details presented in the paragraph above, the fact that Darwinism used a particularly ecological view of adaptation and Haeckel's use and definitions of the term were steeped in Darwinism should not be ignored. According to ecologist and historian Robert P. McIntosh, "the relationship of ecology to Darwinian evolution is explicit in the title of the work in which ecology first appeared." A more elaborate definition by Haeckel in 1870 is translated on the frontispiece of the influential ecology text known as 'Great Apes' as "… ecology is the study of all those complex interrelations referred to by Darwin as the conditions of the struggle for existence." The issues brought up in the above paragraph are covered in more detail in the Early Beginnings section underneath that of History in the Wikipedia page on Ecology. Early 20th century ~ Expansion of ecological thought The biosphere – Eduard Suess and Vladimir Vernadsky By the 19th century, ecology blossomed due to new discoveries in chemistry by Lavoisier and de Saussure, notably the nitrogen cycle. After observing the fact that life developed only within strict limits of each compartment that makes up the atmosphere, hydrosphere, and lithosphere, the Austrian geologist Eduard Suess proposed the term biosphere in 1875. Suess proposed the name biosphere for the conditions promoting life, such as those found on Earth, which includes flora, fauna, minerals, matter cycles, et cetera. In the 1920s Vladimir I. Vernadsky, a Russian geologist who had defected to France, detailed the idea of the biosphere in his work "The biosphere" (1926), and described the fundamental principles of the biogeochemical cycles. He thus redefined the biosphere as the sum of all ecosystems. First ecological damages were reported in the 18th century, as the multiplication of colonies caused deforestation. Since the 19th century, with the Industrial Revolution, more and more pressing concerns have grown about the impact of human activity on the environment. The term ecologist has been in use since the end of the 19th century. The ecosystem: Arthur Tansley Over the 19th century, botanical geography and zoogeography combined to form the basis of biogeography. This science, which deals with habitats of species, seeks to explain the reasons for the presence of certain species in a given location. It was in 1935 that Arthur Tansley, the British ecologist, coined the term ecosystem, the interactive system established between the biocoenosis (the group of living creatures), and their biotope, the environment in which they live. Ecology thus became the science of ecosystems. Tansley's concept of the ecosystem was adopted by the energetic and influential biology educator Eugene Odum. Along with his brother, Howard T. Odum, Eugene P. Odum wrote a textbook which (starting in 1953) educated more than one generation of biologists and ecologists in North America. Ecological succession – Henry Chandler Cowles At the turn of the 20th century, Henry Chandler Cowles was one of the founders of the emerging study of "dynamic ecology", through his study of ecological succession at the Indiana Dunes, sand dunes at the southern end of Lake Michigan. Here Cowles found evidence of ecological succession in the vegetation and the soil with relation to age. Cowles was very much aware of the roots of the concept and of his (primordial) predecessors. 
Thus, he attributes the first use of the word to the French naturalist Adolphe Dureau de la Malle, who had described the vegetation development after forest clear-felling, and the first comprehensive study of successional processes to the Finnish botanist Ragnar Hult (1881). Animal Ecology – Charles Elton The 20th-century English zoologist and ecologist Charles Elton is commonly credited as "the father of animal ecology". Elton, influenced by Victor Shelford's Animal Communities in Temperate America, began his research on animal ecology as an assistant to his colleague, Julian Huxley, on an ecological survey of the fauna whilst taking part in the 1921 Oxford University Spitsbergen expedition. Elton's most famous studies were conducted during his time as a biological consultant to the Hudson's Bay Company to help understand the fluctuations in the company's fur harvests. Elton studied the population fluctuations and dynamics of snowshoe hare, Canadian lynx, and other mammals of the region. Elton is also considered the first to coin the terms food chain and food cycle in his famous book Animal Ecology. Elton is also attributed with contributing to the disciplines of invasion ecology, community ecology, and wildlife disease ecology. G. Evelyn Hutchinson – father of modern ecology George "G" Evelyn Hutchinson was a 20th-century ecologist who is commonly recognized as the "Father of Modern Ecology". Hutchinson was of English descent but spent most of his professional career studying in New Haven, Connecticut, at Yale University. Throughout his career, over six decades, Hutchinson contributed to the sciences of limnology, entomology, genetics, biogeochemistry, the mathematical theory of population dynamics and many more. Hutchinson is also attributed as being the first to infuse science with theory within the discipline of ecology. Hutchinson was also one of the first credited with combining ecology with mathematics. Another major contribution of Hutchinson was his development of the current definition of an organism's "niche" – as he recognized the role of an organism within its community. Finally, along with his great impact within the discipline of ecology throughout his professional years, Hutchinson also left a lasting impact in ecology through the many students he inspired. Foremost among them were Robert H. MacArthur, who received his PhD under Hutchinson, and Raymond L. Lindeman, who finished his PhD dissertation during a fellowship under him. MacArthur became the leader of theoretical ecology and, with E. O. Wilson, developed island biogeography theory. Raymond Lindeman was instrumental in the development of modern ecosystem science. 20th century transition to modern ecology "What is ecology?" was a question that was asked in almost every decade of the 20th century. Unfortunately, the answer most often was that it was mainly a point of view to be used in other areas of biology and also "soft", like sociology, for example, rather than "hard", like physics. Although autecology (essentially physiological ecology) could progress through the typical scientific method of observation and hypothesis testing, synecology (the study of animal and plant communities) and genecology (evolutionary ecology), for which experimentation was as limited as it was for, say, geology, continued with much the same inductive gathering of data as did natural history studies. Most often, patterns, present and historical, were used to develop theories having explanatory power, but which had little actual data in support. 
Darwin's theory, as much as it is a foundation of modern biology, is a prime example. G. E. Hutchinson, identified above as the "father of modern ecology", through his influence raised the status of much of ecology to that of a rigorous science. By shepherding Raymond Lindeman's work on the trophic-dynamic concept of ecosystems through the publication process after Lindeman's untimely death, Hutchinson set the groundwork for what became modern ecosystem science. With his two famous papers in the late 1950s, "Closing remarks" and "Homage to Santa Rosalia", as they are now known, Hutchinson launched the theoretical ecology which Robert MacArthur championed. Ecosystem science became rapidly and sensibly associated with the "Big Science"—and obviously "hard" science—of atomic testing and nuclear energy. It was brought in by Stanley Auerbach, who established the Environmental Sciences Division at Oak Ridge National Laboratory, to trace the routes of radionuclides through the environment, and by the Odum brothers, Howard and Eugene, much of whose early work was supported by the Atomic Energy Commission. Eugene Odum's textbook, Fundamentals of Ecology, has become something of a bible today. When, in the 1960s, the International Biological Program (IBP) took on an ecosystem character, ecology, with its foundation in systems science, forever entered the realm of Big Science, with projects having large scopes and big budgets. Just two years after the publication of Silent Spring in 1962, ecosystem ecology was trumpeted as THE science of the environment in a series of articles in a special edition of BioScience. Theoretical ecology took a different path to establish its legitimacy, especially at eastern universities and certain West Coast campuses. It was the path of Robert MacArthur, who used simple mathematics in his "Three Influential Papers", also published in the late 1950s, on population and community ecology. Although the simple equations of theoretical ecology at the time were unsupported by data, they were still deemed to be "heuristic". They were resisted by a number of traditional ecologists, however, whose complaints of "intellectual censorship" of studies that did not fit into the hypothetico-deductive structure of the new ecology might be seen as evidence of the stature to which the Hutchinson-MacArthur approach had risen by the 1970s. MacArthur's untimely death in 1972 was also about the time that postmodernism and the "Science Wars" came to ecology. The names of Kuhn, Wittgenstein, Popper, Lakatos, and Feyerabend began to enter into arguments in the ecological literature. Darwin's theory of adaptation through natural selection was accused of being tautological. Questions were raised over whether ecosystems were cybernetic and whether ecosystem theory was of any use in application to environmental management. Most vituperative of all was the debate that arose over MacArthur-style ecology. Matters came to a head after a symposium organized by acolytes of MacArthur in homage to him and a second symposium organized by what was disparagingly called the "Tallahassee Mafia" at Wakulla Springs in Florida. The homage volume, published in 1975, had an extensive chapter written by Jared Diamond, who at the time taught kidney physiology at the UCLA School of Medicine, that presented a series of "assembly rules" to explain the patterns of bird species found on island archipelagos, such as Darwin's famous finches on the Galapagos Islands. 
The Wakulla conference was organized by a group of dissenters led by Daniel Simberloff and Donald Strong, Jr., who were described by David Quammen in his book as arguing that those patterns "might be nothing more than the faces we see in the moon, in clouds, in Rorschach inkblots". Their point was that Diamond's work (and that of others) did not fall within the criterion of falsifiability, laid down for science by the philosopher Karl Popper. A reviewer of the exchanges between the two camps in an issue of Synthese found "images of hand-to-hand combat or a bar-room brawl" coming to mind. The Florida State group suggested a method that they developed, that of "null" models, to be used much in the way that all scientists use null hypotheses to verify that their results might not have been obtained merely by chance. It was most sharply rebuked by Diamond and Michael Gilpin in the symposium volume and Jonathan Roughgarden in the American Naturalist. There was a parallel controversy, adding heat to the above, that became known in conservation circles as SLOSS (Single Large or Several Small reserves). Diamond had also proposed that, according to the theory of island biogeography developed by MacArthur and E. O. Wilson, nature preserves should be designed to be as large as possible and maintained as a unified entity. Even cutting a road through a natural area, in Diamond's interpretation of MacArthur and Wilson's theory, would lead to the loss of species, due to the smaller areas of the remaining pieces. Simberloff, meanwhile, who had defaunated mangrove islands off the Florida coast in his award-winning experimental study under E. O. Wilson and tested the fit of the species-area curve of island biogeography theory to the fauna that returned, had gathered data that showed quite the opposite: that many smaller fragments together sometimes held more species than the original whole. It led to considerable vituperation on the pages of Science. In the end, in a somewhat Kuhnian fashion, the arguments probably will finally be settled (or not) by the passing of the participants. However, ecology continues apace as a rigorous, even experimental science. Null models, admittedly difficult to perfect, are in use, and, although a leading conservation scientist recently lauded island biogeography theory as "one of the most elegant and important theories in contemporary ecology, towering above thousands of lesser ideas and concept", he nevertheless finds that "the species-area curve is a blunt tool in many contexts" and "now seems simplistic to the point of being cartoonish". Timeline of ecologists Ecological Influence on the Social Sciences and Humanities Human ecology Human ecology began in the 1920s, through the study of changes in vegetation succession in the city of Chicago. It became a distinct field of study in the 1970s. This marked the first recognition that humans, who had colonized all of the Earth's continents, were a major ecological factor. Humans greatly modify the environment through the development of the habitat (in particular urban planning), by intensive exploitation activities such as logging and fishing, and as side effects of agriculture, mining, and industry. Besides ecology and biology, this discipline involved many other natural and social sciences, such as anthropology and ethnology, economics, demography, architecture and urban planning, medicine and psychology, and many more. 
The development of human ecology led to the increasing role of ecological science in the design and management of cities. In recent years human ecology has been a topic that has interested organizational researchers. Hannan and Freeman (Population Ecology of Organizations (1977), American Journal of Sociology) argue that organizations do not only adapt to an environment. Instead it is also the environment that selects or rejects populations of organizations. In any given environment (in equilibrium) there will only be one form of organization (isomorphism). Organizational ecology has been a prominent theory in accounting for diversities of organizations and their changing composition over time. James Lovelock and the Gaia hypothesis The Gaia theory, proposed by James Lovelock, in his work Gaia: A New Look at Life on Earth, advanced the view that the Earth should be regarded as a single living macro-organism. In particular, it argued that the ensemble of living organisms has jointly evolved an ability to control the global environment – by influencing major physical parameters as the composition of the atmosphere, the evaporation rate, the chemistry of soils and oceans – so as to maintain conditions favorable to life. The idea has been supported by Lynn Margulis who extended her endosymbiotic theory which suggests that cell organelles originated from free living organisms to the idea that individual organisms of many species could be considered as symbionts within a larger metaphorical "super-organism". This vision was largely a sign of the times, in particular the growing perception after the Second World War that human activities such as nuclear energy, industrialization, pollution, and overexploitation of natural resources, fueled by exponential population growth, were threatening to create catastrophes on a planetary scale, and has influenced many in the environmental movement since then. History and relationship between ecology and conservation and environmental movements Environmentalists and other conservationists have used ecology and other sciences (e.g., climatology) to support their advocacy positions. Environmentalist views are often controversial for political or economic reasons. As a result, some scientific work in ecology directly influences policy and political debate; these in turn often direct ecological research and inquiry. The history of ecology, however, should not be conflated with that of environmental thought. Ecology as a modern science traces only from Darwin's publication of Origin of Species and Haeckel's subsequent naming of the science needed to study Darwin's theory. Awareness of humankind's effect on its environment has been traced to Gilbert White in 18th-century Selborne, England. Awareness of nature and its interactions can be traced back even farther in time. Ecology before Darwin, however, is analogous to medicine prior to Pasteur's discovery of the infectious nature of disease. The history is there, but it is only partly relevant. Neither Darwin nor Haeckel, it is true, did self-avowed ecological studies. The same can be said for researchers in a number of fields who contributed to ecological thought well into the 1940s without avowedly being ecologists. Raymond Pearl's population studies are a case in point. Ecology in subject matter and techniques grew out of studies by botanists and plant geographers in the late 19th and early 20th centuries that paradoxically lacked Darwinian evolutionary perspectives. 
Until Mendel's studies with peas were rediscovered and melded into the Modern Synthesis, Darwinism suffered in credibility. Many early plant ecologists had a Lamarckian view of inheritance, as did Darwin, at times. Ecological studies of animals and plants, preferably live and in the field, continued apace however. Conservation and environmental movements – 20th Century When the Ecological Society of America (ESA) was chartered in 1915, it already had a conservation perspective. Victor E. Shelford, a leader in the society's formation, had as one of its goals the preservation of the natural areas that were then the objects of study by ecologists, but were in danger of being degraded by human incursion. Human ecology had also been a visible part of the ESA at its inception, as evident by publications such as: "The Control of Pneumonia and Influenza by the Weather," "An Overlook of the Relations of Dust to Humanity," "The Ecological Relations of the Polar Eskimo," and "City Street Dust and Infectious Diseases," in early pages of Ecology and Ecological Monographs. The ESA's second president, Ellsworth Huntington, was a human ecologist. Stephen Forbes, another early president, called for "humanizing" ecology in 1921, since man was clearly the dominant species on the Earth. This auspicious start actually was the first of a series of fitful progressions and reversions by the new science with regard to conservation. Human ecology necessarily focused on man-influenced environments and their practical problems. Ecologists in general, however, were trying to establish ecology as a basic science, one with enough prestige to make inroads into Ivy League faculties. Disturbed environments, it was thought, would not reveal nature's secrets. Interest in the environment created by the American Dust Bowl produced a flurry of calls in 1935 for ecology to take a look at practical issues. Pioneering ecologist C. C. Adams wanted to return human ecology to the science. Frederic E. Clements, the dominant plant ecologist of the day, reviewed land use issues leading to the Dust Bowl in terms of his ideas on plant succession and climax. Paul Sears reached a wide audience with his book, Deserts on the March. World War II, perhaps, caused the issue to be put aside. The tension between pure ecology, seeking to understand and explain, and applied ecology, seeking to describe and repair, came to a head after World War II. Adams again tried to push the ESA into applied areas by having it raise an endowment to promote ecology. He predicted that "a great expansion of ecology" was imminent "because of its integrating tendency." Ecologists, however, were sensitive to the perception that ecology was still not considered a rigorous, quantitative science. Those who pushed for applied studies and active involvement in conservation were once more discreetly rebuffed. Human ecology became subsumed by sociology. It was sociologist Lewis Mumford who brought the ideas of George Perkins Marsh to modern attention in the 1955 conference, "Man’s Role in Changing the Face of the Earth." That prestigious conclave was dominated by social scientists. At it, ecology was accused of "lacking experimental methods" and neglecting "man as an ecological agent." One participant dismissed ecology as "archaic and sterile." Within the ESA, a frustrated Shelford started the Ecologists' Union when his Committee on Preservation of Natural Conditions ceased to function due to the political infighting over the ESA stance on conservation. 
In 1950, the fledgling organization was renamed and incorporated as the Nature Conservancy, a name borrowed from the British government agency for the same purpose. Two events, however, brought ecology's course back to applied problems. One was the Manhattan Project. It had become the Atomic Energy Commission after the war. It is now the Department of Energy (DOE). Its ample budget included studies of the impacts of nuclear weapon use and production. That brought ecology to the issue, and it made a "Big Science" of it. Ecosystem science, both basic and applied, began to compete with theoretical ecology (then called evolutionary ecology and also mathematical ecology). Eugene Odum, who published a very popular ecology textbook in 1953, became the champion of the ecosystem. In his publications, Odum called for ecology to have an ecosystem and applied focus. The second event was the publication of Silent Spring. Rachel Carson's book brought ecology as a word and concept to the public. Her influence was instant. A study committee, prodded by the publication of the book, reported to the ESA that their science was not ready to take on the responsibility being given to it. Carson's concept of ecology was very much that of Gene Odum. As a result, ecosystem science dominated the International Biological Program of the 1960s and 1970s, bringing both money and prestige to ecology. Silent Spring was also the impetus for the environmental protection programs that were started in the Kennedy and Johnson administrations and passed into law just before the first Earth Day. Ecologists' input was welcomed. Former ESA President Stanley Cain, for example, was appointed an Assistant Secretary in the Department of the Interior. The environmental assessment requirement of the 1969 National Environmental Policy Act (NEPA) "legitimized ecology," in the words of one environmental lawyer. An ESA President called it "an ecological 'Magna Carta.'" A prominent Canadian ecologist declared it a "boondoggle." NEPA and similar state statutes, if nothing else, provided much employment for ecologists. Therein was the issue. Neither ecology nor ecologists were ready for the task. Not enough ecologists were available to work on impact assessment, outside of the DOE laboratories, leading to the rise of "instant ecologists," having dubious credentials and capabilities. Calls began to arise for the professionalization of ecology. Maverick scientist Frank Egler, in particular, devoted his sharp prose to the task. Again, a schism arose between basic and applied scientists in the ESA, this time exacerbated by the question of environmental advocacy. The controversy, whose history has yet to receive adequate treatment, lasted through the 1970s and 1980s, ending with a voluntary certification process by the ESA, along with a lobbying arm in Washington. Post-Earth Day, besides questions of advocacy and professionalism, ecology also had to deal with questions having to do with its basic principles. Many of the theoretical principles and methods of both ecosystem science and evolutionary ecology began to show little value in environmental analysis and assessment. Ecologists, in general, started to question the methods and logic of their science under the pressure of its new notoriety. Meanwhile, personnel with government agencies and environmental advocacy groups were accused of religiously applying dubious principles in their conservation work. Management of endangered Spotted Owl populations brought the controversy to a head. 
Conservation for ecologists created travails paralleling those nuclear power gave former Manhattan Project scientists. In each case, science had to be reconciled with individual politics, religious beliefs, and worldviews, a difficult process. Some ecologists managed to keep their science separate from their advocacy; others unrepentantly became avowed environmentalists. Roosevelt & American conservation Theodore Roosevelt was interested in nature from a young age. He carried his passion for nature into his political policies. Roosevelt felt it was necessary to preserve the resources of the nation and its environment. In 1902 he created the federal Reclamation Service, which reclaimed land for agriculture. He also created the Bureau of Forestry. This organization, headed by Gifford Pinchot, was formed to manage and maintain the nation's timberlands. Roosevelt signed the Act for the Preservation of American Antiquities in 1906. This act allowed him to "declare by public proclamation historic landmarks, historic and prehistoric structures, and other objects of historic and scientific interest that are situated upon lands owned or controlled by the Government of the United States to be national monuments." Under this act he created up to 18 national monuments. During his presidency, Roosevelt established 51 Federal Bird Reservations, 4 National Game Preserves, 150 National Forests, and 5 National Parks. Overall he protected over 200 million acres of land. Ecology and global policy Ecology became a central part of world politics as early as 1971, when UNESCO launched a research program called Man and the Biosphere, with the objective of increasing knowledge about the mutual relationship between humans and nature. A few years later it defined the concept of Biosphere Reserve. In 1972, the United Nations held the first international Conference on the Human Environment in Stockholm, prepared by René Dubos and other experts. This conference was the origin of the phrase "Think Globally, Act Locally". The next major events in ecology were the development of the concept of the biosphere and the appearance of the term "biological diversity"—or now more commonly biodiversity—in the 1980s. These terms were developed during the Earth Summit in Rio de Janeiro in 1992, where the concept of the biosphere was recognized by the major international organizations, and risks associated with reductions in biodiversity were publicly acknowledged. Then, in 1997, the dangers the biosphere was facing were recognized all over the world at the conference leading to the Kyoto Protocol. In particular, this conference highlighted the increasing dangers of the greenhouse effect – related to the increasing concentration of greenhouse gases in the atmosphere, leading to global changes in climate. In Kyoto, most of the world's nations recognized the importance of looking at ecology from a global point of view, on a worldwide scale, and of taking into account the impact of humans on the Earth's environment. See also Humboldtian science References Further reading Egerton, F. N. (2001-2016). A History of the Ecological Sciences. Bulletin of the Ecological Society of America, 57 parts. von Humboldt, A. (1805). Essai sur la géographie des plantes, accompagné d'un tableau physique des régions équinoxiales, fondé sur les mesures exécutées, depuis le dixième degré de latitude boréale jusqu'au dixième degré de latitude australe, pendant les années 1799, 1800, 1801, 1802, et 1803 par A. De Humboldt et A. Bonpland. 
Paris: Chez Levrault, Schoelle et Cie. Sherborn Fund Facsimile No.1. von Humboldt, A. (1805). Voyage de Humboldt et Bonpland. Voyage aux régions équinoxiales du nouveau continent. 5e partie. "Essai sur la géographie des plantes". Paris. Facs intégral de l'édition Paris 1805–1834 par Amsterdam: Theatrum orbis terrarum Ltd., 1973. von Humboldt, A. (1807). Essai sur la géographie des plantes. Facs. ed. London 1959. His essay "On Isothermal Lines" was published serially in English translation in the Edinburgh Philosophical Journal from 1820 to 1822. Ecology Ecology
History of ecology
[ "Biology" ]
7,682
[ "History of biology by subdiscipline", "Ecology" ]
1,607,648
https://en.wikipedia.org/wiki/Postulates%20of%20special%20relativity
Albert Einstein derived the theory of special relativity in 1905, from principles now called the postulates of special relativity. Einstein's formulation is said to only require two postulates, though his derivation implies a few more assumptions. The idea that special relativity depended only on two postulates, both of which seemed to follow from the theory and experiment of the day, was one of the most compelling arguments for the correctness of the theory (Einstein 1912: "This theory is correct to the extent to which the two principles upon which it is based are correct. Since these seem to be correct to a great extent, ...") Postulates of special relativity 1. First postulate (principle of relativity) The laws of physics take the same form in all inertial frames of reference. 2. Second postulate (invariance of c) As measured in any inertial frame of reference, light is always propagated in empty space with a definite velocity c that is independent of the state of motion of the emitting body. Or: the speed of light in free space has the same value c in all inertial frames of reference. The two-postulate basis for special relativity is the one historically used by Einstein, and it is sometimes the starting point today. As Einstein himself later acknowledged, the derivation of the Lorentz transformation tacitly makes use of some additional assumptions, including spatial homogeneity, isotropy, and memorylessness. Hermann Minkowski also implicitly used both postulates when he introduced the Minkowski space formulation, even though he showed that c can be seen as a space-time constant, and the identification with the speed of light is derived from optics. Alternative derivations of special relativity Historically, Hendrik Lorentz and Henri Poincaré (1892–1905) derived the Lorentz transformation from Maxwell's equations, which served to explain the negative result of all aether drift measurements. By that, the luminiferous aether becomes undetectable in agreement with what Poincaré called the principle of relativity (see History of Lorentz transformations and Lorentz ether theory). A more modern example of deriving the Lorentz transformation from electrodynamics (without using the historical aether concept at all) was given by Richard Feynman. George Francis FitzGerald had already made an argument similar to Einstein's in 1889, in response to the Michelson-Morley experiment seeming to show both postulates to be true. He wrote that a length contraction is "almost the only hypothesis that can reconcile" the apparent contradictions. Lorentz independently came to similar conclusions, and later wrote "the chief difference being that Einstein simply postulates what we have deduced". Following these derivations, many alternative derivations have been proposed, based on various sets of assumptions. It has often been argued (such as by Vladimir Ignatowski in 1910, or Philipp Frank and Hermann Rothe in 1911, and many others in subsequent years) that a formula equivalent to the Lorentz transformation, up to a non-negative free parameter, follows from just the relativity postulate itself, without first postulating the universal light speed. These formulations rely on the aforementioned various assumptions such as isotropy. The numerical value of the parameter in these transformations can then be determined by experiment, just as the numerical values of the parameter pair c and the vacuum permittivity are left to be determined by experiment even when using Einstein's original postulates. 
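The free parameter in such derivations can be made explicit. As a schematic sketch in a common modern notation (not any one author's original presentation), and under the assumptions of homogeneity, isotropy and group structure mentioned above, a boost along the x-axis takes the form

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\,(t - K v x), \qquad \gamma = \frac{1}{\sqrt{1 - K v^2}},$$

where $K$ is the undetermined non-negative constant with the dimensions of an inverse speed squared. Choosing $K = 0$ reproduces the Galilean transformation, while choosing $K = 1/c^2$ yields the Lorentz transformation.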
Experiment rules out the validity of the Galilean transformations. When the numerical values in both Einstein's and other approaches have been found then these different approaches result in the same theory. Insufficiency of the two standard postulates Einstein's 1905 derivation is not complete. A break in Einstein's logic occurs where, after having established "the law of the constancy of the speed of light" for empty space, he invokes the law in situations where space is no longer empty. For the derivation to apply to physical objects requires an additional postulate or "bridging hypothesis", that the geometry derived for empty space also applies when a space is populated. This would be equivalent to stating that we know that the introduction of matter into a region, and its relative motion, have no effect on lightbeam geometry. Such a statement would be problematic, as Einstein rejected the notion that a process such as light-propagation could be immune to other factors (1914: "There can be no doubt that this principle is of far-reaching significance; and yet, I cannot believe in its exact validity. It seems to me unbelievable that the course of any process (e.g., that of the propagation of light in a vacuum) could be conceived of as independent of all other events in the world.") Including this "bridge" as an explicit third postulate might also have damaged the theory's credibility, as refractive index and the Fizeau effect would have suggested that the presence and behaviour of matter does seem to influence light-propagation, contra the theory. If this bridging hypothesis had been stated as a third postulate, it could have been claimed that the third postulate (and therefore the theory) were falsified by the experimental evidence. The 1905 system as "null theory" Without a "bridging hypothesis" as a third postulate, the 1905 derivation is open to the criticism that its derived relationships may only apply in vacuo, that is, in the absence of matter. The controversial suggestion that the 1905 theory, derived by assuming empty space, might only apply to empty space, appears in Edwin F. Taylor and John Archibald Wheeler's book "Spacetime Physics" (Box 3-1: "The Principle of Relativity Rests on Emptiness"). A similar suggestion that the reduction of GR geometry to SR's flat spacetime over small regions may be "unphysical" (because flat pointlike regions cannot contain matter capable of acting as physical observers) was acknowledged but rejected by Einstein in 1914 ("The equations of the new theory of relativity reduce to those of the original theory in the special case where the gμν can be considered constant ... the sole objection that can be raised against the theory is that the equations we have set up might, perhaps, be void of any physical content. But no one is likely to think in earnest that this objection is justified in the present case"). Einstein revisited the problem in 1919 ("It is by no means settled a priori that a limiting transition of this kind has any possible meaning. For if gravitational fields do play an essential part in the structure of the particles of matter, the transition to the limiting case of constant gμν would, for them, lose its justification, for indeed, with constant gμν there could not be any particles of matter.") A further argument for unphysicality can be gleaned from Einstein's solution to the "hole problem" under general relativity, in which Einstein rejects the physicality of coordinate-system relationships in truly empty space. 
Alternative relativistic models Einstein's special theory is not the only theory that combines a form of light speed constancy with the relativity principle. A theory along the lines of that proposed by Heinrich Hertz (in 1890) allows for light to be fully dragged by all objects, giving local c-constancy for all physical observers. The logical possibility of a Hertzian theory shows that Einstein's two standard postulates (without the bridging hypothesis) are not sufficient to allow us to arrive uniquely at the solution of special relativity (although special relativity might be considered the most minimalist solution). Einstein agreed that the Hertz theory was logically consistent ("It is on the basis of this hypothesis that Hertz developed an electrodynamics of moving bodies that is free of contradictions."), but dismissed it on the grounds of a poor agreement with the Fizeau result, leaving special relativity as the only remaining option. Given that SR was similarly unable to reproduce the Fizeau result without introducing additional auxiliary rules (to address the different behaviour of light in a particulate medium), this was perhaps not a fair comparison. Mathematical formulation of the postulates In the rigorous mathematical formulation of special relativity, we suppose that the universe exists on a four-dimensional spacetime M. Individual points in spacetime are known as events; physical objects in spacetime are described by worldlines (if the object is a point particle) or worldsheets (if the object is larger than a point). The worldline or worldsheet only describes the motion of the object; the object may also have several other physical characteristics such as energy-momentum, mass, charge, etc. In addition to events and physical objects, there is a class of inertial frames of reference. Each inertial frame of reference provides a coordinate system for events in the spacetime M. Furthermore, this frame of reference also gives coordinates to all other physical characteristics of objects in the spacetime; for instance, it will provide coordinates for the momentum and energy of an object, coordinates for an electromagnetic field, and so forth. We assume that given any two inertial frames of reference, there exists a coordinate transformation that converts the coordinates from one frame of reference to the coordinates in another frame of reference. This transformation not only provides a conversion for spacetime coordinates (x, y, z, t), but will also provide a conversion for all other physical coordinates, such as a conversion law for momentum and energy (E, p_x, p_y, p_z), etc. (In practice, these conversion laws can be efficiently handled using the mathematics of tensors.) We also assume that the universe obeys a number of physical laws. Mathematically, each physical law can be expressed with respect to the coordinates given by an inertial frame of reference by a mathematical equation (for instance, a differential equation) which relates the various coordinates of the various objects in the spacetime. A typical example is Maxwell's equations. Another is Newton's first law. 1. First Postulate (Principle of relativity) Under transitions between inertial reference frames, the equations of all fundamental laws of physics stay form-invariant, while all the numerical constants entering these equations preserve their values.
Thus, if a fundamental physical law is expressed with a mathematical equation in one inertial frame, it must be expressed by an identical equation in any other inertial frame, provided both frames are parameterised with charts of the same type. (The caveat on charts is relaxed if we employ connections to write the law in a covariant form.) 2. Second Postulate (Invariance of c) There exists an absolute constant 0 < c < ∞ with the following property: if A, B are two events which have coordinates (t_A, x_A, y_A, z_A) and (t_B, x_B, y_B, z_B) in one inertial frame F, and have coordinates (t'_A, x'_A, y'_A, z'_A) and (t'_B, x'_B, y'_B, z'_B) in another inertial frame F', then (x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2 = c^2 (t_A - t_B)^2 if and only if (x'_A - x'_B)^2 + (y'_A - y'_B)^2 + (z'_A - z'_B)^2 = c^2 (t'_A - t'_B)^2. Informally, the Second Postulate asserts that objects travelling at speed c in one reference frame will necessarily travel at speed c in all reference frames. This postulate is a subset of the postulates that underlie Maxwell's equations in the interpretation given to them in the context of special relativity. However, Maxwell's equations rely on several other postulates, some of which are now known to be false (e.g., Maxwell's equations cannot account for the quantum attributes of electromagnetic radiation). The second postulate can be used to imply a stronger version of itself, namely that the spacetime interval is invariant under changes of inertial reference frame. In the above notation, this means that c^2 (t_A - t_B)^2 - (x_A - x_B)^2 - (y_A - y_B)^2 - (z_A - z_B)^2 = c^2 (t'_A - t'_B)^2 - (x'_A - x'_B)^2 - (y'_A - y'_B)^2 - (z'_A - z'_B)^2 for any two events A, B. This can in turn be used to deduce the transformation laws between reference frames; see Lorentz transformation. The postulates of special relativity can be expressed very succinctly using the mathematical language of pseudo-Riemannian manifolds. The second postulate is then an assertion that the four-dimensional spacetime M is a pseudo-Riemannian manifold equipped with a metric g of signature (1,3), which is given by the Minkowski metric when measured in each inertial reference frame. This metric is viewed as one of the physical quantities of the theory; thus it transforms in a certain manner when the frame of reference is changed, and it can be legitimately used in describing the laws of physics. The first postulate is an assertion that the laws of physics are invariant when represented in any frame of reference for which g is given by the Minkowski metric. One advantage of this formulation is that it is now easy to compare special relativity with general relativity, in which the same two postulates hold but the assumption that the metric is required to be Minkowski is dropped. The theory of Galilean relativity is the limiting case of special relativity in the limit c → ∞ (which is sometimes referred to as the non-relativistic limit). In this theory, the first postulate remains unchanged, but the second postulate is modified to: If A, B are two events which have coordinates (t_A, x_A, y_A, z_A) and (t_B, x_B, y_B, z_B) in one inertial frame F, and have coordinates (t'_A, x'_A, y'_A, z'_A) and (t'_B, x'_B, y'_B, z'_B) in another inertial frame F', then t_A - t_B = t'_A - t'_B. Furthermore, if t_A = t_B, then (x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2 = (x'_A - x'_B)^2 + (y'_A - y'_B)^2 + (z'_A - z'_B)^2. The physical theory given by classical mechanics, and Newtonian gravity, is consistent with Galilean relativity, but not special relativity. Conversely, Maxwell's equations are not consistent with Galilean relativity unless one postulates the existence of a physical aether. In a number of cases, the laws of physics in special relativity (such as the equation E = mc^2) can be deduced by combining the postulates of special relativity with the hypothesis that the laws of special relativity approach the laws of classical mechanics in the non-relativistic limit. Notes
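For concreteness, the transformation law that these postulates single out for two frames in relative motion with speed v along a shared x-axis is the standard Lorentz boost (a common textbook form, added here as an illustration):

\[ t' = \gamma \left( t - \frac{v x}{c^2} \right), \qquad x' = \gamma \, (x - v t), \qquad y' = y, \qquad z' = z, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}. \]

Substituting these expressions into c^2 t'^2 - x'^2 - y'^2 - z'^2 reproduces c^2 t^2 - x^2 - y^2 - z^2, which is a quick way to check that the boost does preserve the spacetime interval stated above.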
Postulates of special relativity
[ "Physics" ]
2,783
[ "Special relativity", "Theory of relativity" ]
1,607,826
https://en.wikipedia.org/wiki/Jost%20B%C3%BCrgi
Jost Bürgi (also Joost, Jobst; Latinized surname Burgius or Byrgius; 28 February 1552 – 31 January 1632), active primarily at the courts in Kassel and Prague, was a Swiss clockmaker, mathematician, and writer. Life Bürgi was born in 1552 in Lichtensteig, Toggenburg, at the time a subject territory of the Abbey of St. Gall (now part of the canton of St. Gallen, Switzerland). Not much is known about his life or education before his employment as astronomer and clockmaker at the court of William IV in Kassel in 1579; it has been theorized that he acquired his mathematical knowledge in Strasbourg, among others from the Swiss mathematician Conrad Dasypodius, but there are no facts to support this. Although an autodidact, he was already considered during his lifetime one of the most excellent mechanical engineers of his generation. His employer, William IV, Landgrave of Hesse-Kassel, in a letter to Tycho Brahe praised Bürgi as a "second Archimedes" (quasi indagine Archimedes alter est). Another autodidact, Nicolaus Reimers, in 1587 translated Copernicus' De Revolutionibus Orbium Coelestium into German for Bürgi. A copy of the translation survived in Graz; it is thus called the "Grazer Handschrift". In 1604, he entered the service of Emperor Rudolf II in Prague. Here, he befriended Johannes Kepler. Bürgi constructed a table of sines (Canon Sinuum), which was supposedly very accurate, but since the table itself is lost, it is difficult to be sure of its real accuracy (for instance, Valentinus Otho's Opus Palatinum had parts which were not as accurate as claimed). An introduction to some of Bürgi's methods survives in a copy by Kepler; it discusses the basics of Algebra (or Coss, as it was known at the time) and of decimal fractions. Some authors consider Bürgi as one of the inventors of logarithms. His legacy also includes the engineering achievement contained in his innovative mechanical astronomical models. During his years in Prague he worked closely with the astronomer Johannes Kepler at the court of Rudolf II. Bürgi as a clockmaker It is undocumented where he learned his clockmaking skills, but eventually he became the most innovative clock and scientific instrument maker of his time. Among his major horological inventions were the cross-beat escapement and the remontoire, two mechanisms which improved the accuracy of mechanical clocks of the time by orders of magnitude. This allowed clocks, for the first time, to be used as scientific instruments, with enough accuracy to time the passing of stars (and other heavenly bodies) in the crosshairs of telescopes and so begin accurately charting stellar positions. Working as an instrument maker for the court of William IV, Landgrave of Hesse-Kassel, in Kassel, he played a pivotal role in developing the first astronomical charts. He invented logarithms as a working tool for himself for his astronomical calculations, but as a "craftsman/scholar" rather than a "book scholar" he failed to publish his invention for a long time. In 1592, Rudolf II, Holy Roman Emperor in Prague, received from his uncle, the Landgrave of Hesse-Kassel, a Bürgi globe and insisted that Bürgi deliver it personally. From then on Bürgi commuted between Kassel and Prague, and finally entered the service of the emperor in 1604 to work for the imperial astronomer Johannes Kepler.
Works The most significant artifacts designed and built by Bürgi surviving in museums are: Several mechanized celestial globes, now located at the Musée des Arts et Métiers in Paris, the Swiss National Museum in Zürich, the Orangerie in Kassel (2 pcs., 1580–1595) and the Duchess Anna Amalia Library in Weimar; Several clocks at the Orangerie in Kassel, the Mathematisch-Physikalischer Salon in Dresden and the Kunsthistorisches Museum in Vienna, including one that incorporates a mechanised celestial globe made of quartz and one displaying planetary motion; Sextants made for Kepler at the National Technical Museum in Prague; A mechanical model of the irregularities of the motion of the Moon around the Earth at the Orangerie in Kassel; Mechanized armillary sphere in Uppsala, Sweden. Bürgi as a mathematician Bürgi's work on trigonometry By 1586, Bürgi was able to calculate sines at arbitrary precision, using several algorithms, one of which he called Kunstweg. He supposedly used these algorithms to calculate a «Canon Sinuum», a table of sines to 8 places in steps of 2 arc seconds. Nothing more is known about this table, and some authors have speculated that its range was only over 45 degrees. Such tables were extremely important for navigation at sea. Johannes Kepler called the Canon Sinuum the most precise known table of sines. Bürgi explained his algorithms in his work Fundamentum Astronomiae, which he presented to Emperor Rudolf II in 1592. Iterative table calculation through Bürgi's algorithm essentially works as follows: each cell sums the values of the two previous cells in the same column, the final cell's value is divided by two, and the next iteration starts; finally, the values of the last column are normalized (a sketch of one modern reconstruction of this procedure is given below, before the notes). Rather accurate approximations of the sines are obtained after a few iterations. Only recently did Folkerts et al. prove that this simple process does indeed converge to the true sines. Another of Bürgi's algorithms uses differences to build up a table, anticipating the famous Tables du cadastre. Bürgi's work on logarithms Bürgi constructed a table of progressions, now understood as a table of antilogarithms, independently of John Napier and through a method distinct from Napier's. Napier published his discovery in 1614, and this publication was widely disseminated in Europe by the time Bürgi published at the behest of Johannes Kepler. Bürgi may have constructed his table of progressions around 1600, but his work is not a theoretical basis for logarithms, although his table serves the same purpose as Napier's. One source claims that Bürgi did not develop a clear notion of a logarithmic function and can therefore not be viewed as an inventor of logarithms. Bürgi's method is different from that of Napier and was clearly invented independently. Kepler wrote about Bürgi's logarithms in the introduction to his Rudolphine Tables (1627): "... as aids to calculation Justus Byrgius was led to these very logarithms many years before Napier's system appeared; but being an indolent man, and very uncommunicative, instead of rearing up his child for the public benefit he deserted it at birth." Honors The lunar crater Byrgius is named in Bürgi's honor.
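A minimal Python sketch of one modern reconstruction of the Kunstweg, along the lines of Folkerts et al.: the starting column and the table layout are illustrative assumptions rather than Bürgi's own, but the arithmetic (running sums with the 90-degree entry halved, then a normalization) is the same, and a few iterations already reproduce the sines to several decimal places.

def kunstweg(n: int, iterations: int = 5) -> list[float]:
    """Approximate sin(k * 90/n degrees) for k = 1..n, starting from a crude guess."""
    x = [1.0] * n                      # arbitrary starting column
    for _ in range(iterations):
        # first pass: sums taken upward from the bottom, with the 90-degree value halved
        s = [0.0] * n
        s[n - 1] = x[n - 1] / 2
        for k in range(n - 2, -1, -1):
            s[k] = s[k + 1] + x[k]
        # second pass: running sums taken downward from the top
        t = [0.0] * n
        t[0] = s[0]
        for k in range(1, n):
            t[k] = t[k - 1] + s[k]
        x = t
    top = x[n - 1]                     # normalise so that the 90-degree entry equals 1
    return [v / top for v in x]

print(kunstweg(3))   # roughly [0.500, 0.866, 1.000], i.e. sin 30, sin 60, sin 90

For n = 3 the column converges within a few iterations to sin 30°, sin 60° and sin 90°; that this simple process converges in general is the result proved by Folkerts et al.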
Notes External links Bürgi, Jost from Oliver Knill History pages Bürgi's Progress Tabulen (1620): logarithmic tables without logarithms from LOCOMAT The Loria Collection of Mathematical Tables 1552 births 1632 deaths 16th-century Swiss mathematicians 16th-century Swiss writers 17th-century Swiss mathematicians 17th-century Swiss writers Astronomical instrument makers Swiss clockmakers
Jost Bürgi
[ "Astronomy" ]
1,550
[ "Astronomical instrument makers", "Astronomical instruments" ]
1,607,968
https://en.wikipedia.org/wiki/Castle%20thunder%20%28sound%20effect%29
Castle thunder is a sound effect that consists of the sound of a loud thunderclap during a rainstorm. It was originally recorded for the 1931 film Frankenstein, and has since been used in dozens of films, television programs, and commercials. History After its use in Frankenstein, the Castle Thunder was used in dozens of films from the 1930s through the 1980s, including Citizen Kane (1941), Bambi (1942), You Only Live Twice (1967), Young Frankenstein (1974), Star Wars (1977), Ghostbusters (1984), Back to the Future (1985), and Big Trouble in Little China (1986). Use of the effect in subsequent years has declined because the quality of the original analog recording does not sufficiently hold up in modern sound mixes. The effect appears in Disney productions (largely from the 1940s to 1980s), and Hanna-Barbera cartoons, including the original Scooby-Doo animated series. It can also be heard at the Haunted Mansion attraction at Disney theme parks. The sound can be found on a few sound effects libraries distributed by Sound Ideas (such as the Soundelux Master Collection, the Network Sound Effects Library, the 20th Century Fox Sound Effects Library and the Hanna-Barbera SoundFX Library). See also Wilhelm scream Howie scream Tarzan's jungle call Goofy holler References External links Common variants of the sound effect Video compilation of castle thunder in modern animation How the crash and roll of castle thunder matches the science of thunderstorms In-jokes Sound effects 1931 works Lightning
Castle thunder (sound effect)
[ "Physics" ]
310
[ "Physical phenomena", "Electrical phenomena", "Lightning" ]
1,608,081
https://en.wikipedia.org/wiki/Diamond%20simulant
A diamond simulant, diamond imitation or imitation diamond is an object or material with gemological characteristics similar to those of a diamond. Simulants are distinct from synthetic diamonds, which are actual diamonds exhibiting the same material properties as natural diamonds. Enhanced diamonds are also excluded from this definition. A diamond simulant may be artificial, natural, or in some cases a combination thereof. While their material properties depart markedly from those of diamond, simulants have certain desired characteristics—such as dispersion and hardness—which lend themselves to imitation. Trained gemologists with appropriate equipment are able to distinguish natural and synthetic diamonds from all diamond simulants, primarily by visual inspection. The most common diamond simulants are high-leaded glass (i.e., rhinestones) and cubic zirconia (CZ), both artificial materials. A number of other artificial materials, such as strontium titanate and synthetic rutile have been developed since the mid-1950s, but these are no longer in common use. Introduced at the end of the 20th century, the lab-grown product moissanite has gained popularity as an alternative to diamond. The high price of gem-grade diamonds, as well as significant ethical concerns of the diamond trade, have created a large demand for diamond simulants. Desired and differential properties In order to be considered for use as a diamond simulant, a material must possess certain diamond-like properties. The most advanced artificial simulants have properties which closely approach diamond, but all simulants have one or more features that clearly and (for those familiar with diamond) easily differentiate them from diamond. To a gemologist, the most important of differential properties are those that foster non-destructive testing; most of these are visual in nature. Non-destructive testing is preferred because most suspected diamonds are already cut into gemstones and set in jewelry, and if a destructive test (which mostly relies on the relative fragility and softness of non-diamonds) fails, it may damage the simulant—an unacceptable outcome for most jewelry owners, as even if a stone is not a diamond, it may still be of value. Following are some of the properties by which diamond and its simulants can be compared and contrasted. Durability and density The Mohs scale of mineral hardness is a non-linear scale of common minerals' resistances to scratching. Diamond is at the top of this scale (hardness 10), as it is one of the hardest naturally occurring materials known. (Some artificial substances, such as aggregated diamond nanorods, are harder.) Since a diamond is unlikely to encounter substances that can scratch it, other than another diamond, diamond gemstones are typically free of scratches. Diamond's hardness also is visually evident (under the microscope or loupe) by its highly lustrous facets (described as adamantine) which are perfectly flat, and by its crisp, sharp facet edges. For a diamond simulant to be effective, it must be very hard relative to most gems. Most simulants fall far short of diamond's hardness, so they can be separated from diamond by their external flaws and poor polish. In the recent past, the so-called "window pane test" was commonly thought to be an assured method of identifying diamond. It is a potentially destructive test wherein a suspect diamond gemstone is scraped against a pane of glass, with a positive result being a scratch on the glass and none on the gemstone. 
Hardness points and scratch plates made of corundum (hardness 9) are also used in place of glass. Hardness tests are inadvisable for three reasons: glass is fairly soft (typically 6 or below) and can be scratched by a large number of materials (including many simulants); diamond has four directions of perfect and easy cleavage (planes of structural weakness along which the diamond could split) which could be triggered by the testing process; and many diamond-like gemstones (including older simulants) are valuable in their own right. The specific gravity (SG) or density of a gem diamond is fairly constant at 3.52. Most simulants are far above or slightly below this value, which can make them easy to identify if unset. High-density liquids such as diiodomethane can be used for this purpose, but these liquids are all highly toxic and therefore are usually avoided. A more practical method is to compare the expected size and weight of a suspect diamond to its measured parameters: for example, a cubic zirconia (SG 5.6–6) will be about 1.7 times the expected weight of an equivalently sized diamond (a short worked example of this check appears below). Optics and color Diamonds are usually cut into brilliants to bring out their brilliance (the amount of light reflected back to the viewer) and fire (the degree to which colorful prismatic flashes are seen). Both properties are strongly affected by the cut of the stone, but they are a function of diamond's high refractive index (RI—the degree to which incident light is bent upon entering the stone) of 2.417 (as measured by sodium light, 589.3 nm) and high dispersion (the degree to which white light is split into its spectral colors as it passes through the stone) of 0.044, as measured between the Fraunhofer B and G lines. Thus, if a diamond simulant's RI and dispersion are too low, it will appear comparatively dull or "lifeless"; if the RI and dispersion are too high, the effect will be considered unreal or even tacky. Very few simulants have closely approximating RI and dispersion, and even the close simulants can be separated by an experienced observer. Direct measurements of RI and dispersion are impractical (a standard gemological refractometer has an upper limit of about RI 1.81), but several companies have devised reflectivity meters to gauge a material's RI indirectly by measuring how well it reflects an infrared beam. Perhaps equally important is optic character. Diamond and other cubic (and also amorphous) materials are isotropic, meaning that light entering a stone behaves the same way regardless of direction. Conversely, most minerals are anisotropic, which produces birefringence, or double refraction of light entering the material in all directions other than an optic axis (a direction of single refraction in a doubly refractive material). Under low magnification, this birefringence is usually detectable as a visual doubling of a cut gemstone's rear facets or internal flaws. An effective diamond simulant should therefore be isotropic. Under longwave (365 nm) ultraviolet light, diamond may fluoresce a blue, yellow, green, mauve, or red of varying intensity. The most common fluorescence is blue, and such stones may also phosphoresce yellow—this is thought to be a unique combination among gemstones. There is usually little if any response to shortwave ultraviolet, in contrast to many diamond simulants.
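As a rough illustration of the size-and-weight comparison described above, the following Python sketch estimates what a round brilliant of given millimetre dimensions should weigh if it were diamond, and flags stones whose measured weight sits closer to that of a denser simulant. The 0.0061 factor is a commonly quoted trade approximation for round brilliants, and the cubic zirconia density is taken as a midpoint of the range given here; both are assumptions for illustration, not exact values.

DIAMOND_SG = 3.52

def expected_diamond_carats(diameter_mm: float, depth_mm: float) -> float:
    """Approximate weight of a round brilliant-cut diamond of the given size."""
    return diameter_mm ** 2 * depth_mm * 0.0061

def likely_simulant(measured_carats: float, diameter_mm: float, depth_mm: float,
                    simulant_sg: float = 5.8) -> bool:
    """Flag a stone whose weight is closer to the simulant's density than to diamond's."""
    expected = expected_diamond_carats(diameter_mm, depth_mm)
    ratio = measured_carats / expected          # about 1.0 for diamond
    simulant_ratio = simulant_sg / DIAMOND_SG   # about 1.6-1.7 for cubic zirconia
    return abs(ratio - simulant_ratio) < abs(ratio - 1.0)

print(expected_diamond_carats(6.5, 4.0))   # about 1.03
print(likely_simulant(1.7, 6.5, 4.0))      # True

For a stone of 6.5 mm diameter and 4.0 mm depth, a diamond should weigh roughly one carat, so a measured weight near 1.7 ct points toward cubic zirconia rather than diamond.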
Similarly, because most diamond simulants are artificial, they tend to have uniform properties: in a multi-stone diamond ring, one would expect the individual diamonds to fluoresce differently (in different colors and intensities, with some likely to be inert). If all the stones fluoresce in an identical manner, they are unlikely to be mined diamonds (although this result can occur with synthetic diamonds). Most "colorless" diamonds are actually tinted yellow or brown to some degree, whereas some artificial simulants are completely colorless—the equivalent of a perfect "D" in diamond color terminology. This "too good to be true" factor is important to consider; colored diamond simulants meant to imitate fancy diamonds are more difficult to spot in this regard, but the simulants' colors rarely approximate. In most diamonds (even colorless ones) a characteristic absorption spectrum can be seen (by a direct-vision spectroscope), consisting of a fine line at 415 nm. The dopants used to impart color in artificial simulants may be detectable as a complex rare-earth absorption spectrum, which is never seen in diamond. Also present in most diamonds are certain internal and external flaws or inclusions, the most common of which are fractures and solid foreign crystals. Artificial simulants are usually internally flawless, and any flaws that are present are characteristic of the manufacturing process. The inclusions seen in natural simulants will often be unlike those ever seen in diamond, most notably liquid "feather" inclusions. The diamond cutting process will often leave portions of the original crystal's surface intact. These are termed naturals and are usually on the girdle of the stone; they take the form of triangular, rectangular, or square pits (etch marks) and are seen only in diamond. Thermal and electrical Diamond is an extremely effective thermal conductor and usually an electrical insulator. The former property is widely exploited in the use of an electronic thermal probe to separate diamonds from their imitations. These probes consist of a pair of battery-powered thermistors mounted in a fine copper tip. One thermistor functions as a heating device while the other measures the temperature of the copper tip: if the stone being tested is a diamond, it will conduct the tip's thermal energy rapidly enough to produce a measurable temperature drop. As most simulants are thermal insulators, the thermistor's heat will not be conducted. This test takes about 2–3 seconds. The only possible exception is moissanite, which has a thermal conductivity similar to diamond: older probes can be fooled by moissanite, but newer thermal and electrical conductivity testers are sophisticated enough to differentiate the two materials. The latest development is nano diamond coating, an extremely thin layer of diamond material. If not tested properly it may show the same characteristics as a diamond. A diamond's electrical conductance is only relevant to blue or gray-blue stones, because the interstitial boron responsible for their color also makes them semiconductors. Thus, a suspected blue diamond can be affirmed if it completes an electric circuit successfully. Artificial simulants Diamond has been imitated by artificial materials for hundreds of years; advances in technology have seen the development of increasingly better simulants with properties ever nearer those of diamond. 
Although most of these simulants were characteristic of a certain time period, their large production volumes ensured that all continue to be encountered with varying frequency in jewelry of the present. Nearly all were first conceived for intended use in high technology, such as active laser mediums, varistors, and bubble memory. Due to their limited present supply, collectors may pay a premium for the older types. Summary table The "refractive index(es)" column shows one refractive index for singly refractive substances, and a range for doubly refractive substances. 1700 onwards The formulation of flint glass using lead, alumina, and thallium to increase RI and dispersion began in the late Baroque period. Flint glass is fashioned into brilliants, and when freshly cut they can be surprisingly effective diamond simulants. Known as rhinestones, pastes, or strass, glass simulants are a common feature of antique jewelry; in such cases, rhinestones can be valuable historical artifacts in their own right. The great softness (below hardness 6) imparted by the lead means a rhinestone's facet edges and faces will quickly become rounded and scratched. Together with conchoidal fractures, and air bubbles or flow lines within the stone, these features make glass imitations easy to spot under only moderate magnification. In contemporary production it is more common for glass to be molded rather than cut into shape: in these stones the facets will be concave and facet edges rounded, and mold marks or seams may also be present. Glass has also been combined with other materials to produce composites. 1900–1947 The first crystalline artificial diamond simulants were synthetic white sapphire (Al2O3, pure corundum) and spinel (MgO·Al2O3, pure magnesium aluminium oxide). Both have been synthesized in large quantities since the first decade of the 20th century via the Verneuil or flame-fusion process, although spinel was not in wide use until the 1920s. The Verneuil process involves an inverted oxyhydrogen blowpipe, with purified feed powder mixed with oxygen that is carefully fed through the blowpipe. The feed powder falls through the oxy-hydrogen flame, melts, and lands on a rotating and slowly descending pedestal below. The height of the pedestal is constantly adjusted to keep its top at the optimal position below the flame, and over a number of hours the molten powder cools and crystallizes to form a single pedunculated pear or boule crystal. The process is an economical one, with crystals of up to 9 centimeters (3.5 inches) in diameter grown. Boules grown via the modern Czochralski process may weigh several kilograms. Synthetic sapphire and spinel are durable materials (hardness 9 and 8) that take a good polish; however, due to their much lower RI when compared to diamond (1.762–1.770 for sapphire, 1.727 for spinel), they are "lifeless" when cut. (Synthetic sapphire is also anisotropic, making it even easier to spot.) Their low RIs also mean a much lower dispersion (0.018 and 0.020), so even when cut into brilliants they lack the fire of diamond. Nevertheless, synthetic spinel and sapphire were popular diamond simulants from the 1920s until the late 1940s, when newer and better simulants began to appear. Both have also been combined with other materials to create composites. Commercial names once used for synthetic sapphire include Diamondette, Diamondite, Jourado Diamond''', and Thrilliant. Names for synthetic spinel included Corundolite, Lustergem, Magalux, and Radiant. 
1947–1970 The first of the optically "improved" simulants was synthetic rutile (TiO2, pure titanium oxide). Introduced in 1947–48, synthetic rutile possesses plenty of life when cut—perhaps too much life for a diamond simulant. Synthetic rutile's RI and dispersion (2.8 and 0.33) are so much higher than diamond that the resultant brilliants look almost opal-like in their display of prismatic colors. Synthetic rutile is also doubly refractive: although some stones are cut with the table perpendicular to the optic axis to hide this property, merely tilting the stone will reveal the doubled back facets. The continued success of synthetic rutile was also hampered by the material's inescapable yellow tint, which producers were never able to remedy. However, synthetic rutile in a range of different colors, including blues and reds, were produced using various metal oxide dopants. These and the near-white stones were extremely popular if unreal stones. Synthetic rutile is also fairly soft (hardness ~6) and brittle, and therefore wears poorly. It is synthesized via a modification of the Verneuil process, which uses a third oxygen pipe to create a tricone burner; this is necessary to produce a single crystal, due to the much higher oxygen losses involved in the oxidation of titanium. The technique was invented by Charles H. Moore, Jr. at the South Amboy, New Jersey–based National Lead Company (later NL Industries). National Lead and Union Carbide were the primary producers of synthetic rutile, and peak annual production reached 750,000 carats (150 kg). Some of the many commercial names applied to synthetic rutile include: Astryl, Diamothyst, Gava or Java Gem, Meredith, Miridis, Rainbow Diamond, Rainbow Magic Diamond, Rutania, Titangem, Titania, and Ultamite. National Lead was also where research into the synthesis of another titanium compound—strontium titanate (SrTiO3, pure tausonite)—was conducted. Research was done during the late 1940s and early 1950s by Leon Merker and Langtry E. Lynd, who also used a tricone modification of the Verneuil process. Upon its commercial introduction in 1955, strontium titanate quickly replaced synthetic rutile as the most popular diamond simulant. This was due not only to strontium titanate's novelty, but to its superior optics: its RI (2.41) is very close to that of diamond, while its dispersion (0.19), although also very high, was a significant improvement over synthetic rutile's psychedelic display. Dopants were also used to give synthetic titanate a variety of colors, including yellow, orange to red, blue, and black. The material is also isotropic like diamond, meaning there is no distracting doubling of facets as seen in synthetic rutile. Strontium titanate's only major drawback (if one excludes excess fire) is fragility. It is both softer (hardness 5.5) and more brittle than synthetic rutile—for this reason, strontium titanate was also combined with more durable materials to create composites. It was otherwise the best simulant around at the time, and at its peak annual production was 1.5 million carats (300 kg). Due to patent coverage, all US production was by National Lead, while large amounts were produced overseas by Nakazumi Company of Japan. Commercial names for strontium titanate included Brilliante, Diagem, Diamontina, Fabulite, and Marvelite. 1970–1976 From about 1970 strontium titanate began to be replaced by a new class of diamond imitations: the "synthetic garnets". 
These are not true garnets in the usual sense because they are oxides rather than silicates, but they do share natural garnet's crystal structure (both are cubic and therefore isotropic) and the general formula A3B2C3O12. While in natural garnets C is always silicon, and A and B may be one of several common elements, most synthetic garnets are composed of uncommon rare-earth elements. They are the only diamond simulants (aside from rhinestones) with no known natural counterparts: gemologically they are best termed artificial rather than synthetic, because the latter term is reserved for human-made materials that can also be found in nature. Although a number of artificial garnets were successfully grown, only two became important as diamond simulants. The first was yttrium aluminium garnet (YAG; Y3Al5O12) in the late 1960s. It was (and still is) produced by the Czochralski, or crystal-pulling, process, which involves growth from the melt. An iridium crucible surrounded by an inert atmosphere is used, wherein yttrium oxide and aluminium oxide are melted and mixed together at a carefully controlled temperature near 1980 °C. A small seed crystal is attached to a rod, which is lowered over the crucible until the crystal contacts the surface of the melted mixture. The seed crystal acts as a site of nucleation; the temperature is kept steady at a point where the surface of the mixture is just below the melting point. The rod is slowly and continuously rotated and retracted, and the pulled mixture crystallizes as it exits the crucible, forming a single crystal in the form of a cylindrical boule. The crystal's purity is extremely high, and it typically measures 5 cm (2 inches) in diameter and 20 cm (8 inches) in length, and weighs 9,000 carats (1.75 kg). YAG hardness (8.25) and lack of brittleness were great improvements over strontium titanate, and although its RI (1.83) and dispersion (0.028) were fairly low, they were enough to give brilliant-cut YAGs perceptible fire and good brilliance (although still much lower than diamond). A number of different colors were also produced with the addition of dopants, including yellow, red, and a vivid green, which was used to imitate emerald. Major producers included Shelby Gem Factory of Michigan, Litton Systems, Allied Chemical, Raytheon, and Union Carbide; annual global production peaked at 40 million carats (8000 kg) in 1972, but fell sharply thereafter. Commercial names for YAG included Diamonair, Diamonique, Gemonair, Replique, and Triamond. While market saturation was one reason for the fall in YAG production levels, another was the recent introduction of the other artificial garnet important as a diamond simulant, gadolinium gallium garnet (GGG; Gd3Ga5O12). Produced in much the same manner as YAG (but with a lower melting point of 1750 °C), GGG had an RI (1.97) close to, and a dispersion (0.045) nearly identical to diamond. GGG was also hard enough (hardness 7) and tough enough to be an effective gemstone, but its ingredients were also much more expensive than YAG's. Equally hindering was GGG's tendency to turn dark brown upon exposure to sunlight or other ultraviolet source: this was due to the fact that most GGG gems were fashioned from impure material that was rejected for technological use. The SG of GGG (7.02) is also the highest of all diamond simulants and amongst the highest of all gemstones, which makes loose GGG gems easy to spot by comparing their dimensions with their expected and actual weights. 
Relative to its predecessors, GGG was never produced in significant quantities; it became more or less unheard of by the close of the 1970s. Commercial names for GGG included Diamonique II and Galliant. Since 1976 Cubic zirconia or CZ (ZrO2; zirconium dioxide—not to be confused with zircon, a zirconium silicate) quickly dominated the diamond simulant market following its introduction in 1976, and it remains the most gemologically and economically important simulant. CZ had been synthesized since 1930 but only in ceramic form: the growth of single-crystal CZ would require an approach radically different from those used for previous simulants due to zirconia's extremely high melting point (2750 °C), unsustainable by any crucible. The solution found involved a network of water-filled copper pipes and radio-frequency induction heating coils; the latter to heat the zirconia feed powder, and the former to cool the exterior and maintain a retaining "skin" under 1 millimeter thick. CZ was thus grown in a crucible of itself, a technique called cold crucible (in reference to the cooling pipes) or skull crucible (in reference to either the shape of the crucible or of the crystals grown). At standard pressure zirconium oxide would normally crystallize in the monoclinic rather than cubic crystal system: for cubic crystals to grow, a stabilizer must be used. This is usually Yttrium(III) oxide or calcium oxide. The skull crucible technique was first developed in 1960s France, but was perfected in the early 1970s by Soviet scientists under V. V. Osiko at the Lebedev Physical Institute in Moscow. By 1980 annual global production had reached 50 million carats (10,000 kg). The hardness (8–8.5), RI (2.15–2.18, isotropic), dispersion (0.058–0.066), and low material cost make CZ the most popular simulant of diamond. Its optical and physical constants are however variable, owing to the different stabilizers used by different producers. There are many formulations of stabilized cubic zirconia. These variations change the physical and optical properties markedly. While the visual likeness of CZ is close enough to diamond to fool most who do not handle diamond regularly, CZ will usually give certain clues. For example: it is somewhat brittle and is soft enough to possess scratches after normal use in jewelry; it is usually internally flawless and completely colorless (whereas most diamonds have some internal imperfections and a yellow tint); its SG (5.6–6) is high; and its reaction under ultraviolet light is a distinctive beige. Most jewelers will use a thermal probe to test all suspected CZs, a test which relies on diamond's superlative thermal conductivity (CZ, like almost all other diamond simulants, is a thermal insulator). CZ is made in a number of different colors meant to imitate fancy diamonds (e.g., yellow to golden brown, orange, red to pink, green, and opaque black), but most of these do not approximate the real thing. Cubic zirconia can be coated with diamond-like carbon to improve its durability, but will still be detected as CZ by a thermal probe. CZ had virtually no competition until the 1998 introduction of moissanite (SiC; silicon carbide). Moissanite is superior to cubic zirconia in two ways: its hardness (8.5–9.25) and low SG (3.2). The former property results in facets that are sometimes as crisp as a diamond's, while the latter property makes simulated moissanite somewhat harder to spot when unset (although still disparate enough to detect). 
However, unlike diamond and cubic zirconia, moissanite is strongly birefringent. This manifests as the same "drunken vision" effect seen in synthetic rutile, although to a lesser degree. All moissanite is cut with the table perpendicular to the optic axis in order to hide this property from above, but when viewed under magnification at only a slight tilt the doubling of facets (and any inclusions) is readily apparent. The inclusions seen in moissanite are also characteristic: most will have fine, white, subparallel growth tubes or needles oriented perpendicular to the stone's table. It is conceivable that these growth tubes could be mistaken for laser drill holes that are sometimes seen in diamond (see diamond enhancement), but the tubes will be noticeably doubled in moissanite due to its birefringence. Like synthetic rutile, current moissanite production is also plagued by an as yet inescapable tint, which is usually a brownish green. A limited range of fancy colors has been produced as well, the two most common being blue and green. Natural simulants Natural minerals that (when cut) optically resemble white diamonds are rare, because the trace impurities usually present in natural minerals tend to impart color. The earliest simulants of diamond were colorless quartz (a form of silica, which also forms obsidian, glass and sand), rock crystal (a type of quartz), topaz, and beryl (goshenite); they are all common minerals with above-average hardness (7–8), but all have low RIs and correspondingly low dispersions. Well-formed quartz crystals are sometimes offered as "diamonds", a popular example being the so-called "Herkimer diamonds" mined in Herkimer County, New York. Topaz's SG (3.50–3.57) also falls within the range of diamond. From a historical perspective, the most notable natural simulant of diamond is zircon. It is also fairly hard (7.5), but more importantly shows perceptible fire when cut, due to its high dispersion of 0.039. Colorless zircon has been mined in Sri Lanka for over 2,000 years; prior to the advent of modern mineralogy, colorless zircon was thought to be an inferior form of diamond. It was called "Matara diamond" after its source location. It is still encountered as a diamond simulant, but differentiation is easy due to zircon's anisotropy and strong birefringence (0.059). It is also notoriously brittle and often shows wear on the girdle and facet edges. Much less common than colorless zircon is colorless scheelite. Its dispersion (0.026) is also high enough to mimic diamond, but although it is highly lustrous its hardness is much too low (4.5–5.5) to maintain a good polish. It is also anisotropic and fairly dense (SG 5.9–6.1). Synthetic scheelite produced via the Czochralski process is available, but it has never been widely used as a diamond simulant. Due to the scarcity of natural gem-quality scheelite, synthetic scheelite is much more likely to simulate it than diamond. A similar case is the orthorhombic carbonate cerussite, which is so fragile (very brittle with four directions of good cleavage) and soft (hardness 3.5) that it is never seen set in jewelry, and only occasionally seen in gem collections because it is so difficult to cut. Cerussite gems have an adamantine luster, high RI (1.804–2.078), and high dispersion (0.051), making them attractive and valued collector's pieces. Aside from softness, they are easily distinguished by cerussite's high density (SG 6.51) and anisotropy with extreme birefringence (0.271).
Due to their rarity, fancy-colored diamonds are also imitated, and zircon can serve this purpose too. Applying heat treatment to brown zircon can create several bright colors: these are most commonly sky-blue, golden yellow, and red. Blue zircon is very popular, but it is not necessarily color stable; prolonged exposure to ultraviolet light (including the UV component in sunlight) tends to bleach the stone. Heat treatment also imparts greater brittleness to zircon and characteristic inclusions. Another fragile candidate mineral is sphalerite (zinc blende). Gem-quality material is usually a strong yellow to honey brown, orange, red, or green; its very high RI (2.37) and dispersion (0.156) make for an extremely lustrous and fiery gem, and it is also isotropic. But here again, its low hardness (2.5–4) and perfect dodecahedral cleavage preclude sphalerite's wide use in jewelry. Two calcium-rich members of the garnet group fare much better: these are grossularite (usually brownish orange, rarely colorless, yellow, green, or pink) and andradite. The latter is the rarest and most costly of the garnets, with three of its varieties—topazolite (yellow), melanite (black), and demantoid (green)—sometimes seen in jewelry. Demantoid (literally "diamond-like") especially has been prized as a gemstone since its discovery in the Ural Mountains in 1868; it is a noted feature of antique Russian and Art Nouveau jewelry. Titanite or sphene is also seen in antique jewelry; it is typically some shade of chartreuse and has a luster, RI (1.885–2.050), and dispersion (0.051) high enough to be mistaken for diamond, yet it is anisotropic (a high birefringence of 0.105–0.135) and soft (hardness 5.5). Discovered in the 1960s, the rich green tsavorite variety of grossular is also very popular. Both grossular and andradite are isotropic and have relatively high RIs (around 1.74 and 1.89 respectively) and high dispersions (0.027 and 0.057), with demantoid's exceeding that of diamond. However, both have a low hardness (6.5–7.5) and invariably possess inclusions atypical for diamond—the byssolite "horsetails" seen in demantoid are one striking example. Furthermore, most are very small, typically under 0.5 carats (100 mg) in weight. Their lusters range from vitreous to subadamantine, to almost metallic in the usually opaque melanite, which has been used to simulate black diamond. Some natural spinel is also deep black and could serve this same purpose. Composites Because strontium titanate and glass are too soft to survive use as a ring stone, they have been used in the construction of composite or doublet diamond simulants. The two materials are used for the bottom portion (pavilion) of the stone, and in the case of strontium titanate, a much harder material—usually colorless synthetic spinel or sapphire—is used for the top half (crown). In glass doublets, the top portion is made of almandine garnet; it is usually a very thin slice which does not modify the stone's overall body color. There have even been reports of diamond-on-diamond doublets, where a creative entrepreneur has used two small pieces of rough to create one larger stone. In strontium titanate and diamond-based doublets, an epoxy is used to adhere the two halves together. The epoxy may fluoresce under UV light, and there may be residue on the stone's exterior. The garnet top of a glass doublet is physically fused to its base, but in it and the other doublet types there are usually flattened air bubbles seen at the junction of the two halves.
A join line is also readily visible whose position is variable; it may be above or below the girdle, sometimes at an angle, but rarely along the girdle itself. The most recent composite simulant involves combining a CZ core with an outer coating of laboratory created amorphous diamond. The concept effectively mimics the structure of a cultured pearl (which combines a core bead with an outer layer of pearl coating), only done for the diamond market. See also Diamond clarity Diamond cut Fullerene Imitation pearl Footnotes References Hall, Cally (1994). Gemstones. pp. 63, 70, 121. Eyewitness Handbooks; Kyodo Printing Co., Singapore. Nassau, Kurt (1980). Gems Made by Man, pp. 203–241. Gemological Institute of America; Santa Monica, California. O'Donoghue, Michael, and Joyner, Louise (2003). Identification of Gemstones, pp. 12–19. Butterworth-Heinemann, Great Britain. Pagel-Theisen, Verena (2001). Diamond Grading ABC: The Manual (9th ed.), pp. 298–313. Rubin & Son n.v.; Antwerp, Belgium. Schadt, H. (1996). Goldsmith's Art: 5000 Years of Jewelry and Hollowware, p. 141. Arnoldsche Art Publisher: Stuttgart & New York. Webster, Robert, and Read, Peter G. (Ed.) (2000). Gems: Their Sources, Descriptions and Identification'' (5th ed.), pp. 65–71. Butterworth-Heinemann, Great Britain. Crystals Glass art
Diamond simulant
[ "Chemistry", "Materials_science" ]
7,446
[ "Crystallography", "Crystals" ]
1,608,293
https://en.wikipedia.org/wiki/Zosuquidar
Zosuquidar (development code LY-335979) is an experimental antineoplastic drug. Zosuquidar inhibits P-glycoproteins. Other drugs with this mechanism include tariquidar and laniquidar. P-glycoproteins are trans-membrane proteins that pump foreign substances out of cells in an ATP-dependent fashion. Cancers overexpressing P-glycoproteins are able to pump out therapeutic molecules before they are able to reach their target, effectively making the cancer multi-drug resistant. Zosuquidar inhibits P-glycoproteins, inhibiting the efflux pump and restoring sensitivity to chemotherapeutic agents. Zosuquidar was initially characterized by Syntex Corporation, which was acquired by Roche in 1990. Roche licensed the drug to Eli Lilly in 1997. It was granted orphan drug status by the FDA in 2006 for AML. In 2010, it was announced that a phase III clinical trial for the treatment of acute myeloid leukemia (AML) and myelodysplastic syndrome did not meet its primary endpoint and Eli Lilly discontinued its development. Synthesis When dibenzosuberone [1210-35-1] (1) is treated with difluorocarbene (generated in situ from lithium chlorodifluoroacetate), a cyclopropanation occurs to give 10,11-difluoromethanodibenzosuberone [167155-75-1] (2). Reduction of the ketone with borohydride affords the derivative wherein the fused cyclopropyl and alcohol are on the same side of the seven-membered ring, giving 1,1-difluorocyclopropane dibenzosuberol [797790-94-4]&[172925-68-7] (3). This is halogenated with 48% HBr to give the product where both groups are now positioned anti [312905-19-4] (4). Displacement of the bromide with pyrazine [290-37-9] gives the quaternary salt [312905-15-0] (5). Sodium borohydride was able to reduce the aromaticity in the side chain, giving the corresponding piperazine, i.e. Fb=[167155-78-4] HCl=PC9799090 (6). The reaction of 5-hydroxyquinoline [578-67-6] (7) with (R)-glycidyl nosylate (8) affords (R)-1-(5-quinolinyloxy)-2,3-epoxypropane [123750-60-7] [118629-64-4] (9). The convergent synthesis between 6 & 9 gives zosuquidar in good yield. References Experimental cancer drugs Organofluorides Cyclopropanes Quinolines Piperazines Abandoned drugs
Zosuquidar
[ "Chemistry" ]
662
[ "Drug safety", "Abandoned drugs" ]
1,608,314
https://en.wikipedia.org/wiki/Minerva%20Reefs
The Minerva Reefs are a group of two submerged atolls located in the Pacific Ocean between Fiji, Niue and Tonga. The islands are the subject of a territorial dispute between Fiji and Tonga, and in addition were briefly claimed by American Libertarians as the centre of a micronation, the Republic of Minerva. Name The reefs were named after the whaleship Minerva, wrecked on what became known as South Minerva after setting out from Sydney in 1829. Many other ships would follow, for example Strathcona, which was sailing north soon after completion in Auckland in 1914. In both cases most of the crew saved themselves in whaleboats or rafts and reached the Lau Islands in Fiji. History The reefs were first known to Europeans by the crew of the brig Rosalia, commanded by Lieutenant John Garland, which was shipwrecked there in 1807. The Oriental Navigator for 1816 recorded Garland's discovery under the name Rosaretta Shoal, warning that it was "a dangerous shoal, on which the Rosaretta, a prize belonging to his Majesty's ship Cornwallis, was wrecked on her passage from Pisco, in Peru, to Port Jackson, in 1807". It noted that it was "composed of hard coarse sand and coral", a description that must have come from Garland's report. It also said that "from the distressed situation of the prize-master, Mr. Garland", the shoal's extent could not be ascertained, and concluded: "The situation is not to be considered as finally determined". It cited different coordinates from those given by Garland: 30°10' South, longitude 173°45' East. The reefs were put on the charts by Captain John Nicholson of LMS Haweis in December 1818, as reported in The Sydney Gazette of 30 January 1819. Captain H. M. Denham of surveyed the reefs in 1854 and renamed them after the Australian whaler Minerva, which ran aground on South Minerva Reef on 9 September 1829. Republic of Minerva In 1972, real-estate millionaire Michael Oliver, of the Phoenix Foundation, sought to establish a libertarian country on the reefs. Oliver formed a syndicate, the Ocean Life Research Foundation, which had considerable finances for the project and had offices in New York City and London. In 1971, the organization constructed a steel tower on the reef. The Republic of Minerva issued a declaration of independence on 19 January 1972. Morris Davis was elected as the President of Minerva. However, the islands were also claimed by Tonga. An expedition consisting of 90 prisoners was sent to enforce the claim by building an artificial island with permanent structures above the high-tide mark. The expedition arrived on 18 June 1972, and the flag of Tonga was raised on North Minerva the following day and on South Minerva on 21 June 1972. King Tāufaʻāhau Tupou IV announced the annexation of the islands on 26 June; North Minerva was to be renamed Teleki Tokelau, with South Minerva becoming Teleki Tonga. In September 1972, the South Pacific Forum recognized Tonga as the only possible owner of the Minerva Reefs, but did not explicitly recognize Tonga's claimed sovereign title. In 1982, a group of Americans led again by Morris Davis tried to occupy the reefs, but were forced off by Tongan troops after three weeks. According to Reason, Minerva has been "more or less reclaimed by the sea". Territorial dispute In 2005, Fiji declared that it did not recognize any maritime water claims by Tonga to the Minerva Reefs under the UNCLOS agreements. In November 2005, Fiji lodged a complaint with the International Seabed Authority concerning Tonga's maritime waters claims surrounding Minerva.
Tonga lodged a counter claim. In 2010 the Fijian Navy destroyed navigation lights at the entrance to the lagoon. In late May 2011, they again destroyed navigational equipment installed by Tongans. In early June 2011, two Royal Tongan Navy ships were sent to the reef to replace the equipment, and to reassert Tonga's claim to the territory. Fijian Navy ships in the vicinity reportedly withdrew as the Tongans approached. In an effort to settle the dispute, the government of Tonga revealed a proposal in early July 2014 to give the Minerva Reefs to Fiji in exchange for the Lau Group of islands. In a statement to the Tonga Daily News, Lands Minister Lord Maʻafu Tukuiʻaulahi announced that he would make the proposal to Fiji's Minister for Foreign Affairs, Ratu Inoke Kubuabola. Some Tongans have Lauan ancestors and many Lauans have Tongan ancestors; Tonga's Lands Minister is named after Enele Ma'afu, the Tongan Prince who originally claimed parts of Lau for Tonga. Geography Area: North Reef diameter about , South Reef diameter of about . Terrain: two atolls on dormant volcanic seamounts. Both Minerva Reefs are about southwest of the Tongatapu Group. The atolls are on a common submarine platform from below sea level. North Minerva is circular in shape and has a diameter of about . There is a small sand bar around the atoll, awash at high tide, and a small entrance into the flat lagoon with a somewhat deep harbor. South Minerva is parted into The East Reef and the West Reef, both circular with a diameter of about . Remnants of shipwrecks and platforms remain on the atolls, plus functioning navigation beacons. Geologically, the Minerva Reefs are of a limestone base formed from uplifted coral formations elevated by now-dormant volcanic activity. The climate is subtropical with a distinct warm period (December–April), during which the temperatures rise above , and a cooler period (May–November), with temperatures rarely rising above . The temperature increases from , and the annual rainfall is from as one moves from Cardea in the south to the more northerly islands closer to the Equator. The mean daily humidity is 80 percent. Both North and South Minerva Reefs are used as anchorages by private yachts traveling between New Zealand and Tonga or Fiji. North Minerva (Tongan: Teleki Tokelau) offers the more protected anchorage, with a single, easily negotiated, west-facing pass that offers access to the large, calm lagoon with extensive sandy areas. South Minerva (Tongan: Teleki Tonga) is in shape similar to an infinity symbol, with its eastern lobe partially open to the ocean on the northern side. Shipwrecks The reefs have been the site of several shipwrecks. The brig Rosalía was wrecked on the Minerva Reefs on 19 September 1807. After being captured by HMS Cornwallis at the Peruvian port of Ilo on 13 July, the Rosalía, 375 tons, was dispatched to Port Jackson with seven men on board under the command of Lieutenant John Garland, master of the Cornwallis. Captain John Piper, Commandant at Norfolk Island, reported the arrival of the shipwrecked crew to Governor William Bligh in Sydney in a letter of 12 October 1807. On September 9, 1829, a whaling ship from Australia called the Minerva wrecked on the reef. On July 7, 1962, the Tuaikaepau ('Slow But Sure'), a Tongan vessel on its way to New Zealand, struck the reefs. This wooden vessel was built in 1902 at the same yard as the Strathcona. The crew and passengers survived by living in the remains of a Japanese freighter. 
There they remained for three months and several died. Without tools, Captain Tēvita Fifita built a small boat using wood recovered from the ship. With this raft, named Malolelei ('Good Day'), he and several others sailed to Fiji in one week. See also List of reefs Micronation References Further reading Interview with Oliver at Stay Free Magazine External links Cruising Yachties Experience at Minerva (2003) Photo Album of Minerva (2007) Photo Album and underwater images of North Minerva Reef (2009) Website of the "Principality of Minerva" micronation, which claims the Minerva Reefs "The Danger and Bounty of the Minerva Reefs" "On passage from Minerva Reef, November 2, 2003" Coral reefs Reefs of the Pacific Ocean Islands of Tonga Tourist attractions in Tonga Territorial disputes of Tonga Territorial disputes of Fiji Fiji–Tonga relations Micronations Artificial islands States and territories established in 1972 1972 in Oceania Atolls of Oceania
Minerva Reefs
[ "Biology" ]
1,673
[ "Biogeomorphology", "Coral reefs" ]
1,608,469
https://en.wikipedia.org/wiki/Cupellation
Cupellation is a refining process in metallurgy in which ores or alloyed metals are treated under very high temperatures and subjected to controlled operations to separate noble metals, like gold and silver, from base metals, like lead, copper, zinc, arsenic, antimony, or bismuth, present in the ore. The process is based on the principle that precious metals typically oxidise or react chemically at much higher temperatures than base metals. When they are heated at high temperatures, the precious metals remain apart, and the others react, forming slags or other compounds. Since the Early Bronze Age, the process was used to obtain silver from smelted lead ores. By the Middle Ages and the Renaissance, cupellation was one of the most common processes for refining precious metals. By then, fire assays were used for assaying minerals: testing fresh metals such as lead and recycled metals to determine their purity for jewellery and coin making. Cupellation is still in use today.
Process
Large-scale cupellation
Native silver is rare: although silver does occur as the native metal, it is usually found in nature combined with other metals, or in minerals that contain silver compounds, generally in the form of sulfides such as galena (lead sulfide) or cerussite (lead carbonate). So the primary production of silver requires the smelting and then cupellation of argentiferous lead ores. Lead melts at 327 °C, lead oxide at 888 °C, and silver melts at 960 °C. To separate the silver, the alloy is melted again at the high temperature of 960 °C to 1000 °C in an oxidizing environment. The lead oxidises to lead monoxide, then known as litharge, which captures the oxygen from the other metals present. The liquid lead oxide is removed or absorbed by capillary action into the hearth linings. This chemical reaction may be viewed as

Ag(s) + 2 Pb(s) + O₂(g) → 2 PbO(absorbed) + Ag(l)

The base of the hearth was dug in the form of a saucepan and covered with an inert and porous material rich in calcium or magnesium such as shells, lime, or bone ash. The lining had to be calcareous because lead reacts with silica (clay compounds) to form viscous lead silicate that prevents the needed absorption of litharge, whereas calcareous materials do not react with lead. Some of the litharge evaporates, and the rest is absorbed by the porous earth lining to form "litharge cakes". Litharge cakes are usually circular or concavo-convex, about 15 cm in diameter. They are the most common archaeological evidence of cupellation in the Early Bronze Age. By analyzing their chemical composition, archaeologists can discern what kind of ore was treated, its main components, and the chemical conditions used in the process. This permits insights into production processes, trade, social needs or economic situations.
Small-scale cupellation
Small-scale cupellation is based on the same principle as the one done in a cupellation hearth; the main difference lies in the amount of material to be tested or obtained. The minerals have to be crushed, roasted and smelted to concentrate the metallic components in order to separate the noble metals. By the Renaissance the uses of cupellation were diverse: assaying ores from the mines, testing the amount of silver in jewels or coins, and experimental purposes. It was carried out in small, shallow vessels known as cupels. As the main purpose of small-scale cupellation was to assay and test minerals and metals, the matter to be tested must be carefully weighed.
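Because the assay is quantitative, the silver content follows from the weighings alone; one common way to express the result is as fineness in parts per thousand. A minimal worked example (the sample and button masses here are invented for illustration, not taken from any recorded assay):

\text{fineness} = \frac{m_{\text{silver button}}}{m_{\text{sample}}} \times 1000 = \frac{0.012\ \text{g}}{1.000\ \text{g}} \times 1000 = 12\ \text{parts per thousand}

that is, about 1.2% silver in the weighed sample.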
The assays were made in the cupellation or assay furnace, which needs to have windows and bellows to ascertain that the air oxidises the lead, as well as to be sure and prepared to take away the cupel when the process is complete. Pure lead must be added to the matter being tested to guarantee the further separation of the impurities. After the litharge has been absorbed by the cupel, buttons of silver were formed and settled in the middle of the cupel. If the alloy also contained a certain amount of gold, it settled with the silver, and both had to be separated by parting. Cupels The primary tool for small-scale cupellation was the cupel. Cupels were manufactured in a careful manner. They used to be small vessels shaped in the form of an inverted truncated cone, made of bone ashes. According to Georg Agricola, the best material was obtained from burned antlers of deer, although fish spines could also work. Ashes have to be ground into a fine and homogeneous powder and mixed with some sticky substance to mould the cupels. Moulds were made out of copper with no bottoms, so that the cupels could be taken off. A shallow depression in the centre of the cupel was made with a rounded pestle. Cupel sizes depend on the amount of material to be assayed. This same shape has been maintained until the present. Archaeological investigations as well as archaeometallurgical analysis and written texts from the Renaissance have demonstrated the existence of different materials for their manufacture; they could be made also with mixtures of bones and wood ashes, of poor quality, or moulded with a mixture of this kind in the bottom with an upper layer of bone ashes. Different recipes depend on the expertise of the assayer or on the special purpose for which it was made (assays for minting, jewelry, testing purity of recycled material or coins). Archaeological evidence shows that at the beginnings of small-scale cupellation, potsherds or clay cupels were used. History The first known use of silver was in the Near East in Anatolia and Mesopotamia during the 4th and 3rd millennium BC, the Early Bronze Age. Archaeological findings of silver and lead objects together with litharge pieces and slag have been studied in a variety of sites. Although this has been interpreted as silver being extracted from lead ores, it has been also suggested that lead was added to collect silver from visible silver minerals embedded in host rock. In both cases silver would be retrieved from lead metal by cupellation. During the following Iron Age, cupellation was done by fusing the base metals with a surplus of lead. The bullion or product of this fusion was then heated in a cupellation furnace to separate the noble metals. Mines such as Rio Tinto, near Huelva in Spain, became an important political and economic site around the Mediterranean Sea, as well as Laurion in Greece. Around 500 BC control over the mines of Laurion gave Athens political advantage and power in the Mediterranean so that they were able to defeat the Persians. During Roman times, the empire needed large quantities of lead to support the Roman civilization over a great territory; they searched for open lead-silver mines in areas they conquered. Silver coinage became the normalised medium of exchange, hence silver production and mine control gave economic and political power. In Roman times it was worth mining lead ores if their content of silver was 0.01% or more. The origin of the use of cupellation for analysis is not known. 
One of the earliest written references to cupels is Theophilus Divers Ars in the 12th century AD. The process changed little until the 16th century. Small-scale cupellation may be considered the most important fire assay developed in history, and perhaps the origin of chemical analysis. Most of the written evidence comes from the Renaissance in the 16th century. Vannoccio Biringuccio, Georg Agricola and Lazarus Ercker, among others, wrote about the art of mining and testing the ores, as well as detailed descriptions of cupellation. Their descriptions and assumptions have been identified in diverse archaeological findings through Medieval and Renaissance Europe. By these times the amount of fire assays increased considerably, mainly because of testing ores in the mines to identify the availability of its exploitation. A primary use of cupellation was related to minting activities, and it was also used in testing jewelry. Since the Renaissance, cupellation became a standardised method of analysis that has changed little, demonstrating its efficiency. Its development touched the spheres of economy, politics, warfare and power in ancient times. New World The huge amount of Pre-Hispanic silver adornments known especially from Peru, Bolivia and Ecuador raises the question whether the pre-Hispanic civilizations obtained the raw material from native ores or from argentiferous-lead ores. Although native silver may be available in America, it is as rare as in the Old World. From colonial texts it is known that silver mines were open in colonial times by the Spaniards from Mexico to Argentina, the main ones being those of Tasco, Mexico, and Potosí in Bolivia. Some kind of blast furnaces called huayrachinas were described in colonial texts, as native technology furnaces used in Peru and Bolivia to smelt the ores that come from the silver mines owned by the Spaniards. Although it is not conclusive, it is believed that these kinds of furnaces were used before the Spanish Conquest. Ethnoarchaeological and archaeological work in Porco Municipality, Potosí, Bolivia, has suggested pre-European use of huayrachinas. There are no specific archaeological accounts about silver smelting or mining in the Andes prior to the Incas. Silver and lead artefacts have been found in the Peruvian central highlands dated in the pre-Inca and Inca periods. From the presence of lead in silver artefacts, archaeologists suggest that cupellation occurred there. See also Alchemy Archaeometry Bottom-blown oxygen converter History of alchemy History of chemistry Philosopher's stone References Bibliography Bayley, J. 1995. Precious Metal Refining, in Historical Metallurgy Society Datasheets: https://web.archive.org/web/20160418021923/http://hist-met.org/hmsdatasheet02.pdf (accessed January 13, 2010) Bayley, J. 2008 Medieval precious metal refining: archaeology and contemporary texts compared, in Martinón-Torres, M and Rehren, Th (eds) Archaeology, history and science: integrating approaches to ancient materials by. Left Coast Press: 131-150. Bayley, J.,Eckstein, K. 2006. Roman and medieval litharge cakes: structure and composition, in J. Pérez-Arantegui (ed) Proc. 34th Int. Symposium on Archaeometry. Institución Fernando el Católito, CSIC, Zaragoza: 145-153. (PDF) Bayley, J., Rehren, Th. 2007. Towards a functional and Typological classification of crucibles, in La Niece, S and Craddock, P (eds) Metals and Mines. Studies in Archaeometallurgy. Archetype Books: 46-55 Bayley, J., Crossley, D. and Ponting, M. (eds). 2008. 
Metals and Metalworking. A research framework for archaeometallurgy. Historical Metallurgy Society 6. Craddock, P. T. 1991. Mining and smelting in Antiquity, in Bowman, S. (ed), Science and the Past, London: British Museum Press: 57-73.. Craddock, P. T. 1995. Early metal mining and production. Edinburgh: Edinburgh University Press. Hoover, H. and Hoover, H. 1950 [1556]. Georgius Agricola De Re Metallica. New York: Dover. Howe, E., Petersen, U. 1994. Silver and Lead in late Prehistory of the Montaro Valley, Peru. In Scott, D., and Meyers P. (eds.) Archaeometry of Pre-Columbian Sites and Artifacts: 183-197. The Getty Conservation Institute. Laurion and Thorikos (accessed January 15, 2010) Jones, G.D. 1980. The Roman Mines at Riotinto, in The Journal of Roman Studies 70: 146-165. Society for the promotion of Roman Studies. Jones, D. (ed) 2001. Archaeometallurgy. Centre for Archaeological Guidelines. English Heritage publications. London. Karsten, H., Hauptmann, H., Wright, H., Whallon, R. 1998. Evidence of fourth millennium BC silver production at Fatmali-Kalecik, East Anatolia. in Metallurgica Antiqua: in honour of Hans-Gert Bachmann and Robert Maddin by Bachmann, H. G, Maddin, Robert, Rehren, Thilo, Hauptmann, Andreas, Muhly, James David, Deutsches Bergbau-Museum: 57-67 Kassianidou, V. 2003. Early Extraction of Silver from Complex Polymetallic Ores, in Craddock, P.T. and Lang, J (eds) Mining and Metal production through the Ages. London, British Museum Press: 198-206 Lechtman, H. 1976. A metallurgical site survey in the Peruvian Andes, in Journal of field Archaeology 3 (1): 1-42. Martinón-Torres, M., Rehren, Th. 2005a. Ceramic materials in fire assay practices: a case study of 16th-century laboratory equipment, in M. I. Prudencio, M. I. Dias and J. C. Waerenborgh (eds), Understanding people through their pottery, 139-149 (Trabalhos de Arqueologia 42). Lisbon: Instituto Portugues de Arqueologia. Martinón-Torres, M., Rehren, Th. 2005b. Alchemy, chemistry and metallurgy in Renaissance Europe. A wider context for fire assay remains, in Historical Metallurgy: journal of the Historical Metallurgy Society, 39(1): 14-31. Martinón-Torres, M., Rehren, Th., Thomas, N., Mongiatti, A. 2009. Identifying materials, recipes and choices: Some suggestions for the study of Archaeological cupels. In Giumla-Mair, A. et al., Archaeometallurgy in Europe: 1-11 Milan: AIM Pernicka, E., Rehren, Th., Schmitt-Strecker, S. 1998. Late Uruk silver production by cupellation at Habuba Kabira, Syria in Metallurgica Antiqua : in honour of Hans-Gert Bachmann and Robert Maddin by Bachmann, H. G, Maddin, Robert, Rehren, Thilo, Hauptmann, Andreas, Muhly, James David, Deutsches Bergbau-Museum: 123-134. Rehren, Th.1996. Alchemy and Fire Assay – An Analytical Approach, in Historical Metallurgy 30: 136-142. Rehren, Th. 2003. Crucibles as reaction vessels in ancient metallurgy, in P.T. Craddock and J. Lang (eds), Mining and Metal Production through the Ages, 207-215. London. The British Museum Press. Rehren, Th., Eckstein, K 2002. The development of analytical cupellation in the Middle Ages, in E Jerem and K T Biró (eds) Archaeometry 98. Proceedings of the 31 st Symposium, Budapest, April 26 – May 3, 1998 (Oxford BAR International Series 1043 – Central European Series 1), 2: 445-448. Rehren, Th., Schneider, J., Bartels, Chr. 1999. Medieval lead-silver smelting in the Siegerland, West Germany. In Historical Metallurgy: journal of the Historical Metallurgy Society. 33: 73-84. Sheffield: Historical Metallurgy Society. Tylecote, R.F. 
1992. A History of Metallurgy. Second Edition Maney for the Institute of Materials. London. Van Buren, M., Mills, B. 2005. Huayrachinas and Tocochimbos: Traditional Smelting Technology of the Southern Andes, in Latin American Antiquity 16(1):3-25 Wood J. R., Hsu, Y-T and Bell, C. 2021 Sending Laurion Back to the Future: Bronze Age Silver and the Source of Confusion, Internet Archaeology 56. https://doi.org/10.11141/ia.56.9 External links Porco-Potosí archaeological project people.hsc.edu whc.unesco.org searchworks.stanford.edu gabrielbernat.es riotinto.com galileo.rice.edu Söderberg, A. 2011. Eyvind Skáldaspillir's silver - refining and standards in pre-monetary economies in the light of finds from Sigtuna and Gotland. Situne Dei 2011. Edberg, R. Wikström, A. (eds). Sigtuna. Alchemical tools Jewellery making Metallurgical processes Archaeometallurgy Firing techniques
Cupellation
[ "Chemistry", "Materials_science" ]
3,585
[ "Metallurgical processes", "Archaeometallurgy", "Metallurgy" ]
1,608,493
https://en.wikipedia.org/wiki/Experimenter%27s%20regress
In science, experimenter's regress refers to a loop of dependence between theory and evidence. In order to judge whether a new piece of evidence is correct we rely on theory-based predictions, and to judge the value of competing theories we rely on existing evidence. Cognitive bias affects experiments, and experiments determine which theory is valid. This issue is particularly important in new fields of science where there is no consensus regarding the values of various competing theories, and where the extent of experimental errors is not well known. If experimenter's regress acts a positive feedback system, it can be a source of pathological science. An experimenter's strong belief in a new theory produces confirmation bias, and any biased evidence they obtain then strengthens their belief in that particular theory. Neither individual researchers nor entire scientific communities are immune to this effect: see N-rays and polywater. Experimenter's regress is a typical relativistic phenomenon in the Empirical Programme of Relativism (EPOR). EPOR is very much concerned with a focus on social interactions, by looking at particular (local) cases and controversial issues in the context in which they happen. In EPOR, all scientific knowledge is perceived to be socially constructed and is thus "not given by nature". In his article Son of seven sexes: The Social Destruction of a Physical Phenomenon, Harry Collins argued that scientific experiments are subject to what he calls "experimenter's regress". The outcome of a phenomenon that is studied for the first time is always uncertain and judgment in these situations, about what matters, requires considerable experience, tacit and practical knowledge. When a scientist runs an experiment, and the experiment yields a result, they can never be sure whether this is the result which they had expected. The result looks good because they know that their experimental protocol was correct; or the result looks wrong, and therefore there must be something wrong with their experimental protocol. The scientist, in other words, has to get the right answer in order to know that the experiment is working, or know that the experiment is working to get the right answer. In his book Changing Order Collins defines the paradox of Experimenter's regress as follows: Experimenter's regress occurs at the "research frontier" where the outcome of research is uncertain, for the scientist is dealing with "novel phenomena". Collins puts it this way: "usually, successful practice of an experimental skill is evident in a successful outcome to an experiment, but where the detection of a novel phenomenon is in question, it is not clear what should count as a 'successful outcome' – detection or non detection of the phenomenon" (Collins 1981: 34). In new fields of research where no paradigm has yet evolved and where no consensus exists as what counts as proper research, experimenter's regress is a problem that often occurs. Also, in situations where there is much controversy over a discovery or claim due to opposing interests, dissenters will often question experimental evidence that founds a theory. Because, for Collins, all scientific knowledge is socially constructed, there are no purely cognitive reasons or objective criteria that determine whether a claim is valid or not. The regress must be broken by "social negotiation" between scientists in the respective field. 
In the case of Gravitational Radiation, Collins notices that Weber, the scientist who is said to have discovered the phenomenon, could refute all the critique and had "a technical answer for every other point" but he was not able to convince other scientists and in the end he was not taken seriously anymore. The problems that come with "experimenter's regress" can never be fully avoided because scientific outcomes in EPOR are seen as negotiable and socially constructed. Acceptance of claims boils down to persuasion of other people in the community. Experimenter's regress can always become a problem in a world where "the natural world in no way constrains what is believed to be". Moreover, it is difficult to falsify a claim by replicating an experiment; aside from the practical issues of time, money, access to facilities, etc., an experimental outcome may depend on precise conditions, or tacit knowledge (i.e. unarticulated knowledge) that was not included in the published experimental methods. Tacit knowledge can never be fully articulated or translated into a set of rules. Some commentators have argued that Collins's "experimenter's regress" is foreshadowed by Sextus Empiricus' argument that "if we shall judge the intellects by the senses, and the senses by the intellect, this involves circular reasoning inasmuch as it is required that the intellects should be judged first in order that the intellects may be tested [hence] we possess no means by which to judge objects" (quoted after Godin & Gingras 2002: 140). Others have extended Collins's argument to the cases of theoretical practice ("theoretician's regress"; Kennefick 2000) and computer simulation studies ("simulationist's regress"; Gelfert 2011; Tolk 2017). See also Experimenter's bias References External links Experimenter's Regress on The Stanford Encyclopedia of Philosophy Brown, Matthew J. (2008): "Inquiry, Evidence, and Experiment: The "Experimenter's Regress" Dissolved", abstract, with link to full text. Scientific method Experimental bias
Experimenter's regress
[ "Mathematics" ]
1,113
[ "Experimental bias", "Statistical concepts" ]
1,608,521
https://en.wikipedia.org/wiki/TestNG
TestNG is a testing framework for the Java programming language created by Cédric Beust and inspired by JUnit and NUnit. The design goal of TestNG is to cover a wider range of test categories: unit, functional, end-to-end, integration, etc., with more powerful and easy-to-use functionalities.

Features
TestNG's main features include:
Annotation support.
Support for data-driven/parameterized testing (with @DataProvider and/or XML configuration).
Support for multiple instances of the same test class (with @Factory).
Flexible execution model. TestNG can be run either by Ant via build.xml (with or without a test suite defined), or by an IDE plugin with visual results. There is no TestSuite class; instead, test suites, groups and the tests selected to run are defined and configured by XML files.
Concurrent testing: run tests in arbitrarily big thread pools with various policies available (all methods in their own thread, one thread per test class, etc.), and test whether the code is multithread safe.
Embeds BeanShell for further flexibility.
Default JDK functions for runtime and logging (no dependencies).
Dependent methods for application server testing.
Distributed testing: allows distribution of tests on slave machines.

Data provider
A data provider in TestNG is a method in a test class, which provides an array of varied actual values to dependent test methods.

Example:

//This method will provide data to any test method that declares that its Data Provider is named "provider1".
@DataProvider(name = "provider1")
public Object[][] createData1() {
    return new Object[][] {
        { "Cedric", new Integer(36) },
        { "Anne", new Integer(37) }
    };
}

// This test method declares that its data should be supplied by the Data Provider named "provider1".
@Test(dataProvider = "provider1")
public void verifyData1(String n1, Integer n2) {
    System.out.println(n1 + " " + n2);
}

// A data provider which returns an iterator of parameter arrays.
@DataProvider(name = "provider2")
public Iterator<Object[]> createData() {
    return new MyIterator(...);
}

// A data provider with an argument of the type java.lang.reflect.Method.
// It is particularly useful when several test methods use the same
// provider and you want it to return different values depending on
// which test method it is serving.
@DataProvider(name = "provider3")
public Object[][] createData(Method m) {
    System.out.println(m.getName());
    return new Object[][] { new Object[] { "Cedric" } };
}

The returned type of a data provider can be one of the following two types:
An array of array of objects (Object[][]) where the first dimension's size is the number of times the test method will be invoked and the second dimension size contains an array of objects that must be compatible with the parameter types of the test method.
An Iterator<Object[]>. The only difference with Object[][] is that an Iterator lets you create your test data lazily. TestNG will invoke the iterator and then the test method with the parameters returned by this iterator one by one. This is particularly useful if you have a lot of parameter sets to pass to the method and you don't want to create all of them upfront.

Tool support
TestNG is supported, out-of-the-box or via plug-ins, by each of the three major Java IDEs: Eclipse, IntelliJ IDEA, and NetBeans. It also comes with a custom task for Apache Ant and is supported by the Maven build system. The Hudson continuous integration server has built-in support for TestNG and is able to track and chart test results over time.
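The configuration annotations listed under Features combine naturally in a single test class. The sketch below is illustrative only: the class, method and group names are invented, and it assumes a recent TestNG release on the classpath.

import org.testng.Assert;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class AccountLogTest {

    private StringBuilder log;

    // Runs once before any test method in the whole suite, e.g. to start shared infrastructure.
    @BeforeSuite
    public void startSuite() {
        System.out.println("suite starting");
    }

    // Runs before every test method in this class, giving each test a fresh log.
    @BeforeMethod
    public void setUp() {
        log = new StringBuilder();
    }

    // Groups let a testng.xml file (or the command line) include or exclude subsets of tests.
    @Test(groups = { "fast" })
    public void appendsText() {
        log.append("hello");
        Assert.assertEquals(log.toString(), "hello");
    }

    // dependsOnMethods tells TestNG to skip this test if the test it depends on fails.
    @Test(groups = { "fast" }, dependsOnMethods = { "appendsText" })
    public void appendsMoreText() {
        log.append(" world");
        Assert.assertEquals(log.toString(), " world");
    }
}

Run through an XML suite file or an IDE plugin, the "fast" group in this sketch can then be selected or excluded without changing the code.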
Most Java code coverage tools, such as Cobertura, work seamlessly with TestNG. Note: TestNG support for Eclipse is only embedded in the Eclipse Marketplace for Eclipse versions up to 2018-09 (4.9). For later versions of Eclipse, TestNG must be manually installed as per instructions in the TestNG site. Reporting TestNG generates test reports in HTML and XML formats. The XML output can be transformed by the Ant JUnitReport task to generate reports similar to those obtained when using JUnit. Since version 4.6, TestNG also provides a reporter API that permits third-party report generators, such as ReportNG, PDFngreport and TestNG-XSLT, to be used. Comparison with JUnit TestNG has a longstanding rivalry with another testing tool JUnit. Each framework has differences and respective advantages. Stack Overflow discussions reflect this controversy. Annotations In JUnit 5, the @BeforeAll and @AfterAll methods have to be declared as static in most circumstances. TestNG does not have this constraint. TestNG includes four additional setup/teardown annotation pairs for the test suite and groups: @BeforeSuite, @AfterSuite, @BeforeTest, @AfterTest, @BeforeGroup and @AfterGroup, @BeforeMethod and @AfterMethod. TestNG also provides support to automate testing an application using selenium. Parameterized testing Parameterized testing is implemented in both tools, but in quite different ways. TestNG has two ways for providing varying parameter values to a test method: by setting the testng.xml, and by defining a @DataProvider method. In JUnit 5, the @ParameterizedTest annotation allows parameterized testing. This annotation is combined with another annotation declaring the source of parameterized arguments, such as @ValueSource or @EnumSource. Using @ArgumentsSource allows the user to implement a more dynamic ArgumentsProvider. In JUnit 4, @RunWith and @Parameters are used to facilitate parameterized tests, where the @Parameters method has to return a List[] with the parameterized values, which will be fed into the test class constructor. Conclusion Different users often prefer certain features of one framework or another. JUnit is more widely popular and often shipped with mainstream IDEs by default. TestNG is noted for extra configuration options and capability for different kinds of testing. Which one more suitable depends on the use context and requirements. See also List of unit testing frameworks JUnit xUnit References External links TestNG Home page Unit testing frameworks Java platform Software using the Apache license
TestNG
[ "Technology" ]
1,426
[ "Computing platforms", "Java platform" ]
1,608,622
https://en.wikipedia.org/wiki/Isogamy
Isogamy is a form of sexual reproduction that involves gametes of the same morphology (indistinguishable in shape and size), and is found in most unicellular eukaryotes. Because both gametes look alike, they generally cannot be classified as male or female. Instead, organisms that reproduce through isogamy are said to have different mating types, most commonly noted as "+" and "−" strains. Etymology The etymology of isogamy derives from the Greek adjective isos (meaning equal) and the Greek verb gameo (meaning to have sex/to reproduce), eventually meaning "equal reproduction" which refers to a hypothetical initial model of equal contribution of resources by both gametes to a zygote in contrast to a later evolutional stage of anisogamy. The term isogamy was first used in the year 1891. Characteristics of isogamous species Isogamous species often have two mating types (heterothallism), but sometimes can occur between two haploid individuals that are mitotic descendents (homothallism). Some isogamous species have more than two mating types, but the number is usually lower than ten. In some extremely rare cases, such as in some basidiomycete species, a species can have thousands of mating types. Under the strict definition of isogamy, fertilization occurs when two gametes fuse to form a zygote. Sexual reproduction between two cells that does not involve gametes (e.g. conjugation between two mycelia in basidiomycete fungi), is often called isogamy, although it is not technically isogametic reproduction in the strict sense. Evolution As the first stage in the evolution of sexual reproduction in all known lifeforms, isogamy is thought to have evolved just once, in a single unicellular eukaryote species, the common ancestor of all eukaryotes. It is generally accepted that isogamy is an ancestral state for anisogamy. Isogamous reproduction evolved independently in several lineages of plants and animals into anisogamy (species with gametes of male and female types) and subsequently into oogamy (species in which the female gamete is much larger than the male and has no ability to move). This pattern may have been driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Since it appeared, isogamy has remained the norm in unicellular eukaryote species, and it is possible that isogamy is also evolutionarily stable in multicellular species. Occurrence Almost all unicellular eukaryotes are isogamous. Among multicellular organisms, isogamy is restricted to fungi and eukaryotic algae. Many species of green algae are isogamous. It is typical in the genera Ulva, Hydrodictyon, Tetraspora, Zygnema, Spirogyra, Ulothrix, and Chlamydomonas. Many fungi are also isogamous, including single-celled species such as Saccharomyces cerevisiae and Schizosaccharomyces pombe. In some multicellular fungi, such as basidiomycetes, sexual reproduction takes place between two mycelia, but there is no exchange of gametes. There are no known examples of isogamous metazoans, red algae or land plants. See also Biology Anisogamy Evolution of sexual reproduction Gamete Mating in fungi Meiosis Oogamy Sex Social anthropology Hypergamy Hypogamy Notes References Reproduction Germ cells Charophyta
Isogamy
[ "Biology" ]
761
[ "Biological interactions", "Behavior", "Reproduction" ]
1,608,716
https://en.wikipedia.org/wiki/Mass%20General%20Brigham
Mass General Brigham (MGB) (formerly Partners HealthCare) is a not-for-profit, integrated health care system that engages in medical research, teaching, and patient care. It is the largest hospital-based research enterprise in the United States, with annual funding of more than $2 billion. The system's annual revenue was nearly $18 billion in 2022. It is also an educational institution, founded by Brigham and Women's Hospital and Massachusetts General Hospital. The system provides clinical care through two academic hospitals, three specialty hospitals, seven community hospitals, home care services, a health insurance plan, and a robust network of specialty practices, urgent care facilities, and outpatient clinics/surgical centers. It is the largest private employer in Massachusetts. In 2023, the system reported that from 2017–2021 its overall economic impact was $53.4 billion – more than the annual state budget. History Mass General Brigham was founded by the academic medical centers (AMCs) which give it its name: Massachusetts General Hospital (colloquially referred to as "Mass General") and Brigham and Women's Hospital ("the Brigham"). Both hospitals were founded in the early 1800s, are based in Boston, and serve as major teaching hospitals of Harvard Medical School. In 1994, fueled by economic and political pressure to cut costs on patient care and health care education, the two hospitals merged to create a new parent corporation: Partners HealthCare. The two entities continued to operate largely independently, and remained competitors in multiple areas, until 2019. In 2015, Partners launched an electronic health record (EHR) system, allowing doctors, nurses, and other caregivers easier access patients' medical history. The effort computerized millions of health records across the system, creating one record for each Partners patient, allowing information to be more easily shared among caregivers. In 2016, the system moved to into their current headquarters, located in Somerville's Assembly Row. The building allowed Mass General Brigham to merge 14 other offices. In 2019, 25 years after the founding of Partners, the health system made the decision to fully integrate the organization under the new name "Mass General Brigham". Mass General Brigham has 2.5 million patients annually, generating $18 billion in operating revenue and more than $2 billion in research funding. Brigham and Women's and Massachusetts General are consistently ranked among the best hospitals in America, while Massachusetts Eye and Ear, McLean, and Spaulding are also among the nation's best in their respective specialties. Its current President and CEO is Dr. Anne Klibanski. Board of Directors The system's current Board of Directors consists of the following members: Executive Committee of the Board Scott M. Sperling (Chairman) – Co-Chief Executive Officer at Thomas H. Lee Partners John Fish – CEO of Suffolk Construction Company Jonathan Kraft – President of The Kraft Group Board Members Robert Atchinson – Co-Founder of Adage Capital Management Marc Casper – Chairman/President/CEO of Thermo Fisher Scientific Yolonda Colson, MD – Chief for the Division of Thoracic Surgery at Massachusetts General Hospital Zara Cooper, MD – Brigham & Women's Hospital, Assoc Professor of Surgery at Harvard Medical School Anne Finucane – Vice Chair of Bank of America, Board Chair of Bank of America Europe. 
Benjamin Gomez – Head of Capital Markets at BNP Paribas Real Estate Spain Tiffany Gueye – Fmr Chief Operating Officer at Blue Meridian Partners Susan Hockfield – Professor of Neuroscience and President Emerita at MIT Albert A. Holman, III – Founder and President of Chestnut Partners, Inc David W. Ives – Fmr Chairman, Northshore International Insurance Services, Inc Anne Klibanski, MD – President & CEO of Mass General Brigham Carl J. Martignetti – President of Martignetti Companies Nitin Nohria – Fmr Dean of Harvard Business School Diane B. Patrick – Senior Counsel at Ropes & Gray Phillip Ragon – Founder/Owner/CEO of InterSystems Corporation Pamela Reeve – Fmr CEO and current chair of multiple publicly traded & non-profit companies Paula Ness Speers – Partner and Managing Director, Health Advances James D. Taiclet – President & CEO of Lockheed Martin Alexander L. Thorndike – President of Choate Investment Advisors Carol Vallone – Board Chair at McLean Hospital, Advisory Director at Berkshire Partners Composition Current members of Mass General Brigham include: Affiliated Organizations Nobel Laureates There are at least 22 Nobel Prize winners affiliated with Mass General Brigham institutions. History of Firsts The following is a lists of medical firsts and milestones accomplished by Mass General Brigham institutions: 1811: Massachusetts General Hospital opens and becomes the first teaching hospital of Harvard Medical School. 1818: The Asylum for the Insane, a division of Mass General, opens as the first hospital in New England to treat mental illness. In 1892, it is renamed McLean Hospital, which is known today as the flagship mental health hospital of Harvard Medical School and Mass General Brigham. 1832: The Boston Lying-in Hospital was founded in Boston, MA, as one of the nation's first maternity hospitals dedicated to women unable to afford in-home medical care. It is the first of Brigham and Women's Hospital predecessor institutions. 1837: The first North American book on tumors was written by MGH co-founder Dr. John Collins Warren. 1841: MGH's Warren Library became the first general hospital library in the U.S. 1846: William T.G. Morton, MD, and John Collins Warren, MD, of Mass General perform the first successful public demonstration of surgical ether anesthesia. 1846: The "first truly significant medical patent ever issued" was U.S. Patent No. 4848. It was given to Drs. Charles T. Jackson and William T. G. Morton for the discovery of sulfuric ether as a surgical anesthetic. 1847: MGH's Dr. John Barnard Swett Jackson became the first professor of pathology in the U.S. 1870: MGH's Dr. James Clarke White opened the first ward in North America dedicated to skin diseases; the following year, he became the first American professor of dermatology. 1888: MGH opened the Bradlee Operating Theater, the first aseptic operating room in U.S. 1896: Walter J. Dodd, an apothecary and photographer at MGH, produced the first X-ray exposure in a U.S. hospital. 1900: Two 1878 graduates of the Massachusetts General Hospital Training School for Nurses, Sophia Palmer and Mary E. P. Davis, founded the American Journal of Nursing, the first independent nursing publication to be owned and operated by nurses. 1905: Though MGH's Ida M. Cannon and Dr. Richard Cabot are credited with establishing the first Social Service department located within a hospital. 1914: MGH physician Dr. Paul Dudley White introduced the use of the electrocardiogram (ECG) in the U.S. 
1914: A pioneering allergy clinic was instituted by MGH's Dr. Joseph L. Goodale, who was "the first to make a skin test with substances other than pollen." 1921: MGH physician Dr. Ernest Amory Codman founded the Registry of Bone Sarcoma, the first national registry of its kind in the U.S. 1923: The first successful heart valve surgery in the world is performed at the Peter Bent Brigham Hospital by Elliot C. Cutler, MD. 1926: Harvey Cushing, MD, performs the first surgery using an electrosurgical generator in an operating room at the Peter Bent Brigham Hospital. 1929: The first polio victim is saved using the newly developed Drinker respirator (iron lung) at the Peter Bent Brigham Hospital. 1934: Under the leadership of Dr. Richard C. Cabot, MGH became the first hospital in the country to offer a pastoral care training program. 1937: A man of many "firsts", MGH endocrinologist Dr. Fuller Albright described what came to be known as Albright Syndrome. 1939: MGH's Dr. Edward D. Churchill, who performed the first successful pericardiectomy in the United States, developed the technique of segmental resection of the lung for certain infections like bronchiectasis. 1940: MGH's Ada Plumer became the first "official IV [intravenous] nurse" in the U.S. Until that time, it had been a medical role. 1942: MGH's Dr. Saul Hertz and MIT physicist Dr. Arthur Roberts used radioactive iodine for the first time as a therapeutic agent in the diagnosis and treatment of Graves' disease, helping to usher in the field of nuclear medicine. 1949: Carl Walter, MD, invents and perfects a way to collect, store and transfuse blood at the Peter Bent Brigham Hospital. 1954: The first successful human organ transplant, a kidney transplant, is accomplished at Peter Bent Brigham Hospital. Joseph Murray, MD, receives the Nobel Prize for this work. 1955: Drs. Wilma Jeanne Canada and Leonard W. Cronkhite, Jr., both residents in radiology at MGH, were the first to recognize the syndrome that bears their name, Cronkhite–Canada Syndrome. 1960: Dwight Harken, MD inserts the first prosthetic aortic valve directly into a human heart at the site of the biological valve. He also implants the first "demand" pacemaker and pioneers the use of the first pacemakers at the Peter Bent Brigham Hospital. 1962: Joseph Murray, MD, performs the world's first successful kidney transplantation from an unrelated cadaver donor. The procedure included the first clinical use of the immunosuppressive drug azathioprine. 1962: MGH's Dr. Ronald Malt and his team led the first successful limb replantation after twelve-year-old Everett "Red" Knowles's arm had been severed in an accident. 1963: MGH's Dr. Charles Huggins helped revolutionize blood bank procedures through his invention of the cytoglomerator, enabling freezing and storing red blood cells for extended periods. 1968: The first telemedicine system, which linked a medical station at Boston's Logan Airport with doctors at MGH, was established. 1969: Considered the "father of modern-day tracheal surgery" in the United States, MGH's Dr. Hermes Grillo developed original operations for disorders that were once considered uncorrectable. 1971: Three MGH physicians: Drs. Howard Ulfelder, Arthur L. Herbst and David C. Poskanzer, were the first to discover the link between the vaginal clear cell adenocarcinoma and the drug DES (diethylstilbestrol), at one time prescribed to prevent miscarriages. 1974: MGH dermatologists Drs. 
Thomas Fitzpatrick and John Parrish introduced the field of photochemotherapy to treat skin disorders such as psoriasis. 1976: Brigham and Women's Hospital researchers launch the Nurses' Health Study, enrolling 122,000 women in America's first and largest women's health study. 1978: MGH's Dr. Jeffrey B. Cooper, with colleagues at MGH and MIT, developed the "Boston Anesthesia System," the first anesthesia machine engineered by way of human-factors studies, and the first with computer-based operations. 1981: Faulker Hospital makes history by being the first to successfully transfuse a patient with "rejuvenated blood". 1981: MGH surgeon Dr. John F. Burke, along with Dr. Ioannis V. Yannas, from the Massachusetts Institute of Technology's department of mechanical engineering, invented the first commercially reproducible, synthetic human skin. 1983: MGH neurogeneticist Dr. James Gusella lead a team that found a genetic marker for Huntington's disease. 1983: Dr. Allan Goroll, a pioneer of modern primary care, collaborated with his MGH colleagues on the first textbook in that field. 1989: MGH became the first hospital in the U.S. whose library had an online catalog 1991: Dr. Jack Belliveau, researcher in MGH's Athinoula A. Martinos Center for Biomedical Imaging, reported the first demonstration of functional MRI (fMRI). 1995: The Brigham performs the nation's first triple organ transplant, removing three organs from a single donor—two lungs and a heart—and transplanting them into three patients. 1999: MGH's Dr. Thomas Spitzer and colleagues reported on the first-ever organ transplant carried out with the intention of stopping antirejection therapy. 2000: In what is believed to be a first in organ transplantation, the Brigham performs a quadruple transplant, harvesting four organs from a single donor—a kidney, two lungs and a heart—and transplanting them into four patients. 2004: The Brigham performs the nation's first implant of the new Intrinsic dual-chamber implantable cardioverter-defibrillator (ICD). 2007: MGH surgeons performed the first total hip replacement using a joint socket lined with a novel material invented at MGH. 2009: Brigham surgeons complete the second partial facial transplant in the United States. 2011: A multidisciplinary team at Brigham and Women's Hospital, led by Bohdan Pomahac, MD, performs the first full-face transplant in the U.S. 2015: An international team led by MGH researchers identified the first gene that causes mitral valve prolapse. 2016: The Brigham performs the first bilateral arm transplant on a patient injured during military service. 2016: MGH was the first hospital in the country where a liver transplant was performed using what doctors loosely call "liver in a box", a portable device. 2016: A surgical team led by MGHers Drs. Curtis L. Cetrulo, Jr. and Dicken S.C. Ko performed the country's first genitourinary vascularized composite allograft (penile) transplant. 2017: MGH's Dr. Bradley E. Bernstein, along with colleagues from MGH, Mass. Eye and Ear, and the Broad Institute at MIT, created the first atlas of head and neck cancer. Innovations and ventures Mass General Brigham is the largest hospital system-based research enterprise in America, with an annual research budget exceeding $2 billion. It is the top system for National Institutes of Health (NIH) funding in the world, receiving $1.04 billion from NIH in 2022. 
The system's funding for research has grown from $1.5 billion in 2012 to $2.3 billion in 2023, with nearly 2/3 of the funds coming from outside of Massachusetts. Research revenues in 2022 were $2.2 billion. In 2023, the system said it had over 2,700 ongoing clinical trials, focused on accelerating new treatments and therapies. Among the system's recent innovations: Visudyne for macular degeneration, Enbrel for rheumatoid arthritis, Eloctate and Alprolix for hemophilia, Entyvio for crohn's disease, and total joint replacements such as Durasul, Longevity, E1, and Vicacit-E. Expansion and influence In May 2000, CEO Dr. Samuel Thier and William C. Van Faasen, CEO of Blue Cross Blue Shield of Massachusetts—the state's biggest health insurer—agreed to a deal that raised insurance costs all across Massachusetts. They agreed that Van Faasen would substantially increase insurance payments to Mass General Brigham doctors and hospitals, largely correcting the underpayments of the previous 10 years. However, Partners issued a statement saying that Thier pledged only that he would treat all insurers equally. According to Boston Globe investigative journalists, Blue Cross and other insurers increased the rate they paid Mass General Brigham by 75 percent between 2000 and 2008, though CEO James J. Mongan argued insurance rates in Massachusetts have gone up at roughly the same rate as the national average. In 2013, Mass General Brigham's plan to take over 378-bed South Shore Hospital in Weymouth was reviewed due to fears that the expansion plan is anticompetitive, a conduct Mass General Brigham had been accused of over the past four years in other cases. In 2015, the system abandoned their plans to invest $200 million into the hospital. In April 2017, the United States District Court for the District of Massachusetts announced that Partners HealthCare System and one of its hospitals, Brigham and Women's Hospital, agreed to pay a $10 million fine to resolve allegations that a stem cell research lab fraudulently obtained federal grant funding. Federal prosecutors commended the Brigham for disclosing allegations of fraudulent research at the lab and for taking steps to prevent future recurrences of such conduct. In May 2017, Partners announced they would be cutting more than $600 million in expenses over the next three years in an effort to control higher costs and to become more efficient. The cost-cutting initiative was called Partners 2.0, and the plan looked to reduce costs in research, care delivery, revenue collection, and supply chain. The plan began on October 1, 2017 and eliminated jobs. The company lost $108 million in 2016, but was profitable in 2017 despite industry turmoil. In February 2018, Partners announced that 100 coders would have their jobs outsourced to India in a cost saving move. This was all part of the non-profit hospital and physicians network's three-year plan to reduce $500 million to $800 million in overhead costs. CEO Dr. David Torchiana said the job cuts were a financial necessity, adding that most sectors outsource call centers and back-office functions. During the SARS-CoV-2 pandemic, Partners HealthCare, who reported operating income of $484 million (3.5% operating margin) in fiscal year 2019, refused hazard pay to its healthcare workers despite lack of proper PPE. However, they did not layoff or furlough any employees during the pandemic, while cutting executive salaries. 
The system explained it does not calibrate pay and benefits based upon patients' conditions, because a core part of its mission is delivering the same high-quality care to all patients regardless of the severity of their condition. Partners also provided employees with pay and benefits for those unable to work due to COVID-related illness, eight weeks of pay for those temporarily without work, and hotel rooms for employees. Mass General Brigham reported a loss of operations of $432 million (−2.6% operating margin) in fiscal year 2022 due to historic cost inflation, significant workforce shortages, and a worsening capacity crisis. Many health care systems and hospitals nationwide are experiencing the worst year financially since the start of the COVID-19 pandemic. In response, the system announced its plan for a long-term sustainable future, which includes the following initiatives: Advancing integration to improve patient care and identify efficiencies, addressing the labor shortage by building workforce pipelines, and reducing expenses. See also Partners Harvard Medical International Steward Health Care System References External links Partners International Medical Services Spaudling Rehabilitation Network Partners Healthcare At Home 1994 establishments in Massachusetts Healthcare in Boston Hospital networks in the United States Life sciences industry Massachusetts General Hospital Non-profit organizations based in Boston Medical and health organizations based in Massachusetts
Mass General Brigham
[ "Biology" ]
3,996
[ "Life sciences industry" ]
1,608,805
https://en.wikipedia.org/wiki/Paned%20window%20%28computing%29
A paned window is a window in a graphical user interface that has multiple parts, layers, or sections. Examples of this include a code browser in a typical integrated development environment; a file browser with multiple panels; a tiling window manager; or a web page that contains multiple frames. Simple console applications use an edit pane for accepting input and an output pane for displaying output. The term task pane is used by Microsoft to identify any area cordoned off from the main screen area of an application and used for a specific function, such as changing the displayed font in a word processor.
Three-pane interface
A three-pane interface is a category of graphical user interface in which the screen or window is divided into three panes displaying information. This information typically falls into a hierarchical relationship of master-detail with an embedded inspector window. Microsoft's Outlook Express email client popularized a mailboxes / mailbox contents / email text layout that became the norm until web-based user interfaces rose in popularity during the mid-2000s. Even today, many webmail scripts emulate this interface style.
References
Microsoft Windows Graphical user interface elements
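One common way to realise the three-pane arrangement described above is to nest two split panes. The following is a minimal, self-contained sketch in Java Swing; the class name, pane contents and window title are invented for illustration and do not refer to any particular product.

import javax.swing.*;
import java.awt.Dimension;

public class ThreePaneDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // Left pane: the "master" list (e.g. mailboxes).
            JList<String> folders = new JList<>(new String[] { "Inbox", "Sent", "Drafts" });

            // Upper right pane: contents of the selected folder.
            JList<String> messages = new JList<>(new String[] { "Message 1", "Message 2" });

            // Lower right pane: the embedded inspector showing the selected item.
            JTextArea preview = new JTextArea("Select a message to preview it here.");

            // Nest two split panes to get the classic three-pane layout.
            JSplitPane rightSplit = new JSplitPane(JSplitPane.VERTICAL_SPLIT,
                    new JScrollPane(messages), new JScrollPane(preview));
            JSplitPane mainSplit = new JSplitPane(JSplitPane.HORIZONTAL_SPLIT,
                    new JScrollPane(folders), rightSplit);

            JFrame frame = new JFrame("Three-pane interface");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(mainSplit);
            frame.setPreferredSize(new Dimension(640, 400));
            frame.pack();
            frame.setVisible(true);
        });
    }
}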
Paned window (computing)
[ "Technology" ]
235
[ "Computing platforms", "Microsoft Windows", "Components", "Graphical user interface elements" ]
1,608,886
https://en.wikipedia.org/wiki/Tests%20of%20special%20relativity
Special relativity is a physical theory that plays a fundamental role in the description of all physical phenomena, as long as gravitation is not significant. Many experiments played (and still play) an important role in its development and justification. The strength of the theory lies in its unique ability to correctly predict to high precision the outcome of an extremely diverse range of experiments. Repeats of many of those experiments are still being conducted with steadily increased precision, with modern experiments focusing on effects at the Planck scale and in the neutrino sector. Their results are consistent with the predictions of special relativity. Collections of various tests were given by Jakob Laub, Zhang, Mattingly, Clifford Will, and Roberts/Schleif. Special relativity is restricted to flat spacetime, i.e., to all phenomena without significant influence of gravitation. The latter lies in the domain of general relativity, and the corresponding tests of general relativity must be considered.
Experiments paving the way to relativity
The predominant theory of light in the 19th century was that of the luminiferous aether, a stationary medium in which light propagates in a manner analogous to the way sound propagates through air. By analogy, it follows that the speed of light is constant in all directions in the aether and is independent of the velocity of the source. Thus an observer moving relative to the aether must measure some sort of "aether wind" even as an observer moving relative to air measures an apparent wind.
First-order experiments
Beginning with the work of François Arago (1810), a series of optical experiments had been conducted, which should have given a positive result for magnitudes of first order in v/c and which thus should have demonstrated the relative motion of the aether. Yet the results were negative. An explanation was provided by Augustin Fresnel (1818) with the introduction of an auxiliary hypothesis, the so-called "dragging coefficient", that is, matter is dragging the aether to a small extent. This coefficient was directly demonstrated by the Fizeau experiment (1851). It was later shown that all first-order optical experiments must give a negative result due to this coefficient. In addition, some electrostatic first-order experiments were conducted, again having negative results. In general, Hendrik Lorentz (1892, 1895) introduced several new auxiliary variables for moving observers, demonstrating why all first-order optical and electrostatic experiments have produced null results. For example, Lorentz proposed a location variable by which electrostatic fields contract in the line of motion and another variable ("local time") by which the time coordinates for moving observers depend on their current location.
Second-order experiments
The stationary aether theory, however, would give positive results when the experiments are precise enough to measure magnitudes of second order in v/c (i.e., of v²/c²). Albert A. Michelson conducted the first experiment of this kind in 1881, followed by the more sophisticated Michelson–Morley experiment in 1887. Two rays of light, traveling for some time in different directions, were brought to interfere, so that different orientations relative to the aether wind should lead to a displacement of the interference fringes. But the result was negative again.
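The size of the second-order effect being sought can be estimated from the Earth's orbital speed; the figures below are a rough back-of-the-envelope estimate for orientation, not the published analysis of the experiment:

\frac{v}{c} \approx \frac{3\times 10^{4}\ \mathrm{m/s}}{3\times 10^{8}\ \mathrm{m/s}} = 10^{-4}, \qquad \left(\frac{v}{c}\right)^{2} \approx 10^{-8}, \qquad \Delta t \sim \frac{L}{c}\left(\frac{v}{c}\right)^{2}

On aether theory, the two interferometer arms should therefore differ in light travel time by only about one part in 10⁸, which is why an interferometer sensitive to a fraction of a wavelength was required.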
The way out of this dilemma was the proposal by George Francis FitzGerald (1889) and Lorentz (1892) that matter is contracted in the line of motion with respect to the aether (length contraction). That is, the older hypothesis of a contraction of electrostatic fields was extended to intermolecular forces. However, since there was no theoretical reason for that, the contraction hypothesis was considered ad hoc. Besides the optical Michelson–Morley experiment, its electrodynamic equivalent was also conducted, the Trouton–Noble experiment. By that it should be demonstrated that a moving condenser must be subjected to a torque. In addition, the Experiments of Rayleigh and Brace intended to measure some consequences of length contraction in the laboratory frame, for example the assumption that it would lead to birefringence. Though all of those experiments led to negative results. (The Trouton–Rankine experiment conducted in 1908 also gave a negative result when measuring the influence of length contraction on an electromagnetic coil.) To explain all experiments conducted before 1904, Lorentz was forced to again expand his theory by introducing the complete Lorentz transformation. Henri Poincaré declared in 1905 that the impossibility of demonstrating absolute motion (principle of relativity) is apparently a law of nature. Refutations of complete aether drag The idea that the aether might be completely dragged within or in the vicinity of Earth, by which the negative aether drift experiments could be explained, was refuted by a variety of experiments. Oliver Lodge (1893) found that rapidly whirling steel disks above and below a sensitive common path interferometric arrangement failed to produce a measurable fringe shift. Gustaf Hammar (1935) failed to find any evidence for aether dragging using a common-path interferometer, one arm of which was enclosed by a thick-walled pipe plugged with lead, while the other arm was free. The Sagnac effect showed that aether wind caused by earth drag cannot be demonstrated. The existence of the aberration of light was inconsistent with aether drag hypothesis. The assumption that aether drag is proportional to mass and thus only occurs with respect to Earth as a whole was refuted by the Michelson–Gale–Pearson experiment, which demonstrated the Sagnac effect through Earth's motion. Lodge expressed the paradoxical situation in which physicists found themselves as follows: "...at no practicable speed does ... matter [have] any appreciable viscous grip upon the ether. Atoms must be able to throw it into vibration, if they are oscillating or revolving at sufficient speed; otherwise they would not emit light or any kind of radiation; but in no case do they appear to drag it along, or to meet with resistance in any uniform motion through it." Special relativity Overview Eventually, Albert Einstein (1905) drew the conclusion that established theories and facts known at that time only form a logical coherent system when the concepts of space and time are subjected to a fundamental revision. For instance: Maxwell-Lorentz's electrodynamics (independence of the speed of light from the speed of the source), the negative aether drift experiments (no preferred reference frame), Moving magnet and conductor problem (only relative motion is relevant), the Fizeau experiment and the aberration of light (both implying modified velocity addition and no complete aether drag). 
The result is special relativity theory, which is based on the constancy of the speed of light in all inertial frames of reference and the principle of relativity. Here, the Lorentz transformation is no longer a mere collection of auxiliary hypotheses but reflects a fundamental Lorentz symmetry and forms the basis of successful theories such as quantum electrodynamics. There are a large number of possible tests of its predictions and of the second postulate: Fundamental experiments The effects of special relativity can phenomenologically be derived from the following three fundamental experiments: the Michelson–Morley experiment, by which the dependence of the speed of light on the direction of the measuring device can be tested. It establishes the relation between longitudinal and transverse lengths of moving bodies. The Kennedy–Thorndike experiment, by which the dependence of the speed of light on the velocity of the measuring device can be tested. It establishes the relation between longitudinal lengths and the duration of time of moving bodies. The Ives–Stilwell experiment, by which time dilation can be directly tested. From these three experiments and by using the Poincaré–Einstein synchronization, the complete Lorentz transformation follows: t′ = γ(t − vx/c²), x′ = γ(x − vt), y′ = y, z′ = z, with γ = 1/√(1 − v²/c²) being the Lorentz factor. Besides the derivation of the Lorentz transformation, the combination of these experiments is also important because they can be interpreted in different ways when viewed individually. For example, isotropy experiments such as Michelson–Morley can be seen as a simple consequence of the relativity principle, according to which any inertially moving observer can consider himself as at rest. Therefore, by itself, the MM experiment is compatible with Galilean-invariant theories like emission theory or the complete aether drag hypothesis, which also contain some sort of relativity principle. However, when other experiments that exclude the Galilean-invariant theories are considered (i.e. the Ives–Stilwell experiment, various refutations of emission theories and refutations of complete aether dragging), Lorentz-invariant theories and thus special relativity are the only theories that remain viable. Constancy of the speed of light Interferometers, resonators Modern variants of the Michelson–Morley and Kennedy–Thorndike experiments have been conducted in order to test the isotropy of the speed of light. Contrary to Michelson–Morley, the Kennedy–Thorndike experiments employ different arm lengths, and the evaluations last several months. In that way, the influence of different velocities during Earth's orbit around the Sun can be observed. Laser, maser and optical resonators are used, reducing the possibility of any anisotropy of the speed of light to the 10^−17 level. In addition to terrestrial tests, Lunar Laser Ranging experiments have also been conducted as a variation of the Kennedy–Thorndike experiment. Another type of isotropy test is the Mössbauer rotor experiments of the 1960s, by which the anisotropy of the Doppler effect on a rotating disc can be observed by using the Mössbauer effect (those experiments can also be utilized to measure time dilation, see below). No dependence on source velocity or energy Emission theories, according to which the speed of light depends on the velocity of the source, can conceivably explain the negative outcome of aether drift experiments. It was not until the mid-1960s that the constancy of the speed of light was definitively shown by experiment, since in 1965, J. G.
Fox showed that the effects of the extinction theorem rendered the results of all experiments previous to that time inconclusive, and therefore compatible with both special relativity and emission theory. More recent experiments have definitively ruled out the emission model: the earliest were those of Filippas and Fox (1964), using moving sources of gamma rays, and Alväger et al. (1964), which demonstrated that photons did not acquire the speed of the high-speed decaying mesons which were their source. In addition, the de Sitter double star experiment (1913) was repeated by Brecher (1977) under consideration of the extinction theorem, ruling out a source dependence as well. Observations of gamma-ray bursts also demonstrated that the speed of light is independent of the frequency and energy of the light rays. One-way speed of light A series of one-way measurements was undertaken, all of them confirming the isotropy of the speed of light. However, only the two-way speed of light (from A to B back to A) can unambiguously be measured, since the one-way speed depends on the definition of simultaneity and therefore on the method of synchronization. The Einstein synchronization convention makes the one-way speed equal to the two-way speed. However, there are many models having an isotropic two-way speed of light in which the one-way speed is made anisotropic by choosing different synchronization schemes. They are experimentally equivalent to special relativity because all of these models include effects such as time dilation of moving clocks that compensate for any measurable anisotropy. However, of all models having an isotropic two-way speed, only special relativity is acceptable to the overwhelming majority of physicists, since all other synchronizations are much more complicated, and those other models (such as Lorentz ether theory) are based on extreme and implausible assumptions concerning some dynamical effects, which are aimed at hiding the "preferred frame" from observation. Isotropy of mass, energy, and space Clock-comparison experiments (periodic processes and frequencies can be considered as clocks) such as the Hughes–Drever experiments provide stringent tests of Lorentz invariance. They are not restricted to the photon sector, as Michelson–Morley is, but directly determine any anisotropy of mass, energy, or space by measuring the ground state of nuclei. Upper limits on such anisotropies of 10^−33 GeV have been obtained. Thus these experiments are among the most precise verifications of Lorentz invariance ever conducted. Time dilation and length contraction The transverse Doppler effect, and consequently time dilation, was directly observed for the first time in the Ives–Stilwell experiment (1938). In modern Ives–Stilwell experiments in heavy ion storage rings using saturation spectroscopy, the maximum measured deviation of time dilation from the relativistic prediction has been limited to ≤ 10^−8. Other confirmations of time dilation include Mössbauer rotor experiments in which gamma rays were sent from the middle of a rotating disc to a receiver at the edge of the disc, so that the transverse Doppler effect can be evaluated by means of the Mössbauer effect. By measuring the lifetime of muons in the atmosphere and in particle accelerators, the time dilation of moving particles was also verified. On the other hand, the Hafele–Keating experiment confirmed the resolution of the twin paradox, i.e. that a clock moving from A to B back to A is retarded with respect to the initial clock.
However, in this experiment the effects of general relativity also play an essential role. Direct confirmation of length contraction is hard to achieve in practice since the dimensions of the observed particles are vanishingly small. However, there are indirect confirmations; for example, the behavior of colliding heavy ions can be explained if their increased density due to Lorentz contraction is considered. Contraction also leads to an increase of the intensity of the Coulomb field perpendicular to the direction of motion, whose effects have already been observed. Consequently, both time dilation and length contraction must be considered when conducting experiments in particle accelerators. Relativistic momentum and energy Starting in 1901, a series of measurements was conducted with the aim of demonstrating the velocity dependence of the mass of electrons. The results actually showed such a dependency, but the precision necessary to distinguish between competing theories was disputed for a long time. Eventually, it was possible to definitively rule out all competing models except special relativity. Today, special relativity's predictions are routinely confirmed in particle accelerators such as the Relativistic Heavy Ion Collider. For example, the increase of relativistic momentum and energy is not only precisely measured but also necessary to understand the behavior of cyclotrons, synchrotrons, etc., by which particles are accelerated to near the speed of light. Sagnac and Fizeau Special relativity also predicts that two light rays traveling in opposite directions around a spinning closed path (e.g. a loop) require different flight times to come back to the moving emitter/receiver (this is a consequence of the independence of the speed of light from the velocity of the source, see above). This effect was actually observed and is called the Sagnac effect. Currently, the consideration of this effect is necessary for many experimental setups and for the correct functioning of GPS. If such experiments are conducted in moving media (e.g. water, or glass optical fiber), it is also necessary to consider Fresnel's dragging coefficient as demonstrated by the Fizeau experiment. Although this effect was initially understood as giving evidence of a nearly stationary aether or a partial aether drag, it can easily be explained with special relativity by using the velocity composition law.
The outcomes are analyzed by test theories (as mentioned above) like RMS or, more importantly, by SME. Besides the mentioned variations of Michelson–Morley and Kennedy–Thorndike experiments, Hughes–Drever experiments are continuing to be conducted for isotropy tests in the proton and neutron sector. To detect possible deviations in the electron sector, spin-polarized torsion balances are used. Time dilation is confirmed in heavy ion storage rings, such as the TSR at the MPIK, by observation of the Doppler effect of lithium, and those experiments are valid in the electron, proton, and photon sector. Other experiments use Penning traps to observe deviations of cyclotron motion and Larmor precession in electrostatic and magnetic fields. Possible deviations from CPT symmetry (whose violation represents a violation of Lorentz invariance as well) can be determined in experiments with neutral mesons, Penning traps and muons, see Antimatter Tests of Lorentz Violation. Astronomical tests are conducted in connection with the flight time of photons, where Lorentz violating factors could cause anomalous dispersion and birefringence leading to a dependency of photons on energy, frequency or polarization. With respect to threshold energy of distant astronomical objects, but also of terrestrial sources, Lorentz violations could lead to alterations in the standard values for the processes following from that energy, such as Vacuum Cherenkov radiation, or modifications of synchrotron radiation. Neutrino oscillations (see Lorentz-violating neutrino oscillations) and the speed of neutrinos (see measurements of neutrino speed) are being investigated for possible Lorentz violations. Other candidates for astronomical observations are the Greisen–Zatsepin–Kuzmin limit and Airy disks. The latter is investigated to find possible deviations of Lorentz invariance that could drive the photons out of phase. Observations in the Higgs sector are under way. See also Tests of general relativity History of special relativity Test theories of special relativity References Physics experiments Special relativity
Tests of special relativity
[ "Physics" ]
3,895
[ "Special relativity", "Experimental physics", "Physics experiments", "Theory of relativity" ]
1,608,955
https://en.wikipedia.org/wiki/Concanavalin%20A
Concanavalin A (ConA) is a lectin (carbohydrate-binding protein) originally extracted from the jack-bean (Canavalia ensiformis). It is a member of the legume lectin family. It binds specifically to certain structures found in various sugars, glycoproteins, and glycolipids, mainly internal and nonreducing terminal α-D-mannosyl and α-D-glucosyl groups. Its physiological function in plants, however, is still unknown. ConA is a plant mitogen, and is known for its ability to stimulate mouse T-cell subsets giving rise to four functionally distinct T cell populations, including precursors to regulatory T cells; a subset of human suppressor T-cells is also sensitive to ConA. ConA was the first lectin to be available on a commercial basis, and is widely used in biology and biochemistry to characterize glycoproteins and other sugar-containing entities on the surface of various cells. It is also used to purify glycosylated macromolecules in lectin affinity chromatography, as well as to study immune regulation by various immune cells. Structure and properties Like most lectins, ConA is a homotetramer: each sub-unit (26.5 kDa, 235 amino acids, heavily glycated) binds metal ions (usually a Mn2+ and a Ca2+). It has D2 symmetry. Its tertiary structure has been elucidated, and the molecular basis of its interactions with metals, as well as of its affinity for the sugars mannose and glucose, is well understood. ConA specifically binds α-D-mannosyl and α-D-glucosyl residues (two hexoses differing only in the alcohol on carbon 2) in the terminal position of branched structures from N-glycans (rich in α-mannose, or hybrid and bi-antennary glycan complexes). It has 4 binding sites, corresponding to the 4 sub-units. The molecular weight is 104–112 kDa and the isoelectric point (pI) is in the range of 4.5–5.5. ConA can also initiate cell division (mitogenesis), primarily acting on T-lymphocytes, by stimulating their energy metabolism within seconds of exposure. Maturation process ConA and its variants (found in closely related plants) are the only proteins known to undergo a post-translational sequence rearrangement known as circular permutation, whereby the N-terminal half of the ConA precursor is swapped to become the C-terminal half in the mature form; all other known circular permutations occur at the genetic level. ConA circular permutation is carried out by jack bean asparaginyl endopeptidase, a versatile enzyme capable of cleaving and ligating peptide substrates at a single active site. To convert ConA to the mature form, jack bean asparaginyl endopeptidase cleaves the precursor of ConA in the middle and ligates the two original termini. Biological activity Concanavalin A interacts with diverse receptors containing mannose carbohydrates, notably rhodopsin, blood group markers, insulin receptors, the immunoglobulins and the carcinoembryonic antigen (CEA). It also interacts with lipoproteins. ConA strongly agglutinates erythrocytes irrespective of blood group, and various cancerous cells. It was demonstrated that transformed cells and trypsin-treated normal cells do not agglutinate at 4 °C, thereby suggesting that there is a temperature-sensitive step involved in ConA-mediated agglutination. ConA-mediated agglutination of other cell types has been reported, including muscle cells, B-lymphocytes (through surface immunoglobulins), fibroblasts, rat thymocytes, human fetal (but not adult) intestinal epithelial cells, and adipocytes. ConA is a lymphocyte mitogen.
Similar to phytohemagglutinin (PHA), it is a selective T cell mitogen relative to its effects on B cells. PHA and ConA bind and cross-link components of the T cell receptor, and their ability to activate T cells is dependent on expression of the T cell receptor. ConA interacts with the surface mannose residues of many microbes, including the bacteria E. coli and Bacillus subtilis and the protist Dictyostelium discoideum. It has also been shown to stimulate several matrix metalloproteinases (MMPs). ConA has proven useful in applications requiring solid-phase immobilization of glycoenzymes, especially those that have proved difficult to immobilize by traditional covalent coupling. Using ConA-coupled matrices, such enzymes may be immobilized in high quantities without a concurrent loss of activity or stability. Such noncovalent ConA-glycoenzyme couplings may be relatively easily reversed by competition with sugars or at acidic pH. If necessary for certain applications, these couplings can be converted to covalent linkages by chemical manipulation. A report from Taiwan (2009) demonstrated a potent therapeutic effect of ConA against experimental hepatoma (liver cancer); in the study by Lei and Chang, ConA was found to be sequestered more by hepatic tumor cells, in preference to surrounding normal hepatocytes. Internalization of ConA occurs preferentially to the mitochondria after binding to cell membrane glycoproteins, which triggers autophagic cell death. ConA was found to partially inhibit tumor nodule growth independently of its lymphocyte activation; the eradication of the tumor in the murine in-situ hepatoma model in this study was additionally attributed to the mitogenic/lymphoproliferative action of ConA, which may have activated a CD8+ T-cell-mediated, as well as NK- and NK-T cell-mediated, immune response in the liver. ConA intravitreal injection can be used in the modeling of proliferative vitreoretinopathy in rats. References External links Concanavalin A structure World of Lectin, Gateway to lectins con A in complex with methyl alpha1-2 mannobioside Proteins Lectins Legume lectins
Concanavalin A
[ "Chemistry" ]
1,387
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
1,608,987
https://en.wikipedia.org/wiki/Thoughtworks
Thoughtworks Holding, Inc. is a privately held, global technology company with 49 offices in 18 countries. It provides software design and delivery, and tools and consulting services. The company is closely associated with the movement for agile software development, and has contributed to open source products. Thoughtworks' business includes Digital Product Development Services, Digital Experience and Distributed Agile software development. History 1980s–1990s In the late 1980s, Roy Singham founded Singham Business Services in a Chicago basement as a management consulting company servicing the equipment leasing industry. According to Singham, after two to three years he started recruiting additional staff, and he came up with the name Thoughtworks in 1990. The company was incorporated under the new name in 1993 and focused on building software applications. Over time, Thoughtworks' technology shifted from C++ and Forte 4GL in the mid-1990s to include Java in the late 1990s. 1990s–2010s Martin Fowler joined the company in 1999 and became its chief scientist in 2000. In 2001, Thoughtworks agreed to settle a lawsuit by Microsoft for $480,000 for deploying unlicensed copies of office productivity software to employees. Also in 2001, Fowler, Jim Highsmith, and other key software figures authored the Agile Manifesto. The company began using agile techniques while working on a leasing project. Thoughtworks' technical expertise expanded with the .NET Framework in 2002, C# in 2004, and Ruby and the Rails platform in 2006. In 2002, Thoughtworks chief scientist Martin Fowler wrote "Patterns of Enterprise Application Architecture" with contributions by ThoughtWorkers David Rice and Matthew Foemmel, as well as outside contributors Edward Hieatt, Robert Mee, and Randy Stafford. Thoughtworks Studios was launched as its product division in 2006 and shut down in 2020. The division created, supported and sold agile project management and software development and deployment tools including Mingle, Gauge (formerly Twist), Snap CI and GoCD. On 2 March 2007, Thoughtworks announced Trevor Mather as the new CEO. Singham became executive chairman. Also in March 2007, Rebecca Parsons assumed the role of Chief Technical Officer, having been with the company since 1999. By 2008, Thoughtworks employed 1,000 people and was growing at the rate of 20–30% p.a., with bases around the world. Its clients included Microsoft, Oracle, major banks, and The Guardian newspaper. Singham owned 97% of the common stock of the company. By 2010, its clients included Daimler AG, Siemens and Barclays, and it had opened a second headquarters in Bangalore. In 2010, Singham opened Thoughtworks' Fifth Agile Software Development Conference in Beijing. 2010s–2020s In 2010, Jim Highsmith joined Thoughtworks. In April 2013, Thoughtworks announced a collective leadership structure and appointed four co-Presidents of the global organization. The appointments followed the announcement that the then-current CEO, Trevor Mather, was leaving Thoughtworks to take up the role of CEO for the used car sales business Trader Media Group. In May 2013, Dr. David Walton was hired as Director of Global Health. Walton has done work in Haiti since 1999, including helping to establish a 300-room, solar-powered hospital and a noncommunicable disease clinic. In 2015, Guo Xiao, who started as a developer in Thoughtworks China in 1999, became the chief executive officer and President.
Also in 2015, Chinese marketing data company AdMaster acquired Chinese online form automation platform JinShuJu from Thoughtworks. In early 2016, Thoughtworks closed its Toronto office, the last remaining Canadian office after the closure of its Calgary office in 2013. It has since reopened the Toronto office. Singham sold the company to British private equity firm Apax Partners in 2017 for $785 million, by which time it had 4,500 employees across 15 countries, including South Africa. Singham left the company. After 2017, several members of Thoughtworks senior staff began to work for the People's Support Foundation, founded by Singham's partner Jodie Evans with the support of Chad Wathington, Thoughtworks' chief strategy officer, and Jason Pfetcher, Thoughtworks' former general counsel. 2020s–Present In January 2021, Thoughtworks announced that it had acquired Gemini Solutions Inc., a privately held software development consulting services firm based in Romania. At the end of January 2021, Thoughtworks raised $720 million in funding, according to data compiled by Chicago Inno. The following month, Thoughtworks acquired Fourkind, a machine learning and data science consulting company based in Finland. In March 2021, Thoughtworks worked with the Veterans Affairs Department to deploy a centralized mechanism for delivering updates via 'VANotify'. On September 15, 2021, Thoughtworks completed an initial public offering on the Nasdaq, where it was listed under the ticker TWKS. In April 2022, Thoughtworks acquired Connected, a product development company based in Canada. In May 2024, Guo Xiao stepped down as CEO of Thoughtworks, with the transition becoming official in June 2024. He was succeeded by Mike Sutcliff. In November 2024, Thoughtworks was taken private by Apax Partners for $4.40 per share. Corporate philosophy Thoughtworks launched its Social Impact Program in 2009. This program provided pro-bono or other developmental help for non-profits and organizations with socially driven missions. Clients included Democracy Now! (mobile content delivery site), Human Network International (mobile data collection), and the Institute for Reproductive Health (SMS-based fertility planner). In 2010, Thoughtworks provided software engineering services for Grameen Foundation's Mifos platform. Translation Cards is an open source Android app that helps field workers and refugees communicate more effectively and confidently. With the help of Google volunteers, Mercy Corps partnered with Thoughtworks and UNHCR to create the app. Notable employees Ola Bini Zack Exley Martin Fowler Jim Highsmith Aaron Swartz See also Software industry in Telangana References External links Indian companies established in 1993 Software companies established in 1993 Companies listed on the Nasdaq Enterprise architecture Enterprise application integration Information technology consulting firms of the United States Linux companies Software companies based in Illinois Software companies of India Software design Software development process Agile software development Software companies of the United States 2021 initial public offerings Apax Partners companies
Thoughtworks
[ "Engineering" ]
1,277
[ "Design", "Software design" ]
1,609,031
https://en.wikipedia.org/wiki/IDN%20homograph%20attack
The internationalized domain name (IDN) homograph attack (sometimes written as homoglyph attack) is a method used by malicious parties to deceive computer users about what remote system they are communicating with, by exploiting the fact that many different characters look alike (i.e., they rely on homoglyphs to deceive visitors). For example, the Cyrillic, Greek and Latin alphabets each have a letter that has the same shape but represents different sounds or phonemes in their respective writing systems. This kind of spoofing attack is also known as script spoofing. Unicode incorporates numerous scripts (writing systems), and, for a number of reasons, similar-looking characters such as Greek Ο, Latin O, and Cyrillic О were not assigned the same code. Their incorrect or malicious usage is a possibility for security attacks. Thus, for example, a regular user of may be lured to click on it unquestioningly as an apparently familiar link, unaware that the third letter is not the Latin character "a" but rather the Cyrillic character "а" and is thus an entirely different domain from the intended one. The registration of homographic domain names is akin to typosquatting, in that both forms of attacks use a similar-looking name to a more established domain to fool a user. The major difference is that in typosquatting the perpetrator attracts victims by relying on natural typographical errors commonly made when manually entering a URL, while in homograph spoofing the perpetrator deceives the victims by presenting visually indistinguishable hyperlinks. Indeed, it would be a rare accident for a web user to type, for example, a Cyrillic letter within an otherwise English word, turning "bank" into "bаnk". There are cases in which a registration can be both typosquatting and homograph spoofing; the pairs of l/I, i/j, and 0/O are all both close together on keyboards and, depending on the typeface, may be difficult or impossible to distinguish visually. History An early nuisance of this kind, pre-dating the Internet and even text terminals, was the confusion between "l" (lowercase letter "L") / "1" (the number "one") and "O" (capital letter for vowel "o") / "0" (the number "zero"). Some typewriters in the pre-computer era even combined the L and the one; users had to type a lowercase L when the number one was needed. The zero/o confusion gave rise to the tradition of crossing zeros, so that a computer operator would type them correctly. Unicode may contribute to this greatly with its combining characters, accents, several types of hyphen, etc., often due to inadequate rendering support, especially with smaller font sizes and the wide variety of fonts. Even earlier, handwriting provided rich opportunities for confusion. A notable example is the etymology of the word "zenith". The translation from the Arabic "samt" included the scribe's confusing of "m" into "ni". This was common in medieval blackletter, which did not connect the vertical columns on the letters i, m, n, or u, making them difficult to distinguish when several were in a row. The latter, as well as "rn"/"m"/"rri" ("RN"/"M"/"RRI") confusion, is still possible for a human eye even with modern advanced computer technology. Intentional look-alike character substitution with different alphabets has also been known in various contexts. 
For example, Faux Cyrillic has been used as an amusement or attention-grabber and "Volapuk encoding", in which Cyrillic script is represented by similar Latin characters, was used in early days of the Internet as a way to overcome the lack of support for the Cyrillic alphabet. Another example is that vehicle registration plates can have both Cyrillic (for domestic usage in Cyrillic script countries) and Latin (for international driving) with the same letters. Registration plates that are issued in Greece are limited to using letters of the Greek alphabet that have homoglyphs in the Latin alphabet, as European Union regulations require the use of Latin letters. Homographs in ASCII ASCII has several characters or pairs of characters that look alike and are known as homographs (or homoglyphs). Spoofing attacks based on these similarities are known as homograph spoofing attacks. For example, 0 (the number) and O (the letter), "l" lowercase "L", and "I" uppercase "i". In a typical example of a hypothetical attack, someone could register a domain name that appears almost identical to an existing domain but goes somewhere else. For example, the domain "rnicrosoft.com" begins with "r" and "n", not "m". Other examples are G00GLE.COM which looks much like GOOGLE.COM in some fonts. Using a mix of uppercase and lowercase characters, googIe.com (capital i, not small L) looks much like google.com in some fonts. PayPal was a target of a phishing scam exploiting this, using the domain PayPaI.com. In certain narrow-spaced fonts such as Tahoma (the default in the address bar in Windows XP), placing a c in front of a j, l or i will produce homoglyphs such as cl cj ci (d g a). Homographs in internationalized domain names In multilingual computer systems, different logical characters may have identical appearances. For example, Unicode character U+0430, Cyrillic small letter a ("а"), can look identical to Unicode character U+0061, Latin small letter a, ("a") which is the lowercase "a" used in English. Hence wikipediа.org (xn--wikipedi-86g.org; the Cyrillic version) instead of wikipedia.org (the Latin version). The problem arises from the different treatment of the characters in the user's mind and the computer's programming. From the viewpoint of the user, a Cyrillic "а" within a Latin string is a Latin "a"; there is no difference in the glyphs for these characters in most fonts. However, the computer treats them differently when processing the character string as an identifier. Thus, the user's assumption of a one-to-one correspondence between the visual appearance of a name and the named entity breaks down. Internationalized domain names provide a backward-compatible way for domain names to use the full Unicode character set, and this standard is already widely supported. However this system expanded the character repertoire from a few dozen characters in a single alphabet to many thousands of characters in many scripts; this greatly increased the scope for homograph attacks. This opens a rich vein of opportunities for phishing and other varieties of fraud. An attacker could register a domain name that looks just like that of a legitimate website, but in which some of the letters have been replaced by homographs in another alphabet. The attacker could then send e-mail messages purporting to come from the original site, but directing people to the bogus site. The spoof site could then record information such as passwords or account details, while passing traffic through to the real site. 
The victims may never notice the difference, until suspicious or criminal activity occurs with their accounts. In December 2001 Evgeniy Gabrilovich and Alex Gontmakher, both from Technion, Israel, published a paper titled "The Homograph Attack", which described an attack that used Unicode URLs to spoof a website URL. To prove the feasibility of this kind of attack, the researchers successfully registered a variant of the domain name microsoft.com which incorporated Cyrillic characters. Problems of this kind were anticipated before IDN was introduced, and guidelines were issued to registries to try to avoid or reduce the problem. For example, it was advised that registries only accept characters from the Latin alphabet and that of their own country, not all of Unicode characters, but this advice was neglected by major TLDs. On February 6, 2005, Cory Doctorow reported that this exploit was disclosed by 3ric Johanson at the hacker conference Shmoocon. Web browsers supporting IDNA appeared to direct the URL http://www.pаypal.com/, in which the first a character is replaced by a Cyrillic а, to the site of the well known payment site PayPal, but actually led to a spoofed web site with different content. Popular browsers continued to have problems properly displaying international domain names through April 2017. The following alphabets have characters that can be used for spoofing attacks (please note, these are only the most obvious and common, given artistic license and how much risk the spoofer will take of getting caught; the possibilities are far more numerous than can be listed here): Cyrillic Cyrillic is, by far, the most commonly used alphabet for homoglyphs, largely because it contains 11 lowercase glyphs that are identical or nearly identical to Latin counterparts. The Cyrillic letters а, с, е, о, р, х and у have optical counterparts in the basic Latin alphabet and look close or identical to a, c, e, o, p, x and y. Cyrillic З, Ч and б resemble the numerals 3, 4 and 6. Italic type generates more homoglyphs: дтпи or дтпи (дтпи in standard type), resembling dmnu (in some fonts д can be used, since its italic form resembles a lowercase g; however, in most mainstream fonts, д instead resembles a partial differential sign, ∂). If capital letters are counted, АВСЕНІЈКМОРЅТХ can substitute ABCEHIJKMOPSTX, in addition to the capitals for the lowercase Cyrillic homoglyphs. Cyrillic non-Russian problematic letters are і and i, ј and j, ԛ and q, ѕ and s, ԝ and w, Ү and Y, while Ғ and F, Ԍ and G bear some resemblance to each other. Cyrillic ӓёїӧ can also be used if an IDN itself is being spoofed, to fake äëïö. While Komi De (ԁ), shha (һ), palochka (Ӏ) and izhitsa (ѵ) bear strong resemblance to Latin d, h, l and v, these letters are either rare or archaic and are not widely supported in most standard fonts (they are not included in the WGL-4). Attempting to use them could cause a ransom note effect. Greek From the Greek alphabet, only omicron (ο) and sometimes nu (ν) appear identical to a Latin alphabet letter in the lowercase used for URLs. Fonts that are in italic type will feature Greek alpha (α) looking like a Latin a. This list increases if close matches are also allowed (such as Greek εικηρτυωχγ for eiknptuwxy). Using capital letters, the list expands greatly. Greek ΑΒΕΗΙΚΜΝΟΡΤΧΥΖ looks identical to Latin ABEHIKMNOPTXYZ. 
Greek ΑΓΒΕΗΚΜΟΠΡΤΦΧ looks similar to Cyrillic АГВЕНКМОПРТФХ (as do Cyrillic Лл (Лл) and Greek Λ in certain geometric sans-serif fonts), Greek letters κ and ο look similar to Cyrillic к and о. Besides this Greek τ, φ can be similar to Cyrillic т, ф in some fonts, Greek δ looks like Cyrillic б, and the Cyrillic а also italicizes the same as its Latin counterpart, making it possible to substitute it for alpha or vice versa. The lunate form of sigma, Ϲϲ, resembles both Latin Cc and Cyrillic Сс. Especially in contemporary typefaces, Cyrillic л is rendered with a glyph indistinguishable from Greek π. If an IDN itself is being spoofed, Greek beta β can be a substitute for German eszett ß in some fonts (and in fact, code page 437 treats them as equivalent), as can Greek end-of-word-variant sigma ς for ç; accented Greek substitutes όίά can usually be used for óíá in many fonts, with the last of these (alpha) again only resembling a in italic type. Armenian The Armenian alphabet can also contribute critical characters: several Armenian characters like օ, ո, ս, as well as capital Տ and Լ are often completely identical to Latin characters in modern fonts, and symbols which similar enough to pass off, such as ցհոօզս which look like ghnoqu, յ which resembles j (albeit dotless), and ք, which can either resemble p or f depending on the font; ա can resemble Cyrillic ш. However, the use of Armenian is, luckily, a bit less reliable: Not all standard fonts feature Armenian glyphs (whereas the Greek and Cyrillic scripts are); Windows prior to Windows 7 rendered Armenian in a distinct font, Sylfaen, of which the mixing of Armenian with Latin would appear obviously different if using a font other than Sylfaen or a Unicode typeface. (This is known as a ransom note effect.) The current version of Tahoma, used in Windows 7, supports Armenian (previous versions did not). Furthermore, this font differentiates Latin g from Armenian ց. Two letters in Armenian (Ձշ) also can resemble the number 2, Յ resembles 3, while another (վ) sometimes resembles the number 4. Hebrew Hebrew spoofing is generally rare. Only three letters from that alphabet can reliably be used: samekh (ס), which sometimes resembles o, vav with diacritic (וֹ), which resembles an i, and heth (ח), which resembles the letter n. Less accurate approximants for some other alphanumerics can also be found, but these are usually only accurate enough to use for the purposes of foreign branding and not for substitution. Furthermore, the Hebrew alphabet is written from right to left and trying to mix it with left-to-right glyphs may cause problems. Thai Though the Thai script has historically had a distinct look with numerous loops and small flourishes, modern Thai typography, beginning with Manoptica in 1973 and continuing through IBM Plex in the modern era, has increasingly adopted a simplified style in which Thai characters are represented with glyphs strongly resembling Latin letters. ค (A), ท (n), น (u), บ (U), ป (J), พ (W), ร (S), and ล (a) are among the Thai glyphs that can closely resemble Latin. Chinese The Chinese language can be problematic for homographs as many characters exist as both traditional (regular script) and simplified Chinese characters. In the .org domain, registering one variant renders the other unavailable to anyone; in .biz a single Chinese-language IDN registration delivers both variants as active domains (which must have the same domain name server and the same registrant). .hk (.香港) also adopts this policy. 
Other scripts Other Unicode scripts in which homographs can be found include Number Forms (Roman numerals), CJK Compatibility and Enclosed CJK Letters and Months (certain abbreviations), Latin (certain digraphs), Currency Symbols, Mathematical Alphanumeric Symbols, and Alphabetic Presentation Forms (typographic ligatures). Accented characters Two names which differ only in an accent on one character may look very similar, particularly when the substitution involves the dotted letter i; the tittle (dot) on the i can be replaced with a diacritic (such as a grave accent or acute accent; both ì and í are included in most standard character sets and fonts) that can only be detected with close inspection. In most top-level domain registries, wíkipedia.tld (xn--wkipedia-c2a.tld) and wikipedia.tld are two different names which may be held by different registrants. One exception is .ca, where reserving the plain-ASCII version of the domain prevents another registrant from claiming an accented version of the same name. Non-displayable characters Unicode includes many characters which are not displayed by default, such as the zero-width space. In general, ICANN prohibits any domain with these characters from being registered, regardless of TLD. Known homograph attacks In 2011, an unknown source (registering under the name "Completely Anonymous") registered a domain name homographic to television station KBOI-TV's to create a fake news website. The sole purpose of the site was to spread an April Fool's Day joke regarding the Governor of Idaho issuing a supposed ban on the sale of music by Justin Bieber. In September 2017, security researcher Ankit Anubhav discovered an IDN homograph attack where the attackers registered adoḅe.com to deliver the Betabot trojan. Defending against the attack Client-side mitigation The simplest defense is for web browsers not to support IDNA or other similar mechanisms, or for users to turn off whatever support their browsers have. That could mean blocking access to IDNA sites, but generally browsers permit access and just display IDNs in Punycode. Either way, this amounts to abandoning non-ASCII domain names. Mozilla Firefox versions 22 and later display IDNs if either the TLD prevents homograph attacks by restricting which characters can be used in domain names or labels do not mix scripts for different languages. Otherwise, IDNs are displayed in Punycode. Google Chrome versions 51 and later use an algorithm similar to the one used by Firefox. Previous versions display an IDN only if all of its characters belong to one (and only one) of the user's preferred languages. Chromium and Chromium-based browsers such as Microsoft Edge (since 2020) and Opera also use the same algorithm. Safari's approach is to render problematic character sets as Punycode. This can be changed by altering the settings in Mac OS X's system files. Internet Explorer versions 7 and later allow IDNs except for labels that mix scripts for different languages. Labels that mix scripts are displayed in Punycode. There are exceptions to locales where ASCII characters are commonly mixed with localized scripts. Internet Explorer 7 was capable of using IDNs, but it imposes restrictions on displaying non-ASCII domain names based on a user-defined list of allowed languages and provides an anti-phishing filter that checks suspicious websites against a remote database of known phishing sites. Microsoft Edge Legacy converts all Unicode into Punycode. 
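The script-mixing heuristics described above can be illustrated with a short sketch. The following Python fragment is a simplified illustration only, not the algorithm used by Firefox, Chrome, or any other browser, and the helper names (char_script, is_mixed_script, inspect) are invented for this example. It guesses a "script" for each character from its Unicode character name and shows the Punycode (ACE) form produced by Python's built-in IDNA codec:

import unicodedata

NEUTRAL = {"DIGIT", "HYPHEN-MINUS"}  # characters treated as belonging to no particular script

def char_script(ch):
    # Crude per-character "script" tag taken from the Unicode character name,
    # e.g. 'LATIN SMALL LETTER A' -> 'LATIN', 'CYRILLIC SMALL LETTER A' -> 'CYRILLIC'.
    first = unicodedata.name(ch, "UNKNOWN").split()[0]
    return "NEUTRAL" if first in NEUTRAL else first

def is_mixed_script(label):
    scripts = {char_script(ch) for ch in label} - {"NEUTRAL"}
    return len(scripts) > 1

def inspect(domain):
    for label in domain.split("."):
        ace = label.encode("idna").decode("ascii")  # Punycode/ACE form via the built-in IDNA 2003 codec
        print(f"{label!r:16} ace={ace!r:24} mixed_script={is_mixed_script(label)}")

# The final 'а' in the second domain is U+0430 CYRILLIC SMALL LETTER A, not Latin U+0061.
inspect("wikipedia.org")   # pure Latin: not flagged; the ACE form is the label itself
inspect("wikipediа.org")   # Latin plus one Cyrillic letter: flagged; the ACE form begins with "xn--"

A real implementation would have to consult the Unicode script and confusables data (as ICANN and the browser vendors do) rather than character names, and would also need to handle whole-script confusables such as the all-Cyrillic spoof of the PayPal domain mentioned above, which mixes no scripts at all.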
As an additional defense, Internet Explorer 7, Firefox 2.0 and above, and Opera 9.10 include phishing filters that attempt to alert users when they visit malicious websites. As of April 2017, several browsers (including Chrome, Firefox, and Opera) were displaying IDNs consisting purely of Cyrillic characters normally (not as Punycode), allowing spoofing attacks. Chrome tightened IDN restrictions in version 59 to prevent this attack. Browser extensions like No Homo-Graphs are available for Google Chrome and Firefox that check whether the user is visiting a website which is a homograph of another domain from a user-defined list. These methods of defense only extend to within a browser. Homographic URLs that house malicious software can still be distributed, without being displayed as Punycode, through e-mail, social networking or other websites without being detected until the user actually clicks the link. While the fake link will show in Punycode when it is clicked, by this point the page has already begun loading into the browser. Server-side/registry operator mitigation The IDN homographs database is a Python library that allows developers to defend against this using machine learning-based character recognition. ICANN has implemented a policy prohibiting any potential internationalized TLD from choosing letters that could resemble an existing Latin TLD and thus be used for homograph attacks. Proposed IDN TLDs .бг (Bulgaria), .укр (Ukraine) and .ελ (Greece) were initially rejected or stalled because of their perceived resemblance to Latin letters. All three (and Serbian .срб and Mongolian .мон) were later accepted. Three-letter TLDs are considered safer than two-letter TLDs, since they are harder to match to normal Latin ISO-3166 country domains; although the potential to match new generic domains remains, such generic domains are far more expensive than registering a second- or third-level domain address, making it cost-prohibitive to try to register a homoglyphic TLD for the sole purpose of making fraudulent domains (which itself would draw ICANN scrutiny). The Russian registry operator Coordination Center for TLD RU only accepts Cyrillic names for the top-level domain .рф, forbidding a mix with Latin or Greek characters. However, the problem in .com and other gTLDs remains open. Research-based mitigations In their 2019 study, Suzuki et al. introduced ShamFinder, a program for recognizing homograph IDNs, shedding light on their prevalence in real-world scenarios. Similarly, Chiba et al. (2019) designed DomainScouter, a system for detecting diverse homograph IDNs. By analyzing an estimated 4.4 million registered IDNs across 570 top-level domains (TLDs), it was able to identify 8,284 IDN homographs, including many previously unidentified cases targeting brands in languages other than English. See also Security issues in Unicode Internationalized domain name Homoglyph Duplicate characters in Unicode Unicode equivalence Typosquatting Leet Gyaru-moji Yaminjeongeum Martian language Notes References Internationalized domain names Nonstandard spelling Unicode Deception Obfuscation Web security exploits Orthography
IDN homograph attack
[ "Technology" ]
4,598
[ "Computer security exploits", "Web security exploits" ]
1,609,171
https://en.wikipedia.org/wiki/SIGCOMM%20Award%20for%20Lifetime%20Contribution
The annual SIGCOMM Award for Lifetime Contribution recognizes lifetime contribution to the field of communication networks. The award is presented in the annual SIGCOMM Technical Conference. SIGCOMM is the Association for Computing Machinery (ACM)'s professional forum for the discussion of topics in the field of communications and computer networks, including technical design and engineering, regulation and operations, and the social implications of computer networking. The SIG's members are particularly interested in the systems engineering and architectural questions of communication. The awardees have been: 2024 K. K. Ramakrishnan 2023 Dina Katabi 2022 Deborah Estrin and Henning Schulzrinne 2021 Hari Balakrishnan 2020 Amin Vahdat and Lixia Zhang 2019 Mark Handley 2018 Jennifer Rexford 2017 Raj Jain 2016 Jim Kurose 2015 Albert Greenberg 2014 George Varghese 2013 Larry Peterson 2012 Nick McKeown 2011 Vern Paxson 2010 Radia Perlman 2009 Jon Crowcroft 2008 Don Towsley 2007 Sally Floyd 2006 Domenico Ferrari 2005 Paul Mockapetris 2004 Simon S. Lam 2003 David Cheriton 2002 Scott Shenker 2001 Van Jacobson 2000 Andre Danthine 1999 Peter Kirstein 1998 Larry Roberts 1997 Jon Postel 1997 Louis Pouzin 1996 Vint Cerf 1995 David J. Farber 1994 Paul Green 1993 Robert Kahn 1992 Sandy Fraser 1991 Hubert Zimmermann 1990 David D. Clark 1990 Leonard Kleinrock 1989 Paul Baran See also IEEE Internet Award Internet Hall of Fame List of computer science awards List of Internet pioneers List of pioneers in computer science References External links SIGCOMM Award Recipients Computer science awards Awards of the Association for Computing Machinery
SIGCOMM Award for Lifetime Contribution
[ "Technology" ]
334
[ "Science and technology awards", "Computer science", "Computer science awards" ]