Dataset columns: id (int64, 39 to 79M), url (string, length 32 to 168), text (string, length 7 to 145k), source (string, length 2 to 105), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items)
1,167,036
https://en.wikipedia.org/wiki/Robot%20locomotion
Robot locomotion is the collective name for the various methods that robots use to transport themselves from place to place. Wheeled robots are typically quite energy-efficient and simple to control. However, other forms of locomotion may be more appropriate for a number of reasons, for example traversing rough terrain or moving and interacting in human environments. Furthermore, studying bipedal and insect-like robots may benefit the study of biomechanics. A major goal in this field is developing capabilities for robots to autonomously decide how, when, and where to move. However, coordinating numerous robot joints for even simple matters, like negotiating stairs, is difficult. Autonomous robot locomotion is a major technological obstacle for many areas of robotics, such as humanoids (like Honda's ASIMO).

Types of locomotion

Walking
See Passive dynamics, Zero Moment Point, Leg mechanism, and Hexapod (robotics).
Walking robots simulate human or animal gait as a replacement for wheeled motion. Legged motion makes it possible to negotiate uneven surfaces, steps, and other areas that would be difficult for a wheeled robot to reach, and it causes less damage to environmental terrain than wheels, which can erode it. Hexapod robots are based on insect locomotion, most popularly that of the cockroach and stick insect, whose neurological and sensory output is less complex than that of other animals. Multiple legs allow several different gaits, even if a leg is damaged, making their movements more useful in robots transporting objects. Examples of advanced running robots include ASIMO, BigDog, HUBO 2, RunBot, and the Toyota Partner Robot.

Rolling
In terms of energy efficiency on hard, flat surfaces, wheeled robots are the most efficient. This is because an ideal, non-deformable wheel that rolls without slipping loses no energy. This is in contrast to legged robots, which suffer an impact with the ground at heel strike and lose energy as a result. For simplicity, most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels. These can have certain advantages such as greater efficiency and fewer parts, as well as allowing the robot to navigate confined places that a four-wheeled robot could not. Examples: Boe-Bot, Cosmobot, Elmer, Elsie, Enon, HERO, iRobot Create, iRobot's Roomba, Johns Hopkins Beast, Land Walker, Modulus robot, Musa, Omnibot, PaPeRo, Phobot, Pocketdelta robot, Push the Talking Trash Can, RB5X, Rovio, Seropi, Shakey the robot, Sony Rolly, Spykee, TiLR, Topo, TR Araña, and Wakamaru.

Hopping
Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg and a very small foot could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick: as the robot falls to one side, it jumps slightly in that direction in order to catch itself. Soon the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults. A quadruped was also demonstrated which could trot, run, pace, and bound. Examples: the MIT Cheetah Cub is an electrically powered quadruped robot with passive compliant legs, capable of self-stabilizing over a large range of speeds; the Tekken II is a small quadruped designed to walk adaptively on irregular terrain.
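The one-legged hoppers described above used a simple foot-placement rule: land the foot roughly under the hip, offset by how fast the body is moving. Below is a minimal sketch of that idea in Python; the gains, stance time, and point-mass dynamics are illustrative assumptions, not the controller actually used at the MIT Leg Laboratory.

```python
# Minimal 1D sketch of a Raibert-style foot-placement heuristic.
# Assumptions: point-mass body, fixed stance duration, hand-tuned gain.

def foot_target(x_hip: float, v_body: float, v_desired: float,
                t_stance: float = 0.17, k_v: float = 0.05) -> float:
    """Horizontal foot placement for the next touchdown.

    x_hip + v_body * t_stance / 2  -- the 'neutral point' that produces
                                      no net acceleration over the stance
    k_v * (v_body - v_desired)     -- offset: land further ahead when too
                                      fast, behind when too slow
    """
    neutral = x_hip + v_body * t_stance / 2.0
    return neutral + k_v * (v_body - v_desired)

# Toy hop-to-hop simulation: each stance nudges velocity toward the target.
x, v, v_des = 0.0, 0.0, 1.0
for hop in range(8):
    foot = foot_target(x, v, v_des)
    # Landing behind the neutral point accelerates the body; ahead brakes it.
    v += -4.0 * (foot - (x + v * 0.17 / 2.0))   # crude stance dynamics
    x += v * 0.4                                 # flight-phase displacement
    print(f"hop {hop}: v = {v:.2f} m/s")
```

Run as-is, the body speed converges toward the 1 m/s target over a few hops, which is the essential behavior the heuristic is meant to produce.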
Metachronal motion
Coordinated, sequential mechanical action having the appearance of a travelling wave is called a metachronal rhythm or wave, and is employed in nature by ciliates for transport and by worms and arthropods for locomotion.

Slithering
Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings. The Japanese ACM-R5 snake robot can even navigate both on land and in water. Examples: Snake-arm robot, Roboboa, and Snakebot.

Swimming
See Autonomous underwater vehicles.

Brachiating
Brachiation allows robots to travel by swinging, using energy only to grab and release surfaces. This motion is similar to an ape swinging from tree to tree. The two types of brachiation can be compared to the bipedal motions of walking (continuous contact) and running (ricochetal): in continuous contact, a hand or grasping mechanism is always attached to the surface being crossed; ricochetal brachiation employs a phase of aerial "flight" from one surface or limb to the next.

Hybrid
Robots can also be designed to perform locomotion in multiple modes. For example, the Reconfigurable Bipedal Snake Robot can both slither like a snake and walk like a biped.

Biologically inspired locomotion
The desire to create robots with dynamic locomotive abilities has driven scientists to look to nature for solutions. Several robots capable of basic locomotion in a single mode have been invented, but they lack several capabilities, which limits their functions and applications. Highly capable robots are needed in several areas, such as search-and-rescue missions, battlefields, and landscape investigation. Robots of this nature need to be small, light, and quick, and to possess the ability to move in multiple locomotive modes. Multiple animals have provided inspiration for the design of several such robots. Some of these animals are:

Pteromyini (flying squirrels)
Pteromyini (a tribe made up of flying squirrels) exhibit great mobility on land by making use of their quadruped walking ability with high-degree-of-freedom (DoF) legs. In the air, flying squirrels glide by utilizing lift forces from the membrane between their legs. They possess a highly flexible membrane that allows for unrestrained movement of the legs; they use this highly elastic membrane to glide in the air and demonstrate lithe movement on the ground. In addition, Pteromyini are able to exhibit multi-modal locomotion because the membrane connecting the fore and hind legs also enhances their gliding ability. It has been shown that a flexible membrane possesses a higher lift coefficient than a rigid plate and delays stall to a higher angle of attack. The flying squirrel also possesses thick bundles on the edges of its membrane, wingtips, and tail, which help to minimize fluctuations and unnecessary energy loss.

Pteromyini are able to boost their gliding ability thanks to the numerous physical attributes they possess. The flexible muscle structure serves multiple purposes. For one, the plagiopatagium, which serves as the primary generator of lift for the flying squirrel, is able to function effectively due to its thin and flexible muscles. The plagiopatagium is able to control tension on the membrane through contraction and expansion. Tension control ultimately helps to save energy by minimizing fluttering of the membrane.
Once the squirrel lands, it contracts its membrane to ensure that the membrane does not sag while it is walking. The propatagium and uropatagium provide extra lift for Pteromyini. While the propatagium is situated between the head and forelimbs of the flying squirrel, the uropatagium is located at the tail and hind limbs; these provide the flying squirrel with increased agility and drag for landing. Additionally, the flying squirrel possesses thick rope-like muscle structures on the edges of its membrane to maintain the membrane's shape. These muscular structures, called platysma, tibiocarpalis, and semitendinosus, are located on the propatagium, plagiopatagium, and uropatagium respectively. These thick muscle structures guard against unnecessary fluttering due to strong wind pressures during gliding, hence minimizing energy loss.

The wingtips are situated at the forelimb wrists and serve to form an airfoil which minimizes the induced drag arising from the formation of wingtip vortices. The wingtips dampen the effects of the vortices and prevent the induced drag from affecting the whole wing. Flying squirrels are able to unfold and fold their wingtips while gliding by using their thumbs, which prevents undesired sagging of the wingtips.

The tail of the flying squirrel plays a critical role in its improved gliding ability. As opposed to other vertebrates, Pteromyini possess a tail that is flattened to gain more aerodynamic surface as they glide. The tail also allows the flying squirrel to maintain pitch-angle stability. This is particularly useful during landing, as the flying squirrel can widen its pitch angle and induce more drag so as to decelerate and land safely.

Furthermore, the legs and tail of Pteromyini serve to control their gliding direction. Due to the flexibility of the membranes around the legs, the chord angle and the dihedral angle between the membrane and the coronal plane of the body are controlled. This allows the animal to create rolling, pitching, and yawing movements, which in turn control the speed and direction of gliding. During landing, the animal is able to rapidly reduce its speed by increasing drag and changing its pitch angle using its membranes, further increasing air resistance by loosening the tension between the membranes of its legs.

Desmodus rotundus (vampire bat)
Common vampire bats are known to possess powerful modes of terrestrial locomotion, such as jumping, and of aerial locomotion, such as gliding. Several studies have demonstrated that the morphology of the bat enables it to easily and effectively alternate between both locomotive modes. The anatomy that aids in this is essentially built around the largest muscle in the body of the bat, the pectoralis profundus (posterior division). Between the two modes of locomotion, three bones are shared. These three main bones are integral parts of the arm structure, namely the humerus, the ulna, and the radius. Since components are already shared between both modes, no additional muscles are needed when transitioning from jumping to gliding. A detailed study of the morphology of the bat's shoulder shows that the bones of the arm are slightly sturdier and that the ulna and radius have been fused so as to accommodate heavy reaction forces from the ground.

Schistocerca gregaria (desert locust)
The desert locust is known for its ability to jump and fly over long distances as well as crawl on land.
A detailed study of the anatomy of this organism provides some detail about the mechanisms for locomotion. The hind legs of the locust are developed for jumping. They possess a semi-lunar process consisting of the large extensor tibiae muscle, a small flexor tibiae muscle, and a banana-shaped thickened cuticle. When the tibiae muscle flexes, the mechanical advantage of the muscles and the vertical thrust component of the leg extension are increased. These desert locusts utilize a catapult mechanism wherein the energy is first stored in the hind legs and then released to extend the legs. For a perfect jump to occur, the locust must push its legs on the ground with a force strong enough to initiate a fast takeoff: the force must be adequate to attain a quick takeoff and decent jump height, and it must be generated quickly. In order to effectively transition from the jumping mode to the flying mode, the insect must adjust the timing of the wing opening to maximize the distance and height of the jump. When it is at the zenith of its jump, the flight mode is actuated.

Multi-modal robot locomotion based on bio-inspiration

Modeling of a multi-modal walking and gliding robot after Pteromyini (flying squirrels)
Following the discovery of the requisite model to mimic, researchers sought to design a legged robot capable of effective motion in aerial and terrestrial environments through the use of a flexible membrane. To achieve this goal, the following design considerations had to be taken into account:
1. The shape and area of the membrane had to be consciously selected so that the intended aerodynamic capabilities of the membrane could be achieved. Additionally, the design of the membrane would affect the design of the legs, since the membrane is attached to the legs.
2. The membrane had to be flexible enough to allow for unrestricted movement of the legs during gliding and walking. However, the amount of flexibility had to be controlled, because excessive flexibility could lead to a significant loss of energy caused by oscillations in regions of the membrane where strong pressure occurs.
3. The leg of the robot had to be designed to allow for appropriate torques for walking as well as gliding.

To incorporate these factors, close attention had to be paid to the characteristics of the flying squirrel. The aerodynamic features of the robot were modeled using dynamic modeling and simulation. By imitating the thick muscle bundles of the flying squirrel's membrane, the designers were able to minimize the fluctuations and oscillations on the membrane edges of the robot, reducing needless energy loss. Furthermore, the amount of drag on the wing of the robot was reduced by the use of retractable wingtips, thereby allowing for improved gliding. Moreover, the leg of the robot was designed to deliver sufficient torque after mimicking the anatomy of Pteromyini's leg using virtual work analysis. Following the design of the leg and membrane of the robot, its average gliding ratio (GR) was determined to be 1.88. The robot functioned effectively, walking in several gait patterns and crawling with its high-DoF legs. The robot was also able to land safely.
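The gliding ratio quoted above is simply horizontal distance covered per unit of altitude lost. A small sketch shows how such a figure could be computed from logged flight data; the trajectory numbers below are made up for illustration, not measurements from the study.

```python
# Glide ratio (GR) = horizontal distance travelled / altitude lost.
# The trajectory below is illustrative data, not reported measurements.

def glide_ratio(points):
    """points: list of (x, z) positions in metres, from launch to landing."""
    (x0, z0), (x1, z1) = points[0], points[-1]
    return (x1 - x0) / (z0 - z1)

trajectory = [(0.0, 3.0), (2.1, 2.0), (4.0, 1.1), (5.6, 0.0)]
print(f"GR = {glide_ratio(trajectory):.2f}")  # 5.6 m forward / 3.0 m down ≈ 1.87
```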
These performances demonstrated the gliding and walking capabilities of the robot and its multi-modal locomotion.

Modeling of a multi-modal jumping and gliding robot after Desmodus rotundus (vampire bat)
The design of the robot, called Multi-Mo Bat, involved the establishment of four primary phases of operation: an energy-storage phase, a jumping phase, a coasting phase, and a gliding phase. The energy-storage phase essentially involves storing the energy needed for the jump; this energy is held in the main power springs. Storing the energy additionally creates a torque around the shoulder joints, which in turn configures the legs for jumping. Once the stored energy is released, the jump phase can be initiated. When the jump phase is initiated and the robot takes off from the ground, it transitions to the coast phase, which lasts until the apex is reached and the robot begins to descend. As the robot descends, drag helps to reduce the speed of descent as the wing is reconfigured, owing to increased drag on the bottom of the airfoils; at this stage, the robot glides down. The anatomy of the arm of the vampire bat plays a key role in the design of the leg of the robot. In order to minimize the number of degrees of freedom (DoFs), the two components of the arm are mirrored over the xz-plane. This creates the four-bar design of the robot's leg structure, which results in only two independent DoFs.

Modeling of a multi-modal jumping and flying robot after Schistocerca gregaria (desert locust)
The robot designed was powered by a single DC motor which integrated the performance of jumping and flapping. It incorporated an inverted slider-crank mechanism for the construction of the legs, a dog-clutch system to serve as the winching mechanism, and a rack-and-pinion mechanism for the flapping-wing system. This design incorporated a very efficient energy storage and release mechanism and an integrated wing-flapping mechanism. A robot with features similar to the locust was developed. The primary feature of the robot's design was a gear system powered by a single motor, which allowed the robot to perform its jumping and flapping motions. Just like the motion of the locust, the motion of the robot is initiated by flexing the legs to the position of maximum energy storage, after which the energy is released immediately to generate the force necessary to attain flight. The robot was tested for performance, and the results demonstrated that it was able to jump to an approximate height of 0.9 m while weighing 23 g and flapping its wings at a frequency of about 19 Hz. The robot tested without flapping wings performed less impressively, showing about a 30% decrease in jumping performance compared to the robot with wings. These results are surprising, since the reverse would be expected: the weight of the wings should have penalized the jump.

Approaches
Product optimization
Motion planning
Motion capture may be performed on humans, insects, and other organisms.
Machine learning, typically with reinforcement learning.

Notable researchers in the field
Rodney Brooks
Marc Raibert
Jessica Hodgins
Red Whittaker
Shuuji Kajita, who introduced preview control to realize the anticipatory nature of walking in humanoid robots of the Humanoid Robotics Project.

See also
Microswimmer

External links
Robot Locomotion
Robot control
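As a back-of-the-envelope check on the jump figures above (0.9 m apex, 23 g mass), ballistic kinematics give the takeoff speed and the minimum stored energy. The calculation below is an illustrative estimate that ignores drag and losses in the leg mechanism.

```python
import math

# Back-of-the-envelope jump energetics for the locust-inspired robot.
# Ignores aerodynamic drag and leg-mechanism losses (simplifying assumptions).
g = 9.81      # m/s^2
h = 0.9       # jump height reported in the text, m
m = 0.023     # robot mass reported in the text, kg

v_takeoff = math.sqrt(2 * g * h)   # ballistic takeoff speed
e_stored = m * g * h               # minimum energy the springs must release

print(f"takeoff speed ≈ {v_takeoff:.2f} m/s")        # ≈ 4.20 m/s
print(f"stored energy ≥ {e_stored * 1000:.0f} mJ")   # ≈ 203 mJ
```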
Robot locomotion
[ "Physics", "Engineering" ]
3,535
[ "Physical phenomena", "Robotics engineering", "Robot control", "Motion (physics)", "Robot locomotion" ]
1,169,436
https://en.wikipedia.org/wiki/Charge-transfer%20complex
In chemistry, a charge-transfer (CT) complex, or electron donor-acceptor complex, is a type of supramolecular assembly of two or more molecules or ions. The assembly consists of two molecules that self-attract through electrostatic forces: one has at least partial negative charge and the partner has partial positive charge, referred to respectively as the electron acceptor and the electron donor. In some cases, the degree of charge transfer is "complete", such that the CT complex can be classified as a salt. In other cases, the charge-transfer association is weak, and the interaction can be disrupted easily by polar solvents.

Examples

Electron donor-acceptor complexes
A number of organic compounds form charge-transfer complexes, which are often described as electron donor-acceptor (EDA) complexes. Typical acceptors are nitrobenzenes or tetracyanoethylene (TCNE). The strength of their interaction with electron donors correlates with the ionization potentials of the components. For TCNE, the stability constants (L/mol) of its complexes with benzene derivatives increase with the number of methyl groups: benzene (0.128), 1,3,5-trimethylbenzene (1.11), 1,2,4,5-tetramethylbenzene (3.4), and hexamethylbenzene (16.8). A simple example of a prototypical electron donor-acceptor complex is that of nitroaniline. 1,3,5-Trinitrobenzene and related polynitrated aromatic compounds, being electron-deficient, form charge-transfer complexes with many arenes. Such complexes form upon crystallization but often dissociate in solution into their components. Characteristically, these CT salts crystallize in stacks of alternating donor and acceptor (nitroaromatic) molecules, i.e. A-B-A-B.

Dihalogen/interhalogen CT complexes
Early studies on donor-acceptor complexes focused on the solvatochromism exhibited by iodine, which often results from I2 forming adducts with electron donors such as amines and ethers. Dihalogens X2 (X = Cl, Br, I) and interhalogens XY (X = I; Y = Cl, Br) are Lewis acid species capable of forming a variety of products when reacted with donor species. Among these products (which also include oxidized or protonated species), CT adducts D·XY have been investigated extensively. The CT interaction has been quantified and is the basis of many schemes for parameterizing donor and acceptor properties, such as those devised by Gutmann, Childs, Beckett, and the ECW model. Many organic species featuring chalcogen or pnictogen donor atoms form CT salts. The nature of the resulting adducts can be investigated both in solution and in the solid state.

In solution, the intensity of charge-transfer bands in the UV-Vis absorbance spectrum is strongly dependent upon the degree (equilibrium constant) of the association reaction. Methods have been developed to determine the equilibrium constant for these complexes in solution by measuring the intensity of absorption bands as a function of the concentrations of the donor and acceptor components. The Benesi-Hildebrand method, named for its developers, was first described for the association of iodine dissolved in aromatic hydrocarbons (a fitting sketch follows this article's text). In the solid state, a valuable parameter is the elongation of the X–X or X–Y bond length resulting from charge transfer into the antibonding σ* LUMO. The elongation can be evaluated by means of structural determinations (XRD) and FT-Raman spectroscopy. A well-known example is the complex formed by iodine with starch, which exhibits an intense purple charge-transfer band.
This has widespread use as a rough screen for counterfeit currency: unlike most paper, the paper used in US currency is not sized with starch, so formation of this purple color on application of an iodine solution indicates a counterfeit.

TTF-TCNQ: prototype for electrically conducting complexes
In 1954, charge-transfer salts derived from perylene with iodine or bromine were reported with resistivities as low as 8 ohm·cm. In 1973, it was discovered that a combination of tetracyanoquinodimethane (TCNQ) and tetrathiafulvalene (TTF) forms a strong charge-transfer complex, referred to as TTF-TCNQ. The solid shows almost metallic electrical conductance and was the first purely organic conductor discovered. In a TTF-TCNQ crystal, the TTF and TCNQ molecules are arranged independently in separate parallel-aligned stacks, and electron transfer occurs from the donor (TTF) stacks to the acceptor (TCNQ) stacks. Hence electrons and electron holes are separated and concentrated in the respective stacks, and they can traverse in a one-dimensional direction along the TCNQ and TTF columns, respectively, when an electric potential is applied to the ends of a crystal in the stack direction. Superconductivity is exhibited by tetramethyl-tetraselenafulvalene hexafluorophosphate, (TMTSF)2PF6, which is a semiconductor at ambient conditions but becomes superconducting at low temperature and high pressure (critical temperature 0.9 K at 12 kbar). Critical current densities in these complexes are very small.

Mechanistic implications
Many reactions involving nucleophiles attacking electrophiles can be usefully assessed from the perspective of an incipient charge-transfer complex. Examples include electrophilic aromatic substitution, the addition of Grignard reagents to ketones, and brominolysis of metal-alkyl bonds.

See also
Exciplex – a special case where one of the molecules is in an excited state
Organic semiconductor
Organic superconductor

Historical sources
Y. Okamoto and W. Brenner, Organic Semiconductors, Reinhold (1964).

Categories: Physical organic chemistry, Molecular electronics, Organic semiconductors
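The Benesi-Hildebrand treatment mentioned in this article linearizes the absorbance-versus-concentration data for a 1:1 complex D + A ⇌ DA under excess donor ([D]0 >> [A]0). A minimal fitting sketch; the absorbance values are synthetic illustration data, not measurements.

```python
import numpy as np

# Benesi-Hildebrand double-reciprocal treatment for a 1:1 complex:
#     b*[A]0 / dA = 1/(eps*K) * (1/[D]0) + 1/eps
# A linear fit gives K = intercept/slope and eps = 1/intercept.
# Synthetic data below were generated from K = 2 L/mol, eps = 500 L/(mol*cm).

D0 = np.array([0.10, 0.20, 0.40, 0.80])      # donor concentration, mol/L
dA = np.array([0.083, 0.142, 0.222, 0.308])  # CT-band absorbance (synthetic)
A0 = 1.0e-3   # acceptor concentration, mol/L (assumed, in deficit)
b = 1.0       # path length, cm

y = b * A0 / dA
x = 1.0 / D0
slope, intercept = np.polyfit(x, y, 1)
K = intercept / slope    # association constant, L/mol
eps = 1.0 / intercept    # molar absorptivity of the complex
print(f"K ≈ {K:.1f} L/mol, eps ≈ {eps:.0f} L/(mol·cm)")  # recovers K ≈ 2, eps ≈ 500
```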
Charge-transfer complex
[ "Chemistry", "Materials_science" ]
1,290
[ "Molecular physics", "Semiconductor materials", "Molecular electronics", "Physical organic chemistry", "Nanotechnology", "Organic semiconductors" ]
1,169,523
https://en.wikipedia.org/wiki/Hybridoma%20technology
Hybridoma technology is a method for producing large numbers of identical antibodies, also called monoclonal antibodies. This process starts by injecting a mouse (or other mammal) with an antigen that provokes an immune response. A type of white blood cell, the B cell, produces antibodies that bind to the injected antigen. These antibody-producing B cells are then harvested from the mouse and, in turn, fused with immortal myeloma cancer cells to produce a hybrid cell line called a hybridoma, which has both the antibody-producing ability of the B cell and the longevity and reproductive capacity of the myeloma. The hybridomas can be grown in culture, each culture starting with one viable hybridoma cell, producing cultures each of which consists of genetically identical hybridomas that produce one antibody per culture (monoclonal) rather than mixtures of different antibodies (polyclonal). The myeloma cell line used in this process is selected for its ability to grow in tissue culture and for an absence of antibody synthesis. In contrast to polyclonal antibodies, which are mixtures of many different antibody molecules, the monoclonal antibodies produced by each hybridoma line are all chemically identical.

The production of monoclonal antibodies was invented by César Milstein and Georges J. F. Köhler in 1975. They shared the 1984 Nobel Prize in Physiology or Medicine with Niels Kaj Jerne, who made other contributions to immunology. The term hybridoma was coined by Leonard Herzenberg during his sabbatical in César Milstein's laboratory in 1976–1977.

Method
Laboratory animals (mammals, e.g. mice) are first exposed to the antigen against which an antibody is to be generated. Usually this is done by a series of injections of the antigen in question over the course of several weeks. These injections are typically followed by the use of in vivo electroporation, which significantly enhances the immune response. Once splenocytes are isolated from the mammal's spleen, the B cells are fused with immortalised myeloma cells. The fusion can be done by electrofusion, in which an applied electric field causes the B cells and myeloma cells to align and fuse. Alternatively, the B cells and myelomas can be made to fuse by chemical protocols, most often using polyethylene glycol. The myeloma cells are selected beforehand to ensure that they do not secrete antibody themselves and that they lack the hypoxanthine-guanine phosphoribosyltransferase (HGPRT) gene, making them sensitive to HAT medium (see below).

Fused cells are incubated in HAT medium (hypoxanthine-aminopterin-thymidine medium) for roughly 10 to 14 days. Aminopterin blocks the de novo pathway of nucleotide synthesis. Hence unfused myeloma cells die: they cannot produce nucleotides by the de novo pathway (blocked by aminopterin) or by the salvage pathway, because they lack HGPRT. Removal of the unfused myeloma cells is necessary because they have the potential to outgrow other cells, especially weakly established hybridomas. Unfused B cells die because they have a short life span. In this way, only the B cell-myeloma hybrids survive, since the HGPRT gene coming from the B cells is functional. These cells produce antibodies (a property of B cells) and are immortal (a property of myeloma cells). The incubated medium is then diluted into multi-well plates to such an extent that each well contains only one cell.
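Limiting dilution aims for one founding cell per well, but cell deposition is statistical. A short Poisson sketch (the plating density chosen here is an illustrative assumption) estimates how many wells end up empty, monoclonal, or polyclonal:

```python
import math

# Limiting-dilution statistics: cells land in wells approximately following
# a Poisson distribution with mean `lam` (average cells plated per well).

def poisson(k: int, lam: float) -> float:
    return lam**k * math.exp(-lam) / math.factorial(k)

lam = 0.5  # plate an average of 0.5 cells per well (assumed density)
p_empty = poisson(0, lam)
p_single = poisson(1, lam)          # desired: exactly one founder cell
p_multi = 1.0 - p_empty - p_single  # risk: polyclonal well

print(f"empty wells:      {p_empty:.1%}")   # ~60.7%
print(f"monoclonal wells: {p_single:.1%}")  # ~30.3%
print(f"multi-cell wells: {p_multi:.1%}")   # ~9.0%
# Among occupied wells, the fraction that is monoclonal:
print(f"monoclonal | occupied: {p_single / (1 - p_empty):.1%}")  # ~77%
```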
Since the antibodies in a well are produced by the same B cell, they will be directed towards the same epitope and are thus monoclonal antibodies. The next stage is a rapid primary screening process, which identifies and selects only those hybridomas that produce antibodies of appropriate specificity. The first screening technique used is typically ELISA: the hybridoma culture supernatant, a secondary enzyme-labeled conjugate, and a chromogenic substrate are incubated, and the formation of a colored product indicates a positive hybridoma (a toy screening sketch follows this article's text). Alternatively, screening can be performed by immunocytochemistry, western blot, or immunoprecipitation-mass spectrometry. Unlike western blot assays, immunoprecipitation-mass spectrometry facilitates screening and ranking of clones that bind to the native (non-denatured) forms of antigen proteins. Flow cytometry screening has been used for primary screening of large numbers (~1000) of hybridoma clones recognizing the native form of the antigen on the cell surface. In flow cytometry-based screening, a mixture of antigen-negative cells and antigen-positive cells is used as the antigen to be tested for each hybridoma supernatant sample.

The B cell that produces the desired antibodies can be cloned to produce many identical daughter clones. Supplemental media containing interleukin-6 (such as briclone) are essential for this step. Once a hybridoma colony is established, it will grow continually in a culture medium such as RPMI-1640 (with antibiotics and fetal bovine serum) and produce antibodies. Multi-well plates are used initially to grow the hybridomas; after selection, the cells are transferred to larger tissue culture flasks. This maintains the well-being of the hybridomas and provides enough cells for cryopreservation and enough supernatant for subsequent investigations. The culture supernatant can yield 1 to 60 μg/ml of monoclonal antibody, which is maintained at −20 °C or lower until required. By using culture supernatant or a purified immunoglobulin preparation, further analysis of a potential monoclonal-antibody-producing hybridoma can be made in terms of reactivity, specificity, and cross-reactivity.

Applications
The uses of monoclonal antibodies are numerous and include the prevention, diagnosis, and treatment of disease. For example, monoclonal antibodies can distinguish subsets of B cells and T cells, which is helpful in identifying different types of leukaemia. In addition, specific monoclonal antibodies have been used to define cell-surface markers on white blood cells and other cell types. This led to the cluster of differentiation series of markers. These are often referred to as CD markers and define several hundred different cell-surface components of cells, each specified by the binding of a particular monoclonal antibody. Such antibodies are extremely useful for fluorescence-activated cell sorting, the specific isolation of particular types of cells.

In diagnostic histopathology
With the help of monoclonal antibodies, tissues and organs can be classified based on their expression of certain defined markers, which reflect tissue or cellular genesis. Prostate-specific antigen, placental alkaline phosphatase, human chorionic gonadotrophin, α-fetoprotein, and others are organ-associated antigens, and the production of monoclonal antibodies against these antigens helps in determining the nature of a primary tumor.
Monoclonal antibodies are especially useful in distinguishing morphologically similar lesions, such as pleural and peritoneal mesothelioma versus adenocarcinoma, and in determining the organ or tissue origin of undifferentiated metastases. Selected monoclonal antibodies help in the detection of occult metastases (cancer of unknown primary origin) by immunocytological analysis of bone marrow, other tissue aspirates, lymph nodes, and other tissues, and they can have increased sensitivity over normal histopathological staining.

One study performed a sensitive immunohistochemical assay on bone marrow aspirates of 20 patients with localized prostate cancer. Three monoclonal antibodies (T16, C26, and AE-1), capable of recognizing membrane and cytoskeletal antigens expressed by epithelial cells, were used in the assay to detect tumour cells. Bone marrow aspirates of 22% of patients with localized prostate cancer (stage B, 0/5; stage C, 2/4) and 36% of patients with metastatic prostate cancer (stage D1, 0/7 patients; stage D2, 4/4 patients) had antigen-positive cells in their bone marrow. It was concluded that immunohistochemical staining of bone marrow aspirates is very useful for detecting occult bone marrow metastases in patients with apparently localized prostate cancer.

Although immunocytochemistry using tumor-associated monoclonal antibodies has led to an improved ability to detect occult breast cancer cells in bone marrow aspirates and peripheral blood, further development of this method is necessary before it can be used routinely. One major drawback of immunocytochemistry is that only tumor-associated, not tumor-specific, monoclonal antibodies are used; as a result, some cross-reaction with normal cells can occur.

In order to effectively stage breast cancer and assess the efficacy of purging regimens prior to autologous stem cell infusion, it is important to detect even small quantities of breast cancer cells. Immunohistochemical methods are ideal for this purpose because they are simple, sensitive, and quite specific. Franklin et al. performed a sensitive immunocytochemical assay using a combination of four monoclonal antibodies (260F9, 520C9, 317G5 and BrE-3) against tumor cell-surface glycoproteins to identify breast tumour cells in bone marrow and peripheral blood. They concluded from the results that immunocytochemical staining of bone marrow and peripheral blood is a sensitive and simple way to detect and quantify breast cancer cells.

One of the main reasons for metastatic relapse in patients with solid tumours is the early dissemination of malignant cells. The use of monoclonal antibodies (mAbs) specific for cytokeratins can identify disseminated individual epithelial tumor cells in the bone marrow. One study reports having developed an immunocytochemical procedure for simultaneous labeling of cytokeratin component no. 18 (CK18) and prostate-specific antigen (PSA). This would help in the further characterization of disseminated individual epithelial tumor cells in patients with prostate cancer. The twelve control aspirates from patients with benign prostatic hyperplasia showed negative staining, which further supports the specificity of CK18 in detecting epithelial tumour cells in bone marrow.

In most cases of malignant disease complicated by effusion, neoplastic cells can be easily recognized. However, in some cases, malignant cells are not so easily seen, or their presence is too doubtful to call it a positive report.
The use of immunocytochemical techniques increases diagnostic accuracy in these cases. Ghosh, Mason and Spriggs analysed 53 samples of pleural or peritoneal fluid from 41 patients with malignant disease. Conventional cytological examination had not revealed any neoplastic cells. Three monoclonal antibodies (anti-CEA, Ca 1 and HMFG-2) were used to search for malignant cells. Immunocytochemical labelling was performed on unstained smears, which had been stored at −20 °C for up to 18 months. Twelve of the forty-one cases in which immunocytochemical staining was performed revealed malignant cells. The result represented an increase in diagnostic accuracy of approximately 20%. The study concluded that in patients with suspected malignant disease, immunocytochemical labelling should be used routinely in the examination of cytologically negative samples; this has important implications for patient management.

Another application of immunocytochemical staining is the detection of two antigens in the same smear. Double staining with light-chain antibodies and with T- and B-cell markers can indicate the neoplastic origin of a lymphoma. One study has reported the isolation of a hybridoma cell line (clone 1E10) which produces a monoclonal antibody (IgM, kappa isotype). This monoclonal antibody shows specific immunocytochemical staining of nucleoli.

Tissues and tumours can be classified based on their expression of certain markers, with the help of monoclonal antibodies. They help in distinguishing morphologically similar lesions and in determining the organ or tissue origin of undifferentiated metastases. Immunocytological analysis of bone marrow, tissue aspirates, lymph nodes, etc. with selected monoclonal antibodies helps in the detection of occult metastases. Monoclonal antibodies increase the sensitivity of detecting even small quantities of invasive or metastatic cells. Monoclonal antibodies (mAbs) specific for cytokeratins can detect disseminated individual epithelial tumour cells in the bone marrow.

Categories: Cell culture techniques, Immunology
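The primary ELISA screen described earlier reduces to a thresholding decision per well. A toy sketch; the optical-density values and the cutoff rule (mean of negative controls plus three standard deviations) are illustrative assumptions, since real screens define cutoffs per assay.

```python
import statistics

# Toy primary screen: flag ELISA-positive hybridoma wells by absorbance (OD).
negative_controls = [0.08, 0.10, 0.09, 0.11]  # wells with no antibody (assumed)
cutoff = statistics.mean(negative_controls) + 3 * statistics.stdev(negative_controls)

wells = {"A1": 0.09, "A2": 1.42, "A3": 0.12, "A4": 0.87, "A5": 0.10}
positives = {w: od for w, od in wells.items() if od > cutoff}
print(f"cutoff = {cutoff:.2f} OD; positive wells: {sorted(positives)}")
# -> cutoff ≈ 0.13 OD; positive wells: ['A2', 'A4']
```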
Hybridoma technology
[ "Chemistry", "Biology" ]
2,727
[ "Biochemistry methods", "Immunology", "Cell culture techniques" ]
1,170,160
https://en.wikipedia.org/wiki/Chirality%20%28mathematics%29
In geometry, a figure is chiral (and said to have chirality) if it is not identical to its mirror image, or, more precisely, if it cannot be mapped to its mirror image by rotations and translations alone. An object that is not chiral is said to be achiral. A chiral object and its mirror image are said to be enantiomorphs. The word chirality is derived from the Greek χείρ (cheir), the hand, the most familiar chiral object; the word enantiomorph stems from the Greek ἐναντίος (enantios) 'opposite' + μορφή (morphe) 'form'.

Examples
Some chiral three-dimensional objects, such as the helix, can be assigned a right or left handedness according to the right-hand rule. Many other familiar objects exhibit the same chiral symmetry of the human body, such as gloves and shoes. Right shoes differ from left shoes only by being mirror images of each other. In contrast, thin gloves may not be considered chiral if one can wear them inside out. The J-, L-, S- and Z-shaped tetrominoes of the popular video game Tetris also exhibit chirality, but only in a two-dimensional space: individually, they contain no mirror symmetry in the plane.

Chirality and symmetry group
A figure is achiral if and only if its symmetry group contains at least one orientation-reversing isometry. (In Euclidean geometry any isometry can be written as v ↦ Av + b with an orthogonal matrix A and a vector b. The determinant of A is then either 1 or −1. If it is −1, the isometry is orientation-reversing; otherwise it is orientation-preserving.)

A general definition of chirality based on group theory exists. It does not refer to any orientation concept: an isometry is direct if and only if it is a product of squares of isometries; if not, it is an indirect isometry. The resulting chirality definition works in spacetime.

Chirality in two dimensions
In two dimensions, every figure which possesses an axis of symmetry is achiral, and it can be shown that every bounded achiral figure must have an axis of symmetry. (An axis of symmetry of a figure F is a line L such that F is invariant under the mapping (x, y) ↦ (x, −y) when L is chosen to be the x-axis of the coordinate system.) For that reason, a triangle is achiral if it is equilateral or isosceles, and is chiral if it is scalene.

Consider the repeating pattern shown in the original article (figure omitted): it is chiral, as it is not identical to its mirror image. But if one extends the pattern in both directions to infinity, one obtains an (unbounded) achiral figure which has no axis of symmetry. Its symmetry group is a frieze group generated by a single glide reflection.

Chirality in three dimensions
In three dimensions, every figure that possesses a mirror plane of symmetry S1, an inversion center of symmetry S2, or a higher improper rotation (rotoreflection) Sn axis of symmetry is achiral. (A plane of symmetry of a figure F is a plane P such that F is invariant under the mapping (x, y, z) ↦ (x, y, −z) when P is chosen to be the xy-plane of the coordinate system. A center of symmetry of a figure F is a point C such that F is invariant under the mapping (x, y, z) ↦ (−x, −y, −z) when C is chosen to be the origin of the coordinate system.) Note, however, that there are achiral figures lacking both a plane and a center of symmetry: a figure invariant under the fourfold rotoreflection (x, y, z) ↦ (−y, x, −z), yet under no reflection or point inversion, is achiral although it has neither a plane nor a center of symmetry. A figure can also be achiral because the origin is a center of symmetry even though it lacks a plane of symmetry. Achiral figures can have a center axis.
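The determinant criterion above suggests a direct numerical test for finite point sets: mirror the set, then ask whether some rotation (plus a relabeling of the points) maps the mirror image back onto the original. A small sketch using the Kabsch superposition algorithm; the example point sets, brute-force matching, and tolerance are illustrative choices suitable only for small sets.

```python
import itertools
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between point sets P, Q (same order) after the optimal rotation."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))   # force det(R) = +1: rotations only
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

def is_achiral(points: np.ndarray, tol: float = 1e-6) -> bool:
    """Achiral iff the mirror image can be rotated onto the original
    under some correspondence of points (brute force; small sets only)."""
    mirror = points * np.array([1.0, 1.0, -1.0])   # reflect in the xy-plane
    return any(kabsch_rmsd(points[list(p)], mirror) < tol
               for p in itertools.permutations(range(len(points))))

square = np.array([[1.0, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]])
helix = np.array([[np.cos(t), np.sin(t), 0.3 * t] for t in np.linspace(0, 4, 6)])
print(is_achiral(square))  # True:  a planar square is achiral
print(is_achiral(helix))   # False: a helix is chiral
```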
Knot theory
A knot is called achiral if it can be continuously deformed into its mirror image; otherwise it is called a chiral knot. For example, the unknot and the figure-eight knot are achiral, whereas the trefoil knot is chiral.

See also
Chiral polytope
Chirality (physics)
Parity (physics)
Chirality (chemistry)
Asymmetry
Skewness
Vertex algebra

External links
Symmetry, Chirality, Symmetry Measures and Chirality Measures: General Definitions
Chiral Polyhedra by Eric W. Weisstein, The Wolfram Demonstrations Project
Chiral manifold at the Manifold Atlas

Categories: Knot theory, Polyhedra, Chirality, Topology
Chirality (mathematics)
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
949
[ "Pharmacology", "Origin of life", "Stereochemistry", "Chirality", "Topology", "Space", "Geometry", "Asymmetry", "Biochemistry", "Spacetime", "Symmetry", "Biological hypotheses" ]
1,170,166
https://en.wikipedia.org/wiki/Chirality%20%28chemistry%29
In chemistry, a molecule or ion is called chiral if it cannot be superposed on its mirror image by any combination of rotations, translations, and some conformational changes. This geometric property is called chirality. The terms are derived from Ancient Greek χείρ (cheir) 'hand', the canonical example of an object with this property. A chiral molecule or ion exists in two stereoisomers that are mirror images of each other, called enantiomers; they are often distinguished as either "right-handed" or "left-handed" by their absolute configuration or some other criterion. The two enantiomers have the same chemical properties, except when reacting with other chiral compounds. They also have the same physical properties, except that they often have opposite optical activities. A homogeneous mixture of the two enantiomers in equal parts is said to be racemic, and it usually differs chemically and physically from the pure enantiomers.

Chiral molecules will usually have a stereogenic element from which chirality arises. The most common type of stereogenic element is a stereogenic center, or stereocenter. In the case of organic compounds, stereocenters most frequently take the form of a carbon atom with four distinct (different) groups attached to it in a tetrahedral geometry. Less commonly, other atoms such as N, P, S, and Si can also serve as stereocenters, provided they have four distinct substituents (including lone-pair electrons) attached to them. A given stereocenter has two possible configurations (R and S), which give rise to stereoisomers (diastereomers and enantiomers) in molecules with one or more stereocenters. For a chiral molecule with one or more stereocenters, the enantiomer corresponds to the stereoisomer in which every stereocenter has the opposite configuration. An organic compound with only one stereogenic carbon is always chiral. On the other hand, an organic compound with multiple stereogenic carbons is typically, but not always, chiral. In particular, if the stereocenters are configured in such a way that the molecule can take a conformation having a plane of symmetry or an inversion point, then the molecule is achiral and is known as a meso compound.

Molecules with chirality arising from one or more stereocenters are classified as possessing central chirality. There are two other types of stereogenic elements that can give rise to chirality: a stereogenic axis (axial chirality) and a stereogenic plane (planar chirality). Finally, the inherent curvature of a molecule can also give rise to chirality (inherent chirality). These types of chirality are far less common than central chirality. BINOL is a typical example of an axially chiral molecule, while trans-cyclooctene is a commonly cited example of a planar chiral molecule. Finally, helicene possesses helical chirality, which is one type of inherent chirality.

Chirality is an important concept for stereochemistry and biochemistry. Most substances relevant to biology are chiral, such as carbohydrates (sugars, starch, and cellulose), all but one of the amino acids that are the building blocks of proteins, and the nucleic acids. Naturally occurring triglycerides are often chiral, but not always. In living organisms, one typically finds only one of the two enantiomers of a chiral compound. For that reason, organisms that consume a chiral compound usually can metabolize only one of its enantiomers. For the same reason, the two enantiomers of a chiral pharmaceutical usually have vastly different potencies or effects.
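As a practical illustration of locating stereocenters and their R/S configurations, the RDKit cheminformatics toolkit can assign CIP labels from a SMILES string. A minimal sketch; the example molecules are chosen here for illustration only.

```python
# Locate stereocenters and their R/S labels with RDKit (pip install rdkit).
from rdkit import Chem

examples = {
    "L-alanine":                 "C[C@@H](C(=O)O)N",  # one assigned stereocenter
    "lactic acid (unspecified)": "CC(O)C(=O)O",       # stereocenter, no config given
    "ethanol":                   "CCO",               # no stereocenter
}

for name, smiles in examples.items():
    mol = Chem.MolFromSmiles(smiles)
    centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    # Each entry is (atom index, label), e.g. [(1, 'S')] or [(1, '?')] if unassigned.
    print(f"{name}: {centers}")
```

For L-alanine this reports the S configuration at the α-carbon, matching the convention that the natural L-amino acids (except cysteine) are (S)-configured.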
Definition
The chirality of a molecule is based on the molecular symmetry of its conformations. A conformation of a molecule is chiral if and only if it belongs to one of the point groups Cn, Dn, T, O, or I (the chiral point groups). However, whether the molecule itself is considered to be chiral depends on whether its chiral conformations are persistent isomers that could be isolated as separated enantiomers, at least in principle, or whether the enantiomeric conformers rapidly interconvert at a given temperature and timescale through low-energy conformational changes (rendering the molecule achiral). For example, despite having chiral gauche conformers that belong to the C2 point group, butane is considered achiral at room temperature because rotation about the central C–C bond rapidly interconverts the enantiomers (3.4 kcal/mol barrier). Similarly, cis-1,2-dichlorocyclohexane consists of chair conformers that are nonidentical mirror images, but the two can interconvert via the cyclohexane chair flip (~10 kcal/mol barrier). As another example, amines with three distinct substituents (R1R2R3N:) are also regarded as achiral molecules because their enantiomeric pyramidal conformers rapidly undergo pyramidal inversion.

However, if the temperature in question is low enough, the process that interconverts the enantiomeric chiral conformations becomes slow compared to a given timescale, and the molecule is then considered to be chiral at that temperature. The relevant timescale is, to some degree, arbitrarily defined: 1000 seconds is sometimes employed, as this is regarded as the lower limit for the amount of time required for chemical or chromatographic separation of enantiomers in a practical sense. Molecules that are chiral at room temperature due to restricted rotation about a single bond (barrier to rotation ≥ ca. 23 kcal/mol) are said to exhibit atropisomerism.

A chiral compound can contain no improper axis of rotation (Sn), a class that includes planes of symmetry and inversion centers. Chiral molecules are always dissymmetric (lacking Sn) but not always asymmetric (lacking all symmetry elements except the trivial identity). Asymmetric molecules are always chiral. A table in the original article (not reproduced here) shows examples of chiral and achiral molecules with the Schoenflies notation of their point groups; in that table, X and Y (with no subscript) represent achiral groups, whereas paired X groups or paired Y groups represent enantiomers. Note that there is no meaning to the orientation of an S2 axis, which is just an inversion: any orientation will do, so long as it passes through the center of inversion. Higher symmetries of chiral and achiral molecules beyond those in the table also exist.

An example of a molecule that does not have a mirror plane or an inversion and yet would be considered achiral is 1,1-difluoro-2,2-dichlorocyclohexane (or 1,1-difluoro-3,3-dichlorocyclohexane). This may exist in many conformers (conformational isomers), but none of them has a mirror plane. In order to have a mirror plane, the cyclohexane ring would have to be flat, widening the bond angles and giving the conformation a very high energy. This compound would not be considered chiral because its chiral conformers interconvert easily.
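The interconversion barriers quoted above map directly onto rates through the Eyring equation, which shows why a 3.4 kcal/mol barrier means effectively instant racemization while a ~23 kcal/mol barrier supports isolable atropisomers. A sketch with a transmission coefficient of 1 assumed:

```python
import math

# Eyring equation: k = (kB*T/h) * exp(-dG / (R*T)); transmission coefficient
# assumed to be 1. Barriers in kcal/mol are taken from the text above.
kB = 1.380649e-23   # J/K
h = 6.62607015e-34  # J*s
R = 1.987204e-3     # kcal/(mol*K)
T = 298.15          # K

def half_life_seconds(barrier_kcal: float) -> float:
    k = (kB * T / h) * math.exp(-barrier_kcal / (R * T))   # first-order rate, 1/s
    return math.log(2) / k

for barrier in (3.4, 10.0, 23.0):
    print(f"{barrier:5.1f} kcal/mol -> half-life ≈ {half_life_seconds(barrier):.3g} s")
# 3.4 kcal/mol interconverts in tens of picoseconds; 23 kcal/mol persists for
# ~1e4 s, comfortably past the ~1000 s criterion for separable enantiomers.
```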
An achiral molecule having chiral conformations could theoretically form a mixture of right-handed and left-handed crystals, as often happens with racemic mixtures of chiral molecules (see Chiral resolution#Spontaneous resolution and related specialized techniques), or as when achiral liquid silicon dioxide is cooled to the point of becoming chiral quartz.

Stereogenic centers
A stereogenic center (or stereocenter) is an atom such that swapping the positions of two ligands (connected groups) on that atom results in a molecule that is stereoisomeric to the original. For example, a common case is a tetrahedral carbon bonded to four distinct groups a, b, c, and d (Cabcd), where swapping any two groups (e.g., Cbacd) leads to a stereoisomer of the original, so the central C is a stereocenter. Many chiral molecules have point chirality, namely a single chiral stereogenic center that coincides with an atom. This stereogenic center usually has four or more bonds to different groups, and may be carbon (as in many biological molecules), phosphorus (as in many organophosphates), silicon, or a metal (as in many chiral coordination compounds). However, a stereogenic center can also be a trivalent atom whose bonds are not in the same plane, such as phosphorus in P-chiral phosphines (PRR′R″) and sulfur in S-chiral sulfoxides (OSRR′), because a lone pair of electrons is present instead of a fourth bond.

Similarly, a stereogenic axis (or plane) is defined as an axis (or plane) in the molecule such that swapping any two ligands attached to the axis (or plane) gives rise to a stereoisomer. For instance, the C2-symmetric species 1,1′-bi-2-naphthol (BINOL) and 1,3-dichloroallene have stereogenic axes and exhibit axial chirality, while (E)-cyclooctene and many ferrocene derivatives bearing two or more substituents have stereogenic planes and exhibit planar chirality. Chirality can also arise from isotopic differences between atoms, as in the deuterated benzyl alcohol PhCHDOH, which is chiral and optically active ([α]D = 0.715°), even though the non-deuterated compound PhCH2OH is not.

If two enantiomers easily interconvert, the pure enantiomers may be practically impossible to separate, and only the racemic mixture is observable. This is the case, for example, for most amines with three different substituents (NRR′R″), because of the low energy barrier for nitrogen inversion. When the optical rotation of an enantiomer is too low for practical measurement, the species is said to exhibit cryptochirality. Chirality is an intrinsic part of the identity of a molecule, so the systematic name includes details of the absolute configuration (R/S, D/L, or other designations).

Manifestations of chirality
Flavor: the artificial sweetener aspartame has two enantiomers; L-aspartame tastes sweet, whereas D-aspartame is tasteless.
Odor: (R)-(−)-carvone smells like spearmint, whereas (S)-(+)-carvone smells like caraway.
Drug effectiveness: the antidepressant drug citalopram is sold as a racemic mixture. However, studies have shown that only the (S)-(+) enantiomer (escitalopram) is responsible for the drug's beneficial effects.
Drug safety: D-penicillamine is used in chelation therapy and for the treatment of rheumatoid arthritis, whereas L-penicillamine is toxic, as it inhibits the action of pyridoxine, an essential B vitamin.

In biochemistry
Many biologically active molecules are chiral, including the naturally occurring amino acids (the building blocks of proteins) and sugars.
The origin of this homochirality in biology is the subject of much debate. Most scientists believe that Earth life's "choice" of chirality was purely random, and that if carbon-based life forms exist elsewhere in the universe, their chemistry could theoretically have opposite chirality. However, there is some suggestion that early amino acids could have formed in comet dust. In this case, circularly polarised radiation (which makes up 17% of stellar radiation) could have caused the selective destruction of one chirality of amino acids, leading to a selection bias which ultimately resulted in all life on Earth being homochiral.

Enzymes, which are chiral, often distinguish between the two enantiomers of a chiral substrate. One could imagine an enzyme as having a glove-like cavity that binds a substrate. If this glove is right-handed, then one enantiomer will fit inside and be bound, whereas the other enantiomer will have a poor fit and is unlikely to bind.

L-forms of amino acids tend to be tasteless, whereas D-forms tend to taste sweet. Spearmint leaves contain the (R)-enantiomer of the chemical carvone, (R)-(−)-carvone, and caraway seeds contain the (S)-enantiomer, (S)-(+)-carvone. The two smell different to most people because our olfactory receptors are chiral.

Chirality is important in the context of ordered phases as well; for example, the addition of a small amount of an optically active molecule to a nematic phase (a phase that has long-range orientational order of molecules) transforms that phase into a chiral nematic phase (or cholesteric phase). Chirality in the context of such phases in polymeric fluids has also been studied.

In inorganic chemistry
Chirality is a symmetry property, not a property of any part of the periodic table. Thus many inorganic materials, molecules, and ions are chiral. Quartz is an example from the mineral kingdom. Such noncentric materials are of interest for applications in nonlinear optics.

In the areas of coordination chemistry and organometallic chemistry, chirality is pervasive and of practical importance. A famous example is the tris(bipyridine)ruthenium(II) complex, in which the three bipyridine ligands adopt a chiral propeller-like arrangement. The two enantiomers of complexes such as [Ru(2,2′-bipyridine)3]2+ may be designated as Λ (capital lambda, the Greek version of "L") for a left-handed twist of the propeller described by the ligands, and Δ (capital delta, Greek "D") for a right-handed twist. Dextro- and levo-rotation (the clockwise and counterclockwise optical rotation of plane-polarized light) use similar notation but should not be confused with this designation. Chiral ligands confer chirality to a metal complex, as illustrated by metal-amino acid complexes. If the metal exhibits catalytic properties, its combination with a chiral ligand is the basis of asymmetric catalysis.

Methods and practices
The term optical activity is derived from the interaction of chiral materials with polarized light. In solution, the (−)-form, or levorotatory form, of an optical isomer rotates the plane of a beam of linearly polarized light counterclockwise; the (+)-form, or dextrorotatory form, does the opposite. The rotation of light is measured using a polarimeter and is expressed as the optical rotation. Enantiomers can be separated by chiral resolution.
This often involves forming crystals of a salt composed of one of the enantiomers and an acid or base from the so-called chiral pool of naturally occurring chiral compounds, such as malic acid or the amine brucine. Some racemic mixtures spontaneously crystallize into right-handed and left-handed crystals that can be separated by hand; Louis Pasteur used this method to separate left-handed and right-handed sodium ammonium tartrate crystals in 1849. Sometimes it is possible to seed a racemic solution with a right-handed and a left-handed crystal so that each will grow into a large crystal. Liquid chromatography (HPLC and TLC) may also be used as an analytical method for the direct separation of enantiomers and the control of enantiomeric purity, e.g. of active pharmaceutical ingredients (APIs) which are chiral.

Miscellaneous nomenclature
Any non-racemic chiral substance is called scalemic. Scalemic materials can be enantiopure or enantioenriched. A chiral substance is enantiopure when only one of the two possible enantiomers is present, so that all molecules within a sample have the same chirality sense. Use of homochiral as a synonym is strongly discouraged. A chiral substance is enantioenriched or heterochiral when its enantiomeric ratio is greater than 50:50 but less than 100:0. Enantiomeric excess, or e.e., is the difference between how much of one enantiomer is present compared to the other. For example, a sample with 40% e.e. of R contains 70% R and 30% S (70% − 30% = 40%).

History
The rotation of plane-polarized light by chiral substances was first observed by Jean-Baptiste Biot in 1812 and gained considerable importance in the sugar industry, analytical chemistry, and pharmaceuticals. Louis Pasteur deduced in 1848 that this phenomenon has a molecular basis. The term chirality itself was coined by Lord Kelvin in 1894. Different enantiomers or diastereomers of a compound were formerly called optical isomers due to their different optical properties. At one time, chirality was thought to be restricted to organic chemistry, but this misconception was overthrown by the resolution of a purely inorganic compound, a cobalt complex called hexol, by Alfred Werner in 1911. In the early 1970s, various groups established that the human olfactory organ is capable of distinguishing chiral compounds.

See also
Chirality (electromagnetism)
Chirality (mathematics)
Chirality (physics)
Enantiopure drug
Enantioselective synthesis
Handedness
Orientation (vector space)
Pfeiffer effect
Stereochemistry, for an overview of stereochemistry in general
Stereoisomerism
Supramolecular chirality

External links
21st International Symposium on Chirality
STEREOISOMERISM - OPTICAL ISOMERISM
Symposium highlights - Session 5: New technologies for small molecule synthesis
IUPAC nomenclature for amino acid configurations
Michigan State University's explanation of R/S nomenclature
Chirality & Odour Perception at leffingwell.com
Chirality & Bioactivity I.: Pharmacology
Chirality and the Search for Extraterrestrial Life
The Handedness of the Universe by Roger A. Hegstrom and Dilip K. Kondepudi: http://quantummechanics.ucsd.edu/ph87/ScientificAmerican/Sciam/Hegstrom_The_Handedness_of_the_universe.pdf

Categories: Stereochemistry, Polarization (waves), Chirality, Chemical nomenclature, Biochemistry, Origin of life, Pharmacology
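The enantiomeric-excess arithmetic described in this article is easy to mechanize, including the common estimate of e.e. from optical rotation; the sketch below assumes the observed rotation scales linearly with composition (an idealization that ignores concentration effects).

```python
# Enantiomeric excess (e.e.) bookkeeping.

def ee_from_fractions(frac_major: float) -> float:
    """e.e. as a percentage, given the mole fraction of the major enantiomer."""
    return 100.0 * (frac_major - (1.0 - frac_major))

def ee_from_rotation(alpha_observed: float, alpha_pure: float) -> float:
    """Estimate e.e. from specific rotations (linear mixing assumed)."""
    return 100.0 * alpha_observed / alpha_pure

print(ee_from_fractions(0.70))      # 70% R / 30% S -> 40.0 (% e.e.)
print(ee_from_rotation(9.2, 23.0))  # illustrative rotations -> 40.0 (% e.e.)
```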
Chirality (chemistry)
[ "Physics", "Chemistry", "Biology" ]
4,028
[ "Pharmacology", "Origin of life", "Biochemistry", "Stereochemistry", "Astrophysics", "Chirality", "Space", "Medicinal chemistry", "Asymmetry", "nan", "Spacetime", "Polarization (waves)", "Symmetry", "Biological hypotheses" ]
1,170,169
https://en.wikipedia.org/wiki/Chirality%20%28physics%29
A chiral phenomenon is one that is not identical to its mirror image (see the article on mathematical chirality). The spin of a particle may be used to define a handedness, or helicity, for that particle, which, in the case of a massless particle, is the same as chirality. A symmetry transformation between the two is called parity transformation. Invariance under parity transformation by a Dirac fermion is called chiral symmetry. Chirality and helicity The helicity of a particle is positive ("right-handed") if the direction of its spin is the same as the direction of its motion. It is negative ("left-handed") if the directions of spin and motion are opposite. So a standard clock, with its spin vector defined by the rotation of its hands, has left-handed helicity if tossed with its face directed forwards. Mathematically, helicity is the sign of the projection of the spin vector onto the momentum vector: "left" is negative, "right" is positive. The chirality of a particle is more abstract: It is determined by whether the particle transforms in a right- or left-handed representation of the Poincaré group. For massless particles – photons, gluons, and (hypothetical) gravitons – chirality is the same as helicity; a given massless particle appears to spin in the same direction along its axis of motion regardless of point of view of the observer. For massive particles – such as electrons, quarks, and neutrinos – chirality and helicity must be distinguished: In the case of these particles, it is possible for an observer to change to a reference frame moving faster than the spinning particle, in which case the particle will then appear to move backwards, and its helicity (which may be thought of as "apparent chirality") will be reversed. That is, helicity is a constant of motion, but it is not Lorentz invariant. Chirality is Lorentz invariant, but is not a constant of motion: a massive left-handed spinor, when propagating, will evolve into a right handed spinor over time, and vice versa. A massless particle moves with the speed of light, so no real observer (who must always travel at less than the speed of light) can be in any reference frame where the particle appears to reverse its relative direction of spin, meaning that all real observers see the same helicity. Because of this, the direction of spin of massless particles is not affected by a change of inertial reference frame (a Lorentz boost) in the direction of motion of the particle, and the sign of the projection (helicity) is fixed for all reference frames: The helicity of massless particles is a relativistic invariant (a quantity whose value is the same in all inertial reference frames) which always matches the massless particle's chirality. The discovery of neutrino oscillation implies that neutrinos have mass, so the photon is the only confirmed massless particle; gluons are expected to also be massless, although this has not been conclusively tested. Hence, these are the only two particles now known for which helicity could be identical to chirality, and only the photon has been confirmed by measurement. All other observed particles have mass and thus may have different helicities in different reference frames. Chiral theories Particle physicists have only observed or inferred left-chiral fermions and right-chiral antifermions engaging in the charged weak interaction. In the case of the weak interaction, which can in principle engage with both left- and right-chiral fermions, only two left-handed fermions interact. 
Interactions involving right-handed or opposite-handed fermions have not been shown to occur, implying that the universe has a preference for left-handed chirality. This preferential treatment of one chiral realization over another violates parity, as first noted by Chien Shiung Wu in her famous experiment known as the Wu experiment. This is a striking observation, since parity is a symmetry that holds for all other fundamental interactions. Chirality for a Dirac fermion ψ is defined through the operator γ⁵, which has eigenvalues ±1; the eigenvalue's sign is equal to the particle's chirality: +1 for right-handed, −1 for left-handed. Any Dirac field can thus be projected into its left- or right-handed component by acting with the projection operators (1 − γ⁵)/2 or (1 + γ⁵)/2 on ψ. The coupling of the charged weak interaction to fermions is proportional to the first projection operator, which is responsible for this interaction's parity symmetry violation. A common source of confusion is due to conflating the γ⁵ chirality operator with the helicity operator. Since the helicity of massive particles is frame-dependent, it might seem that the same particle would interact with the weak force according to one frame of reference, but not another. The resolution to this paradox is that the chirality operator is equivalent to helicity for massless fields only, for which helicity is not frame-dependent; for massive particles, chirality is not the same as helicity, and it is the Lorentz-invariant chirality that sets the coupling, so there is no frame dependence of the weak interaction: a particle that couples to the weak force in one frame does so in every frame. A theory that is asymmetric with respect to chiralities is called a chiral theory, while a non-chiral (i.e., parity-symmetric) theory is sometimes called a vector theory. Many pieces of the Standard Model of physics are non-chiral, which is traceable to anomaly cancellation in chiral theories. Quantum chromodynamics is an example of a vector theory, since both chiralities of all quarks appear in the theory, and couple to gluons in the same way. The electroweak theory, developed in the mid 20th century, is an example of a chiral theory. Originally, it assumed that neutrinos were massless, and assumed the existence of only left-handed neutrinos and right-handed antineutrinos. After the observation of neutrino oscillations, which imply that neutrinos are massive (like all other fermions), the revised theories of the electroweak interaction now include both right- and left-handed neutrinos. However, it is still a chiral theory, as it does not respect parity symmetry. The exact nature of the neutrino is still unsettled and so the electroweak theories that have been proposed are somewhat different, but most accommodate the chirality of neutrinos in the same way as was already done for all other fermions. Chiral symmetry Vector gauge theories with massless Dirac fermion fields ψ exhibit chiral symmetry, i.e., rotating the left-handed and the right-handed components independently makes no difference to the theory. We can write this as the action of rotation on the fields: ψ_L → e^(iθ_L) ψ_L and ψ_R → ψ_R, or ψ_L → ψ_L and ψ_R → e^(iθ_R) ψ_R. With N flavors, we have unitary rotations instead: U(N)_L × U(N)_R. More generally, we write the right-handed and left-handed states as a projection operator acting on a spinor: ψ_R = ((1 + γ⁵)/2) ψ and ψ_L = ((1 − γ⁵)/2) ψ. The right-handed and left-handed projection operators are P_R = (1 + γ⁵)/2 and P_L = (1 − γ⁵)/2. Massive fermions do not exhibit chiral symmetry, as the mass term in the Lagrangian, m ψ̄ψ, breaks chiral symmetry explicitly, since it mixes the two components: m ψ̄ψ = m (ψ̄_L ψ_R + ψ̄_R ψ_L). Spontaneous chiral symmetry breaking may also occur in some theories, as it most notably does in quantum chromodynamics.
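These algebraic properties of γ⁵ and the projectors can be verified numerically. The following Python sketch (an added illustration, not part of the article) builds the gamma matrices in the Dirac representation, checks that P_L and P_R are complementary projectors, and confirms that an axial rotation exp(iαγ⁵) leaves the kinetic bilinears ψ̄γ^μψ invariant while changing the mass bilinear ψ̄ψ:

```python
import numpy as np

# Pauli matrices and 2x2 building blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Gamma matrices in the Dirac representation
g0 = np.block([[I2, Z2], [Z2, -I2]])
gs = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * g0 @ gs[0] @ gs[1] @ gs[2]          # gamma^5 = i g0 g1 g2 g3

# Chirality eigenvalues are +1 (right-handed) and -1 (left-handed)
assert np.allclose(sorted(np.linalg.eigvals(g5).real), [-1, -1, 1, 1])

# Left- and right-handed projection operators
PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)   # idempotent
assert np.allclose(PL + PR, np.eye(4))                         # complete
assert np.allclose(PL @ PR, np.zeros((4, 4)))                  # orthogonal

# Axial rotation U = exp(i*alpha*gamma^5) = cos(alpha) I + i sin(alpha) g5,
# since (gamma^5)^2 = 1. A bilinear psi-bar M psi transforms with the
# matrix U^dagger (g0 M) U, which must equal g0 M for invariance.
alpha = 0.7   # an arbitrary rotation angle
U = np.cos(alpha) * np.eye(4) + 1j * np.sin(alpha) * g5
for gmu in [g0] + gs:   # kinetic bilinears psi-bar gamma^mu psi: invariant
    assert np.allclose(U.conj().T @ g0 @ gmu @ U, g0 @ gmu)
# Mass bilinear psi-bar psi: changed, so a mass term breaks the symmetry.
assert not np.allclose(U.conj().T @ g0 @ U, g0)
```

The last two checks mirror the statement above: γ⁵ anticommutes with every γ^μ, so the combination γ⁰γ^μ commutes with the rotation, while γ⁰ alone does not.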
The chiral symmetry transformation can be divided into a component that treats the left-handed and the right-handed parts equally, known as vector symmetry, and a component that actually treats them differently, known as axial symmetry. (cf. Current algebra.) A scalar field model encoding chiral symmetry and its breaking is the chiral model. The most common application is expressed as equal treatment of clockwise and counter-clockwise rotations from a fixed frame of reference. The general principle is often referred to by the name chiral symmetry. The rule is absolutely valid in the classical mechanics of Newton and Einstein, but results from quantum mechanical experiments show a difference in the behavior of left-chiral versus right-chiral subatomic particles. Example: u and d quarks in QCD Consider quantum chromodynamics (QCD) with two massless quarks u and d (massive fermions do not exhibit chiral symmetry). The Lagrangian reads L = ū iγ^μD_μ u + d̄ iγ^μD_μ d + L_gluons. In terms of left-handed and right-handed spinors, it reads L = ū_R iγ^μD_μ u_R + ū_L iγ^μD_μ u_L + d̄_R iγ^μD_μ d_R + d̄_L iγ^μD_μ d_L + L_gluons. (Here, i is the imaginary unit and iγ^μD_μ the Dirac operator.) Defining the flavor doublet q = (u, d), it can be written as L = q̄_R iγ^μD_μ q_R + q̄_L iγ^μD_μ q_L + L_gluons. The Lagrangian is unchanged under a rotation of q_L by any 2×2 unitary matrix L, and of q_R by any 2×2 unitary matrix R. This symmetry of the Lagrangian is called flavor chiral symmetry, and denoted as U(2)_L × U(2)_R. It decomposes into SU(2)_L × SU(2)_R × U(1)_V × U(1)_A. The singlet vector symmetry, U(1)_V, acts as q_L → e^(iα) q_L and q_R → e^(iα) q_R. This corresponds to baryon number conservation. The singlet axial group U(1)_A transforms as the following global transformation: q_L → e^(iα) q_L and q_R → e^(−iα) q_R. However, it does not correspond to a conserved quantity, because the associated axial current is not conserved. It is explicitly violated by a quantum anomaly. The remaining chiral symmetry SU(2)_L × SU(2)_R turns out to be spontaneously broken by a quark condensate ⟨q̄q⟩ formed through nonperturbative action of QCD gluons, into the diagonal vector subgroup SU(2)_V known as isospin. The Goldstone bosons corresponding to the three broken generators are the three pions. As a consequence, the effective theory of QCD bound states like the baryons must now include mass terms for them, ostensibly disallowed by unbroken chiral symmetry. Thus, this chiral symmetry breaking induces the bulk of hadron masses, such as those for the nucleons — in effect, the bulk of the mass of all visible matter. In the real world, because of the nonvanishing and differing masses of the quarks, SU(2)_L × SU(2)_R is only an approximate symmetry to begin with, and therefore the pions are not massless, but have small masses: they are pseudo-Goldstone bosons. More flavors For more "light" quark species, N flavors in general, the corresponding chiral symmetries are U(N)_L × U(N)_R, decomposing into SU(N)_L × SU(N)_R × U(1)_V × U(1)_A and exhibiting a very analogous chiral symmetry breaking pattern. Most usually, N = 3 is taken, the u, d, and s quarks taken to be light (the eightfold way), so that they are approximately massless for the symmetry to be meaningful to a lowest order, while the other three quarks are sufficiently heavy to barely have a residual chiral symmetry be visible for practical purposes. An application in particle physics In theoretical physics, the electroweak model breaks parity maximally. All its fermions are chiral Weyl fermions, which means that the charged weak gauge bosons W⁺ and W⁻ only couple to left-handed quarks and leptons. Some theorists found this objectionable, and so conjectured a GUT extension of the weak force which has new, high energy W′ and Z′ bosons, which do couple with right-handed quarks and leptons: the weak SU(2)_W is extended to SU(2)_L × SU(2)_R × U(1)_(B−L). Here, SU(2)_L (pronounced "SU(2) left") is the SU(2)_W from above, while B − L is the baryon number minus the lepton number.
The electric charge formula in this model is given by Q = T_3L + T_3R + (B − L)/2, where T_3L and T_3R are the left and right weak isospin values of the fields in the theory. There is also the chromodynamic SU(3)_C. The idea was to restore parity by introducing a left-right symmetry. This is a group extension of Z₂ (the left-right symmetry) by SU(3)_C × SU(2)_L × SU(2)_R × U(1)_(B−L) to the semidirect product (SU(3)_C × SU(2)_L × SU(2)_R × U(1)_(B−L)) ⋊ Z₂. This has two connected components, where Z₂ acts as an automorphism, which is the composition of an involutive outer automorphism of SU(3)_C with the interchange of the left and right copies of SU(2) with the reversal of U(1)_(B−L). It was shown by Mohapatra & Senjanovic (1975) that left-right symmetry can be spontaneously broken to give a chiral low energy theory, which is the Standard Model of Glashow, Weinberg, and Salam, and also connects the small observed neutrino masses to the breaking of left-right symmetry via the seesaw mechanism. In this setting, the chiral quarks q_L and q_R are unified into a single irreducible representation ("irrep") of the extended group, and the leptons are likewise unified into a single irreducible representation. The Higgs bosons needed to implement the breaking of left-right symmetry down to the Standard Model then provide three sterile neutrinos, which are perfectly consistent with neutrino oscillation data. Within the seesaw mechanism, the sterile neutrinos become superheavy without affecting physics at low energies. Because the left–right symmetry is spontaneously broken, left–right models predict domain walls. This left-right symmetry idea first appeared in the Pati–Salam model (1974) and Mohapatra–Pati models (1975). See also electroweak theory chirality (chemistry) chirality (mathematics) chiral symmetry breaking handedness spinors and Dirac fields sigma model chiral model Notes References External links To see a summary of the differences and similarities between chirality and helicity (those covered here and more) in chart form, one may go to Pedagogic Aids to Quantum Field Theory and click on the link near the bottom of the page entitled "Chirality and Helicity Summary". To see an in depth discussion of the two with examples, which also shows how chirality and helicity approach the same thing as speed approaches that of light, click the link entitled "Chirality and Helicity in Depth" on the same page. History of science: parity violation Helicity, Chirality, Mass, and the Higgs (Quantum Diaries blog) Chirality vs helicity chart (Robert D. Klauber) Quantum field theory Quantum chromodynamics Symmetry Chirality
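The charge formula Q = T_3L + T_3R + (B − L)/2 can be spot-checked against familiar electric charges. A small Python sketch follows (an added illustration; the quantum-number assignments are the standard ones, but they are entered here by hand as assumptions rather than taken from the article):

```python
def electric_charge(t3l: float, t3r: float, b_minus_l: float) -> float:
    """Q = T3L + T3R + (B - L)/2 in the left-right symmetric model."""
    return t3l + t3r + b_minus_l / 2.0

# (T3L, T3R, B-L): left-handed fields carry T3L, right-handed fields T3R.
fields = {
    "u_L":  (+0.5, 0.0, +1 / 3),   # up quark, left-handed  -> +2/3
    "d_L":  (-0.5, 0.0, +1 / 3),   # down quark, left-handed -> -1/3
    "u_R":  (0.0, +0.5, +1 / 3),   # up quark, right-handed -> +2/3
    "e_L":  (-0.5, 0.0, -1.0),     # electron, left-handed  -> -1
    "nu_R": (0.0, +0.5, -1.0),     # right-handed (sterile) neutrino -> 0
}
for name, qn in fields.items():
    print(f"{name}: Q = {electric_charge(*qn):+.3f}")
```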
Chirality (physics)
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
2,941
[ "Quantum field theory", "Pharmacology", "Origin of life", "Biochemistry", "Stereochemistry", "Quantum mechanics", "Chirality", "Geometry", "Asymmetry", "Biological hypotheses", "Symmetry" ]
1,170,314
https://en.wikipedia.org/wiki/Wingtip%20vortices
Wingtip vortices are circular patterns of rotating air left behind a wing as it generates lift. The name is a misnomer because the cores of the vortices are slightly inboard of the wing tips. Wingtip vortices are sometimes named trailing or lift-induced vortices because they also occur at points other than at the wing tips. Indeed, vorticity is trailed at any point on the wing where the lift varies span-wise (a fact described and quantified by the lifting-line theory); it eventually rolls up into large vortices near the wingtip, at the edge of flap devices, or at other abrupt changes in wing planform. Wingtip vortices are associated with induced drag, the imparting of downwash, and are a fundamental consequence of three-dimensional lift generation. Careful selection of wing geometry (in particular, wingspan), as well as of cruise conditions, are design and operational methods to minimize induced drag. Wingtip vortices form the primary component of wake turbulence. Depending on ambient atmospheric humidity as well as the geometry and wing loading of aircraft, water may condense or freeze in the core of the vortices, making the vortices visible. Generation of trailing vortices When a wing generates aerodynamic lift, it results in a region of downwash between the two vortices. Three-dimensional lift and the occurrence of wingtip vortices can be approached with the concept of horseshoe vortex and described accurately with the Lanchester–Prandtl theory. In this view, the trailing vortex is a continuation of the wing-bound vortex inherent to the lift generation. Effects and mitigation Wingtip vortices are associated with induced drag, an unavoidable consequence of three-dimensional lift generation. The rotary motion of the air within the shed wingtip vortices (sometimes described as a "leakage") reduces the effective angle of attack of the air on the wing. The lifting-line theory describes the shedding of trailing vortices as span-wise changes in lift distribution. For a given wing span and surface, minimal induced drag is obtained with an elliptical lift distribution. For a given lift distribution and wing planform area, induced drag is reduced with increasing aspect ratio. As a consequence, aircraft for which a high lift-to-drag ratio is desirable, such as gliders or long-range airliners, typically have high aspect ratio wings. Such wings however have disadvantages with respect to structural constraints and maneuverability, as evidenced by combat and aerobatic planes which usually feature short, stubby wings despite the efficiency losses. Another method of reducing induced drag is the use of winglets, as seen on most modern airliners. Winglets increase the effective aspect ratio of the wing, changing the pattern and magnitude of the vorticity in the vortex pattern. A reduction is achieved in the kinetic energy in the circular air flow, which reduces the amount of fuel expended to perform work upon the spinning air[citation needed]. After NASA became concerned about the increasing density of air traffic potentially causing vortex related accidents at airports, an experiment by NASA Ames Research Center wind tunnel testing with a 747 model found that the configuration of the flaps could be changed on existing aircraft to break the vortex into three smaller and less disturbing vortexes. This primarily involved changing the settings of the outboard flaps, and could theoretically be retrofitted to existing aircraft. 
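The dependence of induced drag on lift distribution and aspect ratio described above is commonly summarized by the classical lifting-line estimate C_Di = C_L² / (π e AR), where e is the span efficiency factor (e = 1 for an elliptical distribution). A short Python sketch with illustrative numbers, not taken from the article:

```python
import math

def induced_drag_coefficient(cl: float, aspect_ratio: float,
                             span_efficiency: float = 1.0) -> float:
    """Lifting-line estimate of induced drag: C_Di = C_L^2 / (pi * e * AR)."""
    return cl ** 2 / (math.pi * span_efficiency * aspect_ratio)

# A glider-like wing (AR = 25) versus a stubby aerobatic wing (AR = 5),
# both at the same lift coefficient and span efficiency:
for ar in (25, 5):
    cdi = induced_drag_coefficient(cl=1.0, aspect_ratio=ar,
                                   span_efficiency=0.9)
    print(f"AR = {ar:2d}: C_Di = {cdi:.4f}")
# The high-aspect-ratio wing incurs one fifth of the induced drag,
# which is why gliders and long-range airliners favor slender wings.
```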
Visibility of vortices The cores of the vortices can sometimes be visible when the water present in them condenses from gas (vapor) to liquid. This water can sometimes even freeze, forming ice particles. Condensation of water vapor in wing tip vortices is most common on aircraft flying at high angles of attack, such as fighter aircraft in high g maneuvers, or airliners taking off and landing on humid days. Aerodynamic condensation and freezing The cores of vortices spin at very high speed and are regions of very low pressure. To first approximation, these low-pressure regions form with little exchange of heat with the neighboring regions (i.e., adiabatically), so the local temperature in the low-pressure regions drops, too. If it drops below the local dew point, there results a condensation of the water vapor present in the cores of wingtip vortices, making them visible. The temperature may even drop below the local freezing point, in which case ice crystals will form inside the cores. The phase of water (i.e., whether it assumes the form of a solid, liquid, or gas) is determined by its temperature and pressure. For example, in the case of the liquid–gas transition, at each pressure there is a special "transition temperature" T_c such that if the sample temperature is even a little above T_c, the sample will be a gas, but, if the sample temperature is even a little below T_c, the sample will be a liquid; see phase transition. For example, at the standard atmospheric pressure, T_c is 100 °C = 212 °F. The transition temperature decreases with decreasing pressure, which explains why water boils at lower temperatures at higher altitudes and at higher temperatures in a pressure cooker. In the case of water vapor in air, the T_c corresponding to the partial pressure of water vapor is called the dew point. (The solid–liquid transition also happens around a specific transition temperature called the melting point. For most substances, the melting point also decreases with decreasing pressure, although water ice, in its Ih form, which is the most familiar one, is a prominent exception to this rule.) Vortex cores are regions of low pressure. As a vortex core begins to form, the water in the air (in the region that is about to become the core) is in the vapor phase, which means that the local temperature is above the local dew point. After the vortex core forms, the pressure inside it has decreased from the ambient value, and so the local dew point has dropped from the ambient value as well. Thus, in and of itself, a drop in pressure would tend to keep water in vapor form: the initial dew point was already below the ambient air temperature, and the formation of the vortex has made the local dew point even lower. However, as the vortex core forms, its pressure (and so its dew point) is not the only property that is dropping: the vortex-core temperature is dropping also, and in fact it can drop by much more than the dew point does. To first approximation, the formation of vortex cores is thermodynamically an adiabatic process, i.e., one with no exchange of heat. In such a process, the drop in pressure is accompanied by a drop in temperature, according to the equation T_f / T_i = (p_f / p_i)^((γ − 1)/γ). Here T_i and p_i are the absolute temperature and pressure at the beginning of the process (here equal to the ambient air temperature and pressure), T_f and p_f are the absolute temperature and pressure in the vortex core (which is the end result of the process), and the constant γ is about 7/5 = 1.4 for air.
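This adiabatic relation is straightforward to evaluate. A minimal Python sketch (an added illustration) implements it with the numbers used in the worked example in the next paragraph:

```python
GAMMA = 1.4  # heat capacity ratio (7/5) for air

def vortex_core_temperature(t_ambient_k: float, p_ambient_pa: float,
                            p_core_pa: float) -> float:
    """Temperature after adiabatic expansion into the low-pressure core:
    T_f = T_i * (p_f / p_i) ** ((gamma - 1) / gamma)."""
    return t_ambient_k * (p_core_pa / p_ambient_pa) ** ((GAMMA - 1) / GAMMA)

# Standard conditions, with core pressure at ~80% of ambient:
t_core = vortex_core_temperature(293.15, 101_325.0, 80_000.0)
print(f"{t_core:.2f} K = {t_core - 273.15:.2f} deg C")  # ~274.0 K = 0.86 deg C
```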
Thus, even though the local dew point inside the vortex cores is even lower than in the ambient air, the water vapor may nevertheless condense — if the formation of the vortex brings the local temperature below the new local dew point. For a typical transport aircraft landing at an airport, these conditions are as follows: T_i and p_i have values corresponding to the so-called standard conditions, i.e., p_i = 1 atm = 1013.25 mb = 101325 Pa and T_i = 293.15 K (which is 20 °C = 68 °F). The relative humidity is a comfortable 35% (dew point of 4.1 °C = 39.4 °F). This corresponds to a partial pressure of water vapor of 820 Pa = 8.2 mb. In a vortex core, the pressure (p_f) drops to about 80% of the ambient pressure, i.e., to about 80,000 Pa. The temperature in the vortex core is given by the equation above as T_f = (293.15 K) × (80000/101325)^((1.4 − 1)/1.4) ≈ 274.0 K, or 0.86 °C = 33.5 °F. Next, the partial pressure of water in the vortex core drops in proportion to the drop in the total pressure (i.e., by the same percentage), to about 650 Pa = 6.5 mb. According to a dew point calculator, that partial pressure results in a local dew point of about 0.86 °C; in other words, the new local dew point is about equal to the new local temperature. Therefore, this is a marginal case; if the relative humidity of the ambient air were even a bit higher (with the total pressure and temperature remaining as above), then the local dew point inside the vortices would rise, while the local temperature would remain the same. Thus, the local temperature would now be lower than the local dew point, and so the water vapor inside the vortices would indeed condense. Under the right conditions, the local temperature in vortex cores may drop below the local freezing point, in which case ice particles will form inside the vortex cores. The water-vapor condensation mechanism in wingtip vortices is thus driven by local changes in air pressure and temperature. This is to be contrasted with what happens in another well-known case of water condensation related to airplanes: the contrails from airplane engine exhausts. In the case of contrails, the local air pressure and temperature do not change significantly; what matters instead is that the exhaust contains both water vapor (which increases the local water-vapor concentration and so its partial pressure, resulting in an elevated dew point and freezing point) as well as aerosols (which provide nucleation centers for the condensation and freezing). Formation flight One theory on migrating bird flight states that many larger bird species fly in a V formation so that all but the leader bird can take advantage of the upwash part of the wingtip vortex of the bird ahead. Hazards Wingtip vortices can pose a hazard to aircraft, especially during the landing and takeoff phases of flight. The intensity or strength of the vortex is a function of aircraft size, speed, and configuration (flap setting, etc.). The strongest vortices are produced by heavy aircraft, flying slowly, with wing flaps and landing gear retracted ("heavy, slow and clean"). Large jet aircraft can generate vortices that can persist for many minutes, drifting with the wind. The hazardous aspects of wingtip vortices are most often discussed in the context of wake turbulence. If a light aircraft immediately follows a heavy aircraft, wake turbulence from the heavy aircraft can roll the light aircraft faster than can be resisted by use of ailerons. At low altitudes, in particular during takeoff and landing, this can lead to an upset from which recovery is not possible.
("Light" and "heavy" are relative terms, and even smaller jets have been rolled by this effect.) Air traffic controllers attempt to ensure an adequate separation between departing and arriving aircraft by issuing wake turbulence warnings to pilots. In general, to avoid vortices an aircraft is safer if its takeoff is before the rotation point of the airplane that took off before it. However care must be taken to stay upwind (or otherwise away) from any vortices that were generated by the previous aircraft. On landing behind an airplane the aircraft should stay above the earlier one's flight path and touch down further along the runway. Glider pilots routinely practice flying in wingtip vortices when they do a maneuver called "boxing the wake". This involves descending from the higher to lower position behind a tow plane. This is followed by making a rectangular figure by holding the glider at high and low points away from the towing plane before coming back up through the vortices. (For safety this is not done below 1500 feet above the ground, and usually with an instructor present.) Given the relatively slow speeds and lightness of both aircraft the procedure is safe but does instill a sense of how strong and where the turbulence is located. Gallery See also Index of aviation articles Aspect ratio (wing) Chemtrail conspiracy theory Crow instability Helmholtz's theorems Horseshoe vortex Lift-induced drag V formation Vortex Wake turbulence References External links Video from NASA's Dryden Flight Research Center tests on wingtip vortices: C-5 Galaxy: Lockheed L-1011: Wingtip Vortices during a landing - Video at Youtube Aircraft aerodynamics Aviation risks Vortices Aircraft wing design Articles containing video clips ja:ウェーク・タービュランス
Wingtip vortices
[ "Chemistry", "Mathematics" ]
2,569
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
18,229,656
https://en.wikipedia.org/wiki/Microbial%20mat
A microbial mat is a multi-layered sheet or biofilm of microbial colonies, composed of mainly bacteria and/or archaea. Microbial mats grow at interfaces between different types of material, mostly on submerged or moist surfaces, but a few survive in deserts. A few are found as endosymbionts of animals. Although only a few centimetres thick at most, microbial mats create a wide range of internal chemical environments, and hence generally consist of layers of microorganisms that can feed on or at least tolerate the dominant chemicals at their level and which are usually of closely related species. In moist conditions mats are usually held together by slimy substances secreted by the microorganisms. In many cases some of the bacteria form tangled webs of filaments which make the mat tougher. The best known physical forms are flat mats and stubby pillars called stromatolites, but there are also spherical forms. Microbial mats are the earliest form of life on Earth for which there is good fossil evidence, from , and have been the most important members and maintainers of the planet's ecosystems. Originally they depended on hydrothermal vents for energy and chemical "food", but the development of photosynthesis allowed mats to proliferate outside of these environments by utilizing a more widely available energy source, sunlight. The final and most significant stage of this liberation was the development of oxygen-producing photosynthesis, since the main chemical inputs for this are carbon dioxide and water. As a result, microbial mats began to produce the atmosphere we know today, in which free oxygen is a vital component. At around the same time they may also have been the birthplace of the more complex eukaryote type of cell, of which all multicellular organisms are composed. Microbial mats were abundant on the shallow seabed until the Cambrian substrate revolution, when animals living in shallow seas increased their burrowing capabilities and thus broke up the surfaces of mats and let oxygenated water into the deeper layers, poisoning the oxygen-intolerant microorganisms that lived there. Although this revolution drove mats off soft floors of shallow seas, they still flourish in many environments where burrowing is limited or impossible, including rocky seabeds and shores, and hyper-saline and brackish lagoons. They are found also on the floors of the deep oceans. Because of microbial mats' ability to use almost anything as "food", there is considerable interest in industrial uses of mats, especially for water treatment and for cleaning up pollution. Description Microbial mats may also be referred to as algal mats and bacterial mats. They are a type of biofilm that is large enough to see with the naked eye and robust enough to survive moderate physical stresses. These colonies of bacteria form on surfaces at many types of interface, for example between water and the sediment or rock at the bottom, between air and rock or sediment, between soil and bed-rock, etc. Such interfaces form vertical chemical gradients, i.e. vertical variations in chemical composition, which make different levels suitable for different types of bacteria and thus divide microbial mats into layers, which may be sharply defined or may merge more gradually into each other. 
A variety of microbes are able to transcend the limits of diffusion by using "nanowires" to shuttle electrons from their metabolic reactions up to two centimetres deep in the sediment – for example, electrons can be transferred from reactions involving hydrogen sulfide deeper within the sediment to oxygen in the water, which acts as an electron acceptor. The best-known types of microbial mat may be flat laminated mats, which form on approximately horizontal surfaces, and stromatolites, stubby pillars built as the microbes slowly move upwards to avoid being smothered by sediment deposited on them by water. However, there are also spherical mats, some on the outside of pellets of rock or other firm material and others inside spheres of sediment. Structure A microbial mat consists of several layers, each of which is dominated by specific types of microorganism, mainly bacteria. Although the composition of individual mats varies depending on the environment, as a general rule the by-products of each group of microorganisms serve as "food" for other groups. In effect each mat forms its own food chain, with one or a few groups at the top of the food chain as their by-products are not consumed by other groups. Different types of microorganism dominate different layers based on their comparative advantage for living in that layer. In other words, they live in positions where they can out-perform other groups rather than where they would absolutely be most comfortable — ecological relationships between different groups are a combination of competition and co-operation. Since the metabolic capabilities of bacteria (what they can "eat" and what conditions they can tolerate) generally depend on their phylogeny (i.e. the most closely related groups have the most similar metabolisms), the different layers of a mat are divided both by their different metabolic contributions to the community and by their phylogenetic relationships. In a wet environment where sunlight is the main source of energy, the uppermost layers are generally dominated by aerobic photosynthesizing cyanobacteria (blue-green bacteria whose color is caused by their having chlorophyll), while the lowest layers are generally dominated by anaerobic sulfate-reducing bacteria. Sometimes there are intermediate (oxygenated only in the daytime) layers inhabited by facultative anaerobic bacteria. For example, in hypersaline ponds near Guerrero Negro (Mexico) various kind of mats were explored. There are some mats with a middle purple layer inhabited by photosynthesizing purple bacteria. Some other mats have a white layer inhabited by chemotrophic sulfur oxidizing bacteria and beneath them an olive layer inhabited by photosynthesizing green sulfur bacteria and heterotrophic bacteria. However, this layer structure is not changeless during a day: some species of cyanobacteria migrate to deeper layers at morning, and go back at evening, to avoid intensive solar light and UV radiation at mid-day. Microbial mats are generally held together and bound to their substrates by slimy extracellular polymeric substances which they secrete. In many cases some of the bacteria form filaments (threads), which tangle and thus increase the colonies' structural strength, especially if the filaments have sheaths (tough outer coverings). 
This combination of slime and tangled threads attracts other microorganisms which become part of the mat community, for example protozoa, some of which feed on the mat-forming bacteria, and diatoms, which often seal the surfaces of submerged microbial mats with thin, parchment-like coverings. Marine mats may grow to a few centimeters in thickness, of which only the top few millimeters are oxygenated. Types of environment colonized Underwater microbial mats have been described as layers that live by exploiting and to some extent modifying local chemical gradients, i.e. variations in the chemical composition. Thinner, less complex biofilms live in many sub-aerial environments, for example on rocks, on mineral particles such as sand, and within soil. They have to survive for long periods without liquid water, often in a dormant state. Microbial mats that live in tidal zones, such as those found in the Sippewissett salt marsh, often contain a large proportion of similar microorganisms that can survive for several hours without water. Microbial mats and less complex types of biofilm are found at temperature ranges from –40 °C to +120 °C, because variations in pressure affect the temperatures at which water remains liquid. They even appear as endosymbionts in some animals, for example in the hindguts of some echinoids. Ecological and geological importance Microbial mats use all of the types of metabolism and feeding strategy that have evolved on Earth—anoxygenic and oxygenic photosynthesis; anaerobic and aerobic chemotrophy (using chemicals rather than sunshine as a source of energy); organic and inorganic respiration and fermentation (i..e converting food into energy with and without using oxygen in the process); autotrophy (producing food from inorganic compounds) and heterotrophy (producing food only from organic compounds, by some combination of predation and detritivory). Most sedimentary rocks and ore deposits have grown by a reef-like build-up rather than by "falling" out of the water, and this build-up has been at least influenced and perhaps sometimes caused by the actions of microbes. Stromatolites, bioherms (domes or columns similar internally to stromatolites) and biostromes (distinct sheets of sediment) are among such microbe-influenced build-ups. Other types of microbial mat have created wrinkled "elephant skin" textures in marine sediments, although it was many years before these textures were recognized as trace fossils of mats. Microbial mats have increased the concentration of metal in many ore deposits, and without this it would not be feasible to mine them—examples include iron (both sulfide and oxide ores), uranium, copper, silver and gold deposits. Role in the history of life The earliest mats Microbial mats are among the oldest clear signs of life, as microbially induced sedimentary structures (MISS) formed have been found in western Australia. At that early stage the mats' structure may already have been similar to that of modern mats that do not include photosynthesizing bacteria. It is even possible that non-photosynthesizing mats were present as early as . If so, their energy source would have been hydrothermal vents (high-pressure hot springs around submerged volcanoes), and the evolutionary split between bacteria and archea may also have occurred around this time. The earliest mats may have been small, single-species biofilms of chemotrophs that relied on hydrothermal vents to supply both energy and chemical "food". 
Within a short time (by geological standards) the build-up of dead microorganisms would have created an ecological niche for scavenging heterotrophs, possibly methane-emitting and sulfate-reducing organisms that would have formed new layers in the mats and enriched their supply of biologically useful chemicals. Photosynthesis It is generally thought that photosynthesis, the biological generation of chemical energy from light, evolved shortly after (3 billion). However an isotope analysis suggests that oxygenic photosynthesis may have been widespread as early as . There are several different types of photosynthetic reaction, and analysis of bacterial DNA indicates that photosynthesis first arose in anoxygenic purple bacteria, while the oxygenic photosynthesis seen in cyanobacteria and much later in plants was the last to evolve. The earliest photosynthesis may have been powered by infra-red light, using modified versions of pigments whose original function was to detect infra-red heat emissions from hydrothermal vents. The development of photosynthetic energy generation enabled the microorganisms first to colonize wider areas around vents and then to use sunlight as an energy source. The role of the hydrothermal vents was now limited to supplying reduced metals into the oceans as a whole rather than being the main supporters of life in specific locations. Heterotrophic scavengers would have accompanied the photosynthesizers in their migration out of the "hydrothermal ghetto". The evolution of purple bacteria, which do not produce or use oxygen but can tolerate it, enabled mats to colonize areas that locally had relatively high concentrations of oxygen, which is toxic to organisms that are not adapted to it. Microbial mats could have been separated into oxidized and reduced layers. Cyanobacteria and oxygen The last major stage in the evolution of microbial mats was the appearance of cyanobacteria, photosynthesizers which both produce and use oxygen. This gave undersea mats their typical modern structure: an oxygen-rich top layer of cyanobacteria; a layer of photosynthesizing purple bacteria that could tolerate oxygen; and oxygen-free, H2S-dominated lower layers of heterotrophic scavengers, mainly methane-emitting and sulfate-reducing organisms. It is estimated that the appearance of oxygenic photosynthesis increased biological productivity by a factor of between 100 and 1,000. All photosynthetic reactions require a reducing agent, but the significance of oxygenic photosynthesis is that it uses water as a reducing agent, and water is much more plentiful than the geologically produced reducing agents on which photosynthesis previously depended. The resulting increases in the populations of photosynthesizing bacteria in the top layers of microbial mats would have caused corresponding population increases among the chemotrophic and heterotrophic microorganisms that inhabited the lower layers and which fed respectively on the by-products of the photosynthesizers and on the corpses and / or living bodies of the other mat organisms. These increases would have made microbial mats the planet's dominant ecosystems. From this point onwards life itself would have produced significantly more of the resources it needed than did geochemical processes. 
Oxygenic photosynthesis in microbial mats would also have increased the free oxygen content of the Earth's atmosphere, both directly by emitting oxygen and because the mats emitted molecular hydrogen (H2), some of which would have escaped from the Earth's atmosphere before it could re-combine with free oxygen to form more water. Microbial mats thus likely played a major role in the evolution of organisms which could first tolerate free oxygen and then use it as an energy source. Oxygen is toxic to organisms that are not adapted to it, but greatly increases the metabolic efficiency of oxygen-adapted organisms — for example anaerobic fermentation produces a net yield of two molecules of adenosine triphosphate, cells' internal "fuel", per molecule of glucose, while aerobic respiration produces a net yield of 36. The oxygenation of the atmosphere was a prerequisite for the evolution of the more complex eukaryote type of cell, from which all multicellular organisms are built. Cyanobacteria have the most complete biochemical "toolkits" of all the mat-forming organisms: the photosynthesis mechanisms of both green bacteria and purple bacteria; oxygen production; and the Calvin cycle, which converts carbon dioxide and water into carbohydrates and sugars. It is likely that they acquired many of these sub-systems from existing mat organisms, by some combination of horizontal gene transfer and endosymbiosis followed by fusion. Whatever the causes, cyanobacteria are the most self-sufficient of the mat organisms and were well-adapted to strike out on their own both as floating mats and as the first of the phytoplankton, which forms the basis of most marine food chains. Origin of eukaryotes The time at which eukaryotes first appeared is still uncertain: there is reasonable evidence that fossils dated between and represent eukaryotes, but the presence of steranes in Australian shales may indicate that eukaryotes were present . There is still debate about the origins of eukaryotes, and many of the theories focus on the idea that a bacterium first became an endosymbiont of an anaerobic archean and then fused with it to become one organism. If such endosymbiosis was an important factor, microbial mats would have encouraged it. There are two known variations of this scenario: The boundary between the oxygenated and oxygen-free zones of a mat would have moved up when photosynthesis shut down at night and back down when photosynthesis resumed after the next sunrise. Symbiosis between independent aerobic and anaerobic organisms would have enabled both to live comfortably in the zone that was subject to oxygen "tides", and subsequent endosymbiosis would have made such partnerships more mobile. The initial partnership may have been between anaerobic archea that required molecular hydrogen (H2) and heterotrophic bacteria that produced it and could live both with and without oxygen. Life on land Microbial mats from ~ provide the first evidence of life in the terrestrial realm. The earliest multicellular animals The Ediacara biota are the earliest widely accepted evidence of multicellular animals. Most Ediacaran strata with the "elephant skin" texture characteristic of microbial mats contain fossils, and Ediacaran fossils are hardly ever found in beds that do not contain these microbial mats. 
Adolf Seilacher categorized the animals as: "mat encrusters", which were permanently attached to the mat; "mat scratchers", which grazed the surface of the mat without destroying it; "mat stickers", suspension feeders that were partially embedded in the mat; and "undermat miners", which burrowed underneath the mat and fed on decomposing mat material. The Cambrian substrate revolution In the Early Cambrian, however, organisms began to burrow vertically for protection or food, breaking down the microbial mats, and thus allowing water and oxygen to penetrate a considerable distance below the surface and kill the oxygen-intolerant microorganisms in the lower layers. As a result of this Cambrian substrate revolution, marine microbial mats are confined to environments in which burrowing is non-existent or negligible: very harsh environments, such as hyper-saline lagoons or brackish estuaries, which are uninhabitable for the burrowing organisms that broke up the mats; rocky "floors" which the burrowers cannot penetrate; the depths of the oceans, where burrowing activity today is at a similar level to that in the shallow coastal seas before the revolution. Current status Although the Cambrian substrate revolution opened up new niches for animals, it was not catastrophic for microbial mats, but it did greatly reduce their extent. Use of microbial mats in paleontology Most fossils preserve only the hard parts of organisms, e.g. shells. The rare cases where soft-bodied fossils are preserved (the remains of soft-bodied organisms and also of the soft parts of organisms for which only hard parts such as shells are usually found) are extremely valuable because they provide information about organisms that are hardly ever fossilized and much more information than is usually available about those for which only the hard parts are usually preserved. Microbial mats help to preserve soft-bodied fossils by: Capturing corpses on the sticky surfaces of mats and thus preventing them from floating or drifting away. Physically protecting them from being eaten by scavengers and broken up by burrowing animals, and protecting fossil-bearing sediments from erosion. For example, the speed of water current required to erode sediment bound by a mat is 20–30 times as great as the speed required to erode a bare sediment. Preventing or reducing decay both by physically screening the remains from decay-causing bacteria and by creating chemical conditions that are hostile to decay-causing bacteria. Preserving tracks and burrows by protecting them from erosion. Many trace fossils date from significantly earlier than the body fossils of animals that are thought to have been capable of making them and thus improve paleontologists' estimates of when animals with these capabilities first appeared. Industrial uses The ability of microbial mat communities to use a vast range of "foods" has recently led to interest in industrial uses. There have been trials of microbial mats for purifying water, both for human use and in fish farming, and studies of their potential for cleaning up oil spills. As a result of the growing commercial potential, there have been applications for and grants of patents relating to the growing, installation and use of microbial mats, mainly for cleaning up pollutants and waste products. 
See also Biological soil crust Cambrian substrate revolution Cyanobacteria Ediacaran type preservation Evolutionary history of life Sippewissett Microbial Mat Sewage fungus Notes References Seckbach S (2010) Microbial Mats: Modern and Ancient Microorganisms in Stratified Systems. Springer. External links An outline of microbial mats and pictures of mats in various situations and at various magnifications. Archean life Cambrian life Microbiology Fossils Phanerozoic Proterozoic life Evolutionary biology
Microbial mat
[ "Chemistry", "Biology" ]
4,167
[ "Evolutionary biology", "Microbiology", "Microscopy" ]
18,233,581
https://en.wikipedia.org/wiki/Finite%20element%20method
Finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved and are often required to solve the largest and most complex problems. FEM is a general numerical method for solving partial differential equations in two- or three-space variables (i.e., some boundary value problems). There are also studies about using FEM to solve high-dimensional problems. To solve a problem, FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular space discretization in the space dimensions, which is implemented by the construction of a mesh of the object: the numerical domain for the solution that has a finite number of points. FEM formulation of a boundary value problem finally results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. FEM then approximates a solution by minimizing an associated error function via the calculus of variations. Studying or analyzing a phenomenon with FEM is often referred to as finite element analysis (FEA). Basic concepts The subdivision of a whole domain into simpler parts has several advantages: Accurate representation of complex geometry; Inclusion of dissimilar material properties; Easy representation of the total solution; and Capture of local effects. A typical approach using the method involves the following: Step 1: Dividing the domain of the problem into a collection of subdomains, with each subdomain represented by a set of element equations for the original problem. Step 2: Systematically recombining all sets of element equations into a global system of equations for the final calculation. The global system of equations uses known solution techniques and can be calculated from the initial values of the original problem to obtain a numerical answer. In the first step above, the element equations are simple equations that locally approximate the original complex equations to be studied, where the original equations are often partial differential equations (PDEs). To explain the approximation of this process, FEM is commonly introduced as a special case of the Galerkin method. The process, in mathematical language, is to construct an integral of the inner product of the residual and the weight functions; then, set the integral to zero. In simple terms, it is a procedure that minimizes the approximation error by fitting trial functions into the PDE. The residual is the error caused by the trial functions, and the weight functions are polynomial approximation functions that project the residual. The process eliminates all the spatial derivatives from the PDE, thus approximating the PDE locally using the following: a set of algebraic equations for steady-state problems; and a set of ordinary differential equations for transient problems. These equation sets are element equations. They are linear if the underlying PDE is linear and vice versa. Algebraic equation sets that arise in the steady-state problems are solved using numerical linear algebraic methods. 
In contrast, ordinary differential equation sets that occur in the transient problems are solved by numerical integrations using standard techniques such as Euler's method or the Runge–Kutta method. In the second step above, a global system of equations is generated from the element equations by transforming coordinates from the subdomains' local nodes to the domain's global nodes. This spatial transformation includes appropriate orientation adjustments as applied in relation to the reference coordinate system. The process is often carried out using FEM software with coordinate data generated from the subdomains. The practical application of FEM is known as finite element analysis (FEA). FEA, as applied in engineering, is a computational tool for performing engineering analysis. It includes the use of mesh generation techniques for dividing a complex problem into smaller elements, as well as the use of software coded with a FEM algorithm. When applying FEA, the complex problem is usually a physical system with the underlying physics, such as the Euler–Bernoulli beam equation, the heat equation, or the Navier–Stokes equations, expressed in either PDEs or integral equations, while the divided, smaller elements of the complex problem represent different areas in the physical system. FEA may be used for analyzing problems over complicated domains (e.g., cars and oil pipelines) when the domain changes (e.g., during a solid-state reaction with a moving boundary), when the desired precision varies over the entire domain, or when the solution lacks smoothness. FEA simulations provide a valuable resource, as they remove multiple instances of creating and testing complex prototypes for various high-fidelity situations. For example, in a frontal crash simulation, it is possible to increase prediction accuracy in important areas, like the front of the car, and reduce it in the rear of the car, thus reducing the cost of the simulation. Another example would be in numerical weather prediction, where it is more important to have accurate predictions over developing highly nonlinear phenomena, such as tropical cyclones in the atmosphere or eddies in the ocean, rather than relatively calm areas. A clear, detailed, and practical presentation of this approach can be found in the textbook The Finite Element Method for Engineers. History While it is difficult to quote the date of the invention of FEM, the method originated from the need to solve complex elasticity and structural analysis problems in civil and aeronautical engineering. Its development can be traced back to work by Alexander Hrennikoff and Richard Courant in the early 1940s. Another pioneer was Ioannis Argyris. In the USSR, the introduction of the practical application of FEM is usually connected with Leonard Oganesyan. It was also independently rediscovered in China by Feng Kang in the late 1950s and early 1960s, based on the computations of dam constructions, where it was called the "finite difference method" based on variation principles. Although the approaches used by these pioneers are different, they share one essential characteristic: the mesh discretization of a continuous domain into a set of discrete sub-domains, usually called elements. Hrennikoff's work discretizes the domain by using a lattice analogy, while Courant's approach divides the domain into finite triangular sub-regions to solve second-order elliptic partial differential equations that arise from the problem of the torsion of a cylinder. 
Courant's contribution was evolutionary, drawing on a large body of earlier results for PDEs developed by Lord Rayleigh, Walther Ritz, and Boris Galerkin. The application of FEM gained momentum in the 1960s and 1970s due to the developments of J. H. Argyris and his co-workers at the University of Stuttgart; R. W. Clough and his co-workers at University of California Berkeley; O. C. Zienkiewicz and his co-workers Ernest Hinton, Bruce Irons, and others at Swansea University; Philippe G. Ciarlet at the University of Paris 6; and Richard Gallagher and his co-workers at Cornell University. During this period, additional impetus was provided by the available open-source FEM programs. NASA sponsored the original version of NASTRAN. University of California Berkeley made the finite element programs SAP IV and, later, OpenSees widely available. In Norway, the ship classification society Det Norske Veritas (now DNV GL) developed Sesam in 1969 for use in the analysis of ships. A rigorous mathematical basis for FEM was provided in 1973 with a publication by Gilbert Strang and George Fix. The method has since been generalized for the numerical modeling of physical systems in a wide variety of engineering disciplines, such as electromagnetism, heat transfer, and fluid dynamics. Technical discussion The structure of finite element methods A finite element method is characterized by a variational formulation, a discretization strategy, one or more solution algorithms, and post-processing procedures. Examples of the variational formulation are the Galerkin method, the discontinuous Galerkin method, mixed methods, etc. A discretization strategy is understood to mean a clearly defined set of procedures that cover (a) the creation of finite element meshes, (b) the definition of basis functions on reference elements (also called shape functions), and (c) the mapping of reference elements onto the elements of the mesh. Examples of discretization strategies are the h-version, p-version, hp-version, x-FEM, isogeometric analysis, etc. Each discretization strategy has certain advantages and disadvantages. A reasonable criterion in selecting a discretization strategy is to realize nearly optimal performance for the broadest set of mathematical models in a particular model class. Various numerical solution algorithms can be classified into two broad categories: direct and iterative solvers. These algorithms are designed to exploit the sparsity of matrices that depend on the variational formulation and discretization strategy choices. Post-processing procedures are designed to extract the data of interest from a finite element solution. To meet the requirements of solution verification, postprocessors need to provide for a posteriori error estimation in terms of the quantities of interest. When the errors of approximation are larger than what is considered acceptable, then the discretization has to be changed either by an automated adaptive process or by the action of the analyst. Some very efficient postprocessors provide for the realization of superconvergence. Illustrative problems P1 and P2 The following two problems demonstrate the finite element method. P1 is a one-dimensional problem, u″(x) = f(x) for x in (0, 1), with u(0) = u(1) = 0, where f is given, u is an unknown function of x, and u″ is the second derivative of u with respect to x. P2 is a two-dimensional problem (Dirichlet problem), u_xx(x, y) + u_yy(x, y) = f(x, y) in Ω, with u = 0 on the boundary of Ω, where Ω is a connected open region in the (x, y) plane whose boundary is nice (e.g., a smooth manifold or a polygon), and u_xx and u_yy denote the second derivatives with respect to x and y, respectively.
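As a concrete instance of P1 (an added illustration, not from the article): with the hypothetical choice f(x) = −2, the function u(x) = x(1 − x) satisfies both the differential equation and the boundary conditions, which a quick symbolic check confirms:

```python
import sympy as sp

x = sp.symbols("x")
u = x * (1 - x)          # candidate solution of P1
f = sp.diff(u, x, 2)     # the right-hand side it solves: u'' = f

assert f == -2                                   # u'' is the constant -2
assert u.subs(x, 0) == 0 and u.subs(x, 1) == 0   # boundary conditions of P1
```

This same pair (f, u) is reused as a correctness check in the discretization sketch at the end of this section.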
The problem P1 can be solved directly by computing antiderivatives. However, this method of solving the boundary value problem (BVP) works only when there is one spatial dimension. It does not generalize to higher-dimensional problems or to problems like $u + u'' = f$. For this reason, we will develop the finite element method for P1 and outline its generalization to P2. Our explanation will proceed in two steps, which mirror the two essential steps one must take to solve a BVP using the FEM. In the first step, one rephrases the original BVP in its weak form. Little to no computation is usually required for this step; the transformation is done by hand on paper. The second step is discretization, where the weak form is discretized in a finite-dimensional space. After this second step, we have concrete formulae for a large but finite-dimensional linear problem whose solution will approximately solve the original BVP. This finite-dimensional problem is then implemented on a computer.

Weak formulation The first step is to convert P1 and P2 into their equivalent weak formulations.

The weak form of P1 If $u$ solves P1, then for any smooth function $v$ that satisfies the displacement boundary conditions, i.e. $v = 0$ at $x = 0$ and $x = 1$, we have

(1) $\displaystyle\int_0^1 f(x)\,v(x)\,dx = \int_0^1 u''(x)\,v(x)\,dx.$

Conversely, if $u$ with $u(0) = u(1) = 0$ satisfies (1) for every smooth function $v(x)$, then one may show that this $u$ will solve P1. The proof is easier for twice continuously differentiable $u$ (mean value theorem) but may be proved in a distributional sense as well. We define a new operator or map $\phi(u,v)$ by using integration by parts on the right-hand side of (1):

(2) $\displaystyle\int_0^1 f(x)\,v(x)\,dx = \int_0^1 u''(x)\,v(x)\,dx = u'(x)v(x)\Big|_0^1 - \int_0^1 u'(x)\,v'(x)\,dx = -\int_0^1 u'(x)\,v'(x)\,dx \equiv -\phi(u,v),$

where we have used the assumption that $v(0) = v(1) = 0$.

The weak form of P2 If we integrate by parts using a form of Green's identities, we see that if $u$ solves P2, then we may define $\phi(u,v)$ for any $v$ by

$\displaystyle\int_\Omega f\,v\,ds = -\int_\Omega \nabla u \cdot \nabla v\,ds \equiv -\phi(u,v),$

where $\nabla$ denotes the gradient and $\cdot$ denotes the dot product in the two-dimensional plane. Once more $\phi$ can be turned into an inner product on a suitable space $H_0^1(\Omega)$ of once differentiable functions on $\Omega$ that are zero on $\partial\Omega$. We have also assumed that $v \in H_0^1(\Omega)$ (see Sobolev spaces). The existence and uniqueness of the solution can also be shown.

A proof outline of the existence and uniqueness of the solution We can loosely think of $H_0^1(0,1)$ as the absolutely continuous functions on $(0,1)$ that are $0$ at $x = 0$ and $x = 1$ (see Sobolev spaces). Such functions are (weakly) once differentiable, and it turns out that the symmetric bilinear map $\phi$ then defines an inner product which turns $H_0^1(0,1)$ into a Hilbert space (a detailed proof is nontrivial). On the other hand, the left-hand side $\int_0^1 f(x)v(x)\,dx$ is also an inner product, this time on the Lp space $L^2(0,1)$. An application of the Riesz representation theorem for Hilbert spaces shows that there is a unique $u$ solving (2) and, therefore, P1. This solution is a priori only a member of $H_0^1(0,1)$, but using elliptic regularity, it will be smooth if $f$ is.

Discretization P1 and P2 are ready to be discretized, which leads to a common sub-problem (3). The basic idea is to replace the infinite-dimensional linear problem

Find $u \in H_0^1$ such that for all $v \in H_0^1$, $-\phi(u,v) = \int f v$

with a finite-dimensional version:

(3) Find $u \in V$ such that for all $v \in V$, $-\phi(u,v) = \int f v$,

where $V$ is a finite-dimensional subspace of $H_0^1$. There are many possible choices for $V$ (one possibility leads to the spectral method). However, for the finite element method we take $V$ to be a space of piecewise polynomial functions.

For problem P1 We take the interval $(0,1)$, choose $n$ values of $x$ with $0 = x_0 < x_1 < \cdots < x_n < x_{n+1} = 1$, and we define $V$ by

$V = \{ v : [0,1] \to \mathbb{R} \;:\; v \text{ is continuous},\; v \text{ is linear on each } [x_k, x_{k+1}],\; v(0) = v(1) = 0 \},$

where we define $x_0 = 0$ and $x_{n+1} = 1$. Observe that functions in $V$ are not differentiable according to the elementary definition of calculus. Indeed, if $v \in V$ then the derivative is typically not defined at any $x = x_k$, $k = 1, \dots, n$.
However, the derivative exists at every other value of $x$, and one can use this derivative for integration by parts.

For problem P2 We need $V$ to be a set of functions on $\Omega$. Consider, for illustration, a triangulation of a 15-sided polygonal region in the plane, together with a piecewise linear function of this polygon which is linear on each triangle of the triangulation; the space $V$ would consist of functions that are linear on each triangle of the chosen triangulation.

One hopes that as the underlying triangular mesh becomes finer and finer, the solution of the discrete problem (3) will, in some sense, converge to the solution of the original boundary value problem P2. To measure this mesh fineness, the triangulation is indexed by a real-valued parameter $h > 0$ which one takes to be very small. This parameter will be related to the largest or average triangle size in the triangulation. As we refine the triangulation, the space of piecewise linear functions $V$ must also change with $h$. For this reason, one often reads $V_h$ instead of $V$ in the literature. Since we do not perform such an analysis, we will not use this notation.

Choosing a basis To complete the discretization, we must select a basis of $V$. In the one-dimensional case, for each control point $x_k$ we choose the piecewise linear function $v_k$ in $V$ whose value is $1$ at $x_k$ and zero at every $x_j$ with $j \neq k$, i.e.,

$v_k(x) = \begin{cases} \dfrac{x - x_{k-1}}{x_k - x_{k-1}} & \text{if } x \in [x_{k-1}, x_k], \\[4pt] \dfrac{x_{k+1} - x}{x_{k+1} - x_k} & \text{if } x \in [x_k, x_{k+1}], \\[4pt] 0 & \text{otherwise}, \end{cases}$

for $k = 1, \dots, n$; this basis is a shifted and scaled tent function. For the two-dimensional case, we choose again one basis function $v_k$ per vertex $x_k$ of the triangulation of the planar region $\Omega$. The function $v_k$ is the unique function of $V$ whose value is $1$ at $x_k$ and zero at every $x_j$ with $j \neq k$.

Depending on the author, the word "element" in the "finite element method" refers to the domain's triangles, the piecewise linear basis function, or both. So, for instance, an author interested in curved domains might replace the triangles with curved primitives and so might describe the elements as being curvilinear. On the other hand, some authors replace "piecewise linear" with "piecewise quadratic" or even "piecewise polynomial". The author might then say "higher order element" instead of "higher degree polynomial". The finite element method is not restricted to triangles (tetrahedra in 3-d or higher-order simplexes in multidimensional spaces) but can be defined on quadrilateral subdomains (hexahedra, prisms, or pyramids in 3-d, and so on). Higher-order shapes (curvilinear elements) can be defined with polynomial and even non-polynomial shapes (e.g., ellipse or circle). Examples of methods that use higher degree piecewise polynomial basis functions are the hp-FEM and spectral FEM.

More advanced implementations (adaptive finite element methods) utilize a method to assess the quality of the results (based on error estimation theory) and modify the mesh during the solution, aiming to achieve an approximate solution within some bounds from the exact solution of the continuum problem. Mesh adaptivity may utilize various techniques; the most popular are: moving nodes (r-adaptivity), refining (and unrefining) elements (h-adaptivity), changing the order of basis functions (p-adaptivity), and combinations of the above (hp-adaptivity).

Small support of the basis The primary advantage of this choice of basis is that the inner products $\langle v_j, v_k \rangle = \int_0^1 v_j v_k\,dx$ and $\phi(v_j, v_k) = \int_0^1 v_j' v_k'\,dx$ will be zero for almost all $j, k$. (The matrix containing $\langle v_j, v_k \rangle$ in the $(j,k)$ location is known as the Gramian matrix.) In the one-dimensional case, the support of $v_k$ is the interval $[x_{k-1}, x_{k+1}]$. Hence, the integrands of $\langle v_j, v_k \rangle$ and $\phi(v_j, v_k)$ are identically zero whenever $|j - k| > 1$.
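Because of this small support, the one-dimensional stiffness matrix is tridiagonal (on a uniform mesh of width h its entries are 2/h on the diagonal and −1/h off it), so problem P1 can be assembled and solved in a few lines. The sketch below is illustrative rather than canonical: it assumes NumPy/SciPy, a uniform mesh, and simple one-point quadrature for the load, and the function name solve_p1 is invented for this example.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def solve_p1(f, n):
    """Piecewise-linear FEM for P1: u''(x) = f(x) on (0,1), u(0) = u(1) = 0.

    Assembles the stiffness matrix L with L[j,k] = phi(v_j, v_k), which
    vanishes for |j - k| > 1 because the tent functions have small support,
    then solves the discrete form  -L u = b  with b_j ~ integral of f v_j.
    """
    h = 1.0 / (n + 1)                       # uniform mesh width
    x = np.linspace(h, 1.0 - h, n)          # interior nodes x_1 .. x_n
    main = np.full(n, 2.0 / h)              # phi(v_k, v_k)
    off = np.full(n - 1, -1.0 / h)          # phi(v_k, v_{k+1})
    L = diags([off, main, off], [-1, 0, 1], format="csr")
    b = h * f(x)                            # one-point quadrature for the load
    return x, spsolve(L, -b)

# Check against the exact solution u(x) = x(x - 1)/2 for f = 1.
x, u = solve_p1(lambda x: np.ones_like(x), n=99)
print(np.max(np.abs(u - x * (x - 1) / 2)))  # ~1e-15: nodal values exact here
```

For this particular load the quadrature is exact, so the nodal values agree with the exact solution up to rounding, a well-known feature of piecewise-linear elements in one dimension.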
Similarly, in the planar case, if $x_j$ and $x_k$ do not share an edge of the triangulation, then the integrals $\int_\Omega v_j v_k\,ds$ and $\int_\Omega \nabla v_j \cdot \nabla v_k\,ds$ are both zero.

Matrix form of the problem If we write $u(x) = \sum_{k=1}^n u_k v_k(x)$ and $f(x) = \sum_{k=1}^n f_k v_k(x)$, then problem (3), taking $v(x) = v_j(x)$ for $j = 1, \dots, n$, becomes

(4) $\displaystyle -\sum_{k=1}^n u_k\,\phi(v_k, v_j) = \sum_{k=1}^n f_k \int_0^1 v_k v_j\,dx \quad \text{for } j = 1, \dots, n.$

If we denote by $\mathbf{u}$ and $\mathbf{f}$ the column vectors $(u_1, \dots, u_n)^t$ and $(f_1, \dots, f_n)^t$, and if we let $L = (L_{ij})$ and $M = (M_{ij})$ be matrices whose entries are $L_{ij} = \phi(v_i, v_j)$ and $M_{ij} = \int_0^1 v_i v_j\,dx$, then we may rephrase (4) as

(5) $-L\mathbf{u} = M\mathbf{f}.$

It is not necessary to assume $f(x) = \sum_{k=1}^n f_k v_k(x)$. For a general function $f(x)$, problem (3) with $v(x) = v_j(x)$ for $j = 1, \dots, n$ becomes actually simpler, since no matrix $M$ is used:

(6) $-L\mathbf{u} = \mathbf{b},$

where $\mathbf{b} = (b_1, \dots, b_n)^t$ and $b_j = \int_0^1 f\,v_j\,dx$ for $j = 1, \dots, n$.

As we have discussed before, most of the entries of $L$ and $M$ are zero because the basis functions $v_k$ have small support. So we now have to solve a linear system in the unknown $\mathbf{u}$ where most of the entries of the matrix $L$, which we need to invert, are zero. Such matrices are known as sparse matrices, and there are efficient solvers for such problems (much more efficient than actually inverting the matrix). In addition, $L$ is symmetric and positive definite, so a technique such as the conjugate gradient method is favored. For problems that are not too large, sparse LU decompositions and Cholesky decompositions still work well. For instance, MATLAB's backslash operator (which uses sparse LU, sparse Cholesky, and other factorization methods) can be sufficient for meshes with a hundred thousand vertices. The matrix $L$ is usually referred to as the stiffness matrix, while the matrix $M$ is dubbed the mass matrix.

General form of the finite element method In general, the finite element method is characterized by the following process. One chooses a grid for $\Omega$. In the preceding treatment, the grid consisted of triangles, but one can also use squares or curvilinear polygons. Then, one chooses basis functions. We used piecewise linear basis functions in our discussion, but it is common to use piecewise polynomial basis functions.

A separate consideration is the smoothness of the basis functions. For second-order elliptic boundary value problems, piecewise polynomial basis functions that are merely continuous suffice (i.e., the derivatives are discontinuous). For higher-order partial differential equations, one must use smoother basis functions. For instance, for a fourth-order problem such as $u_{xxxx} + u_{yyyy} = f$, one may use piecewise quadratic basis functions that are $C^1$.

Another consideration is the relation of the finite-dimensional space $V$ to its infinite-dimensional counterpart in the examples above, $H_0^1$. A conforming element method is one in which the space $V$ is a subspace of the element space for the continuous problem. The example above is such a method. If this condition is not satisfied, we obtain a nonconforming element method, an example of which is the space of piecewise linear functions over the mesh that are continuous at each edge midpoint. Since these functions are generally discontinuous along the edges, this finite-dimensional space is not a subspace of the original $H_0^1$.

Typically, one has an algorithm for subdividing a given mesh. If the primary method for increasing precision is to subdivide the mesh, one has an h-method (h is customarily the diameter of the largest element in the mesh). In this manner, if one shows that the error with a grid $h$ is bounded above by $Ch^p$ for some $C < \infty$ and $p > 0$, then one has an order-p method. Under specific hypotheses (for instance, if the domain is convex), a piecewise polynomial of order $d$ method will have an error of order $p = d + 1$. If, instead of making h smaller, one increases the degree of the polynomials used in the basis functions, one has a p-method. If one combines these two refinement types, one obtains an hp-method (hp-FEM).
In the hp-FEM, the polynomial degrees can vary from element to element. High-order methods with large uniform p are called spectral finite element methods (SFEM). These are not to be confused with spectral methods. For vector partial differential equations, the basis functions may take values in $\mathbb{R}^n$.

Various types of finite element methods

AEM The applied element method (AEM) combines features of both FEM and the discrete element method (DEM).

A-FEM Yang and Lui introduced the augmented finite element method (A-FEM), whose goal is to model weak and strong discontinuities without the extra degrees of freedom required by partition-of-unity methods (PuM).

CutFEM The cut finite element approach was developed in 2014. The approach is "to make the discretization as independent as possible of the geometric description and minimize the complexity of mesh generation, while retaining the accuracy and robustness of a standard finite element method."

Generalized finite element method The generalized finite element method (GFEM) uses local spaces consisting of functions, not necessarily polynomials, that reflect the available information on the unknown solution and thus ensure good local approximation. A partition of unity is then used to "bond" these spaces together to form the approximating subspace. The effectiveness of GFEM has been shown when applied to problems with domains having complicated boundaries, problems with micro-scales, and problems with boundary layers.

Mixed finite element method The mixed finite element method is a type of finite element method in which extra independent variables are introduced as nodal variables during the discretization of a partial differential equation problem.

Variable – polynomial The hp-FEM adaptively combines elements with variable size h and polynomial degree p to achieve exceptionally fast, exponential convergence rates.

hpk-FEM The hpk-FEM adaptively combines elements with variable size h, polynomial degree of the local approximations p, and global differentiability of the local approximations (k−1) to achieve the best convergence rates.

XFEM The extended finite element method (XFEM) is a numerical technique based on the generalized finite element method (GFEM) and the partition of unity method (PUM). It extends the classical finite element method by enriching the solution space for solutions to differential equations with discontinuous functions. Extended finite element methods enrich the approximation space to naturally reproduce the challenging feature associated with the problem of interest: the discontinuity, singularity, boundary layer, etc. It was shown that for some problems, such an embedding of the problem's feature into the approximation space can significantly improve convergence rates and accuracy. Moreover, treating problems with discontinuities with XFEMs suppresses the need to mesh and re-mesh the discontinuity surfaces, thus alleviating the computational costs and projection errors associated with conventional finite element methods, at the cost of restricting the discontinuities to mesh edges. Several research codes implement this technique to various degrees: GetFEM++, xfem++, openxfem++. XFEM has also been implemented in codes like Altair Radioss, ASTER, Morfeo, and Abaqus. It is increasingly being adopted by other commercial finite element software, with a few plugins and actual core implementations available (ANSYS, SAMCEF, OOFELIE, etc.).
Scaled boundary finite element method (SBFEM) The scaled boundary finite element method (SBFEM) was introduced by Song and Wolf (1997). The SBFEM has been one of the most fruitful contributions in the area of numerical analysis of fracture mechanics problems. It is a semi-analytical, fundamental-solutionless method that combines the advantages of finite element formulations and procedures with boundary element discretization. Unlike the boundary element method, however, no fundamental differential solution is required.

S-FEM The S-FEM, smoothed finite element methods, is a particular class of numerical simulation algorithms for the simulation of physical phenomena. It was developed by combining mesh-free methods with the finite element method.

Spectral element method Spectral element methods combine the geometric flexibility of finite elements with the acute accuracy of spectral methods. Spectral methods approximate solutions of weak-form partial differential equations using high-order Lagrangian interpolants and are used only with certain quadrature rules.

Meshfree methods

Discontinuous Galerkin methods

Finite element limit analysis

Stretched grid method

Loubignac iteration Loubignac iteration is an iterative method in finite element methods.

Crystal plasticity finite element method (CPFEM) The crystal plasticity finite element method (CPFEM) is an advanced numerical tool developed by Franz Roters. Metals can be regarded as crystal aggregates, which behave anisotropically under deformation, exhibiting, for example, abnormal stress and strain localization. CPFEM, based on slip (the shear strain rate), can calculate dislocation activity, crystal orientation, and other texture information to account for crystal anisotropy during the computation. It has been applied in numerical studies of material deformation, surface roughness, fracture, and so on.

Virtual element method (VEM) The virtual element method (VEM), introduced by Beirão da Veiga et al. (2013) as an extension of mimetic finite difference (MFD) methods, is a generalization of the standard finite element method for arbitrary element geometries. This allows admission of general polygons (or polyhedra in 3D) that are highly irregular and non-convex in shape. The name virtual derives from the fact that knowledge of the local shape function basis is not required and is, in fact, never explicitly calculated.

Link with the gradient discretization method Some types of finite element methods (conforming, nonconforming, mixed finite element methods) are particular cases of the gradient discretization method (GDM). Hence, the convergence properties of the GDM, which are established for a series of problems (linear and nonlinear elliptic problems; linear, nonlinear, and degenerate parabolic problems), hold as well for these particular FEMs.

Comparison to the finite difference method The finite difference method (FDM) is an alternative way of approximating solutions of PDEs. The differences between FEM and FDM are: The most attractive feature of the FEM is its ability to handle complicated geometries (and boundaries) with relative ease. While FDM in its basic form is restricted to handling rectangular shapes and simple alterations thereof, the handling of geometries in FEM is theoretically straightforward. FDM is not usually used for irregular CAD geometries but more often for rectangular or block-shaped models. FEM generally allows for more flexible mesh adaptivity than FDM. The most attractive feature of finite differences is that it is straightforward to implement, as the short sketch below suggests.
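As a counterpart to the FEM sketch earlier, the same model problem P1 takes only a few lines with finite differences. This is again a minimal sketch assuming NumPy/SciPy, with the function name fdm_p1 invented for the example:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def fdm_p1(f, n):
    """Second-order finite differences for u''(x) = f(x), u(0) = u(1) = 0.

    Replaces u''(x_j) with the three-point stencil
    (u_{j-1} - 2 u_j + u_{j+1}) / h^2 at each interior node.
    """
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2
    return x, spsolve(A, f(x))

x, u = fdm_p1(lambda x: np.ones_like(x), n=99)
print(np.max(np.abs(u - x * (x - 1) / 2)))  # machine precision for this load
```

On this uniform mesh the FDM matrix is, up to a factor of h, the same tridiagonal stiffness matrix assembled in the FEM sketch above, a concrete instance of the equivalence described next.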
One could consider the FDM a particular case of the FEM approach in several ways. For example, first-order FEM is identical to FDM for Poisson's equation if the problem is discretized by a regular rectangular mesh with each rectangle divided into two triangles. There are reasons to consider the mathematical foundation of the finite element approximation more sound, for instance, because the quality of the approximation between grid points is poor in FDM. The quality of a FEM approximation is often higher than in the corresponding FDM approach, but this is highly problem-dependent, and several examples to the contrary can be provided. Generally, FEM is the method of choice in all types of analysis in structural mechanics (i.e., solving for deformation and stresses in solid bodies or dynamics of structures). In contrast, computational fluid dynamics (CFD) tends to use FDM or other methods like the finite volume method (FVM). CFD problems usually require discretization of the problem into a large number of cells/gridpoints (millions and more); therefore, the cost of the solution favors simpler, lower-order approximation within each cell. This is especially true for 'external flow' problems, like airflow around the car or airplane, or weather simulation.

Finite element and fast Fourier transform (FFT) methods Another method used for approximating solutions to a partial differential equation is the fast Fourier transform (FFT), where the solution is approximated by a Fourier series computed using the FFT. For approximating the mechanical response of materials under stress, FFT is often much faster, but FEM may be more accurate. One example of the respective advantages of the two methods is the simulation of rolling a sheet of aluminum (an FCC metal) and drawing a wire of tungsten (a BCC metal). This simulation did not have a sophisticated shape-update algorithm for the FFT method. In both cases, the FFT method was more than 10 times as fast as FEM, but in the wire-drawing simulation, where there were large deformations in grains, the FEM method was much more accurate. In the sheet-rolling simulation, the results of the two methods were similar. FFT has a larger speed advantage in cases where the boundary conditions are given in terms of the material's strain, and it loses some of its efficiency in cases where the stress is used to apply the boundary conditions, as more iterations of the method are needed. The FE and FFT methods can also be combined in a voxel-based method to simulate deformation in materials, where the FE method is used for the macroscale stress and deformation, and the FFT method is used on the microscale to deal with the effects of the microscale on the mechanical response. Unlike FEM, the similarity of FFT methods to image-processing methods means that an actual image of the microstructure from a microscope can be input to the solver to get a more accurate stress response. Using a real image with FFT avoids meshing the microstructure, which would be required if using FEM simulation of the microstructure, and might be difficult. Because Fourier approximations are inherently periodic, FFT can only be used in cases of periodic microstructure, but this is common in real materials. FFT can also be combined with FEM methods by using Fourier components as the variational basis for approximating the fields inside an element, which can take advantage of the speed of FFT-based solvers.
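In its simplest setting, the FFT approach solves a periodic Poisson problem mode by mode: differentiation becomes multiplication in Fourier space, so each coefficient is recovered by a single division. Below is a minimal sketch, assuming NumPy and a zero-mean periodic load; the grid size and test function are arbitrary choices:

```python
import numpy as np

def fft_poisson_periodic(f_vals, length=1.0):
    """Spectral solution of u''(x) = f(x) with periodic boundary conditions.

    In Fourier space the equation decouples: -k^2 * u_hat[k] = f_hat[k],
    so each mode is solved by one division. f must have zero mean; the
    zero mode of u is set to 0 (the solution is unique up to a constant).
    """
    n = len(f_vals)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    f_hat = np.fft.fft(f_vals)
    u_hat = np.zeros_like(f_hat)
    u_hat[1:] = -f_hat[1:] / k[1:] ** 2
    return np.fft.ifft(u_hat).real

# Check: f(x) = -(2*pi)^2 sin(2*pi*x) has the exact solution u(x) = sin(2*pi*x).
x = np.linspace(0.0, 1.0, 256, endpoint=False)
f = -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x)
print(np.max(np.abs(fft_poisson_periodic(f) - np.sin(2 * np.pi * x))))  # ~1e-15
```

For a smooth periodic load the answer is accurate to machine precision on a coarse grid, which illustrates the speed advantage the section above describes; the restriction to periodic boundary conditions is equally visible.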
Application Various specializations under the umbrella of the mechanical engineering discipline (such as the aeronautical, biomechanical, and automotive industries) commonly use integrated FEM in the design and development of their products. Several modern FEM packages include specific components such as thermal, electromagnetic, fluid, and structural working environments. In a structural simulation, FEM helps tremendously in producing stiffness and strength visualizations and in minimizing weight, materials, and costs. FEM allows detailed visualization of where structures bend or twist, indicating the distribution of stresses and displacements. FEM software provides a wide range of simulation options for controlling the complexity of modeling and system analysis. Similarly, the desired level of accuracy and the associated computational time requirements can be managed simultaneously to address most engineering applications. FEM allows entire designs to be constructed, refined, and optimized before the design is manufactured. The mesh is an integral part of the model and must be controlled carefully to give the best results. Generally, the higher the number of elements in a mesh, the more accurate the solution of the discretized problem. However, there is a value at which the results converge, and further mesh refinement does not increase accuracy.

This powerful design tool has significantly improved both the standard of engineering designs and the design process methodology in many industrial applications. The introduction of FEM has substantially decreased the time needed to take products from concept to the production line. Testing and development have been accelerated primarily through improved initial prototype designs using FEM. In summary, the benefits of FEM include increased accuracy, enhanced design and better insight into critical design parameters, virtual prototyping, fewer hardware prototypes, a faster and less expensive design cycle, increased productivity, and increased revenue.

In the 1990s, FEM was proposed for use in stochastic modeling for numerically solving probability models, and later for reliability assessment.

FEM is widely applied for approximating differential equations that describe physical systems. The method is very popular in the computational fluid dynamics community, and there are many applications for solving the Navier–Stokes equations with FEM. Recently, the application of FEM has been increasing in computational plasma research; promising numerical results using FEM for magnetohydrodynamics, the Vlasov equation, and the Schrödinger equation have been reported.

See also Applied element method Boundary element method Céa's lemma Computer experiment Direct stiffness method Discontinuity layout optimization Discrete element method Finite difference method Finite element machine Finite element method in structural mechanics Finite volume method Finite volume method for unsteady flow Infinite element method Interval finite element Isogeometric analysis Lattice Boltzmann methods List of finite element software packages Meshfree methods Movable cellular automaton Multidisciplinary design optimization Multiphysics Patch test Rayleigh–Ritz method Space mapping STRAND7 Tessellation (computer graphics) Weakened weak form

References Further reading G. Allaire and A. Craig: Numerical Analysis and Optimization: An Introduction to Mathematical Modelling and Numerical Simulation. K. J. Bathe: Numerical Methods in Finite Element Analysis, Prentice-Hall (1976). Thomas J.R.
Hughes: The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Prentice-Hall (1987). J. Chaskalovic: Finite Elements Methods for Engineering Sciences, Springer Verlag (2008). Endre Süli: Finite Element Methods for Partial Differential Equations. O. C. Zienkiewicz, R. L. Taylor, J. Z. Zhu: The Finite Element Method: Its Basis and Fundamentals, Butterworth-Heinemann (2005). N. Ottosen, H. Petersson: Introduction to the Finite Element Method, Prentice-Hall (1992). Susanne C. Brenner, L. Ridgway Scott: The Mathematical Theory of Finite Element Methods, Springer-Verlag New York, ISBN 978-0-387-75933-3 (2008). Zohdi, T. I. (2018): A Finite Element Primer for Beginners – Extended Version Including Sample Tests and Projects, Second Edition, https://link.springer.com/book/10.1007/978-3-319-70428-9 Leszek F. Demkowicz: Mathematical Theory of Finite Elements, SIAM, ISBN 978-1-61197-772-1 (2024). Continuum mechanics Numerical differential equations Partial differential equations Structural analysis Computational electromagnetics Canadian inventions Russian inventions
Finite element method
[ "Physics", "Engineering" ]
7,357
[ "Structural engineering", "Computational electromagnetics", "Continuum mechanics", "Structural analysis", "Classical mechanics", "Computational physics", "Mechanical engineering", "Aerospace engineering" ]
18,242,141
https://en.wikipedia.org/wiki/List%20of%20quasiparticles
This is a list of quasiparticles and collective excitations used in condensed matter physics. List References Physics-related lists
List of quasiparticles
[ "Physics", "Materials_science" ]
42
[ "Quasiparticles", "Subatomic particles", "Condensed matter physics", "Matter" ]
18,243,381
https://en.wikipedia.org/wiki/Arithmetic%20combinatorics
In mathematics, arithmetic combinatorics is a field in the intersection of number theory, combinatorics, ergodic theory, and harmonic analysis.

Scope Arithmetic combinatorics is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive combinatorics is the special case when only the operations of addition and subtraction are involved. Ben Green explains arithmetic combinatorics in his review of "Additive Combinatorics" by Tao and Vu.

Important results

Szemerédi's theorem Szemerédi's theorem is a result in arithmetic combinatorics concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. This conjecture, which became Szemerédi's theorem, generalizes the statement of van der Waerden's theorem.

Green–Tao theorem and extensions The Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, there exist arithmetic progressions of primes with k terms, where k can be any natural number. The proof is an extension of Szemerédi's theorem. In 2006, Terence Tao and Tamar Ziegler extended the result to cover polynomial progressions. More precisely, given any integer-valued polynomials P1, ..., Pk in one unknown m, all with constant term 0, there are infinitely many integers x, m such that x + P1(m), ..., x + Pk(m) are simultaneously prime. The special case when the polynomials are m, 2m, ..., km implies the previous result that there are length-k arithmetic progressions of primes.

Breuillard–Green–Tao theorem The Breuillard–Green–Tao theorem, proved by Emmanuel Breuillard, Ben Green, and Terence Tao in 2011, gives a complete classification of approximate groups. This result can be seen as a nonabelian version of Freiman's theorem and a generalization of Gromov's theorem on groups of polynomial growth.

Example If A is a set of N integers, how large or small can the sumset $A + A = \{x + y : x, y \in A\}$, the difference set $A - A = \{x - y : x, y \in A\}$, and the product set $A \cdot A = \{xy : x, y \in A\}$ be, and how are the sizes of these sets related? (Not to be confused: the terms difference set and product set can have other meanings.)

Extensions The sets being studied may also be subsets of algebraic structures other than the integers, for example, groups, rings, and fields.

See also Additive number theory Additive combinatorics Approximate group Corners theorem Ergodic Ramsey theory Problems involving arithmetic progressions Schnirelmann density Shapley–Folkman lemma Sidon set Sum-free set Restricted sumset Sum-product phenomenon

Notes References Additive Combinatorics and Theoretical Computer Science, Luca Trevisan, SIGACT News, June 2009 Open problems in additive combinatorics, E. Croot, V. Lev From Rotating Needles to Stability of Waves: Emerging Connections between Combinatorics, Analysis, and PDE, Terence Tao, AMS Notices, March 2001 Further reading Some Highlights of Arithmetic Combinatorics, resources by Terence Tao Additive Combinatorics: Winter 2007, K. Soundararajan Earliest Connections of Additive Combinatorics and Computer Science, Luca Trevisan Additive number theory Sumsets Harmonic analysis Ergodic theory Additive combinatorics
Arithmetic combinatorics
[ "Mathematics" ]
724
[ "Additive combinatorics", "Ergodic theory", "Combinatorics", "Sumsets", "Dynamical systems" ]
18,246,628
https://en.wikipedia.org/wiki/Entropy%20estimation
In various science/engineering applications, such as independent component analysis, image analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation, it is useful to estimate the differential entropy of a system or process, given some observations. The simplest and most common approach uses histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks. The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate, although the nature of the (suspected) distribution of the data may also be a factor, as well as the sample size and the size of the alphabet of the probability distribution.

Histogram estimator The histogram approach uses the idea that the differential entropy of a probability distribution for a continuous random variable $X$ can be approximated by first approximating its density with a histogram of the observations, and then finding the discrete entropy of a quantization of $X$ with bin probabilities given by that histogram; the differential entropy is then estimated as $\hat H = -\sum_i \hat p_i \ln(\hat p_i / w_i)$, where $\hat p_i$ is the estimated probability of the $i$th bin and $w_i$ is the width of the $i$th bin. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory; a minimal code sketch of this estimator is given below. A method better suited for multidimensional probability density functions (pdfs) is to first make a pdf estimate with some method and then, from the pdf estimate, compute the entropy. A useful pdf estimation method is, e.g., Gaussian mixture modeling (GMM), where the expectation maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdfs approximating the data pdf.

Estimates based on sample-spacings If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives us a rough idea of (the reciprocal of) the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with high variance, but it can be improved, for example by thinking about the space between a given value and the one m away from it, where m is some fixed number. The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight tweaks. One of the main drawbacks of this approach is going beyond one dimension: the idea of lining the data points up in order falls apart in more than one dimension. However, using analogous methods, some multidimensional entropy estimators have been developed.

Estimates based on nearest-neighbours For each point in our dataset, we can find the distance to its nearest neighbour. We can in fact estimate the entropy from the distribution of the nearest-neighbour distances of our datapoints. (In a uniform distribution these distances all tend to be fairly similar, whereas in a strongly nonuniform distribution they may vary a lot more.)

Bayesian estimator In the under-sampled regime, having a prior on the distribution can help the estimation. One such Bayesian estimator was proposed in the neuroscience context, known as the NSB (Nemenman–Shafee–Bialek) estimator.
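The sketch referenced above makes the histogram estimator concrete: it forms the bin probabilities, takes the discrete entropy, and applies the bin-width correction. This is a rough illustration, assuming NumPy; the bin count and the Gaussian test data are arbitrary choices, and the estimator's bias is visible for small samples.

```python
import numpy as np

def histogram_entropy(samples, bins=32):
    """Naive histogram estimate of the differential entropy H = -E[log p(X)].

    Approximates the density with a histogram, takes the discrete entropy of
    the bin probabilities, and adds the log bin-width term to convert back to
    a differential-entropy estimate. Empty bins contribute nothing.
    """
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    nz = p > 0                                    # ignore empty bins
    return -np.sum(p[nz] * np.log(p[nz])) + np.sum(p[nz] * np.log(widths[nz]))

# Standard normal: the true differential entropy is 0.5*log(2*pi*e) ~ 1.4189.
rng = np.random.default_rng(0)
print(histogram_entropy(rng.normal(size=100_000)))  # close to 1.4189, slightly biased
```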
The NSB estimator uses a mixture-of-Dirichlets prior, chosen such that the induced prior over the entropy is approximately uniform.

Estimates based on expected entropy Another approach to the problem of entropy evaluation is to compare the expected entropy of a sample of a random sequence with the calculated entropy of the sample. The method gives very accurate results, but it is limited to calculations of random sequences modeled as first-order Markov chains with small values of bias and correlations. It is the first known method that takes into account the size of the sample sequence and its impact on the accuracy of the entropy calculation.

Deep neural network estimator A deep neural network (DNN) can be used to estimate the joint entropy; such a model is called a neural joint entropy estimator (NJEE). In practice, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. For example, in an image classification task, the NJEE maps a vector of pixel values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods in the case of large alphabet sizes.

References Entropy and information Information theory Statistical randomness Random number generation
Entropy estimation
[ "Physics", "Mathematics", "Technology", "Engineering" ]
1,025
[ "Telecommunications engineering", "Physical quantities", "Applied mathematics", "Entropy and information", "Computer science", "Entropy", "Information theory", "Dynamical systems" ]
19,248,693
https://en.wikipedia.org/wiki/Macromolecular%20crowding
The phenomenon of macromolecular crowding alters the properties of molecules in a solution when high concentrations of macromolecules such as proteins are present. Such conditions occur routinely in living cells; for instance, the cytosol of Escherichia coli contains about 300–400 mg/ml of macromolecules. Crowding occurs since these high concentrations of macromolecules reduce the volume of solvent available for other molecules in the solution, which has the result of increasing their effective concentrations. Crowding can promote formation of a biomolecular condensate by colloidal phase separation. This crowding effect can make molecules in cells behave in radically different ways than in test-tube assays. Consequently, measurements of the properties of enzymes or processes in metabolism that are made in the laboratory (in vitro) in dilute solutions may be different by many orders of magnitude from the true values seen in living cells (in vivo). The study of biochemical processes under realistically crowded conditions is very important, since these conditions are a ubiquitous property of all cells and crowding may be essential for the efficient operation of metabolism. Indeed, in vitro studies have shown that crowding greatly influences binding stability of proteins to DNA. Cause and effects The interior of cells is a crowded environment. For example, an Escherichia coli cell is only about 2 micrometres (μm) long and 0.5 μm in diameter, with a cell volume of 0.6 - 0.7 μm3. However, E. coli can contain up to 4,288 different types of proteins, and about 1,000 of these types are produced at a high enough level to be easily detected. Added to this mix are various forms of RNA and the cell's DNA chromosome, giving a total concentration of macromolecules of between 300 and 400 mg/ml. In eukaryotes the cell's interior is further crowded by the protein filaments that make up the cytoskeleton, this meshwork divides the cytosol into a network of narrow pores. These high concentrations of macromolecules occupy a large proportion of the volume of the cell, which reduces the volume of solvent that is available for other macromolecules. This excluded volume effect increases the effective concentration of macromolecules (increasing their chemical activity), which in turn alters the rates and equilibrium constants of their reactions. In particular this effect alters dissociation constants by favoring the association of macromolecules, such as when multiple proteins come together to form protein complexes, or when DNA-binding proteins bind to their targets in the genome. Crowding may also affect enzyme reactions involving small molecules if the reaction involves a large change in the shape of the enzyme. The size of the crowding effect depends on both the molecular mass and shape of the molecule involved, although mass seems to be the major factor – with the effect being stronger with larger molecules. Notably, the size of the effect is non-linear, so macromolecules are much more strongly affected than are small molecules such as amino acids or simple sugars. Macromolecular crowding is therefore an effect exerted by large molecules on the properties of other large molecules. Importance Macromolecular crowding is an important effect in biochemistry and cell biology. For example, the increase in the strength of interactions between proteins and DNA produced by crowding may be of key importance in processes such as transcription and DNA replication. 
Crowding has also been suggested to be involved in processes as diverse as the aggregation of hemoglobin in sickle-cell disease and the responses of cells to changes in their volume. The importance of crowding in protein folding is of particular interest in biophysics. Here, the crowding effect can accelerate the folding process, since a compact folded protein will occupy less volume than an unfolded protein chain. However, crowding can reduce the yield of correctly folded protein by increasing protein aggregation. Crowding may also increase the effectiveness of chaperone proteins such as GroEL in the cell, which could counteract this reduction in folding efficiency. It has also been shown that macromolecular crowding affects protein-folding dynamics as well as overall protein shape: distinct conformational changes are accompanied by secondary-structure alterations, implying that crowding-induced shape changes may be important for protein function and malfunction in vivo.

A particularly striking example of the importance of crowding effects involves the crystallins that fill the interior of the lens. These proteins have to remain stable and in solution for the lens to be transparent; precipitation or aggregation of crystallins causes cataracts. Crystallins are present in the lens at extremely high concentrations, over 500 mg/ml, and at these levels crowding effects are very strong. The large crowding effect adds to the thermal stability of the crystallins, increasing their resistance to denaturation. This effect may partly explain the extraordinary resistance shown by the lens to damage caused by high temperatures. Crowding may also play a role in diseases that involve protein aggregation, such as sickle-cell anemia, where mutant hemoglobin forms aggregates, and Alzheimer's disease, where tau protein forms neurofibrillary tangles under crowded conditions within neurons.

Study Due to macromolecular crowding, enzyme assays and biophysical measurements performed in dilute solution may fail to reflect the actual process and its kinetics taking place in the cytosol. One approach to produce more accurate measurements would be to use highly concentrated extracts of cells, to try to maintain the cell contents in a more natural state. However, such extracts contain many kinds of biologically active molecules, which can interfere with the phenomena being studied. Consequently, crowding effects are mimicked in vitro by adding high concentrations of relatively inert molecules such as polyethylene glycol, ficoll, dextran, or serum albumin to experimental media. However, using such artificial crowding agents can be complicated, as these crowding molecules can sometimes interact in other ways with the process being examined, such as by binding weakly to one of the components.

Macromolecular crowding and protein folding A major importance of macromolecular crowding to biological systems stems from its effect on protein folding. The underlying physical mechanism by which macromolecular crowding helps to stabilize proteins in their folded state is often explained in terms of excluded volume: the volume inaccessible to the proteins due to their interaction with macromolecular crowders. This notion goes back to Asakura and Oosawa, who described depletion forces induced by steric, hard-core interactions. A hallmark of the mechanism inferred from the above is that the effect is completely a-thermal, and thus completely entropic.
These ideas were also proposed to explain why small cosolutes, namely protective osmolytes, which are preferentially excluded from proteins, also shift the protein folding equilibrium towards the folded state. However, it has been shown by various methods, both experimental and theoretical, that depletion forces are not always entropic in nature. See also Ideal solution Colligative properties References External links Physical chemistry Tissue engineering Protein methods Biophysics
Macromolecular crowding
[ "Physics", "Chemistry", "Engineering", "Biology" ]
1,484
[ "Biochemistry methods", "Biological engineering", "Applied and interdisciplinary physics", "Cloning", "Chemical engineering", "Protein methods", "Protein biochemistry", "Biophysics", "Tissue engineering", "nan", "Physical chemistry", "Medical technology" ]
19,248,758
https://en.wikipedia.org/wiki/Hadamard%E2%80%93Rybczynski%20equation
In fluid dynamics, the Hadamard–Rybczynski equation gives the terminal velocity of a slowly moving spherical bubble through an ambient fluid. It is named after Jacques Hadamard and Witold Rybczynski:

$$V_b = \frac{2}{3}\,\frac{R^2 g\,(\rho_f - \rho_b)}{\mu_f}\;\frac{\mu_f + \mu_b}{2\mu_f + 3\mu_b},$$

where $R$ is the radius of the bubble, $g$ the gravitational acceleration, $\rho_b$ the density of the bubble, $\rho_f$ the density of the ambient fluid, $\mu_b$ the viscosity of the bubble, $\mu_f$ the viscosity of the ambient fluid, and $V_b$ the resultant velocity of the bubble.

The Hadamard–Rybczynski equation can be derived from the Navier–Stokes equations by considering only the buoyancy force and drag force acting on the moving bubble. The surface tension force and inertia force of the bubble are neglected.

References Further reading Fluid dynamics Bubbles (physics) Equations of fluid dynamics Buoyancy
Hadamard–Rybczynski equation
[ "Physics", "Chemistry", "Engineering" ]
163
[ "Equations of fluid dynamics", "Equations of physics", "Bubbles (physics)", "Foams", "Chemical engineering", "Piping", "Fluid dynamics" ]
19,251,434
https://en.wikipedia.org/wiki/Prestressed%20structure
In structural engineering, a prestressed structure is a load-bearing structure whose overall integrity, stability and security depend, primarily, on prestressing: the intentional creation of permanent stresses in the structure for the purpose of improving its performance under various service conditions. The basic types of prestressing are: Precompression with mostly the structure's own weight Pre-tensioning with high-strength embedded tendons Post-tensioning with high-strength bonded or unbonded tendons Today, the concept of a prestressed structure is widely employed in the design of buildings, underground structures, TV towers, power stations, floating storage and offshore facilities, nuclear reactor vessels, and numerous bridge systems. It is especially prominent in construction using concrete (see pre-stressed concrete). The idea of precompression was apparently familiar to ancient Roman architects. The tall attic wall of the Colosseum works as a stabilizing device for the wall piers beneath it. References Construction Structural engineering
Prestressed structure
[ "Engineering" ]
204
[ "Construction", "Civil engineering", "Structural engineering" ]
19,254,708
https://en.wikipedia.org/wiki/Double%20tangent%20bundle
In mathematics, particularly differential topology, the double tangent bundle or the second tangent bundle refers to the tangent bundle of the total space TM of the tangent bundle of a smooth manifold M . A note on notation: in this article, we denote projection maps by their domains, e.g., πTTM : TTM → TM. Some authors index these maps by their ranges instead, so for them, that map would be written πTM. The second tangent bundle arises in the study of connections and second order ordinary differential equations, i.e., (semi)spray structures on smooth manifolds, and it is not to be confused with the second order jet bundle. Secondary vector bundle structure and canonical flip Since is a vector bundle in its own right, its tangent bundle has the secondary vector bundle structure where is the push-forward of the canonical projection In the following we denote and apply the associated coordinate system on TM. Then the fibre of the secondary vector bundle structure at X∈TxM takes the form The double tangent bundle is a double vector bundle. The canonical flip is a smooth involution j:TTM→TTM that exchanges these vector space structures in the sense that it is a vector bundle isomorphism between and In the associated coordinates on TM it reads as The canonical flip has the property that for any f: R2 → M, where s and t are coordinates of the standard basis of R 2. Note that both partial derivatives are functions from R2 to TTM. This property can, in fact, be used to give an intrinsic definition of the canonical flip. Indeed, there is a submersion p: J20 (R2,M) → TTM given by where p can be defined in the space of two-jets at zero because only depends on f up to order two at zero. We consider the application: where α(s,t)= (t,s). Then J is compatible with the projection p and induces the canonical flip on the quotient TTM. Canonical tensor fields on the tangent bundle As for any vector bundle, the tangent spaces of the fibres TxM of the tangent bundle can be identified with the fibres TxM themselves. Formally this is achieved through the vertical lift, which is a natural vector space isomorphism defined as The vertical lift can also be seen as a natural vector bundle isomorphism from the pullback bundle of over onto the vertical tangent bundle The vertical lift lets us define the canonical vector field which is smooth in the slit tangent bundle TM\0. The canonical vector field can be also defined as the infinitesimal generator of the Lie-group action Unlike the canonical vector field, which can be defined for any vector bundle, the canonical endomorphism is special to the tangent bundle. The canonical endomorphism J satisfies and it is also known as the tangent structure for the following reason. If (E,p,M) is any vector bundle with the canonical vector field V and a (1,1)-tensor field J that satisfies the properties listed above, with VE in place of VTM, then the vector bundle (E,p,M) is isomorphic to the tangent bundle of the base manifold, and J corresponds to the tangent structure of TM in this isomorphism. There is also a stronger result of this kind which states that if N is a 2n-dimensional manifold and if there exists a (1,1)-tensor field J on N that satisfies then N is diffeomorphic to an open set of the total space of a tangent bundle of some n-dimensional manifold M, and J corresponds to the tangent structure of TM in this diffeomorphism. 
In any associated coordinate system on TM the canonical vector field and the canonical endomorphism have the coordinate representations

$$V = \xi^k \frac{\partial}{\partial \xi^k}, \qquad J = \frac{\partial}{\partial \xi^k} \otimes dx^k.$$

(Semi)spray structures A semispray structure on a smooth manifold M is by definition a smooth vector field H on TM\0 such that JH = V. An equivalent definition is that j(H) = H, where j: TTM → TTM is the canonical flip. A semispray H is a spray if, in addition, [V,H] = H. Spray and semispray structures are invariant versions of second-order ordinary differential equations on M. The difference between spray and semispray structures is that the solution curves of sprays are invariant under positive reparametrizations as point sets on M, whereas solution curves of semisprays typically are not.

Nonlinear covariant derivatives on smooth manifolds The canonical flip makes it possible to define nonlinear covariant derivatives on smooth manifolds as follows. Let there be given an Ehresmann connection on the slit tangent bundle TM\0 and consider the mapping

$$D_X Y := (\kappa \circ j \circ Y_*)(X),$$

where Y*: TM → TTM is the push-forward, j: TTM → TTM is the canonical flip, and κ: T(TM\0) → TM\0 is the connector map. The mapping D_X is a derivation in the module Γ(TM) of smooth vector fields on M in the sense that

$D_X(\alpha Y + \beta Z) = \alpha D_X Y + \beta D_X Z$ for $\alpha, \beta \in \mathbb{R}$;

$D_X(fY) = X[f]\,Y + f\,D_X Y$ for $f \in C^\infty(M)$.

Any mapping D_X with these properties is called a (nonlinear) covariant derivative on M. The term nonlinear refers to the fact that this kind of covariant derivative D_X is not necessarily linear with respect to the direction X ∈ TM\0 of the differentiation. Looking at the local representations, one can confirm that the Ehresmann connections on (TM\0, π_{TM\0}, M) and nonlinear covariant derivatives on M are in one-to-one correspondence. Furthermore, if D_X is linear in X, then the Ehresmann connection is linear in the secondary vector bundle structure, and D_X coincides with its linear covariant derivative.

See also Spray (mathematics) Secondary vector bundle structure Finsler manifold References Differential geometry Topology
Double tangent bundle
[ "Physics", "Mathematics" ]
1,212
[ "Spacetime", "Topology", "Space", "Geometry" ]
3,366,880
https://en.wikipedia.org/wiki/Las%20Campanas%20Redshift%20Survey
The Las Campanas Redshift Survey is considered the first attempt to map a large area of the universe out to a redshift of z = 0.2. It was begun in 1991, using the Las Campanas telescope in Chile, and cataloged 26,418 separate galaxies. It is considered one of the first surveys to document the so-called "end of greatness", the scale at which the universe begins to appear homogeneous and isotropic, in accordance with the cosmological principle. Superclusters and voids are prominent features in the survey. See also 2dF Galaxy Redshift Survey Sloan Digital Sky Survey References Observational astronomy Astronomical surveys
Las Campanas Redshift Survey
[ "Astronomy" ]
125
[ "Astronomical surveys", "Observational astronomy", "Works about astronomy", "Astronomy stubs", "Astronomical catalogue stubs", "Astronomical objects", "Astronomical sub-disciplines" ]
3,368,914
https://en.wikipedia.org/wiki/Beta-2%20microglobulin
β2 microglobulin (B2M) is a component of MHC class I molecules. MHC class I molecules have α1, α2, and α3 proteins and are present on all nucleated cells (thereby excluding red blood cells). In humans, the β2 microglobulin protein is encoded by the B2M gene.

Structure and function β2 microglobulin lies beside the α3 chain on the cell surface. Unlike α3, β2 has no transmembrane region. Directly above β2 (that is, further away from the cell) lies the α1 chain, which itself is next to the α2. β2 microglobulin associates not only with the alpha chain of MHC class I molecules but also with class I-like molecules such as CD1 (5 genes in humans), MR1, the neonatal Fc receptor (FcRn), and Qa-1 (a form of alloantigen). Nevertheless, the β2 microglobulin gene lies outside the MHC (HLA) locus, on a different chromosome. An additional function is association with the HFE protein, together regulating the expression of hepcidin in the liver; hepcidin targets the iron transporter ferroportin on the basolateral membrane of enterocytes and on the cell membrane of macrophages for degradation, resulting in decreased iron uptake from food and decreased iron release from recycled red blood cells in the mononuclear phagocyte system (MPS), respectively. Loss of this function causes iron excess and hemochromatosis. In a cytomegalovirus infection, a viral protein binds to β2 microglobulin, preventing assembly of MHC class I molecules and their transport to the plasma membrane. Mouse models deficient for the β2 microglobulin gene have been engineered. These mice demonstrate that β2 microglobulin is necessary for cell surface expression of MHC class I and stability of the peptide-binding groove. In fact, in the absence of β2 microglobulin, very limited amounts of MHC class I (classical and non-classical) molecules can be detected on the surface (bare lymphocyte syndrome, or BLS). In the absence of MHC class I, CD8+ T cells cannot develop. (CD8+ T cells are a subset of T cells involved in the development of acquired immunity.)

Clinical significance In patients on long-term hemodialysis, β2 microglobulin can aggregate into amyloid fibers that deposit in joint spaces, a disease known as dialysis-related amyloidosis. Low levels of β2 microglobulin can indicate non-progression of HIV. Levels of β2 microglobulin can be elevated in multiple myeloma and lymphoma, though in these cases primary amyloidosis (amyloid light chain) and secondary amyloidosis (amyloid associated protein) are more common. The normal value of β2 microglobulin is < 2 mg/L. However, with respect to multiple myeloma, the levels of β2 microglobulin may also be at the other end of the spectrum. Diagnostic testing for multiple myeloma includes obtaining the β2 microglobulin level, because this level is an important prognostic indicator. A patient with a level < 4 mg/L is expected to have a median survival of 43 months, while one with a level > 4 mg/L has a median survival of only 12 months. β2 microglobulin levels cannot, however, distinguish between monoclonal gammopathy of undetermined significance (MGUS), which has a better prognosis, and smouldering (low-grade) myeloma. Loss-of-function mutations in this gene have been reported in cancer patients unresponsive to immunotherapies.

Virus relevance β2 microglobulin has been shown to be important for the viral entry of Coxsackievirus A9 and Vaccinia virus (a poxvirus).
For Coxsackievirus A9, β2 microglobulin is likely required for transporting the identified receptor, the human neonatal Fc receptor (FcRn), to the plasma membrane. Its specific role in Vaccinia virus entry has not yet been elucidated. References Further reading External links Proteins Immune system
Beta-2 microglobulin
[ "Chemistry", "Biology" ]
909
[ "Biomolecules by chemical classification", "Immune system", "Organ systems", "Molecular biology", "Proteins" ]
3,369,465
https://en.wikipedia.org/wiki/Ferromagnetic%20resonance
Ferromagnetic resonance, or FMR, is the coupling between an electromagnetic wave and the magnetization of a medium through which it passes. This coupling induces a significant loss of the wave's power. The power is absorbed by the precessing magnetization (Larmor precession) of the material and lost as heat. For this coupling to occur, the frequency of the incident wave must equal the precession frequency of the magnetization (Larmor frequency), and the polarization of the wave must match the orientation of the magnetization.

This effect can be used for various applications, such as spectroscopic techniques or the design of microwave devices. The FMR spectroscopic technique is used to probe the magnetization of ferromagnetic materials. It is a standard tool for probing spin waves and spin dynamics. FMR is very broadly similar to electron paramagnetic resonance (EPR) and also somewhat similar to nuclear magnetic resonance (NMR), except that FMR probes the sample magnetization resulting from the magnetic moments of dipolar-coupled but unpaired electrons, while NMR probes the magnetic moment of atomic nuclei that are screened by the atomic or molecular orbitals surrounding such nuclei of non-zero nuclear spin. Ferromagnetic resonance is also the basis of various high-frequency electronic devices, such as resonance isolators or circulators.

History Ferromagnetic resonance was experimentally discovered by V. K. Arkad'yev when he observed the absorption of UHF radiation by ferromagnetic materials in 1911. A qualitative explanation of FMR, along with an explanation of Arkad'yev's results, was offered by Ya. G. Dorfman in 1923, when he suggested that the optical transitions due to Zeeman splitting could provide a way to study ferromagnetic structure. A 1935 paper published by Lev Landau and Evgeny Lifshitz predicted the existence of ferromagnetic resonance of the Larmor precession, which was independently verified in experiments by J. H. E. Griffiths (UK) and E. K. Zavoiskij (USSR) in 1946.

Description FMR arises from the precessional motion of the (usually quite large) magnetization of a ferromagnetic material in an external magnetic field $\vec{H}$. The magnetic field exerts a torque on the sample magnetization, which causes the magnetic moments in the sample to precess. The precession frequency of the magnetization depends on the orientation of the material and the strength of the magnetic field, as well as on the macroscopic magnetization of the sample; the effective precession frequency of the ferromagnet is much lower than the precession frequency observed for free electrons in EPR. Moreover, linewidths of absorption peaks can be greatly affected both by dipolar broadening and by exchange narrowing, a quantum effect. Furthermore, not all absorption peaks observed in FMR are caused by the precession of the magnetic moments of electrons in the ferromagnet. Thus, the theoretical analysis of FMR spectra is far more complex than that of EPR or NMR spectra.

The basic setup for an FMR experiment is a microwave resonant cavity with an electromagnet. The resonant cavity is fixed at a frequency in the super-high-frequency band. A detector is placed at the end of the cavity to detect the microwaves. The magnetic sample is placed between the poles of the electromagnet, and the magnetic field is swept while the resonant absorption intensity of the microwaves is detected.
When the magnetization precession frequency and the resonant cavity frequency are the same, absorption increases sharply, which is indicated by a decrease in the intensity at the detector. Furthermore, the resonant absorption of microwave energy causes local heating of the ferromagnet. In samples whose local magnetic parameters vary on the nanometer scale, this effect is used for spatially resolved spectroscopic investigations. The resonant frequency of a film with the external field applied parallel to its plane is given by the Kittel formula (in CGS units): ω = γ√(H(H + 4πM)), where H is the applied field, M is the magnetization of the ferromagnet, and γ is the gyromagnetic ratio. See also Electron paramagnetic resonance Nuclear magnetic resonance
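As a worked example of the Kittel formula above, the following minimal Python sketch evaluates the in-plane resonance frequency in CGS units. The gyromagnetic ratio, field, and permalloy-like magnetization used below are illustrative assumptions, not values taken from the text.

import math

def kittel_frequency_hz(h_oe, m_emu_cm3, gamma=1.76e7):
    """In-plane Kittel resonance frequency, CGS units:
    omega = gamma * sqrt(H * (H + 4*pi*M)), with gamma in rad/(s*Oe)."""
    omega = gamma * math.sqrt(h_oe * (h_oe + 4 * math.pi * m_emu_cm3))
    return omega / (2 * math.pi)  # convert angular frequency to Hz

# Assumed illustrative values: H = 1000 Oe, M = 800 emu/cm^3 (permalloy-like)
f = kittel_frequency_hz(1000, 800)
print(f"resonance near {f / 1e9:.1f} GHz")  # roughly 9 GHz

For a free electron, gamma/(2*pi) is about 2.8 MHz/Oe, which is why FMR experiments at fields of a few kilo-oersteds fall in the microwave (GHz) band used by the resonant cavity described above.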
Ferromagnetic resonance
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
933
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Electric and magnetic fields in matter", "Materials science", "Magnetic ordering", "Condensed matter physics", "Spectroscopy" ]
3,372,103
https://en.wikipedia.org/wiki/Pneudraulics
Derived from the words hydraulics and pneumatics, pneudraulics is the term used when discussing systems on military aircraft that use hydraulic systems, pneumatic systems, or some combination of the two. It is the science of fluids comprising both gas and liquid. Pneudraulic systems Landing gear Flaps and slats Rudder Ailerons Speed brake Wheel brakes Nose wheel steering
Pneudraulics
[ "Physics" ]
82
[ "Power (physics)", "Fluid power", "Physical quantities" ]
3,372,377
https://en.wikipedia.org/wiki/Optical%20fiber
An optical fiber, or optical fibre, is a flexible glass or plastic fiber that can transmit light from one end to the other. Such fibers find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data transfer rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss and are immune to electromagnetic interference. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope. Specially designed fibers are also used for a variety of other applications, such as fiber optic sensors and fiber lasers. Glass optical fibers are typically made by drawing, while plastic fibers can be made either by drawing or by extrusion. Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than . Being able to join optical fibers with low loss is important in fiber optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. The term was coined by Indian-American physicist Narinder Singh Kapany. History Daniel Colladon and Jacques Babinet first demonstrated the guiding of light by refraction, the principle that makes fiber optics possible, in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London, 12 years later. Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870. In the late 19th century, a team of Viennese doctors guided light through bent glass rods to illuminate body cavities. Practical applications such as close internal illumination during dentistry followed, early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. In the 1930s, Heinrich Lamm showed that one could transmit images through a bundle of unclad optical fibers and used it for internal medical examinations, but his work was largely forgotten. In 1953, Dutch scientist Bram van Heel first demonstrated image transmission through bundles of optical fibers with a transparent cladding. 
Later that same year, Harold Hopkins and Narinder Singh Kapany at Imperial College in London succeeded in making image-transmitting bundles with over 10,000 fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. The first practical fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers; previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding material. Kapany coined the term fiber optics after writing a 1960 article in Scientific American that introduced the topic to a wide audience. He subsequently wrote the first book about the new field. The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, followed by the first patent application for this technology in 1966. In 1968, NASA used fiber optics in the television cameras that were sent to the moon. At the time, the use in the cameras was classified confidential, and employees handling the cameras had to be supervised by someone with an appropriate security clearance. Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables (STC) were the first to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer (dB/km), making fibers a practical communication medium, in 1965. They proposed that the attenuation in fibers available at the time was caused by impurities that could be removed, rather than by fundamental physical effects such as scattering. They correctly and systematically theorized the light-loss properties for optical fiber and pointed out the right material to use for such fibers—silica glass with high purity. This discovery earned Kao the Nobel Prize in Physics in 2009. The crucial attenuation limit of 20 dB/km was first achieved in 1970 by researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar working for American glass maker Corning Glass Works. They demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. In 1981, General Electric produced fused quartz ingots that could be drawn into strands long. Initially, high-quality optical fibers could only be manufactured at 2 meters per second. Chemical engineer Thomas Mensah joined Corning in 1983 and increased the speed of manufacture to over 50 meters per second, making optical fiber cables cheaper than traditional copper ones. These innovations ushered in the era of optical fiber telecommunication. The Italian research center CSELT worked with Corning to develop practical optical fiber cables, resulting in the first metropolitan fiber optic cable being deployed in Turin in 1977. CSELT also developed an early technique for splicing optical fibers, called Springroove. Attenuation in modern optical cables is far less than in electrical copper cables, leading to long-haul fiber connections with repeater distances of . Two teams, led by David N. 
Payne of the University of Southampton and Emmanuel Desurvire at Bell Labs, developed the erbium-doped fiber amplifier, which reduced the cost of long-distance fiber systems by reducing or eliminating optical-electrical-optical repeaters, in 1986 and 1987 respectively. The emerging field of photonic crystals led to the development in 1991 of photonic-crystal fiber, which guides light by diffraction from a periodic structure, rather than by total internal reflection. The first photonic crystal fibers became commercially available in 2000. Photonic crystal fibers can carry higher power than conventional fibers and their wavelength-dependent properties can be manipulated to improve performance. These fibers can have hollow cores. Uses Communication Optical fiber is used as a medium for telecommunication and computer networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because infrared light propagates through the fiber with much lower attenuation compared to electricity in electrical cables. This allows long distances to be spanned with few repeaters. 10 or 40 Gbit/s is typical in deployed systems. Through the use of wavelength-division multiplexing (WDM), each fiber can carry many independent channels, each using a different wavelength of light. The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the forward error correction (FEC) overhead, multiplied by the number of channels (usually up to 80 in commercial dense WDM systems). For short-distance applications, such as a network in an office building (see fiber to the office), fiber-optic cabling can save space in cable ducts. This is because a single fiber can carry much more data than electrical cables such as standard category 5 cable, which typically runs at 100 Mbit/s or 1 Gbit/s speeds. Fibers are often also used for short-distance connections between devices. For example, most high-definition televisions offer a digital audio optical connection. This allows the streaming of audio over light, using the S/PDIF protocol over an optical TOSLINK connection. Sensors Fibers have many uses in remote sensing. In some applications, the fiber itself is the sensor (the fiber channels light to a processing device that analyzes changes in the light's characteristics). In other cases, fiber is used to connect a sensor to a measurement system. Optical fibers can be used as sensors to measure strain, temperature, pressure, and other quantities by modifying a fiber so that the property being measured modulates the intensity, phase, polarization, wavelength, or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest since only a simple source and detector are required. A particularly useful feature of such fiber optic sensors is that they can, if required, provide distributed sensing over distances of up to one meter. Distributed acoustic sensing is one example of this. In contrast, highly localized measurements can be provided by integrating miniaturized sensing elements with the tip of the fiber. These can be implemented by various micro- and nanofabrication technologies, such that they do not exceed the microscopic boundary of the fiber tip, allowing for such applications as insertion into blood vessels via hypodermic needle. 
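The net data rate calculation described in the Communication subsection above translates directly into a short Python sketch. The channel count, per-channel rate, and 7% FEC overhead used in the example are illustrative assumptions rather than figures from the text.

def net_fiber_rate_gbps(per_channel_gbps, fec_overhead, channels):
    """Net data rate per fiber: the per-channel rate reduced by the
    FEC overhead fraction, multiplied by the number of WDM channels."""
    return per_channel_gbps * (1 - fec_overhead) * channels

# Assumed illustrative figures: 80 channels at 40 Gbit/s with 7% FEC overhead
print(f"net rate = {net_fiber_rate_gbps(40, 0.07, 80):,.0f} Gbit/s")  # 2,976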
Extrinsic fiber optic sensors use an optical fiber cable, normally a multi-mode one, to transmit modulated light from either a non-fiber optical sensor or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach otherwise inaccessible places. An example is the measurement of temperature inside jet engines by using a fiber to transmit radiation into a pyrometer outside the engine. Extrinsic sensors can be used in the same way to measure the internal temperature of electrical transformers, where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic sensors measure vibration, rotation, displacement, velocity, acceleration, torque, and torsion. A solid-state version of the gyroscope, using the interference of light, has been developed. The fiber optic gyroscope (FOG) has no moving parts and exploits the Sagnac effect to detect mechanical rotation. Common uses for fiber optic sensors include advanced intrusion detection security systems. The light is transmitted along a fiber optic sensor cable placed on a fence, pipeline, or communication cabling, and the returned signal is monitored and analyzed for disturbances. This return signal is digitally processed to detect disturbances and trip an alarm if an intrusion has occurred. Optical fibers are widely used as components of optical chemical sensors and optical biosensors. Power transmission Optical fiber can be used to transmit power using a photovoltaic cell to convert the light into electricity. While this method of power transmission is not as efficient as conventional ones, it is especially useful in situations where it is desirable not to have a metallic conductor, as in the case of use near MRI machines, which produce strong magnetic fields. Other examples are for powering electronics in high-powered antenna elements and measurement devices used in high-voltage transmission equipment. Other uses Optical fibers are used as light guides in medical and other applications where bright light needs to be shone on a target without a clear line-of-sight path. Many microscopes use fiber-optic light sources to provide intense illumination of samples being studied. Optical fiber is also used in imaging optics. A coherent bundle of fibers is used, sometimes along with lenses, for a long, thin imaging device called an endoscope, which is used to view objects through a small hole. Medical endoscopes are used for minimally invasive exploratory or surgical procedures. Industrial endoscopes (see fiberscope or borescope) are used for inspecting anything hard to reach, such as jet engine interiors. In some buildings, optical fibers route sunlight from the roof to other parts of the building (see nonimaging optics). Optical-fiber lamps are used for illumination in decorative applications, including signs, art, toys and artificial Christmas trees. Optical fiber is an intrinsic part of the light-transmitting concrete building product LiTraCon. Optical fiber can also be used in structural health monitoring. This type of sensor can detect stresses that may have a lasting impact on structures. It is based on the principle of measuring analog attenuation. In spectroscopy, optical fiber bundles transmit light from a spectrometer to a substance that cannot be placed inside the spectrometer itself, in order to analyze its composition. A spectrometer analyzes substances by bouncing light off and through them. 
By using fibers, a spectrometer can be used to study objects remotely. An optical fiber doped with certain rare-earth elements such as erbium can be used as the gain medium of a fiber laser or optical amplifier. Rare-earth-doped optical fibers can be used to provide signal amplification by splicing a short section of doped fiber into a regular (undoped) optical fiber line. The doped fiber is optically pumped with a second laser wavelength that is coupled into the line in addition to the signal wave. Both wavelengths of light are transmitted through the doped fiber, which transfers energy from the second pump wavelength to the signal wave. The process that causes the amplification is stimulated emission. Optical fiber is also widely exploited as a nonlinear medium. The glass medium supports a host of nonlinear optical interactions, and the long interaction lengths possible in fiber facilitate a variety of phenomena, which are harnessed for applications and fundamental investigation. Conversely, fiber nonlinearity can have deleterious effects on optical signals, and measures are often required to minimize such unwanted effects. Optical fibers doped with a wavelength shifter collect scintillation light in physics experiments. Fiber-optic sights for handguns, rifles, and shotguns use pieces of optical fiber to improve the visibility of markings on the sight. Principle of operation An optical fiber is a cylindrical dielectric waveguide (nonconducting waveguide) that transmits light along its axis through the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer, both of which are made of dielectric materials. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The boundary between the core and cladding may either be abrupt, in step-index fiber, or gradual, in graded-index fiber. Light can be fed into optical fibers using lasers or LEDs. Fiber is immune to electrical interference as there is no cross-talk between signals in different cables and no pickup of environmental noise. Information traveling inside the optical fiber is even immune to electromagnetic pulses generated by nuclear devices. Fiber cables do not conduct electricity, which makes fiber useful for protecting communications equipment in high voltage environments such as power generation facilities or applications prone to lightning strikes. The electrical isolation also prevents problems with ground loops. Because there is no electricity in optical cables that could potentially generate sparks, they can be used in environments where explosive fumes are present. Wiretapping (in this case, fiber tapping) is more difficult compared to electrical connections. Fiber cables are not targeted for metal theft. In contrast, copper cable systems use large amounts of copper and have been targeted since the 2000s commodities boom. Refractive index The refractive index is a way of measuring the speed of light in a material. Light travels fastest in a vacuum, such as in outer space. The speed of light in vacuum is about 300,000 kilometers (186,000 miles) per second. The refractive index of a medium is calculated by dividing the speed of light in vacuum by the speed of light in that medium. The refractive index of vacuum is therefore 1, by definition. A typical single-mode fiber used for telecommunications has a cladding made of pure silica, with an index of 1.444 at 1500 nm, and a core of doped silica with an index around 1.4475. 
The larger the index of refraction, the slower light travels in that medium. From this information, a simple rule of thumb is that a signal using optical fiber for communication will travel at around 200,000 kilometers per second. Thus a phone call carried by fiber between Sydney and New York, a 16,000-kilometer distance, means that there is a minimum delay of 80 milliseconds (about 1/12 of a second) between when one caller speaks and the other hears. Total internal reflection When light traveling in an optically dense medium hits a boundary at a steep angle of incidence (larger than the critical angle for the boundary), the light is completely reflected. This is called total internal reflection. This effect is used in optical fibers to confine light in the core. Most modern optical fiber is weakly guiding, meaning that the difference in refractive index between the core and the cladding is very small (typically less than 1%). Light travels through the fiber core, bouncing back and forth off the boundary between the core and cladding. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles can travel down the fiber without leaking out. This range of angles is called the acceptance cone of the fiber. There is a maximum angle from the fiber axis at which light may enter the fiber so that it will propagate, or travel, in the core of the fiber. The sine of this maximum angle is the numerical aperture (NA) of the fiber. Fiber with a larger NA requires less precision to splice and work with than fiber with a smaller NA. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Single-mode fiber has a small NA. Multi-mode fiber Fiber with large core diameter (greater than 10 micrometers) may be analyzed by geometrical optics. Such fiber is called multi-mode fiber, from the electromagnetic analysis (see below). In a step-index multi-mode fiber, rays of light are guided along the fiber core by total internal reflection. Rays that meet the core-cladding boundary at an angle (measured relative to a line normal to the boundary) greater than the critical angle for this boundary are completely reflected. The critical angle is determined by the difference in the index of refraction between the core and cladding materials. Rays that meet the boundary at a low angle are refracted from the core into the cladding where they terminate. The critical angle determines the acceptance angle of the fiber, often reported as a numerical aperture. A high numerical aperture allows light to propagate down the fiber in rays both close to the axis and at various angles, allowing efficient coupling of light into the fiber. However, this high numerical aperture increases the amount of dispersion as rays at different angles have different path lengths and therefore take different amounts of time to traverse the fiber. In graded-index fiber, the index of refraction in the core decreases continuously between the axis and the cladding. This causes light rays to bend smoothly as they approach the cladding, rather than reflecting abruptly from the core-cladding boundary. The resulting curved paths reduce multi-path dispersion because high-angle rays pass more through the lower-index periphery of the core, rather than the high-index center. The index profile is chosen to minimize the difference in axial propagation speeds of the various rays in the fiber. 
This ideal index profile is very close to a parabolic relationship between the index and the distance from the axis. Single-mode fiber Fiber with a core diameter less than about ten times the wavelength of the propagating light cannot be modeled using geometric optics. Instead, it must be analyzed as an electromagnetic waveguide structure, according to Maxwell's equations as reduced to the electromagnetic wave equation. As an optical waveguide, the fiber supports one or more confined transverse modes by which light can propagate along the fiber. Fiber supporting only one mode is called single-mode. The waveguide analysis shows that the light energy in the fiber is not completely confined in the core. Instead, especially in single-mode fibers, a significant fraction of the energy in the bound mode travels in the cladding as an evanescent wave. The most common type of single-mode fiber has a core diameter of 8–10 micrometers and is designed for use in the near infrared. Multi-mode fiber, by comparison, is manufactured with core diameters as small as 50 micrometers and as large as hundreds of micrometers. Special-purpose fiber Some special-purpose optical fiber is constructed with a non-cylindrical core or cladding layer, usually with an elliptical or rectangular cross-section. These include polarization-maintaining fiber used in fiber optic sensors and fiber designed to suppress whispering gallery mode propagation. Photonic-crystal fiber is made with a regular pattern of index variation (often in the form of cylindrical holes that run along the length of the fiber). Such fiber uses diffraction effects instead of or in addition to total internal reflection, to confine light to the fiber's core. The properties of the fiber can be tailored to a wide variety of applications. Mechanisms of attenuation Attenuation in fiber optics, also known as transmission loss, is the reduction in the intensity of the light signal as it travels through the transmission medium. Attenuation coefficients in fiber optics are usually expressed in units of dB/km. The medium is usually a fiber of silica glass that confines the incident light beam within. Attenuation is an important factor limiting the transmission of a digital signal across large distances. Thus, much research has gone into both limiting the attenuation and maximizing the amplification of the optical signal. The four orders of magnitude reduction in the attenuation of silica optical fibers over four decades was the result of constant improvement of manufacturing processes, raw material purity, preform, and fiber designs, which allowed these fibers to approach the theoretical lower limit of attenuation. Single-mode optical fibers can be made with extremely low loss. Corning's Vascade® EX2500 fiber, a low-loss single-mode fiber for telecommunications wavelengths, has a nominal attenuation of 0.148 dB/km at 1550 nm. A 10 km length of such fiber transmits nearly 71% of optical energy at 1550 nm. Attenuation in optical fiber is caused primarily by both scattering and absorption. In fibers based on fluoride glasses such as ZBLAN, minimum attenuation is limited by impurity absorption. The vast majority of optical fibers are based on silica glass, where impurity absorption is negligible. In silica fibers attenuation is determined by intrinsic mechanisms: Rayleigh scattering in the glasses through which the light is propagating, and infrared absorption in the same glasses. Absorption in silica increases steeply at wavelengths above 1570 nm. 
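Pulling together the refractive-index rule of thumb, the numerical aperture definition, and the attenuation figures quoted above, here is a minimal Python sketch. The step-index relation NA = sqrt(n_core^2 - n_cladding^2) is the standard formula, which the text states only in words; the numeric inputs below are taken from the text.

import math

C_VACUUM_KM_S = 300_000  # speed of light in vacuum, km/s (from the text)

def numerical_aperture(n_core, n_cladding):
    """Sine of the maximum acceptance angle of a step-index fiber:
    NA = sqrt(n_core^2 - n_cladding^2)."""
    return math.sqrt(n_core**2 - n_cladding**2)

def one_way_delay_ms(distance_km, n_core):
    """Minimum propagation delay: light travels at c/n inside the glass."""
    return distance_km * n_core / C_VACUUM_KM_S * 1000

def transmitted_fraction(db_per_km, length_km):
    """Fraction of power remaining after a run: P_out/P_in = 10^(-dB/10)."""
    return 10 ** (-(db_per_km * length_km) / 10)

# Indices quoted in the text: cladding 1.444, core about 1.4475 at 1500 nm
print(f"NA = {numerical_aperture(1.4475, 1.444):.2f}")          # ~0.10
# Sydney-New York example from the text: 16,000 km
print(f"delay = {one_way_delay_ms(16_000, 1.4475):.0f} ms")     # ~77 ms
# Corning figure from the text: 0.148 dB/km over 10 km
print(f"transmitted = {transmitted_fraction(0.148, 10):.0%}")   # ~71%

The 77 ms delay is slightly below the text's 80 ms figure only because the text rounds the in-glass speed down to 200,000 km/s.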
At wavelengths most useful for telecommunications, Rayleigh scattering is the dominant loss mechanism. At 1550 nm attenuation components for a record low loss fiber are given as follows: Rayleigh scattering loss: 0.1200 dB/km, infrared absorption loss: 0.0150 dB/km, impurity absorption loss: 0.0047 dB/km, waveguide imperfection loss: 0.0010 dB/km. Light scattering The propagation of light through the core of an optical fiber is based on the total internal reflection of the lightwave, in terms of geometric optics, or guided modes, in terms of electromagnetic waveguide. In a typical single-mode optical fiber, about 75% of the light propagates through the core material, which has the higher refractive index, and about 25% propagates through the cladding, which has the lower refractive index. The interface between the core and cladding glasses is exceptionally smooth and does not give rise to a significant scattering loss or a waveguide imperfection loss. The scattering loss originates primarily from the Rayleigh scattering in the bulk of the glasses composing the fiber core and cladding. The scattering of light in optical quality glass fiber is caused by molecular level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that glass is simply the limiting case of a polycrystalline solid. Within this framework, domains exhibiting various degrees of short-range order become the building blocks of metals as well as glasses and ceramics. Distributed both between and within these domains are micro-structural defects that provide the most ideal locations for light scattering. Scattering depends on the wavelength of the light being scattered and on the size of the scattering centers. The angular dependence of the light intensity scattered from an optical fiber matches that of Rayleigh scattering, indicating that the scattering centers are much smaller than the wavelength of propagating light. The scattering originates from density fluctuations driven by the fictive temperature of the glass, and from concentration fluctuations of dopants in both the core and the cladding. The Rayleigh scattering coefficient R can be presented as R = R_d + R_c, where R_d represents Rayleigh scattering on density fluctuations and R_c represents Rayleigh scattering on dopant concentration fluctuations. Dopants, such as germanium dioxide or fluorine, are used to create the refractive index difference between the core and the cladding, to form a waveguide structure. The density term takes the form R_d = (8π³/3) λ⁻⁴ n⁸ p² k T_f β_T, where λ is the wavelength, n is the refractive index, p is the photo-elastic coefficient, β_T is the isothermal compressibility, k is the Boltzmann constant, and T_f is the fictive temperature. The only physically significant variable affecting scattering on density fluctuations is the fictive temperature of the glass; a lower fictive temperature results in a more homogeneous glass and lower Rayleigh scattering. Fictive temperature may be dramatically reduced by doping the fiber core with about 100 wt. ppm of alkali oxide, as well as by slower cooling of the fiber during the fiber draw process. These approaches are used to produce optical fibers with the lowest attenuation, especially those for submarine telecom cables. For small dopant concentrations, R_c is proportional to x (dn/dx)², where x is the mole fraction of the dopant in SiO2-based glass and n is the refractive index of the glass. When GeO2 dopant is used to increase the refractive index of the fiber core, it increases the concentration fluctuation component of Rayleigh scattering, and attenuation of the fiber. 
This is why the lowest attenuation fibers do not use GeO2 in the core, and use fluorine in the cladding, to reduce the refractive index of the cladding. In pure silica core fiber, R_c is proportional to the overlap integral between the LP01 mode and the fluorine-induced concentration fluctuation component in the cladding. In the core of potassium-doped pure silica-core (KPSC) fiber only density fluctuations play a significant role, as the concentrations of K2O, fluorine and chlorine are very low. The density fluctuations in the core are moderated by lower fictive temperature resulting from potassium doping, and are further reduced by annealing during the fiber draw process. This differs from the cladding, where higher fluorine dopant levels and the resulting concentration fluctuations add to the loss. In such fibers the light travelling through the core experiences lower scattering and lower attenuation compared to the light propagating through the cladding segment of the fiber. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber. UV-Vis-IR absorption In addition to light scattering, attenuation or signal loss can also occur due to selective absorption of specific wavelengths. Primary material considerations include both electrons and molecules as follows: At the electronic level, it depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific wavelength or frequency in the ultraviolet (UV) or visible ranges. This is what gives rise to color. At the atomic or molecular level, it depends on the frequencies of atomic or molecular vibrations or chemical bonds, how closely packed its atoms or molecules are, and whether or not the atoms or molecules exhibit long-range order. These factors will determine the capacity of the material to transmit longer wavelengths in the infrared (IR), far IR, radio, and microwave ranges. The design of any optically transparent device requires the selection of materials based upon knowledge of its properties and limitations. The crystal structure absorption characteristics observed at the lower frequency regions (mid- to far-IR wavelength range) define the long-wavelength transparency limit of the material. They are the result of the interactive coupling between the motions of thermally induced vibrations of the constituent atoms and molecules of the solid lattice and the incident light wave radiation. Hence, all materials are bounded by limiting regions of absorption caused by atomic and molecular vibrations (bond-stretching) in the far-infrared (>10 μm). In other words, the selective absorption of IR light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integer multiple of the frequency, i.e. harmonic) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of IR light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When IR light of these frequencies strikes an object, the energy is either reflected or transmitted. Loss budget Attenuation over a cable run is significantly increased by the inclusion of connectors and splices. 
When computing the acceptable attenuation (loss budget) between a transmitter and a receiver, one includes: dB loss due to the type and length of fiber optic cable, dB loss introduced by connectors, and dB loss introduced by splices. Well-polished connectors typically introduce 0.3 dB per connector. Splices typically introduce less than 0.2 dB per splice. The total loss can be calculated by: Loss = dB loss per connector × number of connectors + dB loss per splice × number of splices + dB loss per kilometer × kilometers of fiber, where the dB loss per kilometer is a function of the type of fiber and can be found in the manufacturer's specifications. For example, a typical 1550 nm single-mode fiber has a loss of 0.3 dB per kilometer. The calculated loss budget is used when testing to confirm that the measured loss is within the normal operating parameters. Manufacturing Materials Glass optical fibers are almost always made from silica, but some other materials, such as fluorozirconate, fluoroaluminate, and chalcogenide glasses as well as crystalline materials like sapphire, are used for longer-wavelength infrared or other specialized applications. Silica and fluoride glasses usually have refractive indices of about 1.5, but some materials such as the chalcogenides can have indices as high as 3. Typically the index difference between core and cladding is less than one percent. Plastic optical fibers (POF) are commonly step-index multi-mode fibers with a core diameter of 0.5 millimeters or larger. POF typically have higher attenuation coefficients than glass fibers, 1 dB/m or higher, and this high attenuation limits the range of POF-based systems. Silica Silica exhibits fairly good optical transmission over a wide range of wavelengths. In the near-infrared (near IR) portion of the spectrum, particularly around 1.5 μm, silica can have extremely low absorption and scattering losses of the order of 0.2 dB/km. Such low losses depend on using ultra-pure silica. A high transparency in the 1.4-μm region is achieved by maintaining a low concentration of hydroxyl groups (OH). Alternatively, a high OH concentration is better for transmission in the ultraviolet (UV) region. Silica can be drawn into fibers at reasonably high temperatures and has a fairly broad glass transformation range. One other advantage is that fusion splicing and cleaving of silica fibers are relatively effective. Silica fiber also has high mechanical strength against both pulling and even bending, provided that the fiber is not too thick and that the surfaces have been well prepared during processing. Even simple cleaving of the ends of the fiber can provide nicely flat surfaces with acceptable optical quality. Silica is also relatively chemically inert. In particular, it is not hygroscopic (does not absorb water). Silica glass can be doped with various materials. One purpose of doping is to raise the refractive index (e.g. with germanium dioxide (GeO2) or aluminium oxide (Al2O3)) or to lower it (e.g. with fluorine or boron trioxide (B2O3)). Doping is also possible with laser-active ions (for example, rare-earth-doped fibers) in order to obtain active fibers to be used, for example, in fiber amplifiers or laser applications. Both the fiber core and cladding are typically doped, so that the entire assembly (core and cladding) is effectively the same compound (e.g. an aluminosilicate, germanosilicate, phosphosilicate or borosilicate glass). 
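Returning to the loss budget formula given earlier in this section, the arithmetic is simple enough to capture in a short Python sketch. The per-kilometer, per-connector, and per-splice defaults below are the typical figures quoted in the text; the example link length and component counts are illustrative assumptions.

def loss_budget_db(km, connectors, splices,
                   db_per_km=0.3, db_per_connector=0.3, db_per_splice=0.2):
    """Total link loss: per-km fiber loss plus connector and splice losses,
    following the formula in the text. Defaults reflect the text's typical
    values for 1550 nm single-mode fiber and well-polished components."""
    return km * db_per_km + connectors * db_per_connector + splices * db_per_splice

# Assumed example link: 20 km of fiber with 2 connectors and 4 splices
print(f"loss budget = {loss_budget_db(20, 2, 4):.1f} dB")  # 7.4 dB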
Particularly for active fibers, pure silica is usually not a very suitable host glass, because it exhibits a low solubility for rare-earth ions. This can lead to quenching effects due to the clustering of dopant ions. Aluminosilicates are much more effective in this respect. Silica fiber also exhibits a high threshold for optical damage. This property ensures a low tendency for laser-induced breakdown. This is important for fiber amplifiers when utilized for the amplification of short pulses. Because of these properties, silica fibers are the material of choice in many optical applications, such as communications (except for very short distances with plastic optical fiber), fiber lasers, fiber amplifiers, and fiber-optic sensors. Large efforts put forth in the development of various types of silica fibers have further increased the performance of such fibers over other materials. Fluoride glass Fluoride glass is a class of non-oxide optical quality glasses composed of fluorides of various metals. Because of the low viscosity of these glasses, it is very difficult to completely avoid crystallization while processing it through the glass transition (or drawing the fiber from the melt). Thus, although heavy metal fluoride glasses (HMFG) exhibit very low optical attenuation, they are not only difficult to manufacture, but are quite fragile, and have poor resistance to moisture and other environmental attacks. Their best attribute is that they lack the absorption band associated with the hydroxyl (OH) group (3,200–3,600 cm−1; i.e., 2,777–3,125 nm or 2.78–3.13 μm), which is present in nearly all oxide-based glasses. Such predicted low losses were never realized in practice, and the fragility and high cost of fluoride fibers made them less than ideal as primary candidates. Fluoride fibers are used in mid-IR spectroscopy, fiber optic sensors, thermometry, and imaging. Fluoride fibers can be used for guided lightwave transmission in media such as YAG (yttrium aluminium garnet) lasers at 2.9 μm, as required for medical applications (e.g. ophthalmology and dentistry). An example of a heavy metal fluoride glass is the ZBLAN glass group, composed of zirconium, barium, lanthanum, aluminium, and sodium fluorides. Their main technological application is as optical waveguides in both planar and fiber forms. They are advantageous especially in the mid-infrared (2,000–5,000 nm) range. Phosphate glass Phosphate glass is a class of optical glasses composed of metaphosphates of various metals. Instead of the SiO4 tetrahedra observed in silicate glasses, the building block for this glass is phosphorus pentoxide (P2O5), which crystallizes in at least four different forms. The most familiar polymorph is the cagelike structure of P4O10. Phosphate glasses can be advantageous over silica glasses for optical fibers with a high concentration of doping rare-earth ions. A mix of fluoride glass and phosphate glass is fluorophosphate glass. Chalcogenide glass The chalcogens—the elements in group 16 of the periodic table—particularly sulfur (S), selenium (Se) and tellurium (Te)—react with more electropositive elements, such as silver, to form chalcogenides. These are extremely versatile compounds, in that they can be crystalline or amorphous, metallic or semiconducting, and conductors of ions or electrons. Chalcogenide glass can be used to make fibers for far-infrared transmission. 
Process Preform Standard optical fibers are made by first constructing a large-diameter preform with a carefully controlled refractive index profile, and then pulling the preform to form the long, thin optical fiber. The preform is commonly made by three chemical vapor deposition methods: inside vapor deposition, outside vapor deposition, and vapor axial deposition. With inside vapor deposition, the preform starts as a hollow glass tube approximately long, which is placed horizontally and rotated slowly on a lathe. Gases such as silicon tetrachloride (SiCl4) or germanium tetrachloride (GeCl4) are injected with oxygen in the end of the tube. The gases are then heated by means of an external hydrogen burner, bringing the temperature of the gas up to 1,900 K (1,600 °C, 3,000 °F), where the tetrachlorides react with oxygen to produce silica or germanium dioxide particles. When the reaction conditions are chosen to allow this reaction to occur in the gas phase throughout the tube volume, in contrast to earlier techniques where the reaction occurred only on the glass surface, this technique is called modified chemical vapor deposition. The oxide particles then agglomerate to form large particle chains, which subsequently deposit on the walls of the tube as soot. The deposition is due to the large difference in temperature between the gas core and the wall causing the gas to push the particles outward in a process known as thermophoresis. The torch is then traversed up and down the length of the tube to deposit the material evenly. After the torch has reached the end of the tube, it is then brought back to the beginning of the tube and the deposited particles are then melted to form a solid layer. This process is repeated until a sufficient amount of material has been deposited. For each layer the composition can be modified by varying the gas composition, resulting in precise control of the finished fiber's optical properties. In outside vapor deposition or vapor axial deposition, the glass is formed by flame hydrolysis, a reaction in which silicon tetrachloride and germanium tetrachloride are oxidized by reaction with water in an oxyhydrogen flame. In outside vapor deposition, the glass is deposited onto a solid rod, which is removed before further processing. In vapor axial deposition, a short seed rod is used, and a porous preform, whose length is not limited by the size of the source rod, is built up on its end. The porous preform is consolidated into a transparent, solid preform by heating to about 1,800 K (1,500 °C, 2,800 °F). Typical communications fiber uses a circular preform. For some applications such as double-clad fibers another form is preferred. In fiber lasers based on double-clad fiber, an asymmetric shape improves the filling factor for laser pumping. Because of the surface tension, the shape is smoothed during the drawing process, and the shape of the resulting fiber does not reproduce the sharp edges of the preform. Nevertheless, careful polishing of the preform is important, since any defects of the preform surface affect the optical and mechanical properties of the resulting fiber. Drawing The preform, regardless of construction, is placed in a device known as a drawing tower, where the preform tip is heated and the optical fiber is pulled out as a string. The tension on the fiber can be controlled to maintain the desired fiber thickness. 
Cladding The light is guided down the core of the fiber by an optical cladding with a lower refractive index that traps light in the core through total internal reflection. For some types of fiber, the cladding is made of glass and is drawn along with the core from a preform with radially varying index of refraction. For other types of fiber, the cladding is made of plastic and is applied like a coating (see below). Coatings The cladding is coated by a buffer (not to be confused with an actual buffer tube) that protects it from moisture and physical damage. These coatings are UV-cured urethane acrylate composite or polyimide materials applied to the outside of the fiber during the drawing process. The coatings protect the very delicate strands of glass fiber—about the size of a human hair—and allow it to survive the rigors of manufacturing, proof testing, cabling, and installation. The buffer coating must be stripped off the fiber for termination or splicing. Today's glass optical fiber draw processes employ a dual-layer coating approach. An inner primary coating is designed to act as a shock absorber to minimize attenuation caused by microbending. An outer secondary coating protects the primary coating against mechanical damage and acts as a barrier to lateral forces, and may be colored to differentiate strands in bundled cable constructions. These fiber optic coating layers are applied during the fiber draw, at speeds approaching . Fiber optic coatings are applied using one of two methods: wet-on-dry and wet-on-wet. In wet-on-dry, the fiber passes through a primary coating application, which is then UV cured, then through the secondary coating application, which is subsequently cured. In wet-on-wet, the fiber passes through both the primary and secondary coating applications, then goes to UV curing. The thickness of the coating is taken into account when calculating the stress that the fiber experiences under different bend configurations. When a coated fiber is wrapped around a mandrel, the stress experienced by the fiber is given by σ = E d_cl / (d_m + d_c), where E is the fiber's Young's modulus, d_m is the diameter of the mandrel, d_cl is the diameter of the cladding and d_c is the diameter of the coating. In a two-point bend configuration, a coated fiber is bent in a U-shape and placed between the grooves of two faceplates, which are brought together until the fiber breaks. The stress in the fiber in this configuration is given by σ = 1.198 E d_cl / (d − d_c), where d is the distance between the faceplates. The coefficient 1.198 is a geometric constant associated with this configuration. Fiber optic coatings protect the glass fibers from scratches that could lead to strength degradation. The combination of moisture and scratches accelerates the aging and deterioration of fiber strength. When fiber is subjected to low stresses over a long period, fiber fatigue can occur. Over time or in extreme conditions, these factors combine to cause microscopic flaws in the glass fiber to propagate, which can ultimately result in fiber failure. Three key characteristics of fiber optic waveguides can be affected by environmental conditions: strength, attenuation, and resistance to losses caused by microbending. External optical fiber cable jackets and buffer tubes protect glass optical fiber from environmental conditions that can affect the fiber's performance and long-term durability. On the inside, coatings ensure the reliability of the signal being carried and help minimize attenuation due to microbending. 
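A minimal Python sketch of the mandrel-wrap stress relation above follows. The relation itself is reconstructed from the variable list in the text; the silica Young's modulus of roughly 72 GPa and the fiber dimensions below are assumed illustrative values, not figures from the text.

def mandrel_wrap_stress_pa(youngs_modulus_pa, d_cladding_m, d_mandrel_m, d_coating_m):
    """Bending stress for a coated fiber wrapped around a mandrel,
    per the relation above: sigma = E * d_cl / (d_m + d_c)."""
    return youngs_modulus_pa * d_cladding_m / (d_mandrel_m + d_coating_m)

# Assumed values: silica E ~ 72 GPa, 125 um cladding,
# 250 um coated diameter, 30 mm mandrel
sigma = mandrel_wrap_stress_pa(72e9, 125e-6, 30e-3, 250e-6)
print(f"bending stress = {sigma / 1e6:.0f} MPa")  # roughly 300 MPa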
Cable construction In practical fibers, the cladding is usually coated with a tough resin and features an additional buffer layer, which may be further surrounded by a jacket layer, usually plastic. These layers add strength to the fiber but do not affect its optical properties. Rigid fiber assemblies sometimes put light-absorbing glass between the fibers, to prevent light that leaks out of one fiber from entering another. This reduces crosstalk between the fibers, or reduces flare in fiber bundle imaging applications. Multi-fiber cable usually uses colored buffers to identify each strand. Modern cables come in a wide variety of sheathings and armor, designed for applications such as direct burial in trenches, high voltage isolation, dual use as power lines, installation in conduit, lashing to aerial telephone poles, submarine installation, and insertion in paved streets. Some fiber optic cable versions are reinforced with aramid yarns or glass yarns as an intermediary strength member. In commercial terms, use of glass yarns is more cost-effective with no loss of mechanical durability. Glass yarns also protect the cable core against rodents and termites. Practical issues Installation Fiber cable can be very flexible, but traditional fiber's loss increases greatly if the fiber is bent with a radius smaller than around 30 mm. This creates a problem when the cable is bent around corners. Bendable fibers, targeted toward easier installation in home environments, have been standardized as ITU-T G.657. This type of fiber can be bent with a radius as low as 7.5 mm without adverse impact. Even more bendable fibers have been developed. Bendable fiber may also be resistant to fiber hacking, in which the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage. Another important feature of a cable is its ability to withstand tension, which determines how much force can be applied to the cable during installation. Termination and splicing Optical fibers are connected to terminal equipment by optical fiber connectors. These connectors are usually of a standard type such as FC, SC, ST, LC, MTRJ, MPO or SMA. Optical fibers may be connected by connectors typically on a patch panel, or permanently by splicing, that is, joining two fibers together to form a continuous optical waveguide. The generally accepted splicing method is fusion splicing, which melts the fiber ends together. For quicker fastening jobs, a mechanical splice is used. All splicing techniques involve installing an enclosure that protects the splice. Fusion splicing is done with a specialized instrument. The fiber ends are first stripped of their protective polymer coating (as well as the more sturdy outer jacket, if present). The ends are cleaved with a precision cleaver to make them perpendicular, and are placed into special holders in the fusion splicer. The splice is usually inspected via a magnified viewing screen to check the cleaves before the splice and the fusion afterwards. The splicer uses small motors to align the end faces together, and emits a small spark between electrodes at the gap to burn off dust and moisture. Then the splicer generates a larger spark that raises the temperature above the melting point of the glass, fusing the ends permanently. The location and energy of the spark are carefully controlled so that the molten core and cladding do not mix, and this minimizes optical loss. 
A splice loss estimate is measured by the splicer by directing light through the cladding on one side and measuring the light leaking from the cladding on the other side. A splice loss under 0.1 dB is typical. The complexity of this process makes fiber splicing much more difficult than splicing copper wire. Mechanical fiber splices are designed to be quicker and easier to install, but there is still the need for stripping, careful cleaning, and precision cleaving. The fiber ends are aligned and held together by a precision sleeve, often using a clear index-matching gel that enhances the transmission of light across the joint. Mechanical splices typically have a higher optical loss and are less robust than fusion splices, especially if the gel is used. Fibers are terminated in connectors that hold the fiber end precisely and securely. An optical fiber connector is a rigid cylindrical barrel surrounded by a sleeve that holds the barrel in its mating socket. The mating mechanism can be push and click, turn and latch (bayonet mount), or screw-in (threaded). The barrel is typically free to move within the sleeve and may have a key that prevents the barrel and fiber from rotating as the connectors are mated. A typical connector is installed by preparing the fiber end and inserting it into the rear of the connector body. Quick-set adhesive is usually used to hold the fiber securely, and a strain relief is secured to the rear. Once the adhesive sets, the fiber's end is polished. Various polish profiles are used, depending on the type of fiber and the application. The resulting signal strength loss is called gap loss. For single-mode fiber, fiber ends are typically polished with a slight curvature that makes the mated connectors touch only at their cores. This is called a physical contact (PC) polish. The curved surface may be polished at an angle, to make an angled physical contact (APC) connection. Such connections have higher loss than PC connections but greatly reduced back reflection because light that reflects from the angled surface leaks out of the fiber core. APC fiber ends have low back reflection even when disconnected. In the 1990s, the number of parts per connector, polishing of the fibers, and the need to oven-bake the epoxy in each connector made terminating fiber optic cables difficult. Today, connector types on the market offer easier, less labor-intensive ways of terminating cables. Some of the most popular connectors are pre-polished at the factory and include a gel inside the connector. A cleave is made at a required length, to get as close as possible to the polished piece already inside the connector. The gel surrounds the point where the two pieces meet inside the connector for very little light loss. For the most demanding installations, factory pre-polished pigtails of sufficient length to reach the first fusion splice enclosure assure good performance and minimize on-site labor. Free-space coupling It is often necessary to align an optical fiber with another optical fiber or with an optoelectronic device such as a light-emitting diode, a laser diode, or a modulator. This can involve either carefully aligning the fiber and placing it in contact with the device, or can use a lens to allow coupling over an air gap. Typically the size of the fiber mode is much larger than the size of the mode in a laser diode or a silicon optical chip. In this case, a tapered or lensed fiber is used to match the fiber mode field distribution to that of the other element. 
The lens on the end of the fiber can be formed using polishing, laser cutting or fusion splicing. In a laboratory environment, a bare fiber end is coupled using a fiber launch system, which uses a microscope objective lens to focus the light down to a fine point. A precision translation stage (micro-positioning table) is used to move the lens, fiber, or device to allow the coupling efficiency to be optimized. Fibers with a connector on the end make this process much simpler: the connector is simply plugged into a pre-aligned fiber-optic collimator, which contains a lens that is either accurately positioned to the fiber or is adjustable. To achieve the best injection efficiency into a single-mode fiber, the direction, position, size, and divergence of the beam must all be optimized. With good optimization, 70 to 90% coupling efficiency can be achieved. With properly polished single-mode fibers, the emitted beam has an almost perfect Gaussian shape—even in the far field—if a good lens is used. The lens needs to be large enough to support the full numerical aperture of the fiber, and must not introduce aberrations in the beam. Aspheric lenses are typically used. Fiber fuse At optical intensities above 2 megawatts per square centimeter, when a fiber is subjected to a shock or is otherwise suddenly damaged, a fiber fuse can occur. The reflection from the damage vaporizes the fiber immediately before the break, and this new defect remains reflective so that the damage propagates back toward the transmitter at 1–3 meters per second (4–11 km/h, 2–8 mph). The open fiber control system, which ensures laser eye safety in the event of a broken fiber, can also effectively halt propagation of the fiber fuse. In situations, such as undersea cables, where high power levels might be used without the need for open fiber control, a fiber fuse protection device at the transmitter can break the circuit to minimize damage. Chromatic dispersion The refractive index of fibers varies slightly with the frequency of light, and light sources are not perfectly monochromatic. Modulation of the light source to transmit a signal also slightly widens the frequency band of the transmitted light. This has the effect that, over long distances and at high modulation speeds, different portions of light can take different times to arrive at the receiver, ultimately making the signal impossible to discern. This problem can be overcome in several ways, including the use of extra repeaters and the use of a relatively short length of fiber that has the opposite refractive index gradient. See also Fiber Bragg grating Fiber management system The Fiber Optic Association Gradient-index optics Leaky mode Modal bandwidth Optical mesh network Optical power meter Passive optical network Return loss Subwavelength-diameter optical fiber
Discusses fiber optic splice closures and the associated hardware intended to restore the mechanical and environmental integrity of one or more fiber cables entering the enclosure.

External links
Lennie Lightwave's Guide to Fiber Optics, The Fiber Optic Association, 2016.
"Fibers", article in RP Photonics' Encyclopedia of Laser Physics and Technology
"Fibre optic technologies", Mercury Communications Ltd, August 1992.
"Photonics & the future of fibre", Mercury Communications Ltd, March 1993.
Educational site from Arc Electronics
MIT Video Lecture: Understanding Lasers and Fiberoptics
Webdemo for chromatic dispersion at the Institute of Telecommunications, University of Stuttgart

Articles containing video clips Glass engineering and science Glass production Telecommunications equipment
Optical fiber
[ "Materials_science", "Engineering" ]
11,352
[ "Glass engineering and science", "Glass production", "Materials science" ]
3,372,706
https://en.wikipedia.org/wiki/Accelerated%20aging
Accelerated aging is testing that uses aggravated conditions of heat, humidity, oxygen, sunlight, vibration, etc., to speed up the normal aging processes of items. It is used to help determine the long-term effects of expected levels of stress within a shorter time, usually in a laboratory by controlled standard test methods. It is used to estimate the useful lifespan of a product or its shelf life when actual lifespan data is unavailable. This occurs with products that have not existed long enough to have gone through their useful lifespan: for example, a new type of car engine or a new polymer for replacement joints.

Physical testing or chemical testing is carried out by subjecting the product to representative levels of stress for long time periods, to unusually high levels of stress used to accelerate the effects of natural aging, or to levels of stress that intentionally force failures (for further analysis). Mechanical parts are run at very high speed, far in excess of what they would receive in normal usage. Polymers are often kept at elevated temperatures in order to accelerate chemical breakdown. Environmental chambers are often used. Also, the device or material under test can be exposed to rapid (but controlled) changes in temperature, humidity, pressure, strain, etc. For example, cycles of heat and cold can simulate the effect of day and night for a few hours or minutes.

Techniques and methods

Accelerated aging employs a variety of controlled methods to replicate and speed up the effects of natural aging. These methods vary depending on the type of product, material, or environmental condition being simulated. Below are the most commonly used techniques:

Environmental stress testing

Temperature cycling: Samples are exposed to repeated cycles of extreme heat and cold, mimicking daily or seasonal temperature fluctuations. For example, in the automotive industry, components like engines and braking systems are tested using temperature cycling to simulate real-world conditions such as hot desert climates during the day and freezing temperatures at night. In electronics, printed circuit boards (PCBs) are subjected to rapid temperature shifts to evaluate solder joint reliability and material resilience.

Thermal shock: Thermal shock refers to the rapid exposure of materials or components to extreme temperature differences over a very short period. Unlike temperature cycling, which involves gradual changes between high and low temperatures, thermal shock imposes abrupt transitions that can lead to immediate stresses within a material. This method is often used to evaluate a product's resistance to cracking, warping, or other forms of failure caused by sudden thermal gradients. For example, glass or ceramic components in aerospace applications are subjected to thermal shock tests to ensure durability under high-speed atmospheric reentry conditions.

Humidity testing: Humidity testing involves subjecting materials or products to high levels of moisture or fluctuating humidity conditions to simulate exposure to tropical, coastal, or industrial environments. This method is used to evaluate the effects of moisture on material degradation, corrosion, swelling, and overall performance. For example, electronic devices undergo humidity testing to ensure their enclosures and seals can prevent moisture ingress, while construction materials such as wood or adhesives are tested to evaluate resistance to warping or delamination.
Humidity testing is often conducted in combination with elevated temperatures to accelerate the effects of moisture exposure, particularly for materials like polymers, metals, and composites.

UV exposure

UV testing is a component of aging tests designed to simulate the long-term effects of ultraviolet (UV) radiation exposure on materials, products, and coatings. UV radiation, a component of sunlight, is one of the primary contributors to material degradation over time. UV testing helps assess the durability and performance of materials under prolonged exposure to UV light, providing insights into their expected lifespan and identifying potential vulnerabilities.

Purpose and applications: The primary purpose of UV testing is to evaluate the resistance of materials to photodegradation, including fading, discoloration, cracking, embrittlement, or loss of mechanical properties. Common applications of UV testing include:
Plastics and polymers: assessing the weatherability of polymers used in outdoor products.
Coatings and paints: ensuring the durability of protective and decorative coatings exposed to sunlight.
Textiles: evaluating the fade resistance of fabrics and dyes.

Testing methods: Accelerated UV testing uses specialized equipment, such as xenon arc or fluorescent UV lamps, to simulate UV radiation in a controlled environment. Common standards include ASTM G154 (fluorescent UV lamps) and ASTM G155 (xenon arc lamps).

Oxygen and pollutant exposure
Mechanical stress testing
High-speed operation
Vibration testing
Chemical stability testing
Thermal aging
Chemical exposure
Simulated use conditions
Pressure cycling
Strain testing
Combined stress testing
Validation of results

Applications

Library and archival preservation science

Accelerated aging is also used in library and archival preservation science. In this context, a material, usually paper, is subjected to extreme conditions in an effort to speed up the natural aging process. Usually, the extreme conditions consist of elevated temperature, but tests making use of concentrated pollutants or intense light also exist. These tests may be used for several purposes.

To predict the long-term effects of particular conservation treatments. In such a test, treated and untreated papers are both subjected to a single set of fixed, standardized conditions. The two are then compared in an effort to determine whether the treatment has a positive or negative effect on the lifespan of the paper.

To study the basic processes of paper decay. In such a test, the purpose is not to predict a particular outcome for a specific type of paper, but rather to gain a greater understanding of the chemical mechanisms of decay.

To predict the lifespan of a particular type of paper. In such a test, paper samples are generally subjected to several elevated temperatures and a constant level of relative humidity equivalent to the relative humidity in which they would be stored. The researcher then measures a relevant quality of the samples, such as folding endurance, at each temperature. This allows the researcher to determine how many days at each temperature it takes for a particular level of degradation to be reached. From the data collected, the researcher extrapolates the rate at which the samples might decay at lower temperatures, such as those at which the paper would be stored under normal conditions. In theory, this allows the researcher to predict the lifespan of the paper. This test is based on the Arrhenius equation (a minimal numerical sketch of the idea follows below).
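As a minimal sketch of the Arrhenius-based extrapolation described above, assume a single rate-limiting reaction with a known activation energy; the 90 kJ/mol figure and the temperatures are hypothetical, chosen only for illustration.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def acceleration_factor(ea_j_per_mol: float, t_use_c: float, t_test_c: float) -> float:
    """Arrhenius acceleration factor between a use temperature and an
    elevated test temperature, assuming one dominant reaction with
    activation energy Ea:
        AF = exp(Ea/R * (1/T_use - 1/T_test)),  T in kelvin."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp(ea_j_per_mol / R * (1.0 / t_use - 1.0 / t_test))

# Hypothetical example: Ea = 90 kJ/mol, storage at 20 C, oven aging at 80 C.
af = acceleration_factor(90e3, 20.0, 80.0)
print(f"1 day at 80 C ~ {af:.0f} days at 20 C")  # roughly a factor of ~500
```

The criticisms discussed next apply directly to this kind of calculation: it is only as trustworthy as the assumption that the same reaction, with the same activation energy, dominates at both temperatures.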
This type of test is, however, a subject of frequent criticism. There is no single recommended set of conditions at which these tests should be performed. In fact, temperatures from 22 to 160 degrees Celsius, relative humidities from 1% to 100%, and test durations from one hour to 180 days have all been used. ISO 5630-3 recommends accelerated aging at 80 degrees Celsius and 65% relative humidity when using a fixed set of conditions.

Besides variations in the conditions to which the papers are subjected, there are also multiple ways in which the test can be set up. For instance, rather than simply placing single sheets in a climate-controlled chamber, the Library of Congress recommends sealing samples in an air-tight glass tube and aging the papers in stacks rather than as single sheets, which more closely resembles the way in which they are likely to age under normal circumstances.

Limitations and criticisms

Accelerated aging techniques, particularly those using the Arrhenius equation, have frequently been criticized in recent decades. While some researchers claim that the Arrhenius equation can be used to quantitatively predict the lifespan of tested papers, other researchers disagree. Many argue that this method cannot predict an exact lifespan for the tested papers, but that it can be used to rank papers by permanence. A few researchers claim that even such rankings can be deceptive, and that these types of accelerated aging tests can only be used to determine whether a particular treatment or paper quality has a positive or negative effect on the paper's permanence.

There are several reasons for this skepticism. One argument is that entirely different chemical processes take place at higher temperatures than at lower temperatures, which means the accelerated aging process and the natural aging process are not parallel. Another is that paper is a "complex system" and the Arrhenius equation is only applicable to elementary reactions. Other researchers criticize the ways in which deterioration is measured during these experiments. Some point out that there is no standard point at which a paper is considered unusable for library and archival purposes. Others claim that the degree of correlation between macroscopic, mechanical properties of paper and molecular, chemical deterioration has not been convincingly proven. Reservations about the utility of this method for assessing corrosion performance in the automotive industry have also been documented.

In an effort to improve the quality of accelerated aging tests, some researchers have begun comparing materials which have undergone accelerated aging to materials which have undergone natural aging. The Library of Congress, for instance, began a long-term experiment in 2000 to compare artificially aged materials to materials allowed to undergo natural aging for a hundred years.

History

The technique of artificially accelerating the deterioration of paper through heat was known by 1899, when it was described by W. Herzberg. Accelerated aging was further refined during the 1920s, with tests using sunlight and elevated temperatures being used to rank the permanence of various papers in the United States and Sweden. In 1929, R. H. Rasch established a frequently used method in which 72 hours at 100 degrees Celsius is considered equivalent to 18–25 years of natural aging.
In the 1950s, researchers began to question the validity of accelerated aging tests which relied on dry heat and a single temperature, pointing out that relative humidity affects the chemical processes which produce paper degradation and that the reactions which cause degradation have different activation energies. This led researchers like Baer and Lindström to advocate accelerated aging techniques using the Arrhenius equation and a realistic relative humidity. See also Arrhenius equation Environmental stress screening Environmental chamber Highly Accelerated Life Test Planned obsolescence References External links Medical Plastics and Biomaterials Magazine Safety engineering Product testing Environmental testing Senescence
Accelerated aging
[ "Chemistry", "Engineering", "Biology" ]
2,009
[ "Systems engineering", "Reliability engineering", "Safety engineering", "Senescence", "Cellular processes", "Environmental testing", "Metabolism" ]
3,373,650
https://en.wikipedia.org/wiki/Earth%20systems%20engineering%20and%20management
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion". ESEM is a newly emerging area of study that has taken root at the University of Virginia, Cornell and other universities throughout the United States, and at the Centre for Earth Systems Engineering Research (CESER) at Newcastle University in the United Kingdom. Founders of the discipline are Braden Allenby and Michael Gorman.

Introduction to ESEM

For centuries, humans have utilized the earth and its natural resources to advance civilization and develop technology. "A principal result of the Industrial Revolutions and associated changes in human demographics, technology systems, cultures, and economic systems has been the evolution of an Earth in which the dynamics of major natural systems are increasingly dominated by human activity". In many ways, ESEM views the earth as a human artifact. "In order to maintain continued stability of both natural and human systems, we need to develop the ability to rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion – an Earth Systems Engineering and Management (ESEM) capability".

ESEM has been developed by a few individuals. One of particular note is Braden Allenby. Allenby holds that the foundation upon which ESEM is built is the notion that "the Earth, as it now exists, is a product of human design". In fact there are no longer any natural systems left in the world; "there are no places left on Earth that don't fall under humanity's shadow". "So the question is not, as some might wish, whether we should begin ESEM, because we have been doing it for a long time, albeit unintentionally. The issue is whether we will assume the ethical responsibility to do ESEM rationally and responsibly". Unlike traditional engineering and management processes, "which assume a high degree of knowledge and certainty about the systems behavior and a defined endpoint to the process," ESEM "will be in constant dialog with [the systems], as they – and we and our cultures – change and coevolve together into the future".

ESEM is a new concept; however, there are a number of fields "such as industrial ecology, adaptive management, and systems engineering that can be relied on to enable rapid progress in developing" ESEM as a discipline. The premise of ESEM is that science and technology can provide successful and lasting solutions to human-created problems such as environmental pollution and climate change. This assumption has recently been challenged in Techno-Fix: Why Technology Won't Save Us or the Environment.

Topics

Adaptive management

Adaptive management is a key aspect of ESEM. Adaptive management is a way of approaching environmental management. It assumes that there is a great deal of uncertainty in environmental systems and holds that there is never a final solution to an earth systems problem. Therefore, once action has been taken, the earth systems engineer will need to be in constant dialogue with the system, watching for changes and how the system evolves. This way of monitoring and managing ecosystems accepts nature's inherent uncertainty and embraces it by never settling on one definitive cure for a problem.
Earth systems engineering

Earth systems engineering is essentially the use of systems analysis methods in the examination of environmental problems. When analyzing complex environmental systems, there are numerous data sets, stakeholders and variables. It is therefore appropriate to approach such problems with a systems analysis method. Essentially there are "six major phases of a properly-conducted system study". The six phases are as follows:

Determine goals of the system
Establish criteria for ranking alternative candidates
Develop alternative solutions
Rank alternative candidates (a minimal sketch of such a ranking appears further below)
Iterate
Act

Part of the systems analysis process includes determining the goals of the system. The key components of goal development include the development of a Descriptive Scenario, a Normative Scenario and a Transitive Scenario. Essentially, the Descriptive Scenario "describe[s] the situation as it is [and] tell[s] how it got to be that way" (Gibson, 1991). Another important part of the Descriptive Scenario is how it "point[s] out the good features and the unacceptable elements of the status quo". Next, the Normative Scenario shows the final outcome, or the way the system should operate under ideal conditions once action has been taken. For the earth systems approach, the Normative Scenario will involve the most complicated analysis. The Normative Scenario will deal with stakeholders, creating a common trading zone, or location for the free exchange of ideas, in order to decide where a system should be restored to, or how exactly the system should be modified. Finally, the Transitive Scenario specifies the actual process of changing a system from its Descriptive state to its Normative state. Often, there is not one final solution, as noted in adaptive management. Typically an iterative process ensues as variables and inputs change and the system coevolves with the analysis.

Environmental science

When examining complex ecosystems there is an inherent need for the earth systems engineer to have a strong understanding of how natural processes function. Training in environmental science is crucial to fully understand the possible unintended and undesired effects of a proposed earth systems design. Fundamental topics such as the carbon cycle or the water cycle are pivotal processes that need to be understood.

Ethics and sustainability

At the heart of ESEM is the social, ethical and moral responsibility of the earth systems engineer, to stakeholders and to the natural system being engineered, to come up with an objective Transitive and Normative Scenario. "ESEM is the cultural and ethical context itself". The earth systems engineer will be expected to explore the ethical implications of proposed solutions. "The perspective of environmental sustainability requires that we ask ourselves how each interaction with the natural environment will affect, and be judged by, our children in the future". "There is an increasing awareness that the process of development, left to itself, can cause irreversible damage to the environment, and that the resultant net addition to wealth and human welfare may very well be negative, if not catastrophic". With this notion in mind, there is now a new goal of sustainable, environment-friendly development. Sustainable development is an important part of developing appropriate ESEM solutions to complex environmental problems.
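To make the "rank alternative candidates" phase referenced above concrete, here is a minimal, hypothetical weighted-scoring sketch. It is not a method prescribed by the ESEM literature, just one common way a systems analyst might order alternatives against criteria agreed with stakeholders; all weights, alternatives, and scores are invented for illustration.

```python
# Hypothetical criteria weights agreed on by stakeholders (sum to 1.0).
weights = {"cost": 0.3, "ecological_benefit": 0.5, "feasibility": 0.2}

# Hypothetical alternatives, each scored 0-10 against every criterion.
alternatives = {
    "remove levees":    {"cost": 3,  "ecological_benefit": 9, "feasibility": 4},
    "build reservoirs": {"cost": 5,  "ecological_benefit": 6, "feasibility": 7},
    "do nothing":       {"cost": 10, "ecological_benefit": 1, "feasibility": 10},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of criterion scores; higher is better."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank the candidates. In practice this feeds the 'iterate' phase,
# since weights and scores change as stakeholders and the system
# co-evolve, as the adaptive-management discussion above emphasizes.
for name, scores in sorted(alternatives.items(),
                           key=lambda kv: weighted_score(kv[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.1f}")
```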
Industrial ecology

Industrial ecology is the notion that major manufacturing and industrial processes need to shift from open-loop systems to closed-loop systems. This is essentially the recycling of waste to make new products, which reduces refuse and increases the effectiveness of resources. ESEM looks to minimize the impact of industrial processes on the environment, so the notion of recycling industrial products is important to ESEM.

Case study: Florida Everglades

The Florida Everglades system is a prime example of a complex ecological system that underwent an ESEM analysis.

Background

The Florida Everglades is located in southern Florida. The ecosystem is essentially a subtropical fresh water marsh composed of a variety of flora and fauna. Of particular note are the saw grass and ridge-and-slough formations that make the Everglades unique. Over the course of the past century mankind has had a rising presence in this region. Currently, all of the eastern shore of Florida is developed and the population has increased to over 6 million residents. This increased presence over the years has resulted in the channeling and redirecting of water from its traditional path through the Everglades and into the Gulf of Mexico and Atlantic Ocean. With this have come a variety of deleterious effects upon the Florida Everglades.

Descriptive scenario

By 1993, the Everglades had been affected by numerous human developments. The water flow and quality had been degraded by everything from the construction of canals and levees, to the series of elevated highways running through the Everglades, to the expansive Everglades Agricultural Area, which had contaminated the Everglades with high amounts of nitrogen. The result of this reduced flow of water was dramatic. There was a 90–95% reduction in wading bird populations, declining fish populations and salt water intrusion into the ecosystem. If the Florida Everglades were to remain a US landmark, action needed to be taken.

Normative scenario

It was in 1993 that the Army Corps of Engineers analyzed the system. They determined that an ideal situation would be to "get the water right". In doing so there would be a better flow through the Everglades and a reduced number of canals and levees sending water to tide.

Transitive scenario

It was from the development of the Normative Scenario that the Army Corps of Engineers developed CERP, the Comprehensive Everglades Restoration Plan. In the plan they created a timeline of projects to be completed, the estimated cost, and the ultimate results of improving the ecosystem by having native flora and fauna prosper. They also outline the human benefits of the project. Not only will the solution be sustainable, as future generations will be able to enjoy the Everglades, but the correction of the water flow, together with the creation of storage facilities, will reduce the occurrence of droughts and water shortages in southern Florida.

See also
Design review
Environmental management
Industrial ecology
Sustainability
Systems engineering

Publications
Allenby, B. R. (2000). Earth systems engineering: the world as human artifact. Bridge 30 (1), 5–13.
Allenby, B. R. (2005). Reconstructing earth: Technology and environment in the age of humans. Washington, DC: Island Press. From https://www.loc.gov/catdir/toc/ecip059/2005006241.html
Allenby, B. R. (2000, Winter). Earth systems engineering and management. IEEE Technology and Society Magazine, 0278-0079(Winter) 10-24.
Davis, Steven, et al. Everglades: The Ecosystem and Its Restoration.
Boca Raton: St Lucie Press, 1997. "Everglades." Comprehensive Everglades Restoration Plan. 10 April 2004. https://web.archive.org/web/20051214102114/http://www.evergladesplan.org/ Gibson, J. E. (1991). How to do A systems analysis and systems analyst decalog. In W. T. Scherer (Ed.), (Fall 2003 ed.) (pp. 29–238). Department of Systems and Information Engineering: U of Virginia. Retrieved October 29, 2005, Gorman, Michael. (2004). Syllabus Spring Semester 2004. Retrieved October 29, 2005 from https://web.archive.org/web/20110716231016/http://repo-nt.tcc.virginia.edu/classes/ESEM/syllabus.html Hall, J.W. and O'Connell, P.E. (2007). Earth Systems Engineering: turning vision into action. Civil Engineering, 160(3): 114-122. Newton, L. H. (2003). Ethics and sustainability: Sustainable development and the moral life. Upper Saddle River, N.J.: Prentice Hall. References External links Class Taught Spring 2004 at The University of Virginia on ESEM UVA article on Spring 2004 course Class Taught January 2007 at the University of Virginia on ESEM Allenby Article on ESEM Centre for Earth Systems Engineering Research @ Newcastle University Environmental engineering Industrial ecology Systems ecology Systems engineering Engineering and management
Earth systems engineering and management
[ "Chemistry", "Engineering", "Environmental_science" ]
2,356
[ "Systems engineering", "Systems ecology", "Chemical engineering", "Industrial engineering", "Civil engineering", "Environmental engineering", "Industrial ecology", "Environmental social science" ]
28,957
https://en.wikipedia.org/wiki/Southern%20blot
Southern blot is a method used for detection and quantification of a specific DNA sequence in DNA samples. This method is used in molecular biology. Briefly, purified DNA from a biological sample (such as blood or tissue) is digested with restriction enzymes, and the resulting DNA fragments are separated by electrophoresis, using an electric current to move them through a sieve-like gel or matrix, which allows smaller fragments to move faster than larger fragments. The DNA fragments are transferred out of the gel or matrix onto a solid membrane, which is then exposed to a DNA probe labeled with a radioactive, fluorescent, or chemical tag. The tag allows any DNA fragments containing sequences complementary to the DNA probe sequence to be visualized within the Southern blot.

Southern blotting thus combines the transfer of electrophoresis-separated DNA fragments to a filter membrane, in a process called blotting, with the subsequent fragment detection by probe hybridization. The method is named after the British biologist Edwin Southern, who first published it in 1975. Other blotting methods (i.e., western blot, northern blot, eastern blot, southwestern blot) that employ similar principles, but use RNA or protein, have later been named for compass directions as a sort of pun on Southern's name. As the label is eponymous, Southern is capitalized, as is conventional for proper nouns. The names for other blotting methods may follow this convention, by analogy.

History

Southern invented the Southern blot by combining three innovations. The first is the restriction endonucleases, which were developed at Johns Hopkins University by Tom Kelly and Hamilton Smith and are used to cut DNA at a specific sequence; Kenneth and Noreen Murray introduced this technique to Southern. The second innovation is gel electrophoresis, the separation of mixtures of DNA, RNA, or proteins according to molecular size, which was also developed at Johns Hopkins University, by Daniel Nathans and Kathleen Danna in 1971. The third innovation is the blotting-through method, which was developed by Frederick Sanger when he transferred RNA molecules to DEAE paper. The Southern blot was invented in 1973, but it was not published until 1975. Although it was published later, the technique had already been disseminated when Southern introduced it to a scientist at Cold Spring Harbor Laboratory called Michael Mathews by drawing the technique on paper.

Method

The genomic DNA is digested with one or more restriction enzymes, then the DNA fragments are size-fractionated by gel electrophoresis. Before the DNA fragments are transferred to a solid membrane, which is either a nylon or a nitrocellulose membrane, they are first denatured by alkaline treatment. After the DNA fragments are immobilized on the membrane, prehybridization methods are used to reduce non-specific probe binding. Then the fragments on the membrane are hybridized with either radiolabeled or nonradioactively labeled DNA, RNA, or oligonucleotide probes that are complementary to the target DNA sequence. Then detection methods are used to visualize the target DNA.

DNA isolation: The DNA to be studied is isolated from various tissues. Blood is generally considered the most suitable source of DNA, but DNA can also be isolated from other tissues (hair, semen, saliva, etc.).

DNA digestion: Restriction endonucleases are used to cut high-molecular-weight DNA strands into smaller fragments.
This is done by combining the desired amount of DNA (which can be varied according to the probe used and the complexity of the DNA) with the restriction enzyme, enzyme buffer, and purified water. Then everything is incubated at 37 °C overnight.

Gel electrophoresis: The DNA fragments are then electrophoresed on an agarose gel to separate them by size. If some of the DNA fragments are larger than 15 kb, then before blotting, the gel may be treated with an acid, such as dilute HCl. This depurinates the DNA fragments, breaking the DNA into smaller pieces, thereby allowing more efficient transfer from the gel to the membrane.

Denaturation: If alkaline transfer methods are used, the DNA gel is placed into an alkaline solution (typically containing sodium hydroxide) to denature the double-stranded DNA. Denaturation in an alkaline environment may improve binding of the negatively charged thymine residues of DNA to the positively charged amino groups of the membrane, separating it into single DNA strands for later hybridization to the probe (see below), and destroys any residual RNA that may still be present in the DNA. The choice of alkaline over neutral transfer methods, however, is often empirical and may result in equivalent results.

Blotting: A sheet of nitrocellulose (or, alternatively, nylon) membrane is placed on top of (or below, depending on the direction of the transfer) the gel. Pressure is applied constantly to the gel (either using suction, or by placing a stack of paper towels and a weight on top of the membrane and gel), to ensure good and even contact between gel and membrane. If transferring by suction, 20X SSC buffer is used to ensure a seal and prevent drying of the gel. Buffer transfer by capillary action, from a region of high water potential to a region of low water potential (usually filter paper and paper tissues), is then used to move the DNA from the gel onto the membrane; ion-exchange interactions bind the DNA to the membrane due to the negative charge of the DNA and the positive charge of the membrane. Five methods can be used to transfer DNA fragments to the solid membrane:

Upward capillary transfer: the DNA fragments are carried upward from the gel to the membrane by an upward flow of the buffer.
Downward capillary transfer: the gel is placed on the surface of the membrane (usually a charged nylon membrane) and the DNA fragments are transferred in a downward direction with the flow of the alkaline buffer.
Simultaneous transfer to two membranes: used to transfer DNA fragments of high concentration simultaneously from the gel to two membranes.
Electrophoretic transfer: this method usually uses a large electric current, which makes efficient transfer difficult because the buffer heats up, so the apparatus is either equipped with cooling or operated in a cold room.
Vacuum transfer: buffer from the upper chamber transfers the DNA from the gel to the nitrocellulose or nylon membrane; the gel is placed directly on the membrane, and the membrane is placed on a porous screen on the vacuum chamber.

Immobilization: The membrane is then baked in a vacuum or regular oven at 80 °C for 2 hours (standard conditions; nitrocellulose or nylon membrane) or exposed to ultraviolet radiation (nylon membrane) to permanently attach the transferred DNA to the membrane.
Hybridization: After that, a hybridization probe—a single DNA fragment with a particular sequence whose presence in the target DNA is to be ascertained—is exposed to the membrane. The probe DNA is labelled so that it can be detected, usually by incorporating radioactivity or tagging the molecule with a fluorescent or chromogenic dye. In some cases, the hybridization probe may be made from RNA, rather than DNA. To ensure the specificity of the binding of the probe to the sample DNA, most common hybridization methods use salmon or herring sperm DNA for blocking of the membrane surface and target DNA, deionized formamide, and detergents such as SDS to reduce non-specific binding of the probe.

Detection: After hybridization, excess probe is washed from the membrane (typically using SSC buffer), and the pattern of hybridization is visualized on X-ray film by autoradiography in the case of a radioactive or fluorescent probe, or by development of color on the membrane if a chromogenic detection method is used.

Interpretation of results

Hybridization of the probe to a specific DNA fragment on the filter membrane indicates that this fragment contains a DNA sequence that is complementary to the probe. The transfer step of the DNA from the electrophoresis gel to a membrane permits easy binding of the labeled hybridization probe to the size-fractionated DNA. It also allows for the fixation of the target-probe hybrids, required for analysis by autoradiography or other detection methods.

Southern blots performed with restriction enzyme-digested genomic DNA may be used to determine the number of sequences (e.g., gene copies) in a genome. A probe that hybridizes only to a single DNA segment that has not been cut by the restriction enzyme will produce a single band on a Southern blot, whereas multiple bands will likely be observed when the probe hybridizes to several highly similar sequences (e.g., those that may be the result of sequence duplication). To improve specificity and reduce hybridization of the probe to sequences that are less than 100% identical, the hybridization parameters may be changed (for instance, by raising the hybridization temperature or lowering the salt content).

Nylon membrane is more durable and has a higher binding capacity for DNA fragments than nitrocellulose membrane, so the DNA fragments remain firmly fixed to the membrane even when the membrane is incubated at high temperatures. In addition, whereas nitrocellulose membrane requires a high-ionic-strength buffer to bind the DNA fragments, charged nylon membranes can use buffers of very low ionic strength to transfer even small DNA fragments of about 50 bp to the membrane; in this case the DNA to be transferred is usually separated on a polyacrylamide gel. In the blotting step, the most efficient method for transferring the DNA from the gel to the membrane is vacuum transfer, since it transfers the DNA more rapidly and quantitatively.

Applications

Southern blotting may be used for homology-based cloning based on the amino acid sequence of the protein product of the target gene. Oligonucleotides are designed so that they are complementary to the target sequence. The oligonucleotides are chemically synthesized, radiolabeled, and used to screen a DNA library, or other collections of cloned DNA fragments. Sequences that hybridize with the hybridization probe are further analyzed, for example, to obtain the full-length sequence of the targeted gene.
Normal chromosomal or gene rearrangement can be studied using this technique. It can also be used to find similar sequences in other species, or elsewhere in the genome, by decreasing the stringency of hybridization. In a mixture having different sizes of digested DNA, it is used to identify the restriction fragment of a specific size. It is useful in identifying changes that occur in genes, including insertions, rearrangements, deletions, and point mutations that affect restriction sites. Moreover, it is used in restriction mapping, in which many different restriction enzymes are used to characterize a specific region. It is also used to determine whether a single nucleotide polymorphism has altered a specific restriction enzyme's recognition site.

Southern blotting can also be used to identify methylated sites in particular genes. Particularly useful are the restriction nucleases MspI and HpaII, both of which recognize and cleave within the same sequence. However, HpaII cleaves only when the site is unmethylated, whereas MspI cleaves the sequence whether or not its internal cytosine is methylated. Therefore, any methylated sites within a sequence analyzed with a particular probe will be cleaved by the latter, but not the former, enzyme.

It can be used in personal identification through DNA fingerprinting, and in disease diagnosis.

Limitations

Compared to other tests, Southern blot is a complex technique with multiple steps, and these steps require equipment and reagents that are expensive. High-quality DNA is needed in large amounts. Southern blotting is a time-consuming method and, being only semi-quantitative, can only estimate the size of the DNA. It cannot be used to detect mutations at the base-pair level.

See also
Gel electrophoresis of nucleic acids
Restriction fragment
Genetic fingerprint
Northern blot
Western blot
Eastern blot
Southwestern blot
Northwestern blot

References

External links
OpenWetWare

Molecular biology techniques Eponyms in biology
Southern blot
[ "Chemistry", "Biology" ]
2,551
[ "Molecular biology techniques", "Molecular biology" ]
29,040
https://en.wikipedia.org/wiki/Generalized%20Stokes%20theorem
In vector calculus and differential geometry the generalized Stokes theorem (sometimes with apostrophe as Stokes' theorem or Stokes's theorem), also called the Stokes–Cartan theorem, is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. In particular, the fundamental theorem of calculus is the special case where the manifold is a line segment, Green's theorem and Stokes' theorem are the cases of a surface in $\mathbb{R}^2$ or $\mathbb{R}^3$, and the divergence theorem is the case of a volume in $\mathbb{R}^3$. Hence, the theorem is sometimes referred to as the fundamental theorem of multivariate calculus.

Stokes' theorem says that the integral of a differential form $\omega$ over the boundary $\partial\Omega$ of some orientable manifold $\Omega$ is equal to the integral of its exterior derivative $d\omega$ over the whole of $\Omega$, i.e.,

$$\int_{\partial\Omega} \omega = \int_{\Omega} d\omega.$$

Stokes' theorem was formulated in its modern form by Élie Cartan in 1945, following earlier work on the generalization of the theorems of vector calculus by Vito Volterra, Édouard Goursat, and Henri Poincaré. This modern form of Stokes' theorem is a vast generalization of a classical result that Lord Kelvin communicated to George Stokes in a letter dated July 2, 1850. Stokes set the theorem as a question on the 1854 Smith's Prize exam, which led to the result bearing his name. It was first published by Hermann Hankel in 1861. This classical case relates the surface integral of the curl of a vector field $\mathbf{F}$ over a surface in Euclidean three-space (that is, the flux of $\operatorname{curl}\mathbf{F}$) to the line integral of the vector field over the surface boundary.

Introduction

The second fundamental theorem of calculus states that the integral of a function $f$ over the interval $[a, b]$ can be calculated by finding an antiderivative $F$ of $f$:

$$\int_a^b f(x)\,dx = F(b) - F(a).$$

Stokes' theorem is a vast generalization of this theorem in the following sense. By the choice of $F$, $\frac{dF}{dx} = f(x)$. In the parlance of differential forms, this is saying that $f(x)\,dx$ is the exterior derivative of the 0-form, i.e. function, $F$: in other words, that $dF = f\,dx$. The general Stokes theorem applies to higher degree differential forms $\omega$ instead of just 0-forms such as $F$.

A closed interval $[a, b]$ is a simple example of a one-dimensional manifold with boundary. Its boundary is the set consisting of the two points $a$ and $b$. Integrating $f$ over the interval may be generalized to integrating forms on a higher-dimensional manifold. Two technical conditions are needed: the manifold has to be orientable, and the form has to be compactly supported in order to give a well-defined integral.

The two points $a$ and $b$ form the boundary of the closed interval. More generally, Stokes' theorem applies to oriented manifolds $M$ with boundary. The boundary $\partial M$ of $M$ is itself a manifold and inherits a natural orientation from that of $M$. For example, the natural orientation of the interval gives an orientation of the two boundary points. Intuitively, $a$ inherits the opposite orientation as $b$, as they are at opposite ends of the interval. So, "integrating" $F$ over two boundary points $a$, $b$ is taking the difference $F(b) - F(a)$.

In even simpler terms, one can consider the points as boundaries of curves, that is as 0-dimensional boundaries of 1-dimensional manifolds. So, just as one can find the value of an integral ($f\,dx = dF$) over a 1-dimensional manifold ($[a, b]$) by considering the antiderivative ($F$) at the 0-dimensional boundaries ($\{a, b\}$), one can generalize the fundamental theorem of calculus, with a few additional caveats, to deal with the value of integrals ($d\omega$) over $n$-dimensional manifolds ($\Omega$) by considering the antiderivative ($\omega$) at the $(n-1)$-dimensional boundaries ($\partial\Omega$) of the manifold. So the fundamental theorem reads:

$$\int_{[a, b]} f(x)\,dx = \int_{[a, b]} dF = \int_{\partial[a, b]} F = F(b) - F(a).$$

Formulation for smooth manifolds with boundary

Let $\Omega$ be an oriented smooth manifold of dimension $n$ with boundary and let $\alpha$ be a smooth $n$-differential form that is compactly supported on $\Omega$. First, suppose that $\alpha$ is compactly supported in the domain of a single, oriented coordinate chart $\{U, \varphi\}$. In this case, we define the integral of $\alpha$ over $\Omega$ as

$$\int_\Omega \alpha = \int_{\varphi(U)} (\varphi^{-1})^{*}\alpha,$$

i.e., via the pullback of $\alpha$ to $\mathbb{R}^n$. More generally, the integral of $\alpha$ over $\Omega$ is defined as follows: Let $\{\psi_i\}$ be a partition of unity associated with a locally finite cover $\{U_i, \varphi_i\}$ of (consistently oriented) coordinate charts, then define the integral

$$\int_\Omega \alpha \equiv \sum_i \int_{U_i} \psi_i\,\alpha,$$

where each term in the sum is evaluated by pulling back to $\mathbb{R}^n$ as described above. This quantity is well-defined; that is, it does not depend on the choice of the coordinate charts, nor the partition of unity.

The generalized Stokes theorem reads: if $\omega$ is a smooth $(n-1)$-form with compact support on an oriented, $n$-dimensional smooth manifold-with-boundary $\Omega$, where $\partial\Omega$ is given the induced orientation, then

$$\int_\Omega d\omega = \int_{\partial\Omega} \omega.$$

Here $d$ is the exterior derivative, which is defined using the manifold structure only. The right-hand side is sometimes written as $\oint_{\partial\Omega} \omega$ to stress the fact that the $(n-1)$-manifold $\partial\Omega$ has no boundary. (This fact is also an implication of Stokes' theorem, since for a given smooth $n$-dimensional manifold $\Omega$, application of the theorem twice gives $\int_{\partial(\partial\Omega)} \omega = \int_\Omega d(d\omega) = 0$ for any $(n-2)$-form $\omega$, which implies that $\partial(\partial\Omega) = \emptyset$.) The right-hand side of the equation is often used to formulate integral laws; the left-hand side then leads to equivalent differential formulations (see below).

The theorem is often used in situations where $\Omega$ is an embedded oriented submanifold of some bigger manifold, often $\mathbb{R}^k$, on which the form $\omega$ is defined.

Topological preliminaries; integration over chains

Let $M$ be a smooth manifold. A (smooth) singular $k$-simplex in $M$ is defined as a smooth map from the standard simplex in $\mathbb{R}^k$ to $M$. The group $C_k(M, \mathbb{Z})$ of singular $k$-chains on $M$ is defined to be the free abelian group on the set of singular $k$-simplices in $M$. These groups, together with the boundary map, $\partial$, define a chain complex. The corresponding homology (resp. cohomology) group is isomorphic to the usual singular homology group $H_k(M, \mathbb{Z})$ (resp. the singular cohomology group $H^k(M, \mathbb{Z})$), defined using continuous rather than smooth simplices in $M$.

On the other hand, the differential forms, with exterior derivative, $d$, as the connecting map, form a cochain complex, which defines the de Rham cohomology groups $H^k_{\mathrm{dR}}(M, \mathbb{R})$.

Differential $k$-forms can be integrated over a $k$-simplex in a natural way, by pulling back to $\mathbb{R}^k$. Extending by linearity allows one to integrate over chains. This gives a linear map from the space of $k$-forms to the $k$th group of singular cochains, $C^k(M, \mathbb{Z})$, the linear functionals on $C_k(M, \mathbb{Z})$. In other words, a $k$-form $\omega$ defines a functional

$$I(\omega)(c) = \oint_c \omega$$

on the $k$-chains. Stokes' theorem says that this is a chain map from de Rham cohomology to singular cohomology with real coefficients; the exterior derivative, $d$, behaves like the dual of $\partial$ on forms. This gives a homomorphism from de Rham cohomology to singular cohomology. On the level of forms, this means:

1. closed forms, i.e., $d\omega = 0$, have zero integral over boundaries, i.e. over manifolds that can be written as $\partial \sum_c M_c$, and
2. exact forms, i.e., $\omega = d\sigma$, have zero integral over cycles, i.e. if the boundaries sum up to the empty set: $\sum_c \partial M_c = \emptyset$.

De Rham's theorem shows that this homomorphism is in fact an isomorphism. So the converses to 1 and 2 above hold true. In other words, if $\{c_i\}$ are cycles generating the $k$th homology group, then for any corresponding real numbers, $\{a_i\}$, there exists a closed form, $\omega$, such that

$$\oint_{c_i} \omega = a_i,$$

and this form is unique up to exact forms.

Stokes' theorem on smooth manifolds can be derived from Stokes' theorem for chains in smooth manifolds, and vice versa. Formally stated, the latter reads: if $c$ is a smooth $k$-chain in a smooth manifold $M$, and $\omega$ is a smooth $(k-1)$-form on $M$, then

$$\int_{\partial c} \omega = \int_c d\omega.$$

Underlying principle

To simplify these topological arguments, it is worthwhile to examine the underlying principle by considering an example for $d = 2$ dimensions. The essential idea is that, in an oriented tiling of a manifold, the interior paths are traversed in opposite directions; their contributions to the path integral thus cancel each other pairwise. As a consequence, only the contribution from the boundary remains. It thus suffices to prove Stokes' theorem for sufficiently fine tilings (or, equivalently, simplices), which usually is not difficult.

Classical vector analysis example

Let $\gamma: [a, b] \to \mathbb{R}^2$ be a piecewise smooth Jordan plane curve. The Jordan curve theorem implies that $\gamma$ divides $\mathbb{R}^2$ into two components, a compact one and another that is non-compact. Let $D$ denote the compact part that is bounded by $\gamma$ and suppose $\psi: D \to \mathbb{R}^3$ is smooth, with $S := \psi(D)$. If $\Gamma$ is the space curve defined by $\Gamma(t) = \psi(\gamma(t))$ and $\mathbf{F}$ is a smooth vector field on $\mathbb{R}^3$, then:

$$\oint_\Gamma \mathbf{F} \cdot d\mathbf{\Gamma} = \iint_S \left(\nabla \times \mathbf{F}\right) \cdot d\mathbf{S}.$$

This classical statement is a special case of the general formulation after making an identification of the vector field with a 1-form and its curl with a two-form through

$$\mathbf{F} = (F_1, F_2, F_3) \;\longleftrightarrow\; \omega_{\mathbf{F}} = F_1\,dx + F_2\,dy + F_3\,dz,$$
$$\nabla \times \mathbf{F} \;\longleftrightarrow\; d\omega_{\mathbf{F}} = \left(\tfrac{\partial F_3}{\partial y} - \tfrac{\partial F_2}{\partial z}\right) dy \wedge dz + \left(\tfrac{\partial F_1}{\partial z} - \tfrac{\partial F_3}{\partial x}\right) dz \wedge dx + \left(\tfrac{\partial F_2}{\partial x} - \tfrac{\partial F_1}{\partial y}\right) dx \wedge dy.$$

Generalization to rough sets

The formulation above, in which $\Omega$ is a smooth manifold with boundary, does not suffice in many applications. For example, if the domain of integration is defined as the plane region between two $x$-coordinates and the graphs of two functions, it will often happen that the domain has corners. In such a case, the corner points mean that $\Omega$ is not a smooth manifold with boundary, and so the statement of Stokes' theorem given above does not apply. Nevertheless, it is possible to check that the conclusion of Stokes' theorem is still true. This is because $\Omega$ and its boundary are well-behaved away from a small set of points (a measure zero set).

A version of Stokes' theorem that allows for roughness was proved by Whitney. Assume that $D$ is a connected bounded open subset of $\mathbb{R}^n$. Call $D$ a standard domain if it satisfies the following property: there exists a subset $P$ of $\partial D$, open in $\partial D$, whose complement in $\partial D$ has Hausdorff $(n-1)$-measure zero; and such that every point of $P$ has a generalized normal vector. This is a vector $v(x)$ such that, if a coordinate system is chosen so that $v(x)$ is the first basis vector, then, in an open neighborhood around $x$, there exists a smooth function $f(x_2, \dots, x_n)$ such that $P$ is the graph $\{x_1 = f(x_2, \dots, x_n)\}$ and $D$ is the region $\{x_1 < f(x_2, \dots, x_n)\}$. Whitney remarks that the boundary of a standard domain is the union of a set of zero Hausdorff $(n-1)$-measure and a finite or countable union of smooth $(n-1)$-manifolds, each of which has the domain on only one side. He then proves that if $D$ is a standard domain in $\mathbb{R}^n$, $\omega$ is an $(n-1)$-form which is defined, continuous, and bounded on $D \cup P$, smooth on $D$, integrable on $P$, and such that $d\omega$ is integrable on $D$, then Stokes' theorem holds, that is,

$$\int_P \omega = \int_D d\omega.$$

The study of measure-theoretic properties of rough sets leads to geometric measure theory. Even more general versions of Stokes' theorem have been proved by Federer and by Harrison.

Special cases

The general form of the Stokes theorem using differential forms is more powerful and easier to use than the special cases. The traditional versions can be formulated using Cartesian coordinates without the machinery of differential geometry, and thus are more accessible. Further, they are older and their names are more familiar as a result. The traditional forms are often considered more convenient by practicing scientists and engineers, but the non-naturalness of the traditional formulation becomes apparent when using other coordinate systems, even familiar ones like spherical or cylindrical coordinates. There is potential for confusion in the way names are applied, and the use of dual formulations.

Classical (vector calculus) case

This is a (dualized) (1 + 1)-dimensional case, for a 1-form (dualized because it is a statement about vector fields). This special case is often just referred to as Stokes' theorem in many introductory university vector calculus courses and is used in physics and engineering. It is also sometimes known as the curl theorem. The classical Stokes' theorem relates the surface integral of the curl of a vector field over a surface $\Sigma$ in Euclidean three-space to the line integral of the vector field over its boundary:

$$\oint_{\partial\Sigma} \mathbf{F} \cdot d\mathbf{\Gamma} = \iint_\Sigma \left(\nabla \times \mathbf{F}\right) \cdot d\mathbf{\Sigma}.$$

It is a special case of the general Stokes theorem (with $n = 2$) once we identify a vector field with a 1-form using the metric on Euclidean 3-space. The curve of the line integral, $\partial\Sigma$, must have positive orientation, meaning that $\partial\Sigma$ points counterclockwise when the surface normal, $\mathbf{n}$, points toward the viewer. One consequence of this theorem is that the field lines of a vector field with zero curl cannot be closed contours. Writing $\mathbf{F} = (P, Q, R)$, the formula can be rewritten as:

$$\oint_{\partial\Sigma} \left(P\,dx + Q\,dy + R\,dz\right) = \iint_\Sigma \left[\left(\tfrac{\partial R}{\partial y} - \tfrac{\partial Q}{\partial z}\right) dy\,dz + \left(\tfrac{\partial P}{\partial z} - \tfrac{\partial R}{\partial x}\right) dz\,dx + \left(\tfrac{\partial Q}{\partial x} - \tfrac{\partial P}{\partial y}\right) dx\,dy\right].$$

Green's theorem

Green's theorem is immediately recognizable as the third integrand of both sides in the integral in terms of $P$, $Q$, and $R$ cited above.

In electromagnetism

Two of the four Maxwell equations involve curls of 3-D vector fields, and their differential and integral forms are related by the special 3-dimensional (vector calculus) case of Stokes' theorem. Caution must be taken to avoid cases with moving boundaries: the partial time derivatives are intended to exclude such cases. If moving boundaries are included, interchange of integration and differentiation introduces terms related to boundary motion not included in the results below (see Differentiation under the integral sign). In SI units, the Maxwell–Faraday equation and the Ampère–Maxwell law read, in differential and integral form,

$$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \quad\Longleftrightarrow\quad \oint_C \mathbf{E} \cdot d\mathbf{l} = -\int_S \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A},$$
$$\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t} \quad\Longleftrightarrow\quad \oint_C \mathbf{H} \cdot d\mathbf{l} = \int_S \left(\mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}\right) \cdot d\mathbf{A}.$$

The above listed subset of Maxwell's equations is valid for electromagnetic fields expressed in SI units. In other systems of units, such as CGS or Gaussian units, the scaling factors for the terms differ. For example, in Gaussian units, Faraday's law of induction and Ampère's law take the forms:

$$\nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{H} = \frac{1}{c}\frac{\partial \mathbf{D}}{\partial t} + \frac{4\pi}{c}\mathbf{J},$$

respectively, where $c$ is the speed of light in vacuum.

Divergence theorem

Likewise, the divergence theorem

$$\int_{\text{Vol}} \nabla \cdot \mathbf{F}\, d_{\text{Vol}} = \oint_{\partial\text{Vol}} \mathbf{F} \cdot d\boldsymbol{\Sigma}$$

is a special case if we identify a vector field with the $(n-1)$-form obtained by contracting the vector field with the Euclidean volume form. An application of this is the case $\mathbf{F} = f\,\mathbf{c}$, where $\mathbf{c}$ is an arbitrary constant vector. Working out the divergence of the product gives

$$\nabla \cdot (f\,\mathbf{c}) = \mathbf{c} \cdot \nabla f.$$

Since this holds for all $\mathbf{c}$ we find

$$\int_{\text{Vol}} \nabla f\, d_{\text{Vol}} = \oint_{\partial\text{Vol}} f\, d\boldsymbol{\Sigma}.$$

Volume integral of gradient of scalar field

Let $f: \Omega \to \mathbb{R}$ be a scalar field. Then

$$\int_\Omega \nabla f\, dV = \oint_{\partial\Omega} \mathbf{n}\, f\, dS,$$

where $\mathbf{n}$ is the normal vector to the surface $\partial\Omega$ at a given point.

Proof: Let $\mathbf{c}$ be a vector. Then

$$\mathbf{c} \cdot \int_\Omega \nabla f\, dV = \int_\Omega \nabla \cdot (f\,\mathbf{c})\, dV = \oint_{\partial\Omega} f\,\mathbf{c} \cdot \mathbf{n}\, dS = \mathbf{c} \cdot \oint_{\partial\Omega} f\,\mathbf{n}\, dS.$$

Since this holds for any $\mathbf{c}$ (in particular, for every basis vector), the result follows.

See also
Chandrasekhar–Wentzel lemma

Footnotes

References

Further reading

External links
Proof of the Divergence Theorem and Stokes' Theorem
Calculus 3 – Stokes Theorem from lamar.edu – an expository explanation

Differential topology Differential forms Duality theories Integration on manifolds Theorems in calculus Theorems in differential geometry
Generalized Stokes theorem
[ "Mathematics", "Engineering" ]
2,824
[ "Theorems in differential geometry", "Theorems in mathematical analysis", "Mathematical structures", "Tensors", "Theorems in calculus", "Calculus", "Differential forms", "Topology", "Category theory", "Duality theories", "Geometry", "Theorems in geometry", "Differential topology" ]
29,087
https://en.wikipedia.org/wiki/Security%20through%20obscurity
In security engineering, security through obscurity is the practice of concealing the details or mechanisms of a system to enhance its security. This approach relies on the principle of hiding something in plain sight, akin to a magician's sleight of hand or the use of camouflage. It diverges from traditional security methods, such as physical locks, and is more about obscuring information or characteristics to deter potential threats. Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number. While not a standalone solution, security through obscurity can complement other security measures in certain scenarios. Obscurity in the context of security engineering is the notion that information can be protected, to a certain extent, when it is difficult to access or comprehend. This concept hinges on the principle of making the details or workings of a system less visible or understandable, thereby reducing the likelihood of unauthorized access or manipulation. Security by obscurity alone is discouraged and not recommended by standards bodies. History An early opponent of security through obscurity was the locksmith Alfred Charles Hobbs, who in 1851 demonstrated to the public how state-of-the-art locks could be picked. In response to concerns that exposing security flaws in the design of locks could make them more vulnerable to criminals, he said: "Rogues are very keen in their profession, and know already much more than we can teach them." There is scant formal literature on the issue of security through obscurity. Books on security engineering cite Kerckhoffs' doctrine from 1883 if they cite anything at all. For example, in a discussion about secrecy and openness in nuclear command and control: [T]he benefits of reducing the likelihood of an accidental war were considered to outweigh the possible benefits of secrecy. This is a modern reincarnation of Kerckhoffs' doctrine, first put forward in the nineteenth century, that the security of a system should depend on its key, not on its design remaining obscure. Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships", as well as on how competition affects the incentives to disclose. There are conflicting stories about the origin of this term. Fans of MIT's Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more an issue than on ITS. Within the ITS culture, the term referred, self-mockingly, to the poor coverage of the documentation and obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community. One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. Typing Alt Alt Control-D set a flag that would prevent patching the system even if the user later got it right. In January 2020, NPR reported that Democratic Party officials in Iowa declined to share information regarding the security of its caucus app, to "make sure we are not relaying information that could be used against us." 
Cybersecurity experts replied that "to withhold the technical details of its app doesn't do much to protect the system." Criticism Security by obscurity alone is discouraged and not recommended by standards bodies. The National Institute of Standards and Technology (NIST) in the United States recommends against this practice: "System security should not depend on the secrecy of the implementation or its components." The Common Weakness Enumeration project lists "Reliance on Security Through Obscurity" as CWE-656. A large number of telecommunication and digital rights management cryptosystems use security through obscurity, but have ultimately been broken. These include components of GSM, GMR encryption, GPRS encryption, a number of RFID encryption schemes, and most recently Terrestrial Trunked Radio (TETRA). One of the largest proponents of security through obscurity commonly seen today is anti-malware software. What typically occurs with this single point of failure, however, is an arms race of attackers finding novel ways to avoid detection and defenders coming up with increasingly contrived but secret signatures to flag on. The technique stands in contrast with security by design and open security, although many real-world projects include elements of all strategies. Obscurity in architecture vs. technique Knowledge of how the system is built differs from concealment and camouflage. The effectiveness of obscurity in operations security depends on whether the obscurity lives on top of other good security practices, or if it is being used alone. When used as an independent layer, obscurity is considered a valid security tool. In recent years, more advanced versions of "security through obscurity" have gained support as a methodology in cybersecurity through Moving Target Defense and cyber deception. NIST's cyber resiliency framework, 800-160 Volume 2, recommends the usage of security through obscurity as a complementary part of a resilient and secure computing environment. See also Steganography Code morphing Need to know Obfuscation (software) Secure by design AACS encryption key controversy Full disclosure (computer security) Code talker Obfuscation Concealment device References External links Eric Raymond on Cisco's IOS source code 'release' v Open Source Computer Security Publications: Information Economics, Shifting Liability and the First Amendment by Ethan M. Preston and John Lofton by Jay Beale Secrecy, Security and Obscurity & The Non-Security of Secrecy by Bruce Schneier "Security through obsolescence", Robin Miller, linux.com, June 6, 2002 Computer security procedures Cryptography Secrecy
Security through obscurity
[ "Mathematics", "Engineering" ]
1,230
[ "Systems engineering", "Cybersecurity engineering", "Cryptography", "Security engineering", "Applied mathematics", "Computer security procedures" ]
29,109
https://en.wikipedia.org/wiki/Semantic%20network
A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples. Semantic networks are used in neurolinguistics and natural language processing applications such as semantic parsing and word-sense disambiguation. Semantic networks can also be used as a method to analyze large texts and identify the main themes and topics (e.g., of social media posts), to reveal biases (e.g., in news coverage), or even to map an entire research field. History The use of semantic networks in logic, and of directed acyclic graphs as a mnemonic tool, dates back centuries. The earliest documented use is the Greek philosopher Porphyry's commentary on Aristotle's categories in the third century AD. In computing history, "Semantic Nets" for the propositional calculus were first implemented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an "interlingua" for machine translation of natural languages, although the importance of this work and of the Cambridge Language Research Unit was only belatedly realized. Semantic networks were also independently implemented by Robert F. Simmons and Sheldon Klein, using the first-order predicate calculus as a base, after being inspired by a demonstration of Victor Yngve. The "line of research was originated by the first President of the Association for Computational Linguistics, Victor Yngve, who in 1960 had published descriptions of algorithms for using a phrase structure grammar to generate syntactically well-formed nonsense sentences. Sheldon Klein and I about 1962–1964 were fascinated by the technique and generalized it to a method for controlling the sense of what was generated by respecting the semantic dependencies of words as they occurred in text." Other researchers, most notably M. Ross Quillian and others at System Development Corporation, contributed to this work in the early 1960s as part of the SYNTHEX project. It is these publications at System Development Corporation that most modern derivatives of the term "semantic network" cite as their background. Later prominent work was done by Allan M. Collins and Quillian (e.g., Collins and Quillian; Collins and Loftus). Still later, in 2006, Hermann Helbig fully described MultiNet. In the late 1980s, two universities in the Netherlands, Groningen and Twente, jointly began a project called Knowledge Graphs, which are semantic networks but with the added constraint that edges are restricted to a limited set of possible relations, to facilitate algebras on the graph. In the subsequent decades, the distinction between semantic networks and knowledge graphs was blurred. In 2012, Google gave their knowledge graph the name Knowledge Graph. The semantic link network has been systematically studied as a semantic social networking method. Its basic model consists of semantic nodes, semantic links between nodes, and a semantic space that defines the semantics of nodes and links and the reasoning rules on semantic links. The systematic theory and model was published in 2004.
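The triple-based graph view described above can be made concrete in a few lines of code. The following minimal sketch is in Python rather than any particular graph database; the concepts and relations (mirroring the Lisp example later in this article) are illustrative assumptions, not a standard vocabulary.

from collections import defaultdict

# A semantic network as subject-predicate-object triples: vertices are
# concepts, and each typed edge is one semantic relation between two concepts.
triples = {
    ("canary", "is-a", "bird"),
    ("penguin", "is-a", "bird"),
    ("bird", "is-a", "vertebrate"),
    ("bird", "has-part", "wings"),
}

# Index edges by source vertex so the network can be walked as a directed graph.
out_edges = defaultdict(list)
for subj, pred, obj in triples:
    out_edges[subj].append((pred, obj))

def ancestors(concept):
    """Follow 'is-a' edges upward, collecting every superordinate concept.
    Assumes the 'is-a' hierarchy is acyclic, as taxonomies normally are."""
    found = []
    for pred, obj in out_edges[concept]:
        if pred == "is-a":
            found.append(obj)
            found.extend(ancestors(obj))
    return found

print(ancestors("canary"))  # ['bird', 'vertebrate']

Because relations are explicit, typed edges rather than fixed fields of a record, the same traversal works for any relation type, which is what distinguishes a semantic network from a flat attribute table.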
The semantic link network research direction can be traced to the definition of inheritance rules for efficient model retrieval in 1998 and to the Active Document Framework (ADF). Since 2003, research has developed toward social semantic networking. This work is a systematic innovation in the age of the World Wide Web and global social networking, rather than an application or simple extension of the Semantic Net (Network). Its purpose and scope are different from those of the Semantic Net (or network). The rules for reasoning and evolution and the automatic discovery of implicit links play an important role in the Semantic Link Network. More recently it has been developed to support Cyber-Physical-Social Intelligence. It was used for creating a general summarization method. The self-organised Semantic Link Network was integrated with a multi-dimensional category space to form a semantic space that supports advanced applications with multi-dimensional abstractions and self-organised semantic links. It has been verified that the Semantic Link Network plays an important role in understanding and representation through text summarisation applications. The Semantic Link Network has been extended from cyberspace to cyber-physical-social space. Competition and symbiosis relations, as well as their roles in an evolving society, were studied in the emerging topic of Cyber-Physical-Social Intelligence. More specialized forms of semantic networks have been created for specific uses. For example, in 2008, Fawsy Bendeck's PhD thesis formalized the Semantic Similarity Network (SSN), which contains specialized relationships and propagation algorithms to simplify the representation and calculation of semantic similarity. Basics of semantic networks A semantic network is used when one has knowledge that is best understood as a set of concepts that are related to one another. Most semantic networks are cognitively based. They consist of arcs (spokes) and nodes (hubs), which can be organized into a taxonomic hierarchy. Different semantic networks can also be connected by bridge nodes. Semantic networks contributed to the ideas of spreading activation, inheritance, and nodes as proto-objects. One process of constructing semantic networks, in this case known as co-occurrence networks, includes identifying keywords in the text, calculating the frequencies of co-occurrences, and analyzing the networks to find central words and clusters of themes in the network (a small code sketch of this construction appears below). In linguistics In the field of linguistics, semantic networks represent how the human mind handles associated concepts. Typically, concepts in a semantic network can have one of two different relationships: either semantic or associative. If semantic in relation, the two concepts are linked by any of the following semantic relationships: synonymy, antonymy, hypernymy, hyponymy, holonymy, meronymy, metonymy, or polysemy. These are not the only semantic relationships, but they are some of the most common. If associative in relation, the two concepts are linked based on how frequently they occur together. These associations are accidental, meaning that nothing about their individual meanings requires them to be associated with one another, only that they typically are. Examples of this would be pig and farm, pig and trough, or pig and mud. While nothing about the meaning of pig forces it to be associated with farms, as pigs can be wild, the fact that pigs are so frequently found on farms creates an accidental associative relationship.
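The accidental associations just described are exactly what the co-occurrence construction mentioned above captures: words that frequently appear together in the same unit of text become linked, regardless of their individual meanings. A minimal Python sketch follows; the toy corpus, tokenization, and stopword list are invented for illustration and far cruder than anything used in practice.

from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "a", "and", "on", "in", "of", "like", "has", "live"}

def cooccurrence_network(sentences):
    """Count how often each pair of keywords appears in the same sentence."""
    edges = Counter()
    for sentence in sentences:
        words = {w.strip(".,").lower() for w in sentence.split()}
        keywords = sorted(words - STOPWORDS)
        edges.update(combinations(keywords, 2))  # every keyword pair co-occurs
    return edges

corpus = [
    "Pigs live on the farm.",
    "The farm has a trough and mud.",
    "Pigs like mud.",
]
print(cooccurrence_network(corpus).most_common(3))

Edge weights here are raw counts; finding central words and clusters of themes in the resulting network then proceeds with ordinary graph methods.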
Such thematic relationships are common within semantic networks and are notable results in free association tests. As the initial word is given, activation of the most closely related concepts begins, spreading outward to the less strongly associated concepts. An example of this would be the initial word pig prompting mammal, then animal, and then breathes. This example shows that taxonomic relationships are inherent within semantic networks. The most closely related concepts typically share semantic features, which are determinants of semantic similarity scores. Words with higher similarity scores are more closely related, and thus have a higher probability of being close words in the semantic network. These relationships can be strengthened in the brain through priming, where previous examples of the same relationship are shown before the target word is shown. The effect of priming on semantic network links can be seen in the speed of the reaction time to the word. Priming can help to reveal the structure of a semantic network and which words are most closely associated with the original word. Disruption of a semantic network can lead to a semantic deficit, which is not the same as semantic dementia. In the brain There are physical manifestations of semantic relationships in the brain as well. Category-specific semantic circuits show that words belonging to different categories are processed in circuits located in different places throughout the brain. For example, the semantic circuits for a word associated with the face or mouth (such as lick) are located in a different place in the brain than those for a word associated with the leg or foot (such as kick). This is a primary result of a 2013 study published by Friedemann Pulvermüller. These semantic circuits are directly tied to their sensorimotor areas of the brain. This is known as embodied semantics, a subtopic of embodied language processing. If brain damage occurs, the normal processing of semantic networks can be disrupted, leading to a preference for certain kinds of relationships dominating the semantic network in the mind. Examples In Lisp The following code shows an example of a semantic network in the Lisp programming language using an association list:

(setq *database*
      '((canary (is-a bird) (color yellow) (size small))
        (penguin (is-a bird) (movement swim))
        (bird (is-a vertebrate) (has-part wings) (reproduction egg-laying))))

To extract all the information about the "canary" type, one would use the assoc function with a key of canary:

(assoc 'canary *database*)
;; => (CANARY (IS-A BIRD) (COLOR YELLOW) (SIZE SMALL))

WordNet An example of a semantic network is WordNet, a lexical database of English. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets. Some of the most common semantic relations defined are meronymy (A is a meronym of B if A is part of B), holonymy (B is a holonym of A if B contains A), hyponymy (or troponymy) (A is subordinate of B; A is a kind of B), hypernymy (A is superordinate of B), synonymy (A denotes the same as B) and antonymy (A denotes the opposite of B). WordNet properties have been studied from a network theory perspective and compared to other semantic networks created from Roget's Thesaurus and word association tasks. From this perspective, all three exhibit a small-world structure. Other examples It is also possible to represent logical descriptions using semantic networks such as the existential graphs of Charles Sanders Peirce or the related conceptual graphs of John F. Sowa.
These have expressive power equal to or exceeding standard first-order predicate logic. Unlike WordNet or other lexical or browsing networks, semantic networks using these representations can be used for reliable automated logical deduction. Some automated reasoners exploit the graph-theoretic features of the networks during processing. Other examples of semantic networks are Gellish models. Gellish English, with its Gellish English dictionary, is a formal language that is defined as a network of relations between concepts and names of concepts. Gellish English is a formal subset of natural English, just as Gellish Dutch is a formal subset of Dutch, whereas multiple languages share the same concepts. Other Gellish networks consist of knowledge models and information models that are expressed in the Gellish language. A Gellish network is a network of (binary) relations between things. Each relation in the network is an expression of a fact that is classified by a relation type. Each relation type is itself a concept that is defined in the Gellish language dictionary. Each related thing is either a concept or an individual thing that is classified by a concept. The definitions of concepts are created in the form of definition models (definition networks) that together form a Gellish Dictionary. A Gellish network can be documented in a Gellish database and is computer interpretable. SciCrunch is a collaboratively edited knowledge base for scientific resources. It provides unambiguous identifiers (Research Resource IDentifiers, or RRIDs) for software, lab tools, etc., and it also provides options to create links between RRIDs and communities. Another example of semantic networks, based on category theory, is ologs. Here each type is an object, representing a set of things, and each arrow is a morphism, representing a function. Commutative diagrams are also prescribed to constrain the semantics. In the social sciences, people sometimes use the term semantic network to refer to co-occurrence networks. The basic idea is that words that co-occur in a unit of text, e.g. a sentence, are semantically related to one another. Ties based on co-occurrence can then be used to construct semantic networks. This process includes identifying keywords in the text, constructing co-occurrence networks, and analyzing the networks to find central words and clusters of themes in the network. It is a particularly useful method to analyze large texts and big data. Software tools There are also elaborate types of semantic networks connected with corresponding sets of software tools used for lexical knowledge engineering, like the Semantic Network Processing System (SNePS) of Stuart C. Shapiro or the MultiNet paradigm of Hermann Helbig, especially suited for the semantic representation of natural language expressions and used in several NLP applications. Semantic networks are used in specialized information retrieval tasks, such as plagiarism detection. They provide information on hierarchical relations in order to employ semantic compression to reduce language diversity and enable the system to match word meanings independently of the particular sets of words used. The Knowledge Graph proposed by Google in 2012 is an application of semantic networks in a search engine. Modeling multi-relational data like semantic networks in low-dimensional spaces through forms of embedding has benefits in expressing entity relationships as well as extracting relations from media such as text.
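As a rough illustration of the translation-based flavor of such embeddings (the intuition behind the TransE model mentioned next), each entity and relation is assigned a vector, and a candidate triple (head, relation, tail) is scored by how closely head + relation lands on tail. The Python sketch below uses random placeholder vectors, so its scores are meaningless until trained; the entity names are invented.

import numpy as np

rng = np.random.default_rng(0)
dim = 16
entities = {name: rng.normal(size=dim) for name in ("paris", "france", "tokyo", "japan")}
relations = {"capital_of": rng.normal(size=dim)}

def score(head, relation, tail):
    """Lower is better: distance between (head + relation) and tail."""
    return float(np.linalg.norm(entities[head] + relations[relation] - entities[tail]))

# With trained vectors, score("paris", "capital_of", "france") should be small
# and score("paris", "capital_of", "japan") large; with random ones, both are noise.
print(score("paris", "capital_of", "france"))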
There are many approaches to learning these embeddings, notably using Bayesian clustering frameworks or energy-based frameworks, and more recently, TransE (NeurIPS 2013). Applications of embedding knowledge base data include Social network analysis and Relationship extraction. See also Abstract semantic graph Chunking (psychology) CmapTools Concept map Formal semantics (natural language) Knowledge base Network diagram Ontology (information science) Repertory grid Semantic lexicon Semantic similarity network Semantic neural network SemEval – an ongoing series of evaluations of computational semantic analysis systems Sparse distributed memory Taxonomy (general) Unified Medical Language System (UMLS) Word-sense disambiguation (WSD) Resource Description Framework Other examples Cognition Network Technology Lexipedia OpenCog Open Mind Common Sense (OMCS) Schema.org Semantic computing SNOMED CT Universal Networking Language (UNL) Wikidata Freebase References Further reading Allen, J. and A. Frisch (1982). "What's in a Semantic Network". In: Proceedings of the 20th. annual meeting of ACL, Toronto, pp. 19–27. John F. Sowa, Alexander Borgida (1991). Principles of Semantic Networks: Explorations in the Representation of Knowledge. Segev, E. (Ed.) (2022). Semantic Network Analysis in Social Sciences. New York: Routledge. External links "Semantic Networks" by John F. Sowa "Semantic Link Network" by Hai Zhuge Knowledge representation Networks Semantic relations
Semantic network
[ "Mathematics" ]
3,025
[ "Applied mathematics", "Mathematical linguistics" ]
29,122
https://en.wikipedia.org/wiki/Signals%20intelligence
Signals intelligence (SIGINT) is the act and field of intelligence-gathering by interception of signals, whether communications between people (communications intelligence—abbreviated to COMINT) or from electronic signals not directly used in communication (electronic intelligence—abbreviated to ELINT). As classified and sensitive information is usually encrypted, signals intelligence may necessarily involve cryptanalysis (to decipher the messages). Traffic analysis—the study of who is signaling to whom and in what quantity—is also used to integrate information, and it may complement cryptanalysis. History Origins Electronic interceptions appeared as early as 1900, during the Boer War of 1899–1902. The British Royal Navy had installed wireless sets produced by Marconi on board their ships in the late 1890s, and the British Army used some limited wireless signalling. The Boers captured some wireless sets and used them to make vital transmissions. Since the British were the only people transmitting at the time, no special interpretation of the signals they intercepted was necessary. The birth of signals intelligence in a modern sense dates from the Russo-Japanese War of 1904–1905. As the Russian fleet prepared for conflict with Japan in 1904, the British ship HMS Diana, stationed in the Suez Canal, intercepted Russian naval wireless signals being sent out for the mobilization of the fleet, for the first time in history. Development in World War I Over the course of the First World War, a new method of signals intelligence reached maturity. Russia's failure to properly protect its communications fatally compromised the Russian Army's advance early in World War I and led to their disastrous defeat by the Germans under Ludendorff and Hindenburg at the Battle of Tannenberg. In 1918, French intercept personnel captured a message written in the new ADFGVX cipher, which was cryptanalyzed by Georges Painvin. This gave the Allies advance warning of the German 1918 Spring Offensive. The British, in particular, built up great expertise in the newly emerging field of signals intelligence and codebreaking (synonymous with cryptanalysis). On the declaration of war, Britain cut all German undersea cables. This forced the Germans to communicate exclusively via either a telegraph line that connected through the British network (and could therefore be tapped) or via radio, which the British could then intercept. Rear Admiral Henry Oliver appointed Sir Alfred Ewing to establish an interception and decryption service at the Admiralty: Room 40. An interception service known as the 'Y' service, together with the post office and Marconi stations, grew rapidly to the point where the British could intercept almost all official German messages. The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, and to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place, and a warning could be given. Detailed information about submarine movements was also available. The use of radio-receiving equipment to pinpoint the location of any single transmitter was also developed during the war. Captain H.J.
Round, working for Marconi, began carrying out experiments with direction-finding radio equipment for the army in France in 1915. By May 1915, the Admiralty was able to track German submarines crossing the North Sea. Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports. Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea. The Battle of Dogger Bank was won in no small part due to the intercepts that allowed the Navy to position its ships in the right place. It played a vital role in subsequent naval clashes, including at the Battle of Jutland, when the British fleet was sent out to intercept the German High Seas Fleet. The direction-finding capability allowed for the tracking and location of German ships, submarines, and Zeppelins. The system was so successful that by the end of the war, over 80 million words, comprising the totality of German wireless transmission over the course of the war, had been intercepted by the operators of the Y-stations and decrypted. However, its most astonishing success was in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico. Postwar consolidation With the importance of interception and decryption firmly established by the wartime experience, countries established permanent agencies dedicated to this task in the interwar period. In 1919, the British Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peace-time codebreaking agency should be created. The Government Code and Cypher School (GC&CS) was the first peace-time codebreaking agency, with a public function "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also with a secret directive to "study the methods of cypher communications used by foreign powers". GC&CS officially formed on 1 November 1919, and had produced its first decrypt on 19 October of that year. By 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems. The US Cipher Bureau was established in 1919 and achieved some success at the Washington Naval Conference in 1921, through cryptanalysis by Herbert Yardley. Secretary of War Henry L. Stimson closed the US Cipher Bureau in 1929 with the words "Gentlemen do not read each other's mail." World War II The use of SIGINT had even greater implications during World War II. The combined effort of intercepts and cryptanalysis for the whole of the British forces in World War II came under the code name "Ultra", managed from the Government Code and Cypher School at Bletchley Park. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities which made Bletchley's attacks feasible. Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo".
Ultra decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942. Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western Front divisions. Winston Churchill was reported to have told King George VI: "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" Supreme Allied Commander Dwight D. Eisenhower, at the end of the war, described Ultra as having been "decisive" to Allied victory. Official historian of British Intelligence in World War II Sir Harry Hinsley argued that Ultra shortened the war "by not less than two years and probably by four years"; and that, in the absence of Ultra, it is uncertain how the war would have ended. At a lower level, German cryptanalysis, direction finding, and traffic analysis were vital to Rommel's early successes in the Western Desert Campaign, until British forces tightened their communications discipline and Australian raiders destroyed his principal SIGINT company. Technical definitions The United States Department of Defense has defined the term "signals intelligence" as: A category of intelligence comprising either individually or in combination all communications intelligence (COMINT), electronic intelligence (ELINT), and foreign instrumentation signals intelligence (FISINT), however transmitted. Intelligence derived from communications, electronic, and foreign instrumentation signals. Being a broad field, SIGINT has many sub-disciplines. The two main ones are communications intelligence (COMINT) and electronic intelligence (ELINT). Disciplines shared across the branches Targeting A collection system has to know to look for a particular signal. "System", in this context, has several nuances. Targeting is the process of developing collection requirements: "1. An intelligence need considered in the allocation of intelligence resources. Within the Department of Defense, these collection requirements fulfill the essential elements of information and other intelligence needs of a commander, or an agency. "2. An established intelligence need, validated against the appropriate allocation of intelligence resources (as a requirement) to fulfill the essential elements of information and other intelligence needs of an intelligence consumer." Need for multiple, coordinated receivers First, atmospheric conditions, sunspots, the target's transmission schedule and antenna characteristics, and other factors create uncertainty that a given signal intercept sensor will be able to "hear" the signal of interest, even with a geographically fixed target and an opponent making no attempt to evade interception. Basic countermeasures against interception include frequent changing of radio frequency, polarization, and other transmission characteristics. An intercept aircraft could not get off the ground if it had to carry antennas and receivers for every possible frequency and signal type to deal with such countermeasures. Second, locating the transmitter's position is usually part of SIGINT. Triangulation and more sophisticated radio location techniques, such as time of arrival methods, require multiple receiving points at different locations. These receivers send location-relevant information to a central point, or perhaps to a distributed system in which all participate, such that the information can be correlated and a location computed.
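As a concrete sketch of that correlation step, consider the simplest case: two direction-finding stations each report a compass bearing to the same emitter, and the estimated position is where the two bearing lines cross. The Python below works on a flat x/y plane for clarity, which is an assumption; real systems use geodetic calculations, and the station positions and bearings here are invented.

import math

def bearing_intersection(p1, brg1, p2, brg2):
    """Intersect two bearing lines (compass degrees, clockwise from north)."""
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 (Cramer's rule on the 2x2 system).
    det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(det) < 1e-12:
        return None  # parallel bearings never intersect
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two stations on a 100 km east-west baseline, both hearing the same emitter.
print(bearing_intersection((0, 0), 45.0, (100, 0), 315.0))  # ≈ (50.0, 50.0)

With more than two stations the bearing lines rarely meet in a single point, and the fix becomes a least-squares estimate rather than an exact intersection.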
Intercept management Modern SIGINT systems, therefore, have substantial communications among intercept platforms. Even if some platforms are clandestine, there is still a broadcast of information telling them where and how to look for signals. A United States targeting system under development in the late 1990s, PSTS, constantly sends out information that helps the interceptors properly aim their antennas and tune their receivers. Larger intercept aircraft, such as the EP-3 or RC-135, have the on-board capability to do some target analysis and planning, but others, such as the RC-12 GUARDRAIL, are completely under ground direction. GUARDRAIL aircraft are fairly small and usually work in units of three to cover a tactical SIGINT requirement, whereas the larger aircraft tend to be assigned strategic/national missions. Before the detailed process of targeting begins, someone has to decide there is a value in collecting information about something. While it would be possible to direct signals intelligence collection at a major sports event, the systems would capture a great deal of noise, news signals, and perhaps announcements in the stadium. If, however, an anti-terrorist organization believed that a small group would be trying to coordinate their efforts using short-range unlicensed radios at the event, SIGINT targeting of radios of that type would be reasonable. Targeting would not know where in the stadium the radios might be located or the exact frequency they are using; those are the functions of subsequent steps such as signal detection and direction finding. Once the decision to target is made, the various interception points need to cooperate, since resources are limited. Knowing what interception equipment to use becomes easier when a target country buys its radars and radios from known manufacturers, or is given them as military aid. National intelligence services keep libraries of devices manufactured by their own country and others, and then use a variety of techniques to learn what equipment is acquired by a given country. Knowledge of physics and electronic engineering further narrows the problem of what types of equipment might be in use. An intelligence aircraft flying well outside the borders of another country will listen for long-range search radars, not short-range fire control radars that would be used by a mobile air defense. Soldiers scouting the front lines of another army know that the other side will be using radios that must be portable and not have huge antennas. Signal detection Even if a signal is human communications (e.g., a radio), the intelligence collection specialists have to know it exists. If the targeting function described above learns that a country has a radar that operates in a certain frequency range, the first step is to use a sensitive receiver, with one or more antennas that listen in every direction, to find an area where such a radar is operating. Once the radar is known to be in the area, the next step is to find its location. If operators know the probable frequencies of transmissions of interest, they may use a set of receivers preset to the frequencies of interest, each monitoring a signal's spectrum: its frequency (horizontal axis) plotted against the power (vertical axis) produced at the transmitter, before any filtering of components that do not add to the information being transmitted. Received energy on a particular frequency may start a recorder, and alert a human to listen to the signals if they are intelligible (i.e., COMINT).
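The preset-receiver arrangement just described amounts to an energy gate per channel: if received power on a watched frequency crosses a squelch threshold, recording starts. A minimal Python sketch follows; the channel list, labels, and threshold are invented, and the 800 kHz and 1.2 MHz figures deliberately match the simplified spectrum example below.

CHANNELS_OF_INTEREST = {800_000: "net A", 1_200_000: "net B"}  # Hz -> label
THRESHOLD_DBM = -90.0  # squelch level; anything weaker is treated as noise

def scan(readings):
    """readings: iterable of (frequency_hz, power_dbm) from the receiver bank."""
    for freq, power in readings:
        if freq in CHANNELS_OF_INTEREST and power > THRESHOLD_DBM:
            # In a real system, this is where a recorder would be triggered.
            yield CHANNELS_OF_INTEREST[freq], freq, power

for label, freq, power in scan([(800_000, -72.5), (950_000, -60.0), (1_200_000, -95.0)]):
    print(f"recording {label}: {freq} Hz at {power} dBm")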
If the frequency is not known, the operators may look for power on primary or sideband frequencies using a spectrum analyzer. Information from the spectrum analyzer is then used to tune receivers to signals of interest. For example, in a simplified spectrum, the actual information might be at 800 kHz and 1.2 MHz. Real-world transmitters and receivers usually are directional: each display of a direction-finding installation might be connected to a spectrum analyzer fed by a directional antenna aimed in a particular direction. Countermeasures to interception Spread-spectrum communications is an electronic counter-countermeasures (ECCM) technique to defeat looking for particular frequencies. Spectrum analysis can be used in a different ECCM way to identify frequencies not being jammed or not in use. Direction-finding The earliest, and still common, means of direction finding is to use directional antennas as goniometers, so that a line can be drawn from the receiver through the position of the signal of interest. (See HF/DF.) Knowing the compass bearing, from a single point, to the transmitter does not locate it. Where the bearings from multiple points, using goniometry, are plotted on a map, the transmitter will be located at the point where the bearings intersect. This is the simplest case; a target may try to confuse listeners by having multiple transmitters, giving the same signal from different locations, switching on and off in a pattern known to their user but apparently random to the listener. Individual directional antennas have to be manually or automatically turned to find the signal direction, which may be too slow when the signal is of short duration. One alternative is the Wullenweber array technique. In this method, several concentric rings of antenna elements simultaneously receive the signal, so that the best bearing will ideally be clearly on a single antenna or a small set. Wullenweber arrays for high-frequency signals are enormous, referred to as "elephant cages" by their users. A more advanced approach is amplitude comparison. An alternative to tunable directional antennas or large omnidirectional arrays such as the Wullenweber is to measure the time of arrival of the signal at multiple points, using GPS or a similar method for precise time synchronization. Receivers can be on ground stations, ships, aircraft, or satellites, giving great flexibility. A still more accurate approach is interferometry. Modern anti-radiation missiles can home in on and attack transmitters; military antennas are rarely a safe distance from the user of the transmitter. Traffic analysis When locations are known, usage patterns may emerge, from which inferences may be drawn. Traffic analysis is the discipline of drawing patterns from information flow among a set of senders and receivers, whether those senders and receivers are designated by location determined through direction finding, by addressee and sender identifications in the message, or even by MASINT techniques for "fingerprinting" transmitters or operators. Message content other than the sender and receiver is not necessary to do traffic analysis, although more information can be helpful. For example, if a certain type of radio is known to be used only by tank units, even if the position is not precisely determined by direction finding, it may be assumed that a tank unit is in the general area of the signal.
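The tank-radio inference generalizes: traffic analysis can run on intercept metadata alone—who transmitted to whom, and when—without any message content. A small Python sketch with invented callsigns shows the kind of hierarchy inference involved (it foreshadows the U1/U2 example in the electronic-order-of-battle discussion below):

from collections import Counter

# Intercept metadata only: (sender, receiver) pairs, no content needed.
intercepts = [
    ("U1", "U2"), ("U3", "U2"), ("U4", "U2"),
    ("U2", "U1"), ("U1", "U2"), ("U5", "U2"),
]

links = Counter(intercepts)                      # weight of each directed link
inbound = Counter(dst for _, dst in intercepts)  # messages received per station

# A station that many others report to is a likely net control or command post.
hub, reports = inbound.most_common(1)[0]
print(f"probable net control: {hub} ({reports} inbound messages)")
print(links.most_common(2))

Real traffic analysis adds timing, schedules, and direction-finding fixes, but the principle is the same: patterns in the metadata, not the content, carry the intelligence.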
The owner of the transmitter can assume someone is listening, so might set up tank radios in an area where he wants the other side to believe he has actual tanks. As part of Operation Quicksilver, part of the deception plan for the invasion of Europe at the Battle of Normandy, radio transmissions simulated the headquarters and subordinate units of the fictitious First United States Army Group (FUSAG), commanded by George S. Patton, to make the German defense think that the main invasion was to come at another location. In like manner, fake radio transmissions from Japanese aircraft carriers, before the attack on Pearl Harbor, were made from Japanese local waters, while the attacking ships moved under strict radio silence. Traffic analysis need not focus on human communications. For example, a sequence of a radar signal, followed by an exchange of targeting data and a confirmation, followed by observation of artillery fire, may identify an automated counterbattery fire system. A radio signal that triggers navigational beacons could be a radio landing aid for an airstrip or helicopter pad that is intended to be low-profile. Patterns do emerge. A radio signal with certain characteristics, originating from a fixed headquarters, may strongly suggest that a particular unit will soon move out of its regular base. The contents of the message need not be known to infer the movement. There is an art as well as a science to traffic analysis. Expert analysts develop a sense for what is real and what is deceptive. Harry Kidder, for example, was one of the star cryptanalysts of World War II, a star hidden behind the secret curtain of SIGINT. Electronic order of battle Generating an electronic order of battle (EOB) requires identifying SIGINT emitters in an area of interest, determining their geographic location or range of mobility, characterizing their signals, and, where possible, determining their role in the broader organizational order of battle. EOB covers both COMINT and ELINT. The Defense Intelligence Agency maintains an EOB by location. The Joint Spectrum Center (JSC) of the Defense Information Systems Agency supplements this location database with five more technical databases: FRRS: Frequency Resource Record System BEI: Background Environment Information SCS: Spectrum Certification System EC/S: Equipment Characteristics/Space TACDB: platform lists, sorted by nomenclature, which contain links to the C-E equipment complement of each platform, with links to the parametric data for each piece of equipment, military unit lists and their subordinate units with equipment used by each unit. For example, several voice transmitters might be identified as the command net (i.e., top commander and direct reports) in a tank battalion or tank-heavy task force. Another set of transmitters might identify the logistic net for that same unit. An inventory of ELINT sources might identify the medium- and long-range counter-artillery radars in a given area. Signals intelligence units will identify changes in the EOB, which might indicate enemy unit movement, changes in command relationships, and increases or decreases in capability. Using the COMINT gathering method enables the intelligence officer to produce an electronic order of battle by traffic analysis and content analysis among several enemy units. For example, if the following messages were intercepted: U1 to U2, requesting permission to proceed to checkpoint X. U2 to U1, approved. Please report on arrival.
(20 minutes later) U1 to U2, all vehicles have arrived at checkpoint X. This sequence shows that there are two units in the battlefield: unit 1 is mobile, while unit 2 is at a higher hierarchical level, perhaps a command post. One can also infer that unit 1 moved from one point to another that are about 20 minutes apart by vehicle. If these are regular reports over a period of time, they might reveal a patrol pattern. Direction-finding and radio frequency MASINT could help confirm that the traffic is not deception. The EOB buildup process is divided as follows: Signal separation Measurements optimization Data fusion Networks build-up Separation of the intercepted spectrum and the signals intercepted from each sensor must take place in an extremely short period of time, in order to attribute the different signals to different transmitters in the battlefield. The complexity of the separation process depends on the complexity of the transmission methods (e.g., hopping or time-division multiple access (TDMA)). By gathering and clustering data from each sensor, the measurements of the direction of signals can be optimized and become much more accurate than the basic measurements of a standard direction finding sensor. By calculating larger samples of the sensor's output data in near real-time, together with historical information on signals, better results are achieved. Data fusion correlates data samples from different frequencies from the same sensor, with "same" being confirmed by direction finding or radiofrequency MASINT. If an emitter is mobile, direction finding, other than discovering a repetitive pattern of movement, is of limited value in determining whether a sensor is unique. MASINT then becomes more informative, as individual transmitters and antennas may have unique side lobes, unintentional radiation, pulse timing, etc. Network build-up, or analysis of emitters (communication transmitters) in a target region over a sufficient period of time, enables creation of a map of the communications flows of a battlefield. Communications intelligence COMINT (communications intelligence) is a sub-category of signals intelligence that deals with messages or voice information derived from the interception of foreign communications. COMINT is commonly referred to as SIGINT, which can cause confusion when talking about the broader intelligence disciplines. The US Joint Chiefs of Staff defines it as "Technical information and intelligence derived from foreign communications by other than the intended recipients". COMINT, which is defined to be communications among people, will reveal some or all of the following: Who is transmitting Where they are located, and, if the transmitter is moving, the report may give a plot of the signal against location If known, the organizational function of the transmitter The time and duration of transmission, and the schedule if it is a periodic transmission The frequencies and other technical characteristics of their transmission Whether the transmission is encrypted or not, and whether it can be decrypted. If it is possible to intercept an originally transmitted cleartext or obtain it through cryptanalysis, the language of the communication and a translation (when needed). The addresses, if the signal is not a general broadcast and if the addresses are retrievable from the message. These stations may also be COMINT (e.g., a confirmation of the message or a response message), ELINT (e.g., a navigation beacon being activated) or both.
Rather than, or in addition to, an address or other identifier, there may be information on the location and signal characteristics of the responder. Voice interception A basic COMINT technique is to listen for voice communications, usually over radio but possibly "leaking" from telephones or from wiretaps. If the voice communications are encrypted, traffic analysis may still give information. In the Second World War, for security, the United States used Native American volunteer communicators known as code talkers, who used languages such as Navajo, Comanche and Choctaw, which would be understood by few people, even in the U.S. Even within these uncommon languages, the code talkers used specialized codes, so a "butterfly" might be a specific Japanese aircraft. British forces made limited use of Welsh speakers for the same reason. While modern electronic encryption does away with the need for armies to use obscure languages, it is likely that some groups might use rare dialects that few outside their ethnic group would understand. Text interception Morse code interception was once very important, but Morse code telegraphy is now obsolete in the western world, although possibly used by special operations forces. Such forces, however, now have portable cryptographic equipment. Specialists scan radio frequencies for character sequences (e.g., electronic mail) and fax. Signaling channel interception A given digital communications link can carry thousands or millions of voice communications, especially in developed countries. Without addressing the legality of such actions, the problem of identifying which channel contains which conversation becomes much simpler when the first thing intercepted is the signaling channel that carries information to set up telephone calls. In civilian and much military use, this channel will carry messages in Signaling System 7 protocols. Retrospective analysis of telephone calls can be made from the call detail records (CDRs) used for billing the calls. Monitoring friendly communications More a part of communications security than true intelligence collection, SIGINT units still may have the responsibility of monitoring one's own communications or other electronic emissions, to avoid providing intelligence to the enemy. For example, a security monitor may hear an individual transmitting inappropriate information over an unencrypted radio network, or simply one that is not authorized for the type of information being given. If immediately calling attention to the violation would not create an even greater security risk, the monitor will call out one of the BEADWINDOW codes used by Australia, Canada, New Zealand, the United Kingdom, the United States, and other nations working under their procedures. Standard BEADWINDOW codes (e.g., "BEADWINDOW 2") include: Position: (e.g., disclosing, in an insecure or inappropriate way) "Friendly or enemy position, movement or intended movement, position, course, speed, altitude or destination or any air, sea or ground element, unit or force." Capabilities: "Friendly or enemy capabilities or limitations. Force compositions or significant casualties to special equipment, weapons systems, sensors, units or personnel. Percentages of fuel or ammunition remaining." Operations: "Friendly or enemy operation – intentions progress, or results. Operational or logistic intentions; mission participants flying programmes; mission situation reports; results of friendly or enemy operations; assault objectives."
Electronic warfare (EW): "Friendly or enemy electronic warfare (EW) or emanations control (EMCON) intentions, progress, or results. Intention to employ electronic countermeasures (ECM); results of friendly or enemy ECM; ECM objectives; results of friendly or enemy electronic counter-countermeasures (ECCM); results of electronic support measures/tactical SIGINT (ESM); present or intended EMCON policy; equipment affected by EMCON policy." Friendly or enemy key personnel: "Movement or identity of friendly or enemy officers, visitors, commanders; movement of key maintenance personnel indicating equipment limitations." Communications security (COMSEC): "Friendly or enemy COMSEC breaches. Linkage of codes or codewords with plain language; compromise of changing frequencies or linkage with line number/circuit designators; linkage of changing call signs with previous call signs or units; compromise of encrypted/classified call signs; incorrect authentication procedure." Wrong circuit: "Inappropriate transmission. Information requested, transmitted or about to be transmitted which should not be passed on the subject circuit because it either requires greater security protection or it is not appropriate to the purpose for which the circuit is provided." Other codes as appropriate for the situation may be defined by the commander. In WWII, for example, the Japanese Navy, by poor practice, identified a key person's movement over a low-security cryptosystem. This made possible Operation Vengeance, the interception and death of the Combined Fleet commander, Admiral Isoroku Yamamoto. Electronic signals intelligence Electronic signals intelligence (ELINT) refers to intelligence-gathering by use of electronic sensors. Its primary focus lies on non-communications signals intelligence. The Joint Chiefs of Staff define it as "Technical and geolocation intelligence derived from foreign noncommunications electromagnetic radiations emanating from sources other than nuclear detonations or radioactive sources." Signal identification is performed by analyzing the collected parameters of a specific signal, and either matching it to known criteria, or recording it as a possible new emitter. ELINT data are usually highly classified, and are protected as such. The data gathered are typically pertinent to the electronics of an opponent's defense network, especially the electronic parts such as radars, surface-to-air missile systems, aircraft, etc. ELINT can be used to detect ships and aircraft by their radar and other electromagnetic radiation; commanders have to make choices between not using radar (EMCON), intermittently using it, or using it and expecting to avoid defenses. ELINT can be collected from ground stations near the opponent's territory, ships off their coast, aircraft near or in their airspace, or by satellite. Complementary relationship to COMINT Combining other sources of information and ELINT allows traffic analysis to be performed on electronic emissions which contain human encoded messages. The method of analysis differs from SIGINT in that any human encoded message which is in the electronic transmission is not analyzed during ELINT. What is of interest is the type of electronic transmission and its location. For example, during the Battle of the Atlantic in World War II, Ultra COMINT was not always available because Bletchley Park was not always able to read the U-boat Enigma traffic. 
But high-frequency direction finding ("huff-duff") was still able to detect U-boats by analyzing their radio transmissions and locating their positions through triangulation of the bearings taken by two or more huff-duff systems. The Admiralty was able to use this information to plot courses which took convoys away from high concentrations of U-boats. Other ELINT disciplines include intercepting and analyzing enemy weapons control signals, or the identification friend or foe (IFF) responses from transponders in aircraft used to distinguish enemy craft from friendly ones. Role in air warfare A very common area of ELINT is intercepting radars and learning their locations and operating procedures. Attacking forces may be able to avoid the coverage of certain radars, or, knowing their characteristics, electronic warfare units may jam radars or send them deceptive signals. Confusing a radar electronically is called a "soft kill", but military units will also send specialized missiles at radars, or bomb them, to get a "hard kill". Some modern air-to-air missiles also have radar homing guidance systems, particularly for use against large airborne radars. Knowing where each surface-to-air missile and anti-aircraft artillery system is and its type means that air raids can be plotted to avoid the most heavily defended areas and to fly on a flight profile which will give the aircraft the best chance of evading ground fire and fighter patrols. It also allows for the jamming or spoofing of the enemy's defense network (see electronic warfare). Good electronic intelligence can be very important to stealth operations; stealth aircraft are not totally undetectable and need to know which areas to avoid. Similarly, conventional aircraft need to know where fixed or semi-mobile air defense systems are so that they can shut them down or fly around them. ELINT and ESM Electronic support measures (ESM) or electronic surveillance measures are ELINT techniques using various electronic surveillance systems, but the term is used in the specific context of tactical warfare. ESM give the information needed for electronic attack (EA) such as jamming, or directional bearings (compass angle) to a target in signals intercept, such as in the huff-duff radio direction finding (RDF) systems so critically important during the World War II Battle of the Atlantic. After WWII, RDF, originally applied only in communications, was broadened into systems to also take in ELINT from radar bandwidths and lower frequency communications systems, giving birth to a family of NATO ESM systems, such as the shipboard US AN/WLR-1—AN/WLR-6 systems and comparable airborne units. EA is also called electronic counter-measures (ECM). ESM provides the information needed for electronic counter-countermeasures (ECCM), such as understanding a spoofing or jamming mode so one can change one's radar characteristics to avoid them. ELINT for meaconing Meaconing is the combined intelligence and electronic-warfare practice of learning the characteristics of enemy navigation aids, such as radio beacons, and retransmitting them with incorrect information. Foreign instrumentation signals intelligence FISINT (foreign instrumentation signals intelligence) is a sub-category of SIGINT, monitoring primarily non-human communication. Foreign instrumentation signals include (but are not limited to) telemetry (TELINT), tracking systems, and video data links. TELINT is an important part of national means of technical verification for arms control.
Counter-ELINT Still at the research level are techniques that can only be described as counter-ELINT, which would be part of a SEAD campaign. It may be informative to compare and contrast counter-ELINT with ECCM. SIGINT and MASINT comparison Signals intelligence and measurement and signature intelligence (MASINT) are closely, and sometimes confusingly, related. The signals intelligence disciplines of communications and electronic intelligence focus on the information in those signals themselves, as with COMINT detecting the speech in a voice communication or ELINT measuring the frequency, pulse repetition rate, and other characteristics of a radar. MASINT also works with collected signals, but is more of an analysis discipline. There are, however, unique MASINT sensors, typically working in different regions or domains of the electromagnetic spectrum, such as infrared or magnetic fields. While NSA and other agencies have MASINT groups, the Central MASINT Office is in the Defense Intelligence Agency (DIA). Where COMINT and ELINT focus on the intentionally transmitted part of the signal, MASINT focuses on unintentionally transmitted information. For example, a given radar antenna will have sidelobes emanating from a direction other than that in which the main antenna is aimed. The RADINT (radar intelligence) discipline involves learning to recognize a radar both by its primary signal, captured by ELINT, and by its sidelobes, perhaps captured by the main ELINT sensor, or, more likely, by a sensor aimed at the sides of the radio antenna. MASINT associated with COMINT might involve the detection of common background sounds expected with human voice communications. For example, if a given radio signal comes from a radio used in a tank, and the interceptor does not hear engine noise, or hears a higher voice frequency than the voice modulation usually uses, then even though the voice conversation is meaningful, MASINT might suggest it is a deception, not coming from a real tank. See HF/DF for a discussion of SIGINT-captured information with a MASINT flavor, such as determining the frequency to which a receiver is tuned, by detecting the frequency of the beat frequency oscillator of the superheterodyne receiver. Legality Since the invention of the radio, the international consensus has been that radio waves are no one's property, and thus the interception itself is not illegal. There can, however, be national laws on who is allowed to collect, store, and process radio traffic, and for what purposes. Monitoring traffic in cables (i.e., telephone and Internet) is far more controversial, since it usually requires physical access to the cable, thereby raising issues of ownership and expected privacy.
See also Central Intelligence Agency Directorate of Science & Technology COINTELPRO ECHELON Foreign Intelligence Surveillance Act of 1978 Amendments Act of 2008 Geospatial intelligence Human intelligence (espionage) Imagery intelligence Intelligence Branch (Canadian Forces) List of intelligence gathering disciplines Listening station Open-source intelligence Radio Reconnaissance Platoon RAF Intelligence Signals intelligence by alliances, nations and industries Signals intelligence operational platforms by nation for current collection systems SOT-A TEMPEST US signals intelligence in the Cold War Venona Zircon satellite Vulkan files leak References Further reading Bamford, James, Body of Secrets: How America's NSA and Britain's GCHQ Eavesdrop on the World (Century, London, 2001) West, Nigel, The SIGINT Secrets: The Signals Intelligence War, 1900 to Today (William Morrow, New York, 1988) External links Part I of IV Articles On Evolution of Army Signal Corps COMINT and SIGINT into NSA NSA's overview of SIGINT USAF Pamphlet on sources of intelligence German WWII SIGINT/COMINT (PDF) Intelligence Programs and Systems The U.S. Intelligence Community by Jeffrey T. Richelson Secrets of Signals Intelligence During the Cold War and Beyond by Matthew Aid et al. Maritime SIGINT Architecture Technical Standards Handbook (PDF) Command and control Cryptography Cyberwarfare Intelligence gathering disciplines Military intelligence
Signals intelligence
[ "Mathematics", "Engineering" ]
7,562
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
29,124
https://en.wikipedia.org/wiki/Soviet%20submarine%20K-219
K-219 was a Project 667A Navaga-class ballistic missile submarine (NATO reporting name Yankee I) of the Soviet Navy. It carried 16 R-27U liquid-fuel missiles powered by UDMH with nitrogen tetroxide (NTO). K-219 was involved in what has become one of the most controversial submarine incidents of the Cold War, on Friday 3 October 1986. The 15-year-old vessel, which was on an otherwise routine Cold War nuclear deterrence patrol in the North Atlantic northeast of Bermuda, suffered an explosion and fire in a missile tube. While the boat was underway, a submerged seal in a missile hatch cover failed, allowing high-pressure seawater to enter the missile tube; the pressure differential ruptured the missile's fuel tanks, allowing the liquid fuel to mix with seawater and ultimately combust. Though there was no official announcement, the Soviet Union claimed the leak was caused by a collision with the American submarine USS Augusta. Although Augusta was operating within the area, both the United States Navy and the commander of K-219, Captain Second Rank Igor Britanov, deny that a collision took place. The incident was novelized in the book Hostile Waters, which reconstructed the incident from descriptions by the survivors, ships' logs, the official investigations, and participants both ashore and afloat from the Soviet and the American sides. Explosion Shortly after 0530 Moscow time, seawater leaking into silo six of K-219 reacted with missile fuel, producing chlorine and nitrogen dioxide gases and sufficient heat to explosively decompose additional fuming nitric acid to produce more nitrogen dioxide gas. K-219's weapons officer, Alexander Petrachkov, attempted to deal with this by disengaging the hatch cover and venting the missile tube to the sea. Shortly after 0532, an explosion occurred in the silo. K-219 had previously experienced a similar event; one of her missile tubes was already disabled and welded shut, having been permanently sealed after an explosion caused by a reaction between seawater leaking into the silo and missile fuel residue. An article in Undersea Warfare by Captain First Rank Igor Kurdin, Russian Navy, K-219's former executive officer (XO), and Lieutenant Commander Wayne Grasdock, USN, described the explosion and its aftermath. Two sailors were killed outright in the explosion, and a third died soon afterward from toxic gas poisoning. Through a breach in the hull, the vessel immediately started taking on seawater, quickly sinking from its original depth of 40 meters (130 ft) to over 300 meters (980 ft). Sealing all of the compartments and full engagement of the seawater pumps in the stricken compartments enabled the depth to be stabilized. Up to 25 sailors were trapped in a sealed section, and it was only after a conference with his incident specialists that the captain allowed the chief engineer to open the hatch and save their lives. It could be seen from instruments that although the nuclear reactor should have automatically shut down, it had not. Lt. Nikolai Belikov, one of the reactor control officers, entered the reactor compartment but ran out of oxygen after turning just one of the four rod assemblies on the first reactor. Twenty-year-old enlisted seaman Sergei Preminin then volunteered to shut down the reactor by following the instructions of the chief engineer. Working in a full-face gas mask, he successfully shut down the reactor. A large fire had developed within the compartment, raising the pressure.
When Preminin tried to reach his comrades on the other side of a door, the pressure difference prevented him from opening it, and he died of asphyxiation in the reactor compartment. For his actions, Sergei Preminin was posthumously awarded the title Hero of the Russian Federation. With the reactors secured and the boat stable enough to surface, Captain Britanov brought K-219 to the surface on battery power alone. He was then ordered to have the ship towed by a Soviet freighter back to her home port of Gadzhiyevo. Although a towline was attached, towing attempts were unsuccessful, and after subsequent poison gas leaks into the final aft compartments, and against orders, Britanov ordered the crew to evacuate onto the towing ship, but remained aboard K-219 himself. Displeased with Britanov's inability to repair his submarine and continue his patrol, Moscow ordered Valery Pshenichny, K-219's security officer, to assume command, transfer the surviving crew back to the submarine, and return to duty. Before those orders could be carried out, the flooding reached a point beyond recovery, and on 6 October 1986 K-219 sank to the bottom of the Hatteras Abyssal Plain at a depth of about 6,000 m (19,700 ft). Britanov abandoned ship shortly before the sinking. K-219's full complement of nuclear weapons was lost along with the vessel.

Aftermath

Preminin was posthumously awarded the Order of the Red Star for his bravery in securing the reactors. Britanov was charged with negligence, sabotage, and treason. He was never imprisoned, but waited for his trial in Sverdlovsk. On 30 May 1987, Defense Minister Sergey Sokolov was dismissed as a result of the Mathias Rust incident two days earlier and replaced by Dmitry Yazov; the charges against Britanov were subsequently dismissed.

In popular culture

In 1997, the British BBC television film Hostile Waters, co-produced with HBO and starring Rutger Hauer, Martin Sheen, and Max von Sydow, was released in the United States by Warner Bros. It was based on the book of the same name, which claimed to describe the loss of K-219. In 2001, Captain Britanov filed suit, claiming Warner Bros. did not seek or get his permission to use his story or his character, and that the film did not portray the events accurately and made him look incompetent. After three years of hearings, the court ruled in Britanov's favor. Russian media reported that the filmmakers paid a settlement totaling under $100,000. After the release of the movie, the U.S. Navy issued a statement regarding both the book and the movie. An article on the U.S. Navy's website posted by Captain 1st Rank (Ret.) Igor Kurdin (former XO of K-219) and Lieutenant Commander Wayne Grasdock denied any collision between K-219 and Augusta. Captain Britanov also denies a collision, and he has stated that he was not asked to be a guest speaker at Russian functions because he refuses to follow the Russian government's interpretation of the K-219 incident. In a BBC interview recorded in February 2013, Admiral of the Fleet Vladimir Chernavin, the Commander-in-Chief of the Soviet Navy at the time of the K-219 incident, says the accident was caused by a malfunction in a missile tube and makes no mention of a collision with an American submarine. The interview was conducted for the BBC2 series The Silent War.

See also

List of sunken nuclear submarines

Notes

Citations

Bibliography

Книга памяти – К-219 (Book of Memory – K-219).
Yankee-class submarines Ships built in the Soviet Union 1971 ships Cold War submarines of the Soviet Union Lost submarines of the Soviet Union Sunken nuclear submarines Foreign relations of the Soviet Union Soviet Union–United States relations Nuclear accidents and incidents Maritime incidents in 1986 Ships built by Sevmash
Soviet submarine K-219
[ "Chemistry" ]
1,516
[ "Nuclear accidents and incidents", "Radioactivity" ]
29,192
https://en.wikipedia.org/wiki/Space%20elevator
A space elevator, also referred to as a space bridge, star ladder, or orbital lift, is a proposed type of planet-to-space transportation system, often depicted in science fiction. The main component would be a cable (also called a tether) anchored to the surface and extending into space. An Earth-based space elevator would consist of a cable with one end attached to the surface near the equator and the other end attached to a counterweight in space beyond geostationary orbit (35,786 km altitude). The competing forces of gravity, which is stronger at the lower end, and the upward centrifugal pseudo-force (it is actually the inertia of the counterweight that creates the tension on the space side), which is stronger at the upper end, would result in the cable being held up, under tension, and stationary over a single position on Earth. With the tether deployed, climbers (crawlers) could repeatedly travel up and down the tether by mechanical means, carrying cargo to and from orbit. The design would permit vehicles to travel directly between a planetary surface, such as the Earth's, and orbit, without the use of large rockets.

History

Early concept

The idea of the space elevator appears to have developed independently in different times and places. The earliest proposals came from two Russian scientists, beginning in the late nineteenth century. In his 1895 collection Dreams of Earth and Sky, Konstantin Tsiolkovsky envisioned a massive sky ladder to reach the stars as a way to overcome gravity. Decades later, in 1960, Yuri Artsutanov independently developed the concept of a "Cosmic Railway", a space elevator tethered from an orbiting satellite to an anchor on the equator, aiming to provide a safer and more efficient alternative to rockets. In 1966, Isaacs and his colleagues introduced the concept of the "Sky-Hook", proposing a satellite in geostationary orbit with a cable extending to Earth.

Innovations and designs

The space elevator concept reached America in 1975 when Jerome Pearson began researching the idea, inspired by Arthur C. Clarke's 1969 speech before Congress. After working as an engineer for NASA and the Air Force Research Laboratory, he developed a design for an "Orbital Tower", intended to harness Earth's rotational energy to transport supplies into low Earth orbit. In his Acta Astronautica publication, he described a cable that would be thickest at geostationary orbit, where tension is greatest, and narrowest at the tips to minimize weight per unit area. He proposed extending a counterweight to 144,000 kilometers (89,000 miles), as without a large counterweight the upper cable would need to be longer, owing to the way gravitational and centrifugal forces change with distance from Earth. His analysis included the Moon's gravity, wind, and moving payloads. Building the elevator would have required thousands of Space Shuttle trips, though material could be transported up the cable once a minimum-strength strand reached the ground, or be manufactured in space from asteroidal or lunar ore. Pearson's findings caught Clarke's attention and led to technical consultations for Clarke's science fiction novel The Fountains of Paradise (1979), which features a space elevator. The first gathering of multiple experts who wanted to investigate this alternative to space flight took place at the 1999 NASA conference "Advanced Space Infrastructure Workshop on Geostationary Orbiting Tether Space Elevator Concepts" in Huntsville, Alabama.
D.V. Smitherman, Jr., published the findings in August 2000 under the title Space Elevators: An Advanced Earth-Space Infrastructure for the New Millennium, concluding that the space elevator could not be built for at least another 50 years due to concerns about the cable's material, deployment, and upkeep. Dr. B.C. Edwards suggested that a long, paper-thin ribbon using a carbon nanotube composite material could solve the tether issue, owing to its high tensile strength and low weight. The proposed wide, thin, ribbon-like cross-section, in place of earlier circular cross-section concepts, would also increase survivability against meteoroid impacts. With support from the NASA Institute for Advanced Concepts (NIAC), his work involved more than 20 institutions and 50 participants. The Space Elevator NIAC Phase II Final Report, in combination with the book The Space Elevator: A Revolutionary Earth-to-Space Transportation System (Edwards and Westling, 2003), summarized the effort to design a space elevator, including the deployment scenario, climber design, power delivery system, orbital debris avoidance, anchor system, surviving atomic oxygen, avoiding lightning and hurricanes by locating the anchor in the western equatorial Pacific, construction costs, construction schedule, and environmental hazards. Additionally, he researched the structural integrity and load-bearing capabilities of space elevator cables, emphasizing their need for high tensile strength and resilience. His space elevator concept never reached NIAC's third phase, which he attributed to submitting his final proposal during the week of the Space Shuttle Columbia disaster.

21st century advancements

To speed space elevator development, proponents have organized several competitions, similar to the Ansari X Prize, for relevant technologies. Among them are Elevator:2010, which organized annual competitions for climbers, ribbons and power-beaming systems from 2005 to 2009, the Robogames Space Elevator Ribbon Climbing competition, and NASA's Centennial Challenges program, which, in March 2005, announced a partnership with the Spaceward Foundation (the operator of Elevator:2010), raising the total value of prizes to US$400,000. The first European Space Elevator Challenge (EuSEC) to establish a climber structure took place in August 2011. In 2005, "the LiftPort Group of space elevator companies announced that it will be building a carbon nanotube manufacturing plant in Millville, New Jersey, to supply various glass, plastic and metal companies with these strong materials. Although LiftPort hopes to eventually use carbon nanotubes in the construction of a space elevator, this move will allow it to make money in the short term and conduct research and development into new production methods." Their announced goal was a space elevator launch in 2010. On 13 February 2006, the LiftPort Group announced that, earlier the same month, they had tested a mile of "space-elevator tether" made of carbon-fiber composite strings and fiberglass tape, roughly as thick as 13 sheets of paper, lifted with balloons. In April 2019, LiftPort CEO Michael Laine admitted that little progress had been made on the company's lofty space elevator ambitions, even after receiving more than $200,000 in seed funding. The carbon nanotube manufacturing facility that LiftPort announced in 2005 was never built.
In 2007, Elevator:2010 held the 2007 Space Elevator games, which featured US$500,000 awards for each of the two competitions ($1,000,000 total), as well as an additional $4,000,000 to be awarded over the next five years for space elevator related technologies. No teams won the competitions, but a team from MIT entered the first 2-gram (0.07 oz), 100-percent carbon nanotube entry. Japan held an international conference in November 2008 to draw up a timetable for building the elevator. In 2012, the Obayashi Corporation announced that it could build a space elevator by 2050 using carbon nanotube technology. The design's passenger climber would be able to reach the level of geosynchronous equatorial orbit (GEO) after an 8-day trip. Further details were published in 2016. In 2013, the International Academy of Astronautics published a technological feasibility assessment which concluded that the critical capability improvement needed was the tether material, which was projected to achieve the necessary specific strength within 20 years. The four-year-long study looked into many facets of space elevator development, including missions, development schedules, financial investments, revenue flow, and benefits. It was reported that it would be possible to survive smaller impacts from meteors and space debris and to avoid larger ones, and that the estimated cost of lifting a kilogram of payload to GEO and beyond would be $500. In 2014, Google X's Rapid Evaluation R&D team began the design of a space elevator, eventually finding that no one had yet manufactured a perfectly formed carbon nanotube strand longer than a meter. They thus put the project in "deep freeze", while keeping tabs on advances in the carbon nanotube field. In 2018, researchers at Japan's Shizuoka University launched STARS-Me, two CubeSats connected by a tether on which a mini-elevator travels. The experiment was launched as a test bed for a larger structure. In 2019, the International Academy of Astronautics published "Road to the Space Elevator Era", a study report summarizing the assessment of the space elevator as of summer 2018. The essence is that a broad group of space professionals gathered and assessed the status of space elevator development, each contributing their expertise and coming to similar conclusions: (a) Earth space elevators seem feasible, reinforcing the IAA 2013 study conclusion; (b) space elevator development initiation is nearer than most think. This last conclusion is based on a potential process for manufacturing macro-scale single crystal graphene with higher specific strength than carbon nanotubes.

Materials

A significant difficulty with making a space elevator for the Earth is the strength of materials. Since the structure must hold up its own weight in addition to the payload it may carry, the strength-to-weight ratio, or specific strength, of the material it is made of must be extremely high. Since 1959, most ideas for space elevators have focused on purely tensile structures, with the weight of the system held up from above by centrifugal forces. In the tensile concepts, a space tether reaches from a large mass (the counterweight) beyond geostationary orbit to the ground. This structure is held in tension between Earth and the counterweight like an upside-down plumb bob. The cable thickness is tapered based on tension; it has its maximum at geostationary orbit and its minimum on the ground. The concept is applicable to other planets and celestial bodies.
For locations in the Solar System with weaker gravity than Earth's (such as the Moon or Mars), the strength-to-density requirements for tether materials are not as problematic. Currently available materials (such as Kevlar) are strong and light enough that they could be practical as the tether material for elevators there. Available materials are not strong and light enough to make an Earth space elevator practical. Some sources expect that future advances in carbon nanotubes (CNTs) could lead to a practical design. Other sources believe that CNTs will never be strong enough. Possible future alternatives include boron nitride nanotubes, diamond nanothreads and macro-scale single crystal graphene.

In fiction

In 1979, space elevators were introduced to a broader audience with the simultaneous publication of Arthur C. Clarke's novel The Fountains of Paradise, in which engineers construct a space elevator on top of a mountain peak in the fictional island country of "Taprobane" (loosely based on Sri Lanka, albeit moved south to the Equator), and Charles Sheffield's first novel, The Web Between the Worlds, also featuring the building of a space elevator. Three years later, in Robert A. Heinlein's 1982 novel Friday, the principal character mentions a disaster at the "Quito Sky Hook" and makes use of the "Nairobi Beanstalk" in the course of her travels. In Kim Stanley Robinson's 1993 novel Red Mars, colonists build a space elevator on Mars that allows both for more colonists to arrive and for natural resources mined there to leave for Earth. Larry Niven's book Rainbow Mars describes a space elevator built on Mars. In David Gerrold's 2000 novel Jumping Off The Planet, a family excursion up the Ecuador "beanstalk" is actually a child-custody kidnapping. Gerrold's book also examines some of the industrial applications of a mature elevator technology. The concept of a space elevator, called the Beanstalk, is also depicted in John Scalzi's 2005 novel Old Man's War. In a biological version, Joan Slonczewski's 2011 novel The Highest Frontier depicts a college student ascending a space elevator constructed of self-healing cables of anthrax bacilli. The engineered bacteria can regrow the cables when severed by space debris.

Physics

Apparent gravitational field

An Earth space elevator cable rotates along with the rotation of the Earth. Therefore, the cable, and objects attached to it, would experience upward centrifugal force in the direction opposing the downward gravitational force. The higher up the cable the object is located, the less the gravitational pull of the Earth, and the stronger the upward centrifugal force due to the rotation, so that more centrifugal force opposes less gravity. The centrifugal force and the gravity are balanced at geosynchronous equatorial orbit (GEO). Above GEO, the centrifugal force is stronger than gravity, causing objects attached to the cable there to pull upward on it. Because the counterweight, above GEO, is rotating about the Earth faster than the natural orbital speed for that altitude, it exerts a centrifugal pull on the cable and thus holds the whole system aloft. The net force for objects attached to the cable is called the apparent gravitational field. The apparent gravitational field for attached objects is the (downward) gravity minus the (upward) centrifugal force. The apparent gravity experienced by an object on the cable is zero at GEO, downward below GEO, and upward above GEO.
The apparent gravitational field can be represented this way:

g_a(r) = −GM/r² + ω²r

where g_a is the apparent gravitational acceleration (negative downward), G is the gravitational constant, M is the mass of the Earth, r is the distance from the Earth's center, and ω is the Earth's rotation rate.

At some point up the cable, the two terms (downward gravity and upward centrifugal force) are equal and opposite. Objects fixed to the cable at that point put no weight on the cable. This altitude (r1) depends on the mass of the planet and its rotation rate. Setting actual gravity equal to centrifugal acceleration gives:

r1 = (GM/ω²)^(1/3)

This is 35,786 km above Earth's surface, the altitude of geostationary orbit. On the cable below geostationary orbit, downward gravity would be greater than the upward centrifugal force, so the apparent gravity would pull objects attached to the cable downward. Any object released from the cable below that level would initially accelerate downward along the cable. Then gradually it would deflect eastward from the cable. On the cable above the level of stationary orbit, upward centrifugal force would be greater than downward gravity, so the apparent gravity would pull objects attached to the cable upward. Any object released from the cable above the geosynchronous level would initially accelerate upward along the cable. Then gradually it would deflect westward from the cable.

Cable section

Historically, the main technical problem has been considered the cable's ability to hold up, with tension, its own weight below any given point. The greatest tension on a space elevator cable is at the point of geostationary orbit, above the Earth's equator. This means that the cable material, combined with its design, must be strong enough to hold up its own weight from the surface up to geostationary altitude (35,786 km). A cable which is thicker in cross-section area at that height than at the surface could better hold up its own weight over a longer length. How the cross-section area tapers from the maximum at geostationary altitude to the minimum at the surface is therefore an important design factor for a space elevator cable. To maximize the usable excess strength for a given amount of cable material, the cable's cross-section area would need to be designed for the most part in such a way that the stress (i.e., the tension per unit of cross-sectional area) is constant along the length of the cable. The constant-stress criterion is a starting point in the design of the cable cross-section area as it changes with altitude. Other factors considered in more detailed designs include thickening at altitudes where more space junk is present, consideration of the point stresses imposed by climbers, and the use of varied materials. To account for these and other factors, modern detailed designs seek to achieve the largest safety margin possible, with as little variation over altitude and time as possible. In simple starting-point designs, that equates to constant stress. For a constant-stress cable with no safety margin, the cross-section area as a function of distance from Earth's center is given by the following equation:

A(r) = A_s exp[ (ρ/T) ( GM(1/R − 1/r) + (ω²/2)(R² − r²) ) ]

where A_s is the cross-section area at the Earth's surface, ρ is the density of the cable material, T is its design stress (tension per unit of cross-sectional area), R is the Earth's equatorial radius, and G, M and ω are as above.

Safety margin can be accounted for by dividing T by the desired safety factor.

Cable materials

Using the above formula, the ratio between the cross-section at geostationary orbit and the cross-section at Earth's surface, known as the taper ratio, can be calculated:

A(r1)/A_s = exp[ (ρ/T) ( GM(1/R − 1/r1) + (ω²/2)(R² − r1²) ) ]

The taper ratio becomes very large unless the specific strength of the material used approaches 48 (MPa)/(kg/m3). Low specific strength materials require very large taper ratios, which equates to a large (or astronomical) total mass of the cable with associated large or impossible costs.
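For a concrete feel for these two formulas, the short Python sketch below evaluates the balance radius r1 and the constant-stress taper ratio. It is a minimal sketch of the idealized expressions above; the specific-strength values fed to it are illustrative round numbers, not engineering data.

```python
import math

G_M = 3.986004418e14   # Earth's gravitational parameter GM, m^3/s^2
OMEGA = 7.2921159e-5   # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.378137e6   # Earth's equatorial radius, m

# Balance radius: actual gravity equals centrifugal acceleration.
r1 = (G_M / OMEGA**2) ** (1.0 / 3.0)
print(f"r1 = {r1/1e3:,.0f} km from Earth's center "
      f"({(r1 - R_EARTH)/1e3:,.0f} km altitude)")  # ~35,786 km altitude

def taper_ratio(specific_strength):
    """Constant-stress taper ratio A(r1)/A_s.

    specific_strength is T/rho in (MPa)/(kg/m^3), the unit used in the text.
    """
    t_over_rho = specific_strength * 1e6  # convert to m^2/s^2
    exponent = (G_M * (1/R_EARTH - 1/r1)
                + (OMEGA**2 / 2) * (R_EARTH**2 - r1**2)) / t_over_rho
    return math.exp(exponent)

# Illustrative values only: ~0.25 for steel wire, ~2.5 for Kevlar,
# and the 48 (MPa)/(kg/m^3) characteristic scale named in the text.
for s in (0.25, 2.5, 48.0):
    print(f"specific strength {s:>5} (MPa)/(kg/m^3): "
          f"taper ratio {taper_ratio(s):.3g}")
```

Because the taper ratio depends exponentially on ρ/T, halving the specific strength squares the taper ratio, which is why the cable mass for low-strength materials becomes astronomical.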
Structure

There are a variety of space elevator designs proposed for many planetary bodies. Almost every design includes a base station, a cable, climbers, and a counterweight. For an Earth space elevator, the Earth's rotation creates upward centrifugal force on the counterweight. The counterweight is held down by the cable while the cable is held up and taut by the counterweight. The base station anchors the whole system to the surface of the Earth. Climbers climb up and down the cable with cargo.

Base station

Modern concepts for the base station/anchor are typically mobile stations: large oceangoing vessels or other mobile platforms. Mobile base stations would have the advantage over the earlier stationary concepts (with land-based anchors) of being able to maneuver to avoid high winds, storms, and space debris. Oceanic anchor points are also typically in international waters, simplifying and reducing the cost of negotiating territory use for the base station. Stationary land-based platforms would have simpler and less costly logistical access to the base. They would also have the advantage of being able to be at high altitudes, such as on top of mountains. In an alternate concept, the base station could be a tower, forming a space elevator which comprises both a compression tower close to the surface and a tether structure at higher altitudes. Combining a compression structure with a tension structure would reduce loads from the atmosphere at the Earth end of the tether and reduce the distance into the Earth's gravity field that the cable needs to extend, and thus reduce the critical strength-to-density requirements for the cable material, all other design factors being equal.

Cable

A space elevator cable would need to carry its own weight as well as the additional weight of climbers. The required strength of the cable would vary along its length. This is because at various points it would have to carry the weight of the cable below, or provide a downward force to retain the cable and counterweight above. Maximum tension on a space elevator cable would be at geosynchronous altitude, so the cable would have to be thickest there and taper as it approaches Earth. Any potential cable design may be characterized by the taper factor – the ratio between the cable's radius at geosynchronous altitude and at the Earth's surface. The cable would need to be made of a material with a high tensile strength/density ratio. For example, the Edwards space elevator design assumes a cable material with a tensile strength of at least 100 gigapascals. Since Edwards consistently assumed the density of his carbon nanotube cable to be 1300 kg/m3, that implies a specific strength of 77 megapascals/(kg/m3). This value takes into consideration the entire weight of the space elevator. An untapered space elevator cable would need a material capable of sustaining about 5,000 kilometres of its own length at sea-level gravity to reach a geostationary altitude of 35,786 km without yielding. Therefore, a material with very high strength and lightness is needed. For comparison, metals like titanium, steel or aluminium alloys have breaking lengths of only 20–30 km (0.2–0.3 MPa/(kg/m3)). Modern fiber materials such as kevlar, fiberglass and carbon/graphite fiber have breaking lengths of 100–400 km (1.0–4.0 MPa/(kg/m3)). Nanoengineered materials such as carbon nanotubes and, more recently discovered, graphene ribbons (perfect two-dimensional sheets of carbon) are expected to have breaking lengths of 5000–6000 km (50–60 MPa/(kg/m3)), and are also able to conduct electrical power.
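The breaking lengths quoted here follow directly from specific strength: the length L a uniform cable can support of itself in a constant 1 g field is L = σ/(ρ·g0). A minimal sketch, using round illustrative strength and density figures rather than measured data:

```python
# Breaking length (self-support length in uniform 1 g): L = sigma / (rho * g0).
G0 = 9.80665  # standard gravity, m/s^2

# (tensile strength in Pa, density in kg/m^3); illustrative round values.
materials = {
    "high-strength steel wire": (2.0e9, 7800),
    "Kevlar":                   (3.6e9, 1440),
    "carbon nanotube (theor.)": (63e9, 1300),
}

for name, (sigma, rho) in materials.items():
    breaking_length_km = sigma / (rho * G0) / 1000
    specific_strength = sigma / rho / 1e6  # (MPa)/(kg/m^3)
    print(f"{name:26s} {breaking_length_km:8,.0f} km "
          f"({specific_strength:.1f} MPa/(kg/m^3))")
```

These illustrative numbers land near the ranges quoted above (tens of km for metals, thousands of km for nanotube-class materials), which is what brings the taper ratio of the previous section into a practical range.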
For a space elevator on Earth, with its comparatively high gravity, the cable material would need to be stronger and lighter than currently available materials. For this reason, there has been a focus on the development of new materials that meet the demanding specific strength requirement. For high specific strength, carbon has advantages because it is only the sixth element in the periodic table. Carbon has comparatively few of the protons and neutrons which contribute most of the dead weight of any material. Most of the interatomic bonding forces of any element are contributed by only the outer few electrons. For carbon, the strength and stability of those bonds is high compared to the mass of the atom. The challenge in using carbon nanotubes is to extend the production of such material to macroscopic sizes while keeping it perfect on the microscopic scale (as microscopic defects are most responsible for material weakness). As of 2014, carbon nanotube technology allowed growing tubes up to a few tenths of a meter. In 2014, diamond nanothreads were first synthesized. Since they have strength properties similar to carbon nanotubes, diamond nanothreads were quickly seen as candidate cable material as well.

Climbers

A space elevator cannot be an elevator in the typical sense (with moving cables) due to the need for the cable to be significantly wider at the center than at the tips. While various designs employing moving cables have been proposed, most cable designs call for the "elevator" to climb up a stationary cable. Climbers cover a wide range of designs. On elevator designs whose cables are planar ribbons, most propose to use pairs of rollers to hold the cable with friction. Climbers would need to be paced at optimal timings so as to minimize cable stress and oscillations and to maximize throughput. Lighter climbers could be sent up more often, with several going up at the same time. This would increase throughput somewhat, but would lower the mass of each individual payload. The horizontal speed of each part of the cable, due to the Earth's rotation, increases with altitude in proportion to the distance from the center of the Earth, reaching low orbital speed at a point approximately 66 percent of the height between the surface and geostationary orbit, or a height of about 23,400 km. A payload released at this point would go into a highly eccentric elliptical orbit, staying just barely clear of atmospheric reentry, with the periapsis at the same altitude as low Earth orbit (LEO) and the apoapsis at the release height. With increasing release height the orbit would become less eccentric as both periapsis and apoapsis increase, becoming circular at geostationary level. When the payload has reached GEO, the horizontal speed is exactly the speed of a circular orbit at that level, so that if released, it would remain adjacent to that point on the cable. The payload can also continue climbing further up the cable beyond GEO, allowing it to obtain higher speed at jettison. If released from 100,000 km, the payload would have enough speed to reach the asteroid belt.
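These release-orbit claims follow from two-body orbital mechanics: a payload let go below GEO leaves at the cable's own speed v = ωr, slower than local circular speed, so the release point is the orbit's apogee. A minimal sketch under the usual idealizations (point-mass Earth, no atmosphere); the 200 km perigee target is an arbitrary illustrative choice:

```python
G_M = 3.986004418e14   # Earth's GM, m^3/s^2
OMEGA = 7.2921159e-5   # Earth's rotation rate, rad/s
R_E = 6.378137e6       # Earth's equatorial radius, m

def perigee_altitude(release_alt_m):
    """Perigee altitude (m) of the orbit entered on release from the cable."""
    r0 = R_E + release_alt_m
    v0 = OMEGA * r0                   # cable speed at the release point
    energy = 0.5 * v0**2 - G_M / r0   # specific orbital energy (vis-viva)
    a = -G_M / (2.0 * energy)         # semi-major axis
    return (2.0 * a - r0) - R_E       # apogee is r0, so perigee = 2a - r0

# Bisect for the release altitude whose perigee just clears 200 km.
lo, hi = 20_000e3, 30_000e3
while hi - lo > 1.0:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if perigee_altitude(mid) < 200e3 else (lo, mid)

print(f"release altitude for a 200 km perigee: {lo/1e3:,.0f} km")
print(f"perigee from GEO release: {perigee_altitude(35_786e3)/1e3:,.0f} km")
```

The bisection lands near 23,600 km, consistent with the "about 23,400 km" figure quoted above, and a release at GEO yields a perigee equal to GEO altitude, i.e. a circular orbit.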
As a payload is lifted up a space elevator, it would gain not only altitude, but horizontal speed (angular momentum) as well. The angular momentum is taken from the Earth's rotation. As the climber ascends, it is initially moving slower than each successive part of cable it is moving on to. This is the Coriolis force: the climber "drags" (westward) on the cable as it climbs, and slightly decreases the Earth's rotation speed. The opposite process would occur for descending payloads: the cable is tilted eastward, thus slightly increasing Earth's rotation speed. The overall effect of the centrifugal force acting on the cable would cause it to constantly try to return to the energetically favorable vertical orientation, so after an object has been lifted on the cable, the counterweight would swing back toward the vertical, a bit like a pendulum. Space elevators and their loads would be designed so that the center of mass is always well enough above the level of geostationary orbit to hold up the whole system. Lift and descent operations would need to be carefully planned so as to keep the pendulum-like motion of the counterweight around the tether point under control. Climber speed would be limited by the Coriolis force, available power, and by the need to ensure the climber's accelerating force does not break the cable. Climbers would also need to maintain a minimum average speed in order to move material up and down economically and expeditiously. At the speed of a very fast car or train, around 300 km/h, it would take about 5 days to climb to geosynchronous orbit.

Powering climbers

Both power and energy are significant issues for climbers: the climbers would need to gain a large amount of potential energy as quickly as possible to clear the cable for the next payload. Various methods have been proposed to provide energy to the climber:

Transfer the energy to the climber through wireless energy transfer while it is climbing.
Transfer the energy to the climber through some material structure while it is climbing.
Store the energy in the climber before it starts – this requires an extremely high specific energy source, such as nuclear energy.
Solar power – after the first 40 km it is possible to use solar energy to power the climber.

Wireless energy transfer such as laser power beaming is currently considered the most likely method, using megawatt-powered free-electron or solid-state lasers in combination with large adaptive mirrors and a photovoltaic array on the climber tuned to the laser frequency for efficiency. For climber designs powered by power beaming, this efficiency is an important design goal. Unused energy would need to be re-radiated away with heat-dissipation systems, which add to weight. Yoshio Aoki, a professor of precision machinery engineering at Nihon University and director of the Japan Space Elevator Association, suggested including a second cable and using the conductivity of carbon nanotubes to provide power.
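The scale of the power-delivery problem can be estimated from the same effective potential used in the physics section: the climber's motor must supply the difference in gravitational-plus-centrifugal potential between the surface and GEO. A back-of-the-envelope sketch (the five-day climb time is the figure quoted above, reused here as an assumption):

```python
G_M = 3.986004418e14   # Earth's GM, m^3/s^2
OMEGA = 7.2921159e-5   # Earth's rotation rate, rad/s
R_E = 6.378137e6       # Earth's equatorial radius, m
R_GEO = 4.2164e7       # geostationary radius, m

def potential(r):
    """Specific energy (J/kg) in the co-rotating frame: gravity + centrifugal."""
    return -G_M / r - 0.5 * (OMEGA * r) ** 2

delta_e = potential(R_GEO) - potential(R_E)   # J per kg lifted to GEO
print(f"energy to GEO: {delta_e/1e6:.1f} MJ/kg")   # ~48 MJ/kg

# For a hypothetical 5-day climb, the average mechanical power per tonne:
climb_seconds = 5 * 24 * 3600
print(f"mean power: {delta_e * 1000 / climb_seconds / 1e3:.0f} kW per tonne")
```

Roughly 48 MJ/kg over a five-day climb works out to on the order of 100 kW per tonne of climber, before any beaming or conversion losses, which is why power delivery dominates climber design.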
Counterweight

Several solutions have been proposed to act as a counterweight:

a heavy, captured asteroid;
a space dock, space station or spaceport positioned past geostationary orbit;
a further upward extension of the cable itself, so that the net upward pull would be the same as an equivalent counterweight;
parked spent climbers that had been used to thicken the cable during construction, other junk, and material lifted up the cable for the purpose of increasing the counterweight.

Extending the cable has the advantage of some simplicity of the task and the fact that a payload that went to the end of the counterweight-cable would acquire considerable velocity relative to the Earth, allowing it to be launched into interplanetary space. Its disadvantage is the need to produce greater amounts of cable material as opposed to using just anything available that has mass.

Applications

Launching into deep space

An object attached to a space elevator at a radius of approximately 53,100 km would be at escape velocity when released. Transfer orbits to the L1 and L2 Lagrangian points could be attained by release at 50,630 and 51,240 km, respectively, and transfer to lunar orbit from 50,960 km. At the end of Pearson's 144,000 km cable, the tangential velocity is 10.93 kilometers per second (6.79 mi/s). That is more than enough to escape Earth's gravitational field and send probes at least as far out as Jupiter. Once at Jupiter, a gravitational assist maneuver could permit solar escape velocity to be reached.
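The 53,100 km figure can be recovered by asking where the cable's own tangential speed reaches local escape velocity, i.e. where ½(ωr)² = GM/r; solving gives r = (2GM/ω²)^(1/3), exactly 2^(1/3) times the geostationary radius. A quick check:

```python
G_M = 3.986004418e14   # Earth's GM, m^3/s^2
OMEGA = 7.2921159e-5   # Earth's rotation rate, rad/s

# Escape condition on the cable: 0.5*(OMEGA*r)**2 == G_M/r
r_escape = (2.0 * G_M / OMEGA**2) ** (1.0 / 3.0)
r_geo = (G_M / OMEGA**2) ** (1.0 / 3.0)

print(f"escape release radius: {r_escape/1e3:,.0f} km")   # ~53,100 km
print(f"ratio to GEO radius:   {r_escape/r_geo:.4f}")     # 2**(1/3) ~ 1.26
```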
Extraterrestrial elevators

A space elevator could also be constructed on other planets, asteroids and moons. A Martian tether could be much shorter than one on Earth. Mars' surface gravity is 38 percent of Earth's, while it rotates around its axis in about the same time as Earth. Because of this, Martian stationary orbit is much closer to the surface, and hence the elevator could be much shorter. Current materials are already sufficiently strong to construct such an elevator. Building a Martian elevator would be complicated by the Martian moon Phobos, which is in a low orbit and intersects the Equator regularly (twice every orbital period of 11 h 6 min). Phobos and Deimos may get in the way of an areostationary space elevator; on the other hand, they may contribute useful resources to the project. Phobos is projected to contain high amounts of carbon. If carbon nanotubes become feasible for a tether material, there will be an abundance of carbon near Mars. This could provide readily available resources for future colonization on Mars.

Phobos is tide-locked: one side always faces its primary, Mars. An elevator extending 6,000 km from that inward side would end about 28 kilometers above the Martian surface, just out of the denser parts of the atmosphere of Mars. A similar cable extending 6,000 km in the opposite direction would counterbalance the first, so the center of mass of this system remains in Phobos. In total, the space elevator would extend out over 12,000 km, which would be below the areostationary orbit of Mars (17,032 km). A rocket launch would still be needed to get cargo to the bottom of the space elevator, 28 km above the surface. The surface of Mars rotates at 0.25 km/s at the equator, and the bottom of the space elevator would rotate around Mars at 0.77 km/s, so only 0.52 km/s (1,872 km/h) of delta-v would be needed to reach the space elevator. Phobos orbits at 2.15 km/s, and the outermost part of the space elevator would rotate around Mars at 3.52 km/s.

The Earth's Moon is a potential location for a lunar space elevator, especially as the specific strength required for the tether is low enough to use currently available materials. The Moon does not rotate fast enough for an elevator to be supported by centrifugal force (the proximity of the Earth means there is no effective lunar-stationary orbit), but differential gravity forces mean that an elevator could be constructed through Lagrangian points. A near-side elevator would extend through the Earth-Moon L1 point from an anchor point near the center of the visible part of Earth's Moon: the length of such an elevator must exceed the maximum L1 altitude of 59,548 km, and would be considerably longer to reduce the mass of the required apex counterweight. A far-side lunar elevator would pass through the L2 Lagrangian point and would need to be longer than on the near side; again, the tether length depends on the chosen apex anchor mass, but it too could be made of existing engineering materials. Rapidly spinning asteroids or moons could use cables to eject materials to convenient points, such as Earth orbits; or conversely, to eject materials to send a portion of the mass of the asteroid or moon to Earth orbit or a Lagrangian point. Freeman Dyson, a physicist and mathematician, suggested using such smaller systems as power generators at points distant from the Sun where solar power is uneconomical. A space elevator using presently available engineering materials could be constructed between mutually tidally locked worlds, such as Pluto and Charon or the components of the binary asteroid 90 Antiope, with no terminus disconnect, according to Francis Graham of Kent State University. However, spooled variable lengths of cable must be used due to the ellipticity of the orbits.

Construction

The construction of a space elevator would require reducing some technical risk. Some advances in engineering, manufacturing and physical technology are required. Once a first space elevator is built, the second one and all others would have the use of the previous ones to assist in construction, making their costs considerably lower. Such follow-on space elevators would also benefit from the great reduction in technical risk achieved by the construction of the first space elevator. Prior to the work of Edwards in 2000, most concepts for constructing a space elevator had the cable manufactured in space. That was thought to be necessary for such a large and long object and for such a large counterweight. Manufacturing the cable in space would be done in principle by using an asteroid or near-Earth object for source material. These earlier concepts for construction required a large preexisting space-faring infrastructure to maneuver an asteroid into its needed orbit around Earth. They also required the development of technologies for manufacture in space of large quantities of exacting materials. Since 2001, most work has focused on simpler methods of construction requiring much smaller space infrastructures. These envision the launch of a long cable on a large spool, followed by its deployment in space. The spool would be initially parked in a geostationary orbit above the planned anchor point. A long cable would be dropped "downward" (toward Earth) and would be balanced by a mass being dropped "upward" (away from Earth) for the whole system to remain on the geosynchronous orbit. Earlier designs imagined the balancing mass to be another cable (with counterweight) extending upward, with the main spool remaining at the original geosynchronous orbit level. Most current designs elevate the spool itself as the main cable is paid out, a simpler process. When the lower end of the cable is long enough to reach the surface of the Earth (at the equator), it would be anchored. Once anchored, the center of mass would be elevated more (by adding mass at the upper end or by paying out more cable). This would add more tension to the whole cable, which could then be used as an elevator cable. One plan for construction uses conventional rockets to place a "minimum size" initial seed cable of only 19,800 kg. This first very small ribbon would be adequate to support the first 619 kg climber.
The first 207 climbers would carry up and attach more cable to the original, increasing its cross-section area and widening the initial ribbon to about 160 mm at its widest point. The result would be a 750-ton cable with a lift capacity of 20 tons per climber.

Safety issues and construction challenges

For early systems, transit times from the surface to the level of geosynchronous orbit would be about five days. On these early systems, the time spent moving through the Van Allen radiation belts would be enough that passengers would need to be protected from radiation by shielding, which would add mass to the climber and decrease payload. A space elevator would present a navigational hazard, both to aircraft and spacecraft. Aircraft could be diverted by air-traffic control restrictions. All objects in stable orbits that have perigee below the maximum altitude of the cable and that are not synchronous with the cable would impact the cable eventually, unless avoiding action is taken. One potential solution proposed by Edwards is to use a movable anchor (a sea anchor) to allow the tether to "dodge" any space debris large enough to track. Impacts by space objects such as meteoroids, micrometeorites and orbiting man-made debris pose another design constraint on the cable. A cable would need to be designed to maneuver out of the way of debris, or absorb impacts of small debris without breaking.

Economics

With a space elevator, materials might be sent into orbit at a fraction of the current cost. As of 2022, conventional rocket designs cost about US$12,125 per kilogram (US$5,500 per pound) for transfer to geostationary orbit. Current space elevator proposals envision payload prices starting as low as $220 per kilogram ($100 per pound), similar to the $5–$300/kg estimates of the launch loop, but higher than the $310/ton to 500 km orbit quoted to Dr. Jerry Pournelle for an orbital airship system. Philip Ragan, co-author of the book Leaving the Planet by Space Elevator, states that "The first country to deploy a space elevator will have a 95 percent cost advantage and could potentially control all space activities."

International Space Elevator Consortium (ISEC)

The International Space Elevator Consortium (ISEC) is a US non-profit 501(c)(3) corporation formed to promote the development, construction, and operation of a space elevator as "a revolutionary and efficient way to space for all humanity". It was formed after the Space Elevator Conference in Redmond, Washington in July 2008 and became an affiliate organization with the National Space Society in August 2013. ISEC hosts an annual Space Elevator conference at the Seattle Museum of Flight. ISEC coordinates with the two other major societies focusing on space elevators: the Japanese Space Elevator Association and EuroSpaceward. ISEC supports symposia and presentations at the International Academy of Astronautics and the International Astronautical Federation Congress each year.

Related concepts

The conventional current concept of a "Space Elevator" has evolved from a static compressive structure reaching to the level of GEO, to the modern baseline idea of a static tensile structure anchored to the ground and extending to well above the level of GEO. In the current usage by practitioners (and in this article), a "Space Elevator" means the Tsiolkovsky-Artsutanov-Pearson type as considered by the International Space Elevator Consortium.
This conventional type is a static structure fixed to the ground and extending into space high enough that cargo can climb the structure up from the ground to a level where simple release will put the cargo into an orbit. Some concepts related to this modern baseline are not usually termed a "Space Elevator", but are similar in some way and are sometimes termed "Space Elevator" by their proponents. For example, Hans Moravec published an article in 1977 called "A Non-Synchronous Orbital Skyhook" describing a concept using a rotating cable. The rotation speed would exactly match the orbital speed in such a way that the tip velocity at the lowest point was zero compared to the object to be "elevated". It would dynamically grapple and then "elevate" high-flying objects to orbit, or low-orbiting objects to higher orbit. The original concept envisioned by Tsiolkovsky was a compression structure, a concept similar to an aerial mast. While such structures might reach space (100 km, 62 mi), they are unlikely to reach geostationary orbit. The concept of a Tsiolkovsky tower combined with a classic space elevator cable (reaching above the level of GEO) has been suggested. Other ideas use very tall compressive towers to reduce the demands on launch vehicles. The vehicle is "elevated" up the tower, which may extend above the atmosphere, and is launched from the top. Such tall towers to access near-space altitudes have been proposed by various researchers. The aerovator is a concept invented by a Yahoo Group discussing space elevators, and included in a 2009 book about space elevators. It would consist of a ribbon more than 1,000 km long, extending diagonally upwards from a ground-level hub and then levelling out to become horizontal. Aircraft would pull on the ribbon while flying in a circle, causing the ribbon to rotate around the hub once every 13 minutes with its tip travelling at 8 km/s. The ribbon would stay in the air through a mix of aerodynamic lift and centrifugal force. Payloads would climb up the ribbon and then be launched from the fast-moving tip into orbit.
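The aerovator's quoted figures are self-consistent: idealizing the ribbon as a straight 1,000 km spoke, the tip of a 13-minute rotation sweeps 2π × 1,000 km every 780 seconds. A one-line check:

```python
import math

RADIUS_KM = 1000.0    # ribbon length ~ rotation radius (idealized as straight)
PERIOD_S = 13 * 60    # one rotation every 13 minutes

tip_speed = 2 * math.pi * RADIUS_KM / PERIOD_S
print(f"tip speed: {tip_speed:.1f} km/s")   # ~8.1 km/s, near orbital velocity
```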
Other concepts for non-rocket spacelaunch related to a space elevator (or parts of a space elevator) include an orbital ring, a space fountain, a launch loop, a skyhook, a space tether, and a buoyant "SpaceShaft".

Notes

See also

Gravity elevator
Orbital ring

References

Further reading

A conference publication based on findings from the Advanced Space Infrastructure Workshop on Geostationary Orbiting Tether "Space Elevator" Concepts (PDF), held in 1999 at the NASA Marshall Space Flight Center, Huntsville, Alabama. Compiled by D.V. Smitherman Jr., published August 2000.
Hickman, John. "The Political Economy of Very Large Space Projects" (HTML, PDF). Journal of Evolution and Technology, Vol. 4, November 1999.
Edwards, Bradley Carl. "A Hoist to the Heavens".
Ziemelis, K. (2001). "Going up". New Scientist 2289: 24–27. Republished in SpaceRef. Title page: "The great space elevator: the dream machine that will turn us all into astronauts."
David, Leonard. "The Space Elevator Comes Closer to Reality". An overview published on space.com, 27 March 2002.
Krishnaswamy, Sridhar. "Stress Analysis – The Orbital Tower" (PDF).
LiftPort's roadmap for an elevator to space: "SE Roadmap" (PDF).
Bolonkin, Alexander. Non Rocket Space Launch and Flight. Elsevier, 2005. 488 pp.

External links

The Economist: Waiting For The Space Elevator (8 June 2006 – subscription required)
CBC Radio Quirks and Quarks, November 3, 2001: Riding the Space Elevator
Times of London Online: Going up ... and the next floor is outer space
The Space Elevator: 'Thought Experiment', or Key to the Universe? By Sir Arthur C. Clarke. Address to the XXXth International Astronautical Congress, Munich, 20 September 1979.
International Space Elevator Consortium website
Space Elevator entry at The Encyclopedia of Science Fiction

Articles containing video clips Exploratory engineering Hypothetical technology Space access Space colonization Spacecraft propulsion Spaceflight technology Vertical transport devices
Space elevator
[ "Astronomy", "Technology" ]
8,624
[ "Exploratory engineering", "Astronomical hypotheses", "Transport systems", "Space elevator", "Vertical transport devices" ]
29,218
https://en.wikipedia.org/wiki/Sodium%20thiopental
Sodium thiopental, also known as Sodium Pentothal (a trademark of Abbott Laboratories), thiopental, thiopentone, or Trapanal (also a trademark), is a rapid-onset, short-acting barbiturate general anesthetic. It is the thiobarbiturate analog of pentobarbital, and an analog of thiobarbital. Sodium thiopental was a core medicine in the World Health Organization's List of Essential Medicines, but was supplanted by propofol. Despite this, thiopental is listed as an acceptable alternative to propofol, depending on the local availability and cost of these agents. It was the first of three drugs administered during most lethal injections in the United States, until the US division of Hospira objected and stopped manufacturing the drug in 2011 and the European Union banned the export of the drug for this purpose. Although thiopental abuse carries a dependency risk, its recreational use is rare. Sodium thiopental is well known in popular culture, especially under the name "sodium pentothal", as a "truth serum", although its efficacy in this role has been questioned.

Uses

Anesthesia

Sodium thiopental is an ultra-short-acting barbiturate and has been used commonly in the induction phase of general anesthesia. Its use has been largely replaced with that of propofol, but it may retain some popularity as an induction agent for rapid-sequence induction and intubation, such as in obstetrics. Following intravenous injection, the drug rapidly reaches the brain and causes unconsciousness within 30–45 seconds. At one minute, the drug attains a peak concentration of about 60% of the total dose in the brain. Thereafter, the drug distributes to the rest of the body, and in about 5–10 minutes the concentration is low enough in the brain that consciousness returns. A normal dose of sodium thiopental (usually 4–6 mg/kg) given to a pregnant woman for operative delivery (caesarean section) rapidly makes her unconscious, but the baby in her uterus remains conscious. However, larger or repeated doses can depress the baby's consciousness. Sodium thiopental is not used to maintain anesthesia in surgical procedures because, in infusion, it displays zero-order elimination pharmacokinetics, leading to a long period before consciousness is regained. Instead, anesthesia is usually maintained with an inhaled anesthetic (gas) agent. Inhaled anesthetics are eliminated relatively quickly, so that stopping the inhaled anesthetic allows rapid return of consciousness. Sodium thiopental would have to be given in large amounts to maintain unconsciousness during anaesthesia because of its rapid redistribution throughout the body (it has a high volume of distribution). Since its half-life of 5.5 to 26 hours is quite long, consciousness would take a long time to return.
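The pattern described here, awakening within minutes because of redistribution despite an elimination half-life of many hours, can be visualized with a toy biexponential model. All constants below are illustrative placeholders, not clinical parameters, and the model is first-order, whereas the text notes that elimination at infusion doses becomes zero-order (capacity-limited), which makes accumulation even worse:

```python
import math

# Toy biexponential plasma curve: a fast distribution (redistribution) phase
# on a minutes timescale plus a slow elimination phase on an hours timescale.
ALPHA = math.log(2) / 3.0        # assumed distribution half-life ~3 min
BETA = math.log(2) / (12 * 60)   # assumed elimination half-life ~12 h
A, B = 0.85, 0.15                # assumed fractional amplitudes

def concentration(t_min):
    """Relative plasma concentration t minutes after a single bolus."""
    return A * math.exp(-ALPHA * t_min) + B * math.exp(-BETA * t_min)

for t in (0, 5, 10, 30, 60, 6 * 60, 24 * 60):
    print(f"t = {t:5d} min   C/C0 = {concentration(t):.3f}")
# The level falls by roughly three quarters within about ten minutes
# (redistribution, hence awakening), yet a tail persists for many hours,
# which is why thiopental is unsuited to maintenance by infusion.
```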
In veterinary medicine, sodium thiopental is used to induce anesthesia in animals. Since it is redistributed to fat, certain lean breeds of dogs, such as sighthounds, will have prolonged recoveries from sodium thiopental due to their lack of body fat and their lean body mass. Conversely, obese animals will have rapid recoveries, but it will take much longer for the drug to be entirely removed (metabolized) from their bodies. Sodium thiopental is always administered intravenously, as it can be fairly irritating to tissue and is a vesicant; severe tissue necrosis and sloughing can occur if it is injected incorrectly into the tissue around a vein.

Medically-induced coma

In addition to anesthesia induction, sodium thiopental was historically used to induce medical comas. It has now been superseded by drugs such as propofol, whose effects wear off more quickly than thiopental's. Patients with brain swelling causing elevation of intracranial pressure, either secondary to trauma or following surgery, may benefit from this drug. Sodium thiopental, and the barbiturate class of drugs, decrease neuronal activity, thereby decreasing the cerebral metabolic rate of oxygen consumption (CMRO2), and decrease the cerebrovascular response to carbon dioxide, which in turn decreases intracranial pressure. Patients with refractory intracranial hypertension (RICH) due to traumatic brain injury (TBI) may have improved long-term outcomes when barbiturate coma is added to their neurointensive care treatment. Reportedly, thiopental has been shown to be superior to pentobarbital in reducing intracranial pressure. This phenomenon is also called an inverse steal, or Robin Hood effect: cerebral perfusion to all parts of the brain is reduced (due to the decreased cerebrovascular response to carbon dioxide), allowing optimal perfusion to ischaemic areas of the brain, which have higher metabolic demands, since vessels supplying ischaemic areas would already be maximally dilated because of the metabolic demand.

Status epilepticus

In refractory status epilepticus, thiopental may be used to terminate a seizure.

Euthanasia

Sodium thiopental is used intravenously for the purposes of euthanasia. In both Belgium and the Netherlands, where active euthanasia is allowed by law, the standard protocol recommends sodium thiopental as the ideal agent to induce coma, followed by pancuronium bromide to paralyze muscles and stop breathing. Intravenous administration is the most reliable and rapid way to accomplish euthanasia. Death is quick. A coma is first induced by intravenous administration of 20 mg/kg thiopental sodium (Nesdonal) in a small volume (10 mL physiological saline). Then, a triple dose of a non-depolarizing neuromuscular blocking drug is given, such as 20 mg pancuronium bromide (Pavulon) or 20 mg vecuronium bromide (Norcuron). The muscle relaxant should be given intravenously to ensure optimal bioavailability, but pancuronium bromide may be administered intramuscularly at an increased dosage level of 40 mg.

Lethal injection

Along with pancuronium bromide and potassium chloride, thiopental is used in 34 states of the US to execute prisoners by lethal injection. A very large dose is given to ensure rapid loss of consciousness. Although death usually occurs within ten minutes of the beginning of the injection process, some executions have been known to take longer. The use of sodium thiopental in execution protocols was challenged in court after a study in the medical journal The Lancet reported that autopsies of executed inmates showed the level of thiopental in their bloodstream was insufficient to cause unconsciousness, although this is dependent on different factors and not just on the drug itself. On December 8, 2009, Ohio became the first state to use a single dose of sodium thiopental for an execution, following the failed use of the standard three-drug cocktail during a prior execution due to inability to locate suitable veins. Kenneth Biros was executed using the single-drug method. Washington State became the second state in the US to use a single-dose sodium thiopental injection for executions. On September 10, 2010, the execution of Cal Coburn Brown was the first in the state to use a single-dose, single-drug injection.
His death was pronounced approximately one and a half minutes after the intravenous administration of five grams of the drug. After its use for the execution of Jeffrey Landrigan in the US, the United Kingdom introduced a ban on the export of sodium thiopental in December 2010, after it was established that no European supplies to the US were being used for any other purpose. The restrictions were based on "the European Union Torture Regulation (including licensing of drugs used in execution by lethal injection)". From 21 December 2011, the EU extended trade restrictions to prevent the export of certain medicinal products for capital punishment, stating that "the Union disapproves of capital punishment in all circumstances and works towards its universal abolition".

Truth serum

Thiopental is still used in some places as a truth serum to weaken the resolve of a subject and make the individual more compliant to pressure. Barbiturates decrease both higher cortical brain function and inhibition. It is thought that because lying is a more involved process than telling the truth, suppression of the higher cortical functions may lead to the uncovering of the truth. The drug tends to make subjects verbose and cooperative with interrogators; however, the reliability of confessions made under thiopental is questionable.

Psychiatry

Psychiatrists have used thiopental to desensitize patients with phobias and to "facilitate the recall of painful repressed memories." One psychiatrist who worked with thiopental is Jan Bastiaans, who used this procedure to help relieve trauma in surviving victims of the Holocaust.

Mechanism of action

Sodium thiopental is a member of the barbiturate class of drugs, which are relatively non-selective compounds that bind to an entire superfamily of ligand-gated ion channels, of which the GABAA receptor channel is one of several representatives. This superfamily of ion channels includes the neuronal nicotinic acetylcholine receptor (nAChR), the 5-HT3 receptor, the glycine receptor and others. Surprisingly, while GABAA receptor currents are increased by barbiturates (and other general anesthetics), ligand-gated ion channels that are predominantly permeable to cations are blocked by these compounds. For example, neuronal nAChRs are blocked by clinically relevant anesthetic concentrations of both sodium thiopental and pentobarbital. Such findings implicate (non-GABAergic) ligand-gated ion channels, e.g. the neuronal nAChR, in mediating some of the (side) effects of barbiturates. The GABAA receptor is an inhibitory channel that decreases neuronal activity, and barbiturates enhance the inhibitory action of the GABAA receptor.

Controversies

Following a shortage that led a court to delay an execution in California, a company spokesman for Hospira, the sole American manufacturer of the drug, objected to the use of thiopental in lethal injection: "Hospira manufactures this product because it improves or saves lives, and the company markets it solely for use as indicated on the product labeling. The drug is not indicated for capital punishment and Hospira does not support its use in this procedure." On January 21, 2011, the company announced that it would stop production of sodium thiopental at its plant in Italy, because it could not provide Italian authorities with guarantees that exported doses would not be used in executions. According to a company spokesperson, Italy was the only viable place where it could produce the drug, leaving the US without a supplier.
In October 2015, the U.S. Food and Drug Administration confiscated an overseas shipment of thiopental destined for the states of Arizona and Texas. FDA spokesman Jeff Ventura said in a statement, "Courts have concluded that sodium thiopental for the injection in humans is an unapproved drug and may not be imported into the country". Metabolism Thiopental rapidly and easily crosses the blood–brain barrier as it is a lipophilic molecule. As with all lipid-soluble anaesthetic drugs, the short duration of action of sodium thiopental is due almost entirely to its redistribution away from the central circulation into muscle and fatty tissue, due to its very high lipid–water partition coefficient (approximately 10), leading to sequestration in fatty tissue. Once redistributed, the free fraction in the blood is metabolized in the liver by zero-order kinetics. Sodium thiopental is mainly metabolized to pentobarbital, 5-ethyl-5-(1'-methyl-3'-hydroxybutyl)-2-thiobarbituric acid, and 5-ethyl-5-(1'-methyl-3'-carboxypropyl)-2-thiobarbituric acid. Dosage The usual dose range for induction of anesthesia using thiopental is 3 to 6 mg/kg; however, many factors can alter this. Premedication with sedatives such as benzodiazepines or clonidine reduces requirements, due to drug synergy, as do specific disease states and other patient factors. Among the patient factors are age, sex, and lean body mass. Specific disease conditions that can alter the dose requirements of thiopentone, and for that matter of any other intravenous anaesthetic, include hypovolemia, burns, azotemia, liver failure, and hypoproteinemia. Side effects As with nearly all anesthetic drugs, thiopental causes cardiovascular and respiratory depression resulting in hypotension, apnea, and airway obstruction. For these reasons, thiopental should only be administered by suitably trained medical personnel, in an environment equipped to provide (respiratory) support, such as mechanical ventilation. Other side-effects include headache, agitated emergence, prolonged somnolence, and nausea. Intravenous administration of sodium thiopental is followed instantly by an odor and/or taste sensation, sometimes described as being similar to rotting onions, or to garlic. Residual side-effects may last up to 36 hours. Although each molecule of thiopental contains one sulfur atom, it is not a sulfonamide, and does not show the allergic reactions of sulfa/sulpha drugs. Contraindications Thiopental should be used with caution in cases of liver disease, Addison's disease, myxedema, severe heart disease, severe hypotension, a severe breathing disorder, or a family history of porphyria. Co-administration of pentoxifylline and thiopental causes death by acute pulmonary edema in rats. This pulmonary edema was not mediated by cardiac failure or by pulmonary hypertension but was due to increased pulmonary vascular permeability. History Sodium thiopental was discovered in the early 1930s by Ernest H. Volwiler and Donalee L. Tabern, working for Abbott Laboratories. It was first used in human beings on March 8, 1934, by Dr. Ralph M. Waters in an investigation of its properties, which were short-term anesthesia and surprisingly little analgesia. Three months later, Dr. John S. Lundy started a clinical trial of thiopental at the Mayo Clinic at the request of Abbott. Abbott continued to make the drug until 2004, when it spun off its hospital-products division as Hospira. 
Thiopental is famously associated with a number of anesthetic deaths in victims of the attack on Pearl Harbor. These deaths, relatively soon after the drug's introduction, were said to be due to excessive doses given to shocked trauma patients. However, a review in the British Journal of Anaesthesia of recent evidence made available through freedom of information legislation has suggested that this story was grossly exaggerated. Of the 344 wounded who were admitted to the Tripler Army Hospital, only 13 did not survive, and it is unlikely that thiopentone overdose was responsible for more than a few of these. See also Pentobarbital References External links PubChem Substance Summary: Thiopental Vassallo, Susi, M.D. "Thiopental in Lethal Injection" (Archive). Fordham Urban Law Journal. Vol. 35, issue 4. June 18, 2008. pp. 957–968. Drugs developed by AbbVie AMPA receptor antagonists GABAA receptor positive allosteric modulators General anesthetics Glycine receptor agonists Hypnotics Kainate receptor antagonists Lethal injection components Nicotinic antagonists Organic sodium salts Drugs developed by Pfizer Sedatives Thiobarbiturates World Health Organization essential medicines
Sodium thiopental
[ "Chemistry", "Biology" ]
3,376
[ "Hypnotics", "Behavior", "Salts", "Organic sodium salts", "Sleep" ]
29,229
https://en.wikipedia.org/wiki/Slot%20machine
A slot machine, fruit machine (British English), poker machine or pokies (Australian English and New Zealand English) is a gambling machine that creates a game of chance for its customers. A slot machine's standard layout features a screen displaying three or more reels that "spin" when the game is activated. Some modern slot machines still include a lever as a skeuomorphic design trait to trigger play. However, the mechanical operations of early machines have been superseded by random number generators, and most are now operated using buttons and touchscreens. Slot machines include one or more currency detectors that validate the form of payment, whether coin, banknote, voucher, or token. The machine pays out according to the pattern of symbols displayed when the reels stop "spinning". Slot machines are the most popular gambling method in casinos and contribute about 70% of the average U.S. casino's income. Digital technology has resulted in variations in the original slot machine concept. As the player is essentially playing a video game, manufacturers can offer more interactive elements, such as advanced bonus rounds and more varied video graphics. Terms and their sources The "slot machine" term derives from the slots on the machine for inserting and retrieving coins. "Fruit machine" comes from the traditional fruit images on the spinning reels such as lemons and cherries. Slot machines are also known pejoratively as "one-armed bandits", alluding to the large mechanical levers affixed to the sides of early mechanical machines, and to the games' ability to empty players' pockets and wallets as thieves would. History Sittman and Pitt of Brooklyn, New York, developed a gambling machine in 1891 that was a precursor to the modern slot machine. It contained five drums holding a total of 50 card faces and was based on poker. The machine proved extremely popular, and soon many bars in the city had one or more of them. Players would insert a nickel and pull a lever, which would spin the drums and the cards that they held, the player hoping for a good poker hand. There was no direct payout mechanism, so a pair of kings might get the player a free beer, whereas a royal flush could pay out cigars or drinks; the prizes were wholly dependent upon what the establishment would offer. To improve the odds for the house, two cards were typically removed from the deck, the ten of spades and the jack of hearts, doubling the odds against winning a royal flush. The drums could also be rearranged to further reduce a player's chance of winning. Because of the vast number of possible wins in the original poker-based game, it proved practically impossible to make a machine capable of awarding an automatic payout for all possible winning combinations. At some time between 1887 and 1895, Charles Fey of San Francisco, California, devised a much simpler automatic mechanism with three spinning reels containing a total of five symbols: horseshoes, diamonds, spades, hearts and a Liberty Bell; the bell gave the machine its name. By replacing ten cards with five symbols and using three reels instead of five drums, the complexity of reading a win was considerably reduced, allowing Fey to design an effective automatic payout mechanism. Three bells in a row produced the biggest payoff, ten nickels (50¢). Liberty Bell was a huge success and spawned a thriving mechanical gaming device industry. After a few years, the devices were banned in California, but Fey still could not keep up with the demand for them elsewhere. 
The Liberty Bell machine was so popular that it was copied by many slot machine manufacturers. The first of these, also called the "Liberty Bell", was produced by the manufacturer Herbert Mills in 1907. By 1908, "bell" machines had been installed in cigar stores, brothels and barber shops. Early machines, including an 1899 Liberty Bell, are now part of the Nevada State Museum's Fey Collection. The first Liberty Bell machines produced by Mills used the same symbols on the reels as did Charles Fey's original. Soon afterward, another version was produced with patriotic symbols, such as flags and wreaths, on the wheels. Later, a similar machine called the Operator's Bell was produced that included the option of adding a gum-vending attachment. As the gum offered was fruit-flavored, fruit symbols were placed on the reels: lemons, cherries, oranges and plums. A bell was retained, and a picture of a stick of Bell-Fruit Gum, the origin of the bar symbol, was also present. This set of symbols proved highly popular and was used by other companies that began to make their own slot machines: Caille, Watling, Jennings and Pace. A commonly used technique to avoid gambling laws in several states was to award food prizes. For this reason, several gumball and other vending machines were regarded with mistrust by the courts. The two Iowa cases of State v. Ellis and State v. Striggles are both used in criminal law classes to illustrate the concept of reliance upon authority as it relates to the axiomatic ignorantia juris non excusat ("ignorance of the law is no excuse"). In these cases, a mint vending machine was declared to be a gambling device because the machine would, by internally manufactured chance, occasionally give the next user several tokens exchangeable for more candy. Despite the display of the result of the next use on the machine, the courts ruled that "[t]he machine appealed to the player's propensity to gamble, and that is [a] vice." In 1963, Bally developed the first fully electromechanical slot machine called Money Honey (although earlier machines such as Bally's High Hand draw-poker machine had exhibited the basics of electromechanical construction as early as 1940). Its electromechanical workings made Money Honey the first slot machine with a bottomless hopper and automatic payout of up to 500 coins without the help of an attendant. The popularity of this machine led to the increasing predominance of electronic games, with the side lever soon becoming vestigial. The first video slot machine was developed in 1976 in Kearny Mesa, California by the Las Vegas–based Fortune Coin Co. This machine used a modified Sony Trinitron color receiver for the display and logic boards for all slot-machine functions. The prototype was mounted in a full-size, show-ready slot-machine cabinet. The first production units went on trial at the Las Vegas Hilton Hotel. After some modifications to defeat cheating attempts, the video slot machine was approved by the Nevada State Gaming Commission and eventually found popularity on the Las Vegas Strip and in downtown casinos. Fortune Coin Co. and its video slot-machine technology were purchased by IGT (International Gaming Technology) in 1978. The first American video slot machine to offer a "second screen" bonus round was Reel ’Em In, developed by WMS Industries in 1996. This type of machine had appeared in Australia from at least 1994 with the Three Bags Full game. 
With this type of machine, the display changes to provide a different game in which an additional payout may be awarded. Operation Depending on the machine, the player can insert cash or, in "ticket-in, ticket-out" machines, a paper ticket with a barcode, into a designated slot on the machine. The machine is then activated by means of a lever or button (either physical or on a touchscreen), which activates reels that spin and stop to rearrange the symbols. If a player matches a winning combination of symbols, the player earns credits based on the paytable. Symbols vary depending on the theme of the machine. Classic symbols include objects such as fruits, bells, and stylized lucky sevens. Most slot games have a theme, such as a specific style, location, or character. Symbols and other bonus features of the game are typically aligned with the theme. Some themes are licensed from popular media franchises, including films, television series (including game shows such as Wheel of Fortune, which has been one of the most popular lines of slot machines overall), entertainers, and musicians. Multi-line slot machines have become more popular since the 1990s. These machines have more than one payline, meaning that visible symbols that are not aligned on the main horizontal may be considered as winning combinations. Traditional three-reel slot machines commonly have one, three, or five paylines while video slot machines may have 9, 15, 25, or as many as 1024 different paylines. Most accept variable numbers of credits to play, with 1 to 15 credits per line being typical. The higher the amount bet, the higher the payout will be if the player wins. One of the main differences between video slot machines and reel machines is in the way payouts are calculated. With reel machines, the only way to win the maximum jackpot is to play the maximum number of coins (usually three, sometimes four or even five coins per spin). With video machines, the fixed payout values are multiplied by the number of coins per line that is being bet. In other words: on a reel machine, the odds are more favorable if the gambler plays with the maximum number of coins available. However, depending on the structure of the game and its bonus features, some video slots may still include features that improve chances at payouts by making increased wagers. "Multi-way" games eschew fixed paylines in favor of allowing symbols to pay anywhere, as long as there is at least one in at least three consecutive reels from left to right. Multi-way games may be configured to allow players to bet by-reel: for example, on a game with a 3x5 pattern (often referred to as a 243-way game), playing one reel allows all three symbols in the first reel to potentially pay, but only the center row pays on the remaining reels (often designated by darkening the unused portions of the reels). Other multi-way games use a 4x5 or 5x5 pattern, where there are up to five symbols in each reel, allowing for up to 1,024 and 3,125 ways to win respectively. The Australian manufacturer Aristocrat brands games featuring this system as "Reel Power", "Xtra Reel Power" and "Super Reel Power" respectively. A variation involves patterns where symbols are adjacent to one another. Most of these games have a hexagonal reel formation, and much like multi-way games, any patterns not played are darkened out of use. Denominations can range from 1 cent ("penny slots") all the way up to $100.00 or more per credit. 
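Returning to the "ways to win" figures in the multi-way paragraph above, the arithmetic is simply the product of the visible symbol positions on each reel. The following Python sketch is purely illustrative (the function name and layout list are ours, not an industry API):

```python
# In a multi-way game, every visible position on every reel can participate,
# so the number of "ways" is rows ** reels for a uniform rows x reels layout.
def ways_to_win(rows: int, reels: int) -> int:
    return rows ** reels

for rows, reels in [(3, 5), (4, 5), (5, 5)]:
    print(f"{rows}x{reels} layout: {ways_to_win(rows, reels)} ways")
# 3x5 layout: 243 ways
# 4x5 layout: 1024 ways
# 5x5 layout: 3125 ways
```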
Machines at the high end of this range are typically known as "high limit" machines, and machines configured to allow for such wagers are often located in dedicated areas (which may have a separate team of attendants to cater to the needs of those who play there). The machine automatically calculates the number of credits the player receives in exchange for the cash inserted. Newer machines often allow players to choose from a selection of denominations on a splash screen or menu. Terminology A bonus is a special feature of the particular game theme, which is activated when certain symbols appear in a winning combination. Bonuses and the number of bonus features vary depending upon the game. Some bonus rounds are a special session of free spins (the number of which is often based on the winning combination that triggers the bonus), often with a different or modified set of winning combinations than the main game and/or other multipliers or increased frequencies of symbols, or a "hold and re-spin" mechanic in which specific symbols (usually marked with values of credits or other prizes) are collected and locked in place over a finite number of spins. In other bonus rounds, the player is presented with several items on a screen from which to choose. As the player chooses items, a number of credits is revealed and awarded. Some bonuses use a mechanical device, such as a spinning wheel, that works in conjunction with the bonus to display the amount won. A candle is a light on top of the slot machine. It flashes to alert the operator that change is needed, a hand pay is requested, there is a potential problem with the machine, or the progressive jackpot has been won. It can be lit by the player by pressing the "service" or "help" button. Carousel refers to a grouping of slot machines, usually in a circle or oval formation. A coin hopper is a container where the coins that are immediately available for payouts are held. The hopper is a mechanical device that rotates coins into the coin tray when a player collects credits/coins (by pressing a "Cash Out" button). When a certain preset coin capacity is reached, a coin diverter automatically redirects, or "drops", excess coins into a "drop bucket" or "drop box". (Unused coin hoppers can still be found even on games that exclusively employ Ticket-In, Ticket-Out technology, as a vestige.) The credit meter is a display of the amount of money or number of credits on the machine. On mechanical slot machines, this is usually a seven-segment display, but video slot machines typically use stylized text that suits the game's theme and user interface. The drop bucket or drop box is a container located in a slot machine's base where excess coins are diverted from the hopper. Typically, a drop bucket is used for low-denomination slot machines and a drop box is used for high-denomination slot machines. A drop box contains a hinged lid with one or more locks whereas a drop bucket does not contain a lid. The contents of drop buckets and drop boxes are collected and counted by the casino on a scheduled basis. EGM is short for "Electronic Gaming Machine". Free spins are a common form of bonus, where a series of spins are automatically played at no charge at the player's current wager. Free spins are usually triggered via a scatter of at least three designated symbols (with the number of spins dependent on the number of symbols that land). Some games allow the free spins bonus to "retrigger", which adds additional spins on top of those already awarded. 
There is no theoretical limit to the number of free spins obtainable. Some games may have other features that can also trigger over the course of free spins. A hand pay refers to a payout made by an attendant or at an exchange point ("cage"), rather than by the slot machine itself. A hand pay occurs when the amount of the payout exceeds the maximum amount that was preset by the slot machine's operator. Usually, the maximum amount is set at the level where the operator must begin to deduct taxes. A hand pay could also be necessary as a result of a short pay. Hopper fill slip is a document used to record the replenishment of the coin in the coin hopper after it becomes depleted as a result of making payouts to players. The slip indicates the amount of coin placed into the hoppers, as well as the signatures of the employees involved in the transaction, the slot machine number, the location, and the date. MEAL book (Machine entry access log or Machine entry authorization log, depending on the jurisdiction or venue) is a log of the employee's entries into the machine. Low-level or slant-top slot machines include a stool so the player may sit down. Stand-up or upright slot machines are played while standing. Optimal play is a payback percentage based on a gambler using the optimal strategy in a skill-based slot machine game. Payline is a line that crosses through one symbol on each reel, along which a winning combination is evaluated. Classic spinning reel machines usually have up to nine paylines, while video slot machines may have as many as one hundred. Paylines can be of various shapes (horizontal, vertical, oblique, triangular, zigzag, etc.). Persistent state refers to passive features on some slot machines, some of which are able to trigger bonus payouts or other special features if certain conditions are met over time by players on that machine. Roll-up is the process of dramatizing a win by playing sounds while the meters count up to the amount that has been won. Short pay refers to a partial payout made by a slot machine, which is less than the amount due to the player. This occurs if the coin hopper has been depleted as a result of making earlier payouts to players. The remaining amount due to the player is either paid as a hand pay or an attendant will come and refill the machine. A scatter is a pay combination based on occurrences of a designated symbol landing anywhere on the reels, rather than falling in sequence on the same payline. A scatter pay usually requires a minimum of three symbols to land, and the machine may offer increased prizes or jackpots depending on the number that land. Scatters are frequently used to trigger bonus games, such as free spins (with the number of spins multiplying based on the number of scatter symbols that land). The scatter symbol usually cannot be matched using wilds, and some games may require the scatter symbols to appear on consecutive reels in order to pay. On some multiway games, scatter symbols still pay in unused areas. Taste is a reference to the small amount often paid out to keep a player seated and continuously betting. Only rarely will machines fail to pay even the minimum out over the course of several pulls. Tilt is a term derived from electromechanical slot machines' "tilt switches", which would make or break a circuit when they were tilted or otherwise tampered with, triggering an alarm. 
While modern machines no longer have tilt switches, any kind of technical fault (door switch in the wrong state, reel motor failure, out of paper, etc.) is still called a "tilt". A theoretical hold worksheet is a document provided by the manufacturer for every slot machine that indicates the theoretical percentage the machine should hold based on the amount paid in. The worksheet also indicates the reel strip settings, number of coins that may be played, the payout schedule, the number of reels and other information descriptive of the particular type of slot machine. Volatility or variance refers to the measure of risk associated with playing a slot machine. A low-volatility slot machine has regular but smaller wins, while a high-variance slot machine has fewer but bigger wins. Weight count is an American term referring to the total value of coins or tokens removed from a slot machine's drop bucket or drop box for counting by the casino's hard count team through the use of a weigh scale. Wild symbols substitute for most other symbols in the game (similarly to a joker card), usually excluding scatter and jackpot symbols (or offering a lower prize on non-natural combinations that include wilds). How jokers behave depends on the specific game and on whether the player is in a bonus or free games mode. Sometimes wild symbols may only appear on certain reels, or have a chance to "stack" across the entire reel. Pay table Each machine has a table that lists the number of credits the player will receive if the symbols listed on the pay table line up on the pay line of the machine. Some symbols are wild and can represent many, or all, of the other symbols to complete a winning line. Especially on older machines, the pay table is listed on the face of the machine, usually above and below the area containing the wheels. On video slot machines, they are usually contained within a help menu, along with information on other features. Technology Reels Historically, all slot machines used revolving mechanical reels to display and determine results. Although the original slot machine used five reels, simpler and therefore more reliable three-reel machines quickly became the standard. A problem with three-reel machines is that the number of combinations is only cubic – the original slot machine with three physical reels and 10 symbols on each reel had only 10³ = 1,000 possible combinations. This limited the manufacturer's ability to offer large jackpots, since even the rarest event had a likelihood of 0.1%. The maximum theoretical payout, assuming 100% return to player, would be 1,000 times the bet, but that would leave no room for other pays, making the machine very high risk, and also quite boring. Although the number of symbols eventually increased to about 22, allowing 10,648 combinations, this still limited jackpot sizes as well as the number of possible outcomes. In the 1980s, however, slot machine manufacturers incorporated electronics into their products and programmed them to weight particular symbols. Thus the odds of losing symbols appearing on the payline became disproportionate to their actual frequency on the physical reel. A symbol would only appear once on the reel displayed to the player, but could, in fact, occupy several stops on the virtual reel. 
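The effect of symbol weighting can be made concrete with a small sketch. The virtual-reel mapping below is entirely hypothetical (the symbol names and stop counts are invented for illustration, not taken from any real PAR sheet), but it shows the mechanism: the jackpot symbol appears once on the displayed reel yet owns only 2 of 64 virtual stops, so it lands far less often than its visible frequency suggests.

```python
import random

# Hypothetical virtual reel: each displayed symbol owns some number of the
# 64 virtual stops the RNG actually selects among.
VIRTUAL_REEL = {"jackpot": 2, "seven": 6, "bar": 10, "cherry": 18, "blank": 28}
STOPS = [sym for sym, n in VIRTUAL_REEL.items() for _ in range(n)]  # 64 entries

def spin_reel() -> str:
    # The RNG picks a virtual stop; the physical reel is then driven to
    # display the symbol mapped to that stop.
    return random.choice(STOPS)

print("Sample spin:", [spin_reel() for _ in range(3)])

# Chance of lining up three jackpot symbols on three identically weighted reels:
p = (VIRTUAL_REEL["jackpot"] / len(STOPS)) ** 3
print(f"P(triple jackpot) = {p:.2e} (about 1 in {1/p:,.0f} plays)")
```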
In 1984, Inge Telnaes received a patent for a device titled, "Electronic Gaming Device Utilizing a Random Number Generator for Selecting the Reel Stop Positions" (US Patent 4448419), which states: "It is important to make a machine that is perceived to present greater chances of payoff than it actually has within the legal limitations that games of chance must operate." The patent was later bought by International Game Technology and has since expired. A virtual reel that has 256 virtual stops per reel would allow up to 256³ = 16,777,216 final positions. The manufacturer could choose to offer a $1 million jackpot on a $1 bet, confident that it will only happen, over the long term, once every 16.8 million plays. Computerization With microprocessors now ubiquitous, the computers inside modern slot machines allow manufacturers to assign a different probability to every symbol on every reel. To the player, it might appear that a winning symbol was "so close", whereas in fact the probability is much lower. In the 1980s in the U.K., machines embodying microprocessors became common. These used a number of features to ensure the payout was controlled within the limits of the gambling legislation. As a coin was inserted into the machine, it could go either directly into the cashbox for the benefit of the owner or into a channel that formed the payout reservoir, with the microprocessor monitoring the number of coins in this channel. The drums themselves were driven by stepper motors, controlled by the processor and with proximity sensors monitoring the position of the drums. A "look-up table" within the software allows the processor to know what symbols were being displayed on the drums to the gambler. This allowed the system to control the level of payout by stopping the drums at positions it had determined. If the payout channel had filled up, the payout became more generous; if nearly empty, the payout became less so (thus giving good control of the odds). Video slot machines Video slot machines do not use mechanical reels, but use graphical reels on a computerized display. As there are no mechanical constraints on the design of video slot machines, games often use at least five reels, and may also use non-standard layouts. This greatly expands the number of possibilities: a machine can have 50 or more symbols on a reel, giving odds as high as 300 million to 1 against – enough for even the largest jackpot. As there are so many combinations possible with five reels, manufacturers do not need to weight the payout symbols (although some may still do so). Instead, higher paying symbols will typically appear only once or twice on each reel, while more common symbols earning a more frequent payout will appear many times. Video slot machines usually make more extensive use of multimedia, and can feature more elaborate minigames as bonuses. Modern cabinets typically use flat-panel displays, but cabinets using larger curved screens (which can provide a more immersive experience for the player) are not uncommon. Video slot machines typically encourage the player to play multiple "lines": rather than simply taking the middle of the three symbols displayed on each reel, a line could go from top left to the bottom right or any other pattern specified by the manufacturer. As each symbol is equally likely, there is no difficulty for the manufacturer in allowing the player to take as many of the possible lines on offer as desired – the long-term return to the player will be the same. 
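The claim that taking extra lines cannot change the long-term return can be checked numerically. The toy game below is a made-up example (the reel strip and pay table are invented, not from any real machine): because every payline sees the same symbol distribution, the expected payback per coin is identical no matter how many lines are bet.

```python
from fractions import Fraction
from itertools import product

# Toy 3-reel game: each reel uses this strip, and each of the 8 stops is
# equally likely, so all 8**3 = 512 stop combinations can be enumerated.
STRIP = ["seven", "bar", "bar", "cherry", "cherry", "cherry", "blank", "blank"]
PAYS = {("seven",) * 3: 150, ("bar",) * 3: 20, ("cherry",) * 3: 5}  # per coin bet

n = len(STRIP)
ev = sum(Fraction(PAYS.get(combo, 0), n ** 3) for combo in product(STRIP, repeat=3))
print(f"Expected payback per coin on one line: {float(ev):.4f}")

# Betting one coin on each of k lines stakes k coins and returns k * ev on
# average, so the return fraction is unchanged by the number of lines.
for k in (1, 5, 20):
    print(f"{k:>2} lines: stake {k:>2} coins, expected payback {float(k * ev):.4f} coins")
```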
The difference for the player is that the more lines they play, the more likely they are to get paid on a given spin (because they are betting more). To avoid seeming as if the player's money is simply ebbing away (whereas a payout of 100 credits on a single-line machine would be 100 bets and the player would feel they had made a substantial win, on a 20-line machine, it would only be five bets and not seem as significant), manufacturers commonly offer bonus games, which can return many times their bet. The player is encouraged to keep playing to reach the bonus: even if they are losing, the bonus game could allow them to win back their losses. Payout percentage Slot machines are typically programmed to pay out as winnings 0% to 99% of the money that is wagered by players. This is known as the "theoretical payout percentage" or RTP, "return to player". The minimum theoretical payout percentage varies among jurisdictions and is typically established by law or regulation. For example, the minimum payout in Nevada is 75%, in New Jersey 83%, and in Mississippi 80%. The winning patterns on slot machines – the amounts they pay and the frequencies of those payouts – are carefully selected to yield a certain fraction of the money paid to the "house" (the operator of the slot machine) while returning the rest to the players during play. Suppose that a certain slot machine costs $1 per spin and has a return to player (RTP) of 95%. It can be calculated that, over a sufficiently long period such as 1,000,000 spins, the machine will return an average of $950,000 to its players, who have inserted $1,000,000 during that time. In this (simplified) example, the slot machine is said to pay out 95%. The operator keeps the remaining $50,000. Within some EGM development organizations this concept is referred to simply as "par". "Par" also manifests itself to gamblers as promotional techniques: "Our 'Loose Slots' have a 93% payback! Play now!" A slot machine's theoretical payout percentage is set at the factory when the software is written. Changing the payout percentage after a slot machine has been placed on the gaming floor requires a physical swap of the software or firmware, which is usually stored on an EPROM but may be loaded onto non-volatile random access memory (NVRAM) or even stored on CD-ROM or DVD, depending on the capabilities of the machine and the applicable regulations. In certain jurisdictions, such as New Jersey, the EPROM has a tamper-evident seal and can only be changed in the presence of Gaming Control Board officials. Other jurisdictions, including Nevada, randomly audit slot machines to ensure that they contain only approved software. Historically, many casinos, both online and offline, have been unwilling to publish individual game RTP figures, making it impossible for the player to know whether they are playing a "loose" or a "tight" game. Since the turn of the century, some information regarding these figures has started to come into the public domain, either through various casinos releasing them—primarily this applies to online casinos—or through studies by independent gambling authorities. The return to player is not the only statistic that is of interest; the probabilities of every payout on the pay table are also critical. For example, consider a hypothetical slot machine with a dozen different values on the pay table, where the probabilities of hitting every payout are zero except for the largest one. 
If the payout is 4,000 times the input amount, and it happens every 4,000 times on average, the return to player is exactly 100%, but the game would be dull to play. Also, most people would not win anything, and having entries on the paytable that have a return of zero would be deceptive. As these individual probabilities are closely guarded secrets, it is possible that the advertised machines with high return to player simply increase the probabilities of these jackpots. The casino could legally place machines with a similar style of payout and advertise that some machines have 100% return to player. The added advantage is that these large jackpots increase the excitement of the other players. The table of probabilities for a specific machine is called the Probability and Accounting Report or PAR sheet, also PARS, commonly understood as Paytable and Reel Strips. Mathematician Michael Shackleford revealed the PARS for one commercial slot machine, an original International Gaming Technology Red White and Blue machine. This game, in its original form, is obsolete, so these specific probabilities do not apply. He published the odds only after a fan sent him information that had been posted on a slot machine in the Netherlands. The psychology of the machine design is quickly revealed. There are 13 possible payouts ranging from 1:1 to 2,400:1. The 1:1 payout comes every 8 plays. The 5:1 payout comes every 33 plays, whereas the 2:1 payout comes every 600 plays. Most players assume the likelihood increases in proportion to the payout. The one mid-size payout that is designed to give the player a thrill is the 80:1 payout. It is programmed to occur an average of once every 219 plays. The 80:1 payout is high enough to create excitement, but not so high that the player is likely to take their winnings and abandon the game. More than likely the player began the game with at least 80 times his bet (for instance, there are 80 quarters in $20). In contrast, the 150:1 payout occurs on average only once every 6,241 plays. The highest payout of 2,400:1 occurs on average only once every 64³ = 262,144 plays, since the machine has 64 virtual stops. The player who continues to feed the machine is likely to have several mid-size payouts, but unlikely to have a large payout. He quits after he is bored or has exhausted his bankroll. Despite their confidentiality, occasionally a PAR sheet is posted on a website. They have limited value to the player, because usually a machine will have 8 to 12 different possible programs with varying payouts. In addition, slight variations of each machine (e.g., with double jackpots or five times play) are always being developed. The casino operator can choose which EPROM chip to install in any particular machine to select the payout desired. The result is that there is not really such a thing as a high payback type of machine, since every machine potentially has multiple settings. From October 2001 to February 2002, columnist Michael Shackleford obtained PAR sheets for five different nickel machines: four IGT games (Austin Powers, Fortune Cookie, Leopard Spots and Wheel of Fortune) and one game manufactured by WMS, Reel 'em In. Without revealing the proprietary information, he developed a program that would allow him to determine, with usually fewer than a dozen plays on each machine, which EPROM chip was installed. Then he did a survey of over 400 machines in 70 different casinos in Las Vegas. 
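The six Red White and Blue frequencies quoted above can be turned into a partial return figure. The sketch below sums only those published entries; the other seven pay-table values remain secret, so this is a lower bound on the game's full RTP, not the RTP itself:

```python
# (payout multiple, average plays per hit), as quoted for Red White and Blue
quoted = [(1, 8), (5, 33), (2, 600), (80, 219), (150, 6241), (2400, 262144)]

partial = 0.0
for pay, freq in quoted:
    contribution = pay / freq  # expected coins returned per coin bet
    partial += contribution
    print(f"{pay:>5}:1 every {freq:>7} plays -> {contribution:.4f} per coin")
print(f"Partial return from these six entries: {partial:.3f} (about {partial:.1%})")
```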
Shackleford averaged the survey data and assigned an average payback percentage to the machines in each casino. The resultant list was widely publicized for marketing purposes (especially by the Palms casino, which had the top ranking). One reason that the slot machine is so profitable to a casino is that the player must play the high house edge and high payout wagers along with the low house edge and low payout wagers. In a more traditional wagering game like craps, the player knows that certain wagers have almost a 50/50 chance of winning or losing, but they only pay a limited multiple of the original bet (usually no higher than three times). Other bets have a higher house edge, but the player is rewarded with a bigger win (up to thirty times in craps). The player can choose what kind of wager he wants to make. A slot machine does not afford such an opportunity. Theoretically, the operator could make these probabilities available, or allow the player to choose which one to play, so that the player is free to make an informed choice; however, no operator has ever enacted this strategy. Different machines have different maximum payouts, but without knowing the odds of getting the jackpot, there is no rational way to differentiate. In many markets where central monitoring and control systems are used to link machines for auditing and security purposes, usually in wide area networks of multiple venues and thousands of machines, player return must be changed from a central computer rather than at each machine. A range of percentages is set in the game software and selected remotely. In 2006, the Nevada Gaming Commission began working with Las Vegas casinos on technology that would allow the casino's management to change the game, the odds, and the payouts remotely. The change cannot be done instantaneously, but only after the selected machine has been idle for at least four minutes. After the change is made, the machine must be locked to new players for four minutes and display an on-screen message informing potential players that a change is being made. Linked machines Some varieties of slot machines can be linked together in a setup sometimes known as a "community" game. The most basic form of this setup involves progressive jackpots that are shared between the bank of machines, but may include multiplayer bonuses and other features. In some cases multiple machines are linked across multiple casinos. In these cases, the machines may be owned by the manufacturer, who is responsible for paying the jackpot. The casinos lease the machines rather than owning them outright. Casinos in New Jersey, Nevada, Louisiana, Arkansas, and South Dakota now offer multi-state progressive jackpots with bigger jackpot pools. Fraud Mechanical slot machines and their coin acceptors were sometimes susceptible to cheating devices and other scams. One historical example involved spinning a coin with a short length of plastic wire. The weight and size of the coin would be accepted by the machine and credits would be granted. However, the spin created by the plastic wire would cause the coin to exit through the reject chute into the payout tray. This particular scam has become obsolete due to improvements in newer slot machines. Another obsolete method of defeating slot machines was to use a light source to confuse the optical sensor used to count coins during payout. Modern slot machines are controlled by EPROM computer chips and, in large casinos, coin acceptors have become obsolete in favor of bill acceptors. 
These machines and their bill acceptors are designed with advanced anti-cheating and anti-counterfeiting measures and are difficult to defraud. Early computerized slot machines were sometimes defrauded through the use of cheating devices, such as the "slider", "monkey paw", "lightwand" and "the tongue". Many of these old cheating devices were made by the late Tommy Glenn Carmichael, a slot machine fraudster who reportedly stole over $5 million. In the modern day, computerized slot machines are fully deterministic and thus outcomes can sometimes be successfully predicted. Skill stops Skill stop buttons predated the Bally electromechanical slot machines of the 1960s and 1970s. They appeared on mechanical slot machines manufactured by Mills Novelty Co. as early as the mid-1920s. These machines had modified reel-stop arms, which allowed them to be released from the timing bar, earlier than in a normal play, simply by pressing the buttons on the front of the machine, located between each reel. "Skill stop" buttons were added to some slot machines by Zacharias Anthony in the early 1970s. These enabled the player to stop each reel, allowing a degree of "skill" so as to satisfy the New Jersey gaming laws of the day, which required that players were able to control the game in some way. The original conversion was applied to approximately 50 late-model Bally slot machines. Because the typical machine stopped the reels automatically in less than 10 seconds, weights were added to the mechanical timers to prolong the automatic stopping of the reels. By the time the New Jersey Alcoholic Beverages Commission (ABC) had approved the conversion for use in New Jersey arcades, the word was out and every other distributor began adding skill stops. The machines were a huge hit on the Jersey Shore and the remaining unconverted Bally machines were destroyed as they had become instantly obsolete. Legislation United States In the United States, the public and private availability of slot machines is highly regulated by state governments. Many states have established gaming control boards to regulate the possession and use of slot machines and other forms of gaming. Nevada is the only state that has no significant restrictions against slot machines both for public and private use. In New Jersey, slot machines are only allowed in hotel casinos operated in Atlantic City. Several states (Indiana, Louisiana and Missouri) allow slot machines (as well as any casino-style gambling) only on licensed riverboats or permanently anchored barges. Since Hurricane Katrina, Mississippi has removed the requirement that casinos on the Gulf Coast operate on barges and now allows them on land along the shoreline. Delaware allows slot machines at three horse tracks; they are regulated by the state lottery commission. In Wisconsin, bars and taverns are allowed to have up to five machines. These machines usually allow a player to either take a payout, or gamble it on a double-or-nothing "side game". The territory of Puerto Rico places significant restrictions on slot machine ownership, but the law is widely flouted and slot machines are common in bars and coffeeshops. With regard to tribal casinos located on Native American reservations, slot machines played against the house and operating independently from a centralized computer system are classified as "Class III" gaming by the Indian Gaming Regulatory Act (IGRA), and sometimes promoted as "Vegas-style" slot machines. 
In order to offer Class III gaming, tribes must enter into a compact (agreement) with the state that is approved by the Department of the Interior, which may contain restrictions on the types and quantity of such games. As a workaround, some casinos may operate slot machines as "Class II" games—a category that includes games where players play exclusively against at least one other opponent and not the house, such as bingo or any related games (such as pull-tabs). In these cases, the reels are an entertainment display with a pre-determined outcome based on a centralized game played against other players. Under the IGRA, Class II games are regulated by individual tribes and the National Indian Gaming Commission, and do not require any additional approval if the state already permits tribal gaming. Some historical race wagering terminals operate in a similar manner, with the machines using slots as an entertainment display for outcomes paid using the parimutuel betting system, based on results of randomly selected, previously held horse races (with the player able to view selected details about the race and adjust their picks before playing the credit, or otherwise use an auto-bet system). Private ownership Alaska, Arizona, Arkansas, Kentucky, Maine, Minnesota, Nevada, Ohio, Rhode Island, Texas, Utah, Virginia, and West Virginia place no restrictions on private ownership of slot machines. Conversely, in Connecticut, Hawaii, Nebraska, South Carolina, and Tennessee, private ownership of any slot machine is completely prohibited. The remaining states allow slot machines of a certain age (typically 25–30 years) or slot machines manufactured before a specific date. Canada The Government of Canada has minimal involvement in gambling beyond the Canadian Criminal Code. In essence, the term "lottery scheme" used in the code means slot machines, bingo and table games normally associated with a casino. These fall under the jurisdiction of the province or territory without reference to the federal government; in practice, all Canadian provinces operate gaming boards that oversee lotteries, casinos and video lottery terminals under their jurisdiction. OLG piloted a classification system for slot machines at the Grand River Raceway developed by University of Waterloo professor Kevin Harrigan, as part of its PlaySmart initiative for responsible gambling. Inspired by nutrition labels on foods, the labels displayed metrics such as volatility and frequency of payouts. OLG has also deployed electronic gaming machines with pre-determined outcomes based on a bingo or pull-tab game, initially branded as "TapTix", which visually resemble slot machines. In Ontario, the regulated online gambling market was re-introduced on 4 April 2022. This became possible when the Canadian Criminal Code was amended to allow single-event wagering in August 2021. The province is expected to generate about $800 million in gross revenue per year. Australia In Australia, "poker machines" or "pokies" are officially termed "gaming machines"; they are a matter for state governments, so laws vary between states. Gaming machines are found in casinos (approximately one in each major city), pubs and clubs in some states (usually sports, social, or RSL clubs). The first Australian state to legalize this style of gambling was New South Wales, where gaming machines were made legal in all registered clubs in 1956. 
There are suggestions that the proliferation of poker machines has led to increased levels of problem gambling; however, the precise nature of this link is still open to research. In 1999, the Australian Productivity Commission reported that nearly half of Australia's gaming machines were in New South Wales. At the time, 21% of all the gambling machines in the world were operating in Australia and, on a per capita basis, Australia had roughly five times as many gaming machines as the United States. Australia ranks 8th in total number of gaming machines, after Japan, the U.S.A., Italy, the U.K., Spain and Germany. This is primarily because gaming machines have been legal in the state of New South Wales since 1956; over time, the number of machines has grown to 97,103 (at December 2010, including the Australian Capital Territory). By way of comparison, the U.S. state of Nevada, which legalised gaming including slots several decades before N.S.W., had 190,135 slots operating. Revenue from gaming machines in pubs and clubs accounts for more than half of the $4 billion in gambling revenue collected by state governments in fiscal year 2002–03. In Queensland, gaming machines in pubs and clubs must provide a return rate of 85%, while machines located in casinos must provide a return rate of 90%. Most other states have similar provisions. In Victoria, gaming machines must provide a minimum return rate of at least 85% (including jackpot contribution), are prohibited from accepting bills greater than $50 in denomination, and each wager must be manually initiated by the player (thus prohibiting "autoplay" mechanisms). Western Australia has the most restrictive regulations on electronic gaming machines (EGMs) in general. They may only be operated at the Crown Perth casino resort, which is the only casino in Western Australia, and have a return rate of 90%. Many EGMs operate games that are nearly identical to slot machines, but with modifications to comply with state law: EGMs are prohibited from using spinning reels, and must not use symbols associated with poker machines used elsewhere. Each wager must take at least three seconds to play, and each wager must be initiated by the user. This policy has an extensive political history and was reaffirmed by the 1974 Royal Commission into Gambling. Despite the state having praised its restrictions for keeping gaming machines from being widely available to the public as in other states, the machines have faced criticism for being almost indistinguishable from normal slot machines, and thus having the same addictive qualities. In March 2022, a royal commission found Crown Gaming to be unfit to hold a gaming license in WA, citing issues surrounding money laundering, failures to minimise harm from problem gambling, and an outdated regulatory framework at the Gaming and Wagering Commission. To implement the recommendations of the commission, EGMs were limited to maximum bets of $10 beginning in July 2023, alongside requirements for weekly limits on play and losses and for cashless machines that require pre-loaded player cards to function. Nick Xenophon was elected on an independent No Pokies ticket to the South Australian Legislative Council at the 1997 South Australian state election on 2.9 percent, re-elected at the 2006 election on 20.5 percent, and elected to the Australian Senate at the 2007 federal election on 14.8 percent. 
Independent candidate Andrew Wilkie, an anti-pokies campaigner, was elected to the Australian House of Representatives seat of Denison at the 2010 federal election. Wilkie was one of four crossbenchers who supported the Gillard Labor government following the hung parliament result. Wilkie immediately began forging ties with Xenophon as soon as it was apparent that he was elected. In exchange for Wilkie's support, the Labor government is attempting to implement precommitment technology for high-bet/high-intensity poker machines, against opposition from the Tony Abbott Coalition and Clubs Australia. During the COVID-19 pandemic of 2020, every establishment in the country that facilitated poker machines was shut down in an attempt to curb the spread of the virus, bringing Australia's usage of poker machines effectively to zero. Russia In Russia, "slot clubs" appeared quite late, only in 1992. Before 1992, slot machines were found only in casinos and small shops, but later slot clubs began appearing all over the country. The most popular and numerous were "Vulcan 777" and "Taj Mahal". Since gambling establishments were banned in 2009, almost all slot clubs have disappeared, and slot machines are found only in specially authorized gambling zones. United Kingdom Slot machines are covered by the Gambling Act 2005, which superseded the Gaming Act 1968. Slot machines in the U.K. are categorised by definitions produced by the Gambling Commission as part of the Gambling Act of 2005. Casinos built under the provisions of the 1968 Act are allowed to house either up to twenty machines of categories B–D or any number of C–D machines. As defined by the 2005 Act, large casinos can have a maximum of one hundred and fifty machines in any combination of categories B–D (subject to a machine-to-table ratio of 5:1); small casinos can have a maximum of eighty machines in any combination of categories B–D (subject to a machine-to-table ratio of 2:1). Category A Category A games were defined in preparation for the planned "Super Casinos". Despite a lengthy bidding process, with Manchester being chosen as the single planned location, the development was cancelled soon after Gordon Brown became Prime Minister of the United Kingdom. As a result, there are no lawful Category A games in the U.K. Category B Category B games are divided into subcategories. The differences between B1, B3 and B4 games are mainly the maximum stake and prizes. Category B2 games – fixed odds betting terminals (FOBTs) – have quite different stake and prize rules: FOBTs are mainly found in licensed betting shops, or bookmakers, usually in the form of electronic roulette. The games are based on a random number generator; thus each game's probability of getting the jackpot is independent of any other game: probabilities are all equal. If a pseudorandom number generator is used instead of a truly random one, probabilities are not independent, since each number is determined at least in part by the one generated before it. Category C Category C games are often referred to as fruit machines, one-armed bandits and AWP (amusement with prize). Fruit machines are commonly found in pubs, clubs, and arcades. Machines commonly have three reels, but can be found with four or five reels, each with 16–24 symbols printed around them. The reels are spun each play, from which the appearance of particular combinations of symbols result in payment of their associated winnings by the machine (or alternatively initiation of a subgame). 
These games often have many extra features, trails and subgames with opportunities to win money; usually more than can be won from just the payouts on the reel combinations. Fruit machines in the U.K. almost universally have the following features, generally selected at random using a pseudorandom number generator: A player (known in the industry as a punter) may be given the opportunity to hold one or more reels before spinning, meaning they will not be spun but instead retain their displayed symbols, yet otherwise count normally for that play. This can sometimes increase the chance of winning, especially if two or more reels are held. A player may also be given a number of nudges following a spin (or, in some machines, as a result in a subgame). A nudge is a step rotation of a reel chosen by the player (the machine may not allow all reels to be nudged for a particular play). Cheats can also be made available on the internet or through emailed newsletters to subscribers. These cheats give the player the impression of an advantage, whereas in reality the payout percentage remains exactly the same. The most widely used cheat is known as hold after a nudge and increases the chance that the player will win following an unsuccessful nudge. Machines from the early 1990s did not advertise the concept of hold after a nudge when this feature was first introduced; it has since become so well known amongst players, and so widespread amongst new machine releases, that it is now well-advertised on the machine during play. This is characterized by messages on the display such as DON'T HOLD ANY or LET 'EM SPIN and is a designed feature of the machine, not a cheat at all. Holding the same pair three times on three consecutive spins also gives a guaranteed win on most machines that offer holds. It is known for machines to pay out multiple jackpots, one after the other (this is known as a "repeat"), but each jackpot requires a new game to be played so as not to violate the law about the maximum payout on a single play. Typically this involves the player only pressing the Start button at the "repeat" prompt, for which a single credit is taken, regardless of whether this causes the reels to spin or not. Machines are also known to intentionally set aside money, which is later awarded in a series of wins, known as a "streak". The minimum payout percentage is 70%, with pubs often setting the payout at around 78%. Japan Japanese slot machines, known as pachisuro or pachislot, from the words "pachinko" and "slot machine", are a descendant of the traditional Japanese pachinko game. Slot machines are a fairly new phenomenon and they can be found mostly in pachinko parlors and the adult sections of amusement arcades, known as game centers. Jackpot disputes Electronic slot machines can malfunction. When the displayed amount is smaller than the one it is supposed to be, the error usually goes unnoticed. When it happens the other way, disputes are likely. Below are some notable disputes in which machine owners claimed that the displayed amounts were far larger than the ones patrons should have received. United States Two such cases occurred in casinos in Colorado in 2010, where software errors led to indicated jackpots of $11 million and $42 million. Analysis of machine records by the state Gaming Commission revealed faults, with the true jackpots being substantially smaller. State gaming laws did not require a casino to honour payouts in that case. 
Vietnam On October 25, 2009, while a Vietnamese American man, Ly Sam, was playing a slot machine in the Palazzo Club at the Sheraton Saigon Hotel in Ho Chi Minh City, Vietnam, it displayed that he had hit a jackpot of US$55,542,296.73. The casino refused to pay, saying it was a machine error, and Ly sued the casino. On January 7, 2013, the District 1 People's Court in Ho Chi Minh City decided that the casino had to pay the amount Ly claimed in full, not trusting the error report from an inspection company hired by the casino. Both sides appealed thereafter; Ly asked for interest, while the casino continued to refuse payment. In January 2014, the news reported that the case had been settled out of court, and Ly had received an undisclosed sum. Problem gambling and slot machines Natasha Dow Schüll, associate professor in New York University's Department of Media, Culture and Communication, uses the term "machine zone" to describe the state of immersion that users of slot machines experience when gambling, in which they lose a sense of time, space, bodily awareness, and monetary value. Mike Dixon, PhD, professor of psychology at the University of Waterloo, studies the relationship between slot players and machines. In one of Dixon's studies, players were observed experiencing heightened arousal from the sensory stimuli coming from the machines. The researchers "sought to show that these 'losses disguised as wins' (LDWs) would be as arousing as wins, and more arousing than regular losses." Psychologists Robert Breen and Marc Zimmerman found that players of video slot machines reach a debilitating level of involvement with gambling three times as rapidly as those who play traditional casino games, even if they have engaged in other forms of gambling without problems. Eye-tracking research in local bookmakers' offices in the UK suggested that, in slots games, the reels dominated players' visual attention, and that problem gamblers looked more frequently at amount-won messages than did those without gambling problems. The 2011 60 Minutes report "Slot Machines: The Big Gamble" focused on the link between slot machines and gambling addiction. See also Casino European Gaming & Amusement Federation List of probability topics Multi-armed bandit Pachinko Problem gambling Progressive jackpot Quiz machine United States state slot machine ownership regulations Video bingo Video lottery terminal (VLT) Video poker References Bibliography Brisman, Andrew. The American Mensa Guide to Casino Gambling: Winning Ways (Stirling, 1999) Grochowski, John. The Slot Machine Answer Book: How They Work, How They've Changed, and How to Overcome the House Advantage (Bonus Books, 2005) Legato, Frank. How to Win Millions Playing Slot Machines! ...Or Lose Trying (Bonus Books, 2004) External links American inventions Arcade games Gaming devices Commercial machines Gambling games
Slot machine
[ "Physics", "Technology" ]
11,249
[ "Physical systems", "Commercial machines", "Machines" ]
29,293
https://en.wikipedia.org/wiki/Optical%20spectrometer
An optical spectrometer (spectrophotometer, spectrograph or spectroscope) is an instrument used to measure properties of light over a specific portion of the electromagnetic spectrum, typically used in spectroscopic analysis to identify materials. The variable measured is most often the irradiance of the light but could also, for instance, be the polarization state. The independent variable is usually the wavelength of the light or a closely derived physical quantity, such as the corresponding wavenumber or the photon energy, in units of measurement such as centimeters, reciprocal centimeters, or electron volts, respectively. A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities. Spectrometers may operate over a wide range of non-optical wavelengths, from gamma rays and X-rays into the far infrared. If the instrument is designed to measure the spectrum on an absolute scale rather than a relative one, then it is typically called a spectrophotometer. The majority of spectrophotometers are used in spectral regions near the visible spectrum. A spectrometer that is calibrated for measurement of the incident optical power is called a spectroradiometer. In general, any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. Below optical frequencies (that is, at microwave and radio frequencies), the spectrum analyzer is a closely related electronic device. Spectrometers are used in many fields. For example, they are used in astronomy to analyze the radiation from objects and deduce their chemical composition. The spectrometer uses a prism or a grating to spread the light into a spectrum. This allows astronomers to detect many of the chemical elements by their characteristic spectral lines. These lines are named for the elements which cause them, such as the hydrogen alpha, beta, and gamma lines. A glowing object will show bright spectral lines. Dark lines are made by absorption, for example by light passing through a gas cloud, and these absorption lines can also identify chemical compounds. Much of our knowledge of the chemical makeup of the universe comes from spectra. Spectroscopes Spectroscopes are often used in astronomy and some branches of chemistry. Early spectroscopes were simply prisms with graduations marking wavelengths of light. Modern spectroscopes generally use a diffraction grating, a movable slit, and some kind of photodetector, all automated and controlled by a computer. Recent advances have seen increasing reliance on computational algorithms in a range of miniaturised spectrometers without diffraction gratings, for example, through the use of quantum dot-based filter arrays onto a CCD chip or a series of photodetectors realised on a single nanostructure. Joseph von Fraunhofer developed the first modern spectroscope by combining a prism, diffraction slit and telescope in a manner that increased the spectral resolution and was reproducible in other laboratories. Fraunhofer also went on to invent the first diffraction spectroscope. Gustav Robert Kirchhoff and Robert Bunsen discovered the application of spectroscopes to chemical analysis and used this approach to discover caesium and rubidium. Kirchhoff and Bunsen's analysis also enabled a chemical explanation of stellar spectra, including Fraunhofer lines.
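Because the same spectrum can be reported against wavelength, wavenumber, or photon energy, as noted in the introduction above, conversions between those axes come up constantly when reading spectra. The sketch below applies the exact relations E = hc/λ and wavenumber = 1/λ; the wavelength plugged in is one of the sodium D-lines discussed in the next paragraph.

```python
# Convert a wavelength in nanometres to the equivalent wavenumber
# (reciprocal centimetres) and photon energy (electron volts), using
# E = h*c/lambda and wavenumber = 1/lambda. Constants are CODATA values.

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron volt

def describe(wavelength_nm: float) -> tuple[float, float]:
    wavenumber_per_cm = 1.0 / (wavelength_nm * 1e-7)  # 1 nm = 1e-7 cm
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    return wavenumber_per_cm, energy_ev

# One of the sodium D-lines discussed in the next paragraph:
wn, ev = describe(589.5924)
print(f"{wn:.1f} cm^-1, {ev:.4f} eV")  # about 16960.9 cm^-1, 2.1029 eV
```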
When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material. Particular light frequencies give rise to sharply defined bands on the scale, which can be thought of as fingerprints. For example, the element sodium has a very characteristic double yellow band known as the sodium D-lines at 588.9950 and 589.5924 nanometers, the color of which will be familiar to anyone who has seen a low pressure sodium vapor lamp. In the original spectroscope design in the early 19th century, light entered a slit and a collimating lens transformed the light into a thin beam of parallel rays. The light then passed through a prism (in hand-held spectroscopes, usually an Amici prism) that refracted the beam into a spectrum because different wavelengths were refracted different amounts due to dispersion. This image was then viewed through a tube with a scale that was transposed upon the spectral image, enabling its direct measurement. With the development of photographic film, the more accurate spectrograph was created. It was based on the same principle as the spectroscope, but it had a camera in place of the viewing tube. In recent years, electronic circuits built around the photomultiplier tube have replaced the camera, allowing real-time spectrographic analysis with far greater accuracy. Arrays of photosensors are also used in place of film in spectrographic systems. Such spectral analysis, or spectroscopy, has become an important scientific tool for analyzing the composition of unknown material and for studying astronomical phenomena and testing astronomical theories. In modern spectrographs in the UV, visible, and near-IR spectral ranges, the spectrum is generally given in the form of photon number per unit wavelength (nm or μm), wavenumber (μm−1, cm−1), frequency (THz), or energy (eV), with the units indicated on the abscissa. In the mid- to far-IR, spectra are typically expressed in units of watts per unit wavelength (μm) or wavenumber (cm−1). In many cases, the spectrum is displayed with the units left implied (such as "digital counts" per spectral channel). In gemology Gemologists frequently use spectroscopes to determine the absorption spectra of gemstones, thereby allowing them to make inferences about what kind of gem they are examining. A gemologist may compare the absorption spectrum they observe with a catalogue of spectra for various gems to help narrow down the exact identity of the gem. Spectrographs A spectrograph is an instrument that separates light into its wavelengths and records the data. A spectrograph typically has a multi-channel detector system or camera that detects and records the spectrum of light. The term was first used in 1876 by Dr. Henry Draper when he invented the earliest version of this device, which he used to take several photographs of the spectrum of Vega. This earliest version of the spectrograph was cumbersome to use and difficult to manage. There are several kinds of machines referred to as spectrographs, depending on the precise nature of the waves. The first spectrographs used photographic paper as the detector. The plant pigment phytochrome was discovered using a spectrograph that used living plants as the detector. More recent spectrographs use electronic detectors, such as CCDs, which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded. A spectrograph is sometimes called a polychromator, by analogy with the monochromator.
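Dispersion in modern instruments usually comes from a diffraction grating rather than a prism, governed (at normal incidence) by the grating equation d sin θ = mλ. The sketch below is a generic illustration of that relation, not a model of any particular instrument; the 1200 lines/mm groove density is an assumed, typical value.

```python
import math

# Grating equation at normal incidence: d * sin(theta_m) = m * lambda.
# Given a groove density, find the diffraction angle of order m for a
# wavelength. The 1200 lines/mm groove density is an assumed, typical value.

LINES_PER_MM = 1200
D_NM = 1e6 / LINES_PER_MM  # groove spacing in nm (~833.3 nm)

def diffraction_angle_deg(wavelength_nm: float, order: int = 1) -> float:
    s = order * wavelength_nm / D_NM
    if abs(s) > 1.0:
        raise ValueError("this order does not propagate at this wavelength")
    return math.degrees(math.asin(s))

# First-order angles across the visible range:
print(diffraction_angle_deg(400.0))  # ~28.7 degrees
print(diffraction_angle_deg(700.0))  # ~57.1 degrees
```

Higher diffraction orders overlap in angle, which is precisely what the echelle design described below exploits by crossing two gratings.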
Stellar and solar spectrograph Stellar spectral classification and the discoveries of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper. The James Webb Space Telescope contains both a near-infrared spectrograph (NIRSpec) and a mid-infrared spectrograph (MIRI). Echelle spectrograph An echelle-based spectrograph uses two diffraction gratings, rotated 90 degrees with respect to each other and placed close to one another. Therefore, an entrance point, rather than a slit, is used, and a CCD chip records the spectrum. Both gratings have a wide spacing, and one is blazed so that only the first order is visible while the other is blazed with many higher orders visible, so a very fine spectrum is presented to the CCD. Slitless spectrograph In conventional spectrographs, a slit is inserted into the beam to limit the image extent in the dispersion direction. A slitless spectrograph omits the slit; this results in images that convolve the image information with spectral information along the direction of dispersion. If the field is not sufficiently sparse, then spectra from different sources in the image field will overlap. The trade-off is that slitless spectrographs can produce spectral images much more quickly than scanning a conventional spectrograph. That is useful in applications such as solar physics, where time evolution is important. See also Circular dichroism Cosmic Origins Spectrograph Czerny-Turner monochromator Imaging spectrometer List of astronomical instruments List of light sources Long-slit spectroscopy Prism spectrometer Scanning mobility particle sizer Spectrogram Spectrometer Spectroradiometer Spectroscopy Virtually imaged phased array References Bibliography J. F. James and R. S. Sternberg (1969), The Design of Optical Spectrometers (Chapman and Hall Ltd) James, John (2007), Spectrograph Design Fundamentals (Cambridge University Press) Browning, John (1882), How to work with the spectroscope: a manual of practical manipulation with spectroscopes of all kinds External links Spectrograph for astronomical Spectra Photographs of spectrographs used in the Lick Observatory from the Lick Observatory Records Digital Archive, UC Santa Cruz Library's Digital Collections Electronic test equipment Signal processing Measuring instruments Laboratory equipment German inventions Telescope types
Optical spectrometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,929
[ "Telecommunications engineering", "Computer engineering", "Spectrum (physical sciences)", "Signal processing", "Electronic test equipment", "Measuring instruments", "Spectrographs", "Spectrometers", "Spectroscopy" ]
29,400
https://en.wikipedia.org/wiki/Structural%20biology
Structural biology, as defined by the Journal of Structural Biology, deals with structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization. Early structural biologists throughout the 19th and early 20th centuries were able to study structures only to the limit of the naked eye's visual acuity and through magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography, nuclear magnetic resonance, and electron microscopy. Through the discovery of X-rays and their application to protein crystals, structural biology was revolutionized, as scientists could now obtain the three-dimensional structures of biological molecules in atomic detail. Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. Finally, in the 21st century, electron microscopy also saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three dimensions at angstrom resolution. With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids, RNA or DNA, made up of nucleotides, and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure." At lower resolutions, tools such as FIB-SEM tomography have allowed for greater understanding of cells and their organelles in three dimensions, and of how each hierarchical level of various extracellular matrices contributes to function (for example in bone). In the past few years it has also become possible to predict highly accurate physical molecular models to complement the experimental study of biological structures. Computational techniques such as molecular dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function. History In 1912, Max von Laue directed X-rays at crystallized copper sulfate, generating a diffraction pattern. These experiments led to the development of X-ray crystallography and its usage in exploring biological structures. In 1951, Rosalind Franklin and Maurice Wilkins used X-ray diffraction patterns to capture the first image of deoxyribonucleic acid (DNA). Francis Crick and James Watson modeled the double helical structure of DNA using this same technique in 1953 and received the Nobel Prize in Medicine along with Wilkins in 1962.
Pepsin was the first protein to be crystallized for use in X-ray diffraction, by Theodore Svedberg, who received the 1926 Nobel Prize in Chemistry. The first tertiary protein structure, that of myoglobin, was published in 1958 by John Kendrew. During this time, modeling of protein structures was done using balsa wood or wire models. With the invention of modeling software such as CCP4 in the late 1970s, modeling is now done with computer assistance. Recent developments in the field have included the generation of X-ray free electron lasers, allowing analysis of the dynamics and motion of biological molecules, and the use of structural biology in assisting synthetic biology. In the late 1930s and early 1940s, the combination of work done by Isidor Rabi, Felix Bloch, and Edward Mills Purcell led to the development of nuclear magnetic resonance (NMR). Currently, solid-state NMR is widely used in the field of structural biology to determine the structure and dynamic nature of proteins (protein NMR). In 1990, Richard Henderson produced the first three-dimensional, high resolution image of bacteriorhodopsin using cryogenic electron microscopy (cryo-EM). Since then, cryo-EM has emerged as an increasingly popular technique to determine three-dimensional, high resolution structures of biological molecules. More recently, computational methods have been developed to model and study biological structures. For example, molecular dynamics (MD) is commonly used to analyze the dynamic movements of biological molecules. In 1975, the first simulation of a biological folding process using MD was published in Nature. Recently, protein structure prediction was significantly improved by a new machine learning method called AlphaFold. Some claim that computational approaches are starting to lead the field of structural biology research. Techniques Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include: Mass spectrometry Macromolecular crystallography Neutron diffraction Proteolysis Nuclear magnetic resonance spectroscopy of proteins (NMR) Electron paramagnetic resonance (EPR) Cryogenic electron microscopy (cryoEM) Electron crystallography and microcrystal electron diffraction Multiangle light scattering Small angle scattering Ultrafast laser spectroscopy Anisotropic terahertz microspectroscopy Two-dimensional infrared spectroscopy Dual-polarization interferometry and circular dichroism Most often researchers use them to study the "native states" of macromolecules. But variations on these methods are also used to watch nascent or denatured molecules assume or reassume their native states. See protein folding. A third approach that structural biologists take to understanding structure is to use bioinformatics to look for patterns among the diverse sequences that give rise to particular shapes. Researchers can often deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction. Applications Structural biologists have made significant contributions towards understanding the molecular components and mechanisms underlying human diseases.
For example, cryo-EM and ssNMR have been used to study the aggregation of amyloid fibrils, which are associated with Alzheimer's disease, Parkinson's disease, and type II diabetes. In addition to amyloid proteins, scientists have used cryo-EM to produce high resolution models of tau filaments in the brain of Alzheimer's patients which may help develop better treatments in the future. Structural biology tools can also be used to explain interactions between pathogens and hosts. For example, structural biology tools have enabled virologists to understand how the HIV envelope allows the virus to evade human immune responses. Structural biology is also an important component of drug discovery. Scientists can identify targets using genomics, study those targets using structural biology, and develop drugs that are suited for those targets. Specifically, ligand-NMR, mass spectrometry, and X-ray crystallography are commonly used techniques in the drug discovery process. For example, researchers have used structural biology to better understand Met, a protein encoded by a protooncogene that is an important drug target in cancer. Similar research has been conducted for HIV targets to treat people with AIDS. Researchers are also developing new antimicrobials for mycobacterial infections using structure-driven drug discovery. See also Primary structure Secondary structure Tertiary structure Quaternary structure Structural domain Structural motif Protein subunit Molecular model Cooperativity Chaperonin Structural genomics Stereochemistry Resolution (electron density) Proteopedia The collaborative, 3D encyclopedia of proteins and other molecules. Protein structure prediction SBGrid Consortium References External links Nature: Structural & Molecular Biology magazine website Journal of Structural Biology Structural Biology - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology Structural Biology in Europe Learning Crystallography Molecular biology Protein structure Biophysics
Structural biology
[ "Physics", "Chemistry", "Biology" ]
1,649
[ "Applied and interdisciplinary physics", "Biophysics", "Structural biology", "Molecular biology", "Biochemistry", "Protein structure" ]
29,417
https://en.wikipedia.org/wiki/Statite
A statite (a portmanteau of the words static and satellite) is a hypothetical type of artificial satellite that employs a solar sail to continuously modify its orbit in ways that gravity alone would not allow. Typically, a statite would use the solar sail to "hover" in a location that would not otherwise be available as a stable geosynchronous orbit. Statites have been proposed that would remain in fixed locations high over Earth's poles, using reflected sunlight to counteract the gravity pulling them down. Statites might also employ their sails to change the shape or velocity of more conventional orbits, depending upon the purpose of the particular statite. The concept of the statite was invented independently and at about the same time by Robert L. Forward (who coined the term "statite") and Colin McInnes, who used the term "halo orbit" (not to be confused with the type of halo orbit discovered by Robert Farquhar). Subsequently, the terms "non-Keplerian orbit" and "artificial Lagrange point" have been used as a generalization of the above terms. No statites have been deployed to date, as solar sail technology remains in its infancy. NASA's cancelled Sunjammer solar sail mission had the stated objective of flying to an artificial Lagrange point near the Earth/Sun L1 point, to demonstrate the feasibility of the Geostorm geomagnetic storm warning mission concept proposed by NOAA's Patricia Mulligan. A constellation of statites has been proposed for performing a rendezvous with an interstellar object. See also Dyson bubble List of hypothetical technologies Solar mirror Space sunshade References Astrodynamics Hypothetical technology Spaceflight concepts
Statite
[ "Astronomy", "Engineering" ]
350
[ "Aerospace engineering", "Astrodynamics", "Astronomy stubs", "Spacecraft stubs" ]
29,420
https://en.wikipedia.org/wiki/Solar%20sail
Solar sails (also known as lightsails, light sails, and photon sails) are a method of spacecraft propulsion using radiation pressure exerted by sunlight on large surfaces. A number of spaceflight missions to test solar propulsion and navigation have been proposed since the 1980s. The first spacecraft to make use of the technology was IKAROS, launched in 2010. A useful analogy to solar sailing may be a sailing boat; the light exerting a force on the large surface is akin to a sail being blown by the wind. High-energy laser beams could be used as an alternative light source to exert much greater force than would be possible using sunlight, a concept known as beam sailing. Solar sail craft offer the possibility of low-cost operations combined with high speeds (relative to chemical rockets) and long operating lifetimes. Since they have few moving parts and use no propellant, they can potentially be used numerous times for the delivery of payloads. Solar sails use a phenomenon that has a proven, measured effect on astrodynamics. Solar pressure affects all spacecraft, whether in interplanetary space or in orbit around a planet or small body. A typical spacecraft going to Mars, for example, will be displaced thousands of kilometers by solar pressure, so the effects must be accounted for in trajectory planning, which has been done since the time of the earliest interplanetary spacecraft of the 1960s. Solar pressure also affects the orientation of a spacecraft, a factor that must be included in spacecraft design. The total force exerted on an 800 by 800 metre solar sail, for example, is about 5 newtons at Earth's distance from the Sun, making it a low-thrust propulsion system, similar to spacecraft propelled by electric engines, but as it uses no propellant, that force is exerted almost constantly and the collective effect over time is great enough to be considered a potential manner of propelling spacecraft. History of concept Johannes Kepler observed that comet tails point away from the Sun and suggested that the Sun caused the effect. In a letter to Galileo in 1610, he wrote, "Provide ships or sails adapted to the heavenly breezes, and there will be some who will brave even that void." He might have had the comet tail phenomenon in mind when he wrote those words, although his publications on comet tails came several years later. James Clerk Maxwell, in 1861–1864, published his theory of electromagnetic fields and radiation, which shows that light has momentum and thus can exert pressure on objects. Maxwell's equations provide the theoretical foundation for sailing with light pressure. So by 1864, the physics community and beyond knew sunlight carried momentum that would exert a pressure on objects. Jules Verne, in From the Earth to the Moon, published in 1865, wrote "there will some day appear velocities far greater than these [of the planets and the projectile], of which light or electricity will probably be the mechanical agent ... we shall one day travel to the moon, the planets, and the stars." This is possibly the first published recognition that light could move ships through space. Pyotr Lebedev was the first to successfully demonstrate light pressure, which he did in 1899 with a torsional balance; Ernest Nichols and Gordon Hull conducted a similar independent experiment in 1901 using a Nichols radiometer. Svante Arrhenius predicted in 1908 the possibility of solar radiation pressure distributing life spores across interstellar distances, providing one means to explain the concept of panspermia.
He was apparently the first scientist to state that light could move objects between stars. Konstantin Tsiolkovsky first proposed using the pressure of sunlight to propel spacecraft through space and suggested "using tremendous mirrors of very thin sheets to utilize the pressure of sunlight to attain cosmic velocities". Friedrich Zander (Tsander) published a technical paper in 1925 that included technical analysis of solar sailing. Zander wrote of "applying small forces" using "light pressure or transmission of light energy to distances by means of very thin mirrors". JBS Haldane speculated in 1927 about the invention of tubular spaceships that would take humanity to space and how "wings of metallic foil of a square kilometre or more in area are spread out to catch the Sun's radiation pressure". J. D. Bernal wrote in 1929, "A form of space sailing might be developed which used the repulsive effect of the Sun's rays instead of wind. A space vessel spreading its large, metallic wings, acres in extent, to the full, might be blown to the limit of Neptune's orbit. Then, to increase its speed, it would tack, close-hauled, down the gravitational field, spreading full sail again as it rushed past the Sun." Arthur C. Clarke wrote Sunjammer, a science fiction short story originally published in the March 1964 issue of Boys' Life, depicting a yacht race between solar sail spacecraft. Carl Sagan, in the 1970s, popularized the idea of sailing on light using a giant structure that would reflect photons in one direction, creating momentum. He promoted the idea in college lectures, books, and television shows, and pressed for a launch in time to rendezvous with Halley's Comet; the mission was not ready in time, and Sagan did not live to see such a craft fly. The first formal technology and design effort for a solar sail began in 1976 at Jet Propulsion Laboratory for a proposed mission to rendezvous with Halley's Comet. Types Reflective Most solar sails are based on reflection. The surface of the sail is highly reflective, like a mirror, and light reflecting off the surface imparts a force. Diffractive In 2018, diffraction was proposed as a different solar sail propulsion mechanism, which is claimed to have several advantages. Alternatives Electric solar wind Pekka Janhunen from FMI has proposed a type of solar sail called the electric solar wind sail. Mechanically it has little in common with the traditional solar sail design. The sails are replaced with straightened conducting tethers (wires) placed radially around the host ship. The wires are electrically charged to create an electric field around them. The electric field extends a few tens of metres into the plasma of the surrounding solar wind. The solar electrons are reflected by the electric field (like the photons on a traditional solar sail). The effective radius of the sail is set by the electric field rather than by the wires themselves, making the sail lighter. The craft can also be steered by regulating the electric charge of the wires. A practical electric sail would have 50–100 straightened wires with a length of about 20 km each. Electric solar wind sails can adjust their electrostatic fields and sail attitudes. Magnetic A magnetic sail would also employ the solar wind. However, the magnetic field deflects the electrically charged particles in the wind. It uses wire loops, and runs a static current through them instead of applying a static voltage.
All these designs maneuver, though the mechanisms are different. Magnetic sails bend the path of the charged protons that are in the solar wind. By changing the sails' attitudes, and the size of the magnetic fields, they can change the amount and direction of the thrust. Physical principles for reflective sails Solar radiation pressure The force imparted to a solar sail arises from the momentum of photons. The momentum of a photon or an entire flux is given by Einstein's relation p = E/c, where p is the momentum, E is the energy (of the photon or flux), and c is the speed of light. Specifically, the momentum of a photon depends on its wavelength: p = h/λ, where h is the Planck constant. Solar radiation pressure can be related to the irradiance (solar constant) value of 1361 W/m2 at 1 AU (Earth-Sun distance), as revised in 2011: perfect absorbance: F = 4.54 μN per square metre (4.54 μPa) in the direction of the incident beam (a perfectly inelastic collision); perfect reflectance: F = 9.08 μN per square metre (9.08 μPa) in the direction normal to the surface (an elastic collision). An ideal sail is flat and has 100% specular reflection. An actual sail will have an overall efficiency of about 90%, about 8.17 μN/m2, due to curvature (billow), wrinkles, absorbance, re-radiation from front and back, non-specular effects, and other factors. The force on a sail and the actual acceleration of the craft vary by the inverse square of distance from the Sun (unless extremely close to the Sun), and by the square of the cosine of the angle between the sail force vector and the radial from the Sun, so for an ideal sail F = F₀ cos²θ / R², where R is distance from the Sun in AU and F₀ is the force on a Sun-facing sail at 1 AU. An actual square sail can be modelled as F = F₀ (0.349 + 0.662 cos 2θ - 0.011 cos 4θ) / R². Note that the force and acceleration approach zero generally around θ = 60° rather than 90° as one might expect with an ideal sail. If some of the energy is absorbed, the absorbed energy will heat the sail, which re-radiates that energy from the front and rear surfaces, depending on the emissivity of those two surfaces. Solar wind, the flux of charged particles blown out from the Sun, exerts a nominal dynamic pressure of about 3 to 4 nPa, three orders of magnitude less than solar radiation pressure on a reflective sail. Sail parameters Sail loading (areal density) is an important parameter, which is the total mass divided by the sail area, expressed in g/m2. It is represented by the Greek letter σ (sigma). A sail craft has a characteristic acceleration, ac, which it would experience at 1 AU when facing the Sun. Note this value accounts for both the incident and reflected momenta. Using the value from above of 9.08 μN per square metre of radiation pressure at 1 AU, ac is related to areal density by: ac = 9.08(efficiency) / σ mm/s2 Assuming 90% efficiency, ac = 8.17 / σ mm/s2 The lightness number, λ, is the dimensionless ratio of maximum vehicle acceleration divided by the Sun's local gravity. Using the values at 1 AU: λ = ac / 5.93 The lightness number is also independent of distance from the Sun because both gravity and light pressure fall off as the inverse square of the distance from the Sun. Therefore, this number defines the types of orbit maneuvers that are possible for a given vessel. The table presents some example values. Payloads are not included. The first two are from the detailed design effort at JPL in the 1970s. The third, the lattice sailer, might represent about the best possible performance level. The dimensions for square and lattice sails are edges. The dimension for heliogyro is blade tip to blade tip.
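The sail-parameter relations just given translate directly into code. The sketch below is a minimal calculator built only from the formulas stated in this section (9.08 μN/m2 ideal pressure at 1 AU, the 90% efficiency figure, and 5.93 mm/s2 solar gravity at 1 AU); the 3 g/m2 example areal density is an arbitrary illustrative value, not one of the JPL designs.

```python
import math

# Sail performance from areal density, using only the relations stated
# above: ac = 9.08 * efficiency / sigma (ac in mm/s^2 for sigma in g/m^2)
# and lightness number lambda = ac / 5.93, where 5.93 mm/s^2 is the Sun's
# gravitational acceleration at 1 AU.

IDEAL_PRESSURE_COEFF = 9.08  # mm/s^2 per (g/m^2), perfect reflector at 1 AU
SUN_GRAVITY_1AU = 5.93       # mm/s^2

def characteristic_acceleration(sigma: float, efficiency: float = 0.90) -> float:
    """ac in mm/s^2 for areal density sigma in g/m^2."""
    return IDEAL_PRESSURE_COEFF * efficiency / sigma

def lightness_number(sigma: float, efficiency: float = 0.90) -> float:
    return characteristic_acceleration(sigma, efficiency) / SUN_GRAVITY_1AU

def ideal_sail_acceleration(sigma: float, r_au: float, theta_deg: float) -> float:
    """Ideal-sail acceleration (mm/s^2) at r_au AU and sail angle theta."""
    ac = characteristic_acceleration(sigma, efficiency=1.0)
    return ac * math.cos(math.radians(theta_deg)) ** 2 / r_au ** 2

print(characteristic_acceleration(3.0))         # ~2.72 mm/s^2
print(lightness_number(3.0))                    # ~0.46
print(ideal_sail_acceleration(3.0, 1.0, 35.0))  # ~2.03 mm/s^2
```

Because both light pressure and gravity fall off as the inverse square of solar distance, the lightness number computed this way holds at any distance, which is why it characterizes the craft rather than any particular orbit.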
Attitude control An active attitude control system (ACS) is essential for a sail craft to achieve and maintain a desired orientation. The required sail orientation changes slowly (often less than 1 degree per day) in interplanetary space, but much more rapidly in a planetary orbit. The ACS must be capable of meeting these orientation requirements. Attitude control is achieved by a relative shift between the craft's center of pressure and its center of mass. This can be achieved with control vanes, movement of individual sails, movement of a control mass, or altering reflectivity. Holding a constant attitude requires that the ACS maintain a net torque of zero on the craft. The total force and torque on a sail, or set of sails, is not constant along a trajectory. The force changes with solar distance and sail angle, which changes the billow in the sail and deflects some elements of the supporting structure, resulting in changes in the sail force and torque. Sail temperature also changes with solar distance and sail angle, which changes sail dimensions. The radiant heat from the sail changes the temperature of the supporting structure. Both factors affect total force and torque. To hold the desired attitude the ACS must compensate for all of these changes. Constraints In Earth orbit, solar pressure and drag pressure are typically equal at an altitude of about 800 km, which means that a sail craft would have to operate above that altitude. Sail craft must operate in orbits where their turn rates are compatible with the orbits, which is generally a concern only for spinning disk configurations. Sail operating temperatures are a function of solar distance, sail angle, reflectivity, and front and back emissivities. A sail can be used only where its temperature is kept within its material limits. Generally, a sail can be used rather close to the Sun, around 0.25 AU, or even closer if carefully designed for those conditions. Applications Potential applications for sail craft range throughout the Solar System, from near the Sun to the comet clouds beyond Neptune. The craft can make outbound voyages to deliver loads or to take up station keeping at the destination. They can be used to haul cargo and possibly also used for human travel. Inner planets For trips within the inner Solar System, they can deliver payloads and then return to Earth for subsequent voyages, operating as an interplanetary shuttle. For Mars in particular, the craft could provide economical means of routinely supplying operations on the planet. According to Jerome Wright, "The cost of launching the necessary conventional propellants from Earth are enormous for manned missions. Use of sailing ships could potentially save more than $10 billion in mission costs." Solar sail craft can approach the Sun to deliver observation payloads or to take up station keeping orbits. They can operate at 0.25 AU or closer. They can reach high orbital inclinations, including polar. Solar sails can travel to and from all of the inner planets. Trips to Mercury and Venus are for rendezvous and orbit entry for the payload. Trips to Mars could be either for rendezvous or swing-by with release of the payload for aerodynamic braking. Outer planets Minimum transfer times to the outer planets benefit from using an indirect transfer (solar swing-by). However, this method results in high arrival speeds. Slower transfers have lower arrival speeds. 
The minimum transfer time to Jupiter for ac of 1 mm/s2 with no departure velocity relative to Earth is 2 years when using an indirect transfer (solar swing-by). The arrival speed (V∞) is close to 17 km/s. For Saturn, the minimum trip time is 3.3 years, with an arrival speed of nearly 19 km/s. Oort Cloud/Sun's inner gravity focus The Sun's inner gravitational focus point lies at a minimum distance of 550 AU from the Sun, and is the point to which light from distant objects is focused by gravity as a result of its passing by the Sun. This is the distant point to which solar gravity will cause the region of deep space on the other side of the Sun to be focused, serving effectively as a very large telescope objective lens. It has been proposed that an inflated sail made of beryllium, starting at 0.05 AU from the Sun, would gain an initial acceleration of 36.4 m/s2 and reach a speed of 0.00264c (about 790 km/s) in less than a day. Such proximity to the Sun could prove to be impractical in the near term due to the structural degradation of beryllium at high temperatures, diffusion of hydrogen at high temperatures, as well as an electrostatic gradient, generated by the ionization of beryllium from the solar wind, posing a burst risk. A revised perihelion of 0.1 AU would reduce the aforementioned temperature and solar flux exposure. Such a sail would take "Two and a half years to reach the heliopause, six and a half years to reach the Sun's inner gravitational focus, with arrival at the inner Oort Cloud in no more than thirty years." "Such a mission could perform useful astrophysical observations en route, explore gravitational focusing techniques, and image Oort Cloud objects while exploring particles and fields in that region that are of galactic rather than solar origin." Satellites Robert L. Forward has commented that a solar sail could be used to modify the orbit of a satellite about the Earth. In the limit, a sail could be used to "hover" a satellite above one pole of the Earth. Spacecraft fitted with solar sails could also be placed in close orbits such that they are stationary with respect to either the Sun or the Earth, a type of satellite named by Forward a "statite". This is possible because the propulsion provided by the sail offsets the gravitational attraction of the Sun. Such an orbit could be useful for studying the properties of the Sun for long durations. Likewise a solar sail-equipped spacecraft could also remain on station nearly above the polar solar terminator of a planet such as the Earth by tilting the sail at the appropriate angle needed to counteract the planet's gravity. In his book The Case for Mars, Robert Zubrin points out that the reflected sunlight from a large statite, placed near the polar terminator of the planet Mars, could be focused on one of the Martian polar ice caps to significantly warm the planet's atmosphere. Such a statite could be made from asteroid material. A group of satellites designed to act as sails has been proposed to measure Earth's Energy Imbalance, which is the most fundamental measure of the planet's rate of global warming. On-board state-of-the-art accelerometers would measure shifts in the pressure differential between incoming solar and outgoing thermal radiation on opposing sides of each satellite. Measurement accuracy has been projected to be better than that achievable with compact radiometric detectors.
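The statite-style "hover" described above is a force balance: the sail's radiation force must cancel solar gravity, and since both scale as the inverse square of distance, the balance reduces to a single critical areal density. The sketch below derives it from standard physical constants; it assumes an idealized, perfectly reflecting sail facing the Sun, and is an order-of-magnitude estimate rather than a mission design.

```python
import math

# Critical areal density for a Sun-facing statite. Radiation force per
# unit area on a perfect reflector is 2*L/(4*pi*r^2*c); solar gravity on
# the craft per unit sail area is G*M*sigma/r^2. The r^2 cancels, leaving
# sigma = L / (2*pi*c*G*M), independent of distance from the Sun.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg

sigma_crit = L_SUN / (2 * math.pi * C * G * M_SUN)  # kg/m^2, total craft mass per sail area
print(f"{sigma_crit * 1e3:.2f} g/m^2")  # ~1.53 g/m^2
```

The resulting figure of roughly 1.5 g/m2 is consistent with the lightness-number formulation earlier in the article: a craft with λ = 1 experiences zero net radial force and can hover, which is what makes such non-Keplerian station keeping possible in principle.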
Trajectory corrections The MESSENGER probe orbiting Mercury used light pressure on its solar panels to perform fine trajectory corrections on the way to Mercury. By changing the angle of the solar panels relative to the Sun, the amount of solar radiation pressure was varied to adjust the spacecraft trajectory more delicately than possible with thrusters. Minor errors are greatly amplified by gravity assist maneuvers, so using radiation pressure to make very small corrections saved large amounts of propellant. Interstellar flight In the 1970s, Robert Forward proposed two beam-powered propulsion schemes using either lasers or masers to push giant sails to a significant fraction of the speed of light. In the science fiction novel Rocheworld, Forward described a light sail propelled by super lasers. As the starship neared its destination, the outer portion of the sail would detach. The outer sail would then refocus and reflect the lasers back onto a smaller, inner sail. This would provide braking thrust to stop the ship in the destination star system. Both methods pose monumental engineering challenges. The lasers would have to operate for years continuously at gigawatt strength. Forward's solution to this requires enormous solar panel arrays to be built at or near the planet Mercury. A planet-sized mirror or Fresnel lens would need to be located at several dozen astronomical units from the Sun to keep the lasers focused on the sail. The giant braking sail would have to act as a precision mirror to focus the braking beam onto the inner "deceleration" sail. A potentially easier approach would be to use a maser to drive a "solar sail" composed of a mesh of wires with the same spacing as the wavelength of the microwaves directed at the sail, since the manipulation of microwave radiation is somewhat easier than the manipulation of visible light. The hypothetical "Starwisp" interstellar probe design would use microwaves, rather than visible light, to push it. Masers spread out more rapidly than optical lasers owing to their longer wavelength, and so would not have as great an effective range. Masers could also be used to power a painted solar sail, a conventional sail coated with a layer of chemicals designed to evaporate when struck by microwave radiation. The momentum generated by this evaporation could significantly increase the thrust generated by solar sails, as a form of lightweight ablative laser propulsion. To further focus the energy on a distant solar sail, Forward proposed a lens designed as a large zone plate. This would be placed at a location between the laser or maser and the spacecraft. Another more physically realistic approach would be to use the light from the Sun to accelerate the spacecraft. The ship would first drop into an orbit making a close pass to the Sun, to maximize the solar energy input on the sail, then it would begin to accelerate away from the system using the light from the Sun. Acceleration will drop approximately as the inverse square of the distance from the Sun, and beyond some distance, the ship would no longer receive enough light to accelerate it significantly, but would maintain the final velocity attained. When nearing the target star, the ship could turn its sails toward it and begin to use the outward pressure of the destination star to decelerate. Rockets could augment the solar thrust. Similar solar sailing launch and capture were suggested for directed panspermia to expand life in other solar systems. 
Velocities of 0.05% of the speed of light could be obtained by solar sails carrying 10 kg payloads, using thin solar sail vehicles with effective areal densities of 0.1 g/m2, thin sails of 0.1 μm thickness, and sizes on the order of one square kilometer. Alternatively, swarms of 1 mm capsules could be launched on solar sails with radii of 42 cm, each carrying 10,000 capsules of a hundred million extremophile microorganisms to seed life in diverse target environments. Theoretical studies suggest relativistic speeds if the solar sail harnesses a supernova. Deorbiting artificial satellites Small solar sails have been proposed to accelerate the deorbiting of small artificial satellites from Earth orbits. Satellites in low Earth orbit can use a combination of solar pressure on the sail and increased atmospheric drag to accelerate satellite reentry. A de-orbit sail developed at Cranfield University is part of the UK satellite TechDemoSat-1, launched in 2014. The sail deployed at the end of the satellite's five-year useful life in May 2019. The sail's purpose is to bring the satellite out of orbit over a period of about 25 years. In July 2015, a British 3U CubeSat called DeorbitSail was launched into space with the purpose of testing a 16 m2 deorbit structure, but it eventually failed to deploy it. A student 2U CubeSat mission called PW-Sat2, launched in December 2018, tested a 4 m2 deorbit sail; it successfully deorbited in February 2021. In June 2017, a second British 3U CubeSat called InflateSail deployed a 10 m2 deorbit sail in low Earth orbit. Also in June 2017, the 3U CubeSat URSAMAIOR was launched into low Earth orbit to test the deorbiting system ARTICA developed by Spacemind. The device, which occupies only 0.4U of the CubeSat, is to deploy a 2.1 m2 sail to deorbit the satellite at the end of its operational life.
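A deorbit sail works mainly by enlarging the area-to-mass ratio that enters the atmospheric drag acceleration, a = ρv²CdA/2m. The sketch below compares that acceleration for a bare CubeSat and for the same satellite with a sail deployed; the 4 kg mass, 0.03 m2 face, drag coefficient, and air density are assumed round numbers for illustration, not figures from the missions above.

```python
# Drag acceleration a = 0.5 * rho * v^2 * Cd * A / m. Deploying a sail
# multiplies the area A while the mass m stays fixed, so orbital decay
# speeds up roughly in proportion. All inputs are illustrative
# order-of-magnitude assumptions, not data from the missions above.

RHO = 1e-13  # kg/m^3, rough thermospheric density near 600 km altitude
V = 7.56e3   # m/s, circular orbital speed near 600 km altitude
CD = 2.2     # drag coefficient commonly assumed for satellites

def drag_acceleration(area_m2: float, mass_kg: float) -> float:
    return 0.5 * RHO * V**2 * CD * area_m2 / mass_kg

bare = drag_acceleration(area_m2=0.03, mass_kg=4.0)    # bare 3U CubeSat face
sailed = drag_acceleration(area_m2=10.0, mass_kg=4.0)  # same craft, 10 m^2 sail
print(f"{sailed / bare:.0f}x more drag")  # ~333x, so decay time shrinks similarly
```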
This form can, therefore, go close to the Sun for maximum thrust. Most designs steer with small moving sails on the ends of the spars. In the 1970s JPL studied many rotating blade and ring sails for a mission to rendezvous with Halley's Comet. The intention was to stiffen the structures using angular momentum, eliminating the need for struts, and saving mass. In all cases, surprisingly large amounts of tensile strength were needed to cope with dynamic loads. Weaker sails would ripple or oscillate when the sail's attitude changed, and the oscillations would add and cause structural failure. The difference in the thrust-to-mass ratio between practical designs was almost nil, and the static designs were easier to control. JPL's reference design was called the "heliogyro". It had plastic-film blades deployed from rollers and held out by centrifugal forces as it rotated. The spacecraft's attitude and direction were to be completely controlled by changing the angle of the blades in various ways, similar to the cyclic and collective pitch of a helicopter. Although the design had no mass advantage over a square sail, it remained attractive because the method of deploying the sail was simpler than a strut-based design. The CubeSail (UltraSail) is an active project aiming to deploy a heliogyro sail. Heliogyro design is similar to the blades on a helicopter. The design is faster to manufacture due to lightweight centrifugal stiffening of sails. Also, they are highly efficient in cost and velocity because the blades are lightweight and long. Unlike the square and spinning disk designs, heliogyro is easier to deploy because the blades are compacted on a reel. The blades roll out when they are deploying after the ejection from the spacecraft. As the heliogyro travels through space the system spins around because of the centrifugal acceleration. Finally, payloads for the space flights are placed in the center of gravity to even out the distribution of weight to ensure stable flight. JPL also investigated "ring sails" (Spinning Disk Sail in the above diagram), panels attached to the edge of a rotating spacecraft. The panels would have slight gaps, about one to five percent of the total area. Lines would connect the edge of one sail to the other. Masses in the middles of these lines would pull the sails taut against the coning caused by the radiation pressure. JPL researchers said that this might be an attractive sail design for large crewed structures. The inner ring, in particular, might be made to have artificial gravity roughly equal to the gravity on the surface of Mars. A solar sail can serve a dual function as a high-gain antenna. Designs differ, but most modify the metalization pattern to create a holographic monochromatic lens or mirror in the radio frequencies of interest, including visible light. Reflective sail making Materials The most common material in current designs is a thin layer of aluminum coating on a polymer (plastic) sheet, such as aluminized 2 μm Kapton film. The polymer provides mechanical support as well as flexibility, while the thin metal layer provides the reflectivity. Such material resists the heat of a pass close to the Sun and still remains reasonably strong. The aluminum reflecting film is on the Sun side. The sails of Cosmos 1 were made of aluminized PET film (Mylar). Eric Drexler developed a concept for a sail in which the polymer was removed. He proposed very high thrust-to-mass solar sails, and made prototypes of the sail material. 
His sail would use panels of thin aluminium film (30 to 100 nanometres thick) supported by a tensile structure. The sail would rotate and would have to be continually under thrust. He made and handled samples of the film in the laboratory, but the material was too delicate to survive folding, launch, and deployment. The design planned to rely on space-based production of the film panels, joining them to a deployable tension structure. Sails in this class would offer high area per unit mass and hence accelerations up to "fifty times higher" than designs based on deployable plastic films. The material developed for the Drexler solar sail was a thin aluminium film with a baseline thickness of 0.1 μm, to be fabricated by vapor deposition in a space-based system. Drexler used a similar process to prepare films on the ground. As anticipated, these films demonstrated adequate strength and robustness for handling in the laboratory and for use in space, but not for folding, launch, and deployment. Research by Geoffrey Landis in 1998–1999, funded by the NASA Institute for Advanced Concepts, showed that various materials such as alumina for laser lightsails and carbon fiber for microwave-pushed lightsails were superior sail materials to the previously standard aluminium or Kapton films. In 2000, Energy Science Laboratories developed a new carbon fiber material that might be useful for solar sails. The material is over 200 times thicker than conventional solar sail designs, but it is so porous that it has the same mass. The rigidity and durability of this material could make solar sails that are significantly sturdier than plastic films. The material could self-deploy and should withstand higher temperatures. There has been some theoretical speculation about using molecular manufacturing techniques to create advanced, strong, hyper-light sail material, based on nanotube mesh weaves, where the weave "spaces" are less than half the wavelength of light impinging on the sail. While such materials have so far only been produced in laboratory conditions, and the means for manufacturing such material on an industrial scale are not yet available, such materials could mass less than 0.1 g/m2, making them lighter than any current sail material by a factor of at least 30. For comparison, 5 micrometre thick Mylar sail material masses 7 g/m2, aluminized Kapton films mass as much as 12 g/m2, and Energy Science Laboratories' new carbon fiber material masses 3 g/m2. The least dense metal is lithium, about 5 times less dense than aluminium. Fresh, unoxidized surfaces are reflective. At a thickness of 20 nm, lithium has an areal density of 0.011 g/m2. A high-performance sail could be made of lithium alone at 20 nm (no emission layer). It would have to be fabricated in space and not used to approach the Sun. In the limit, a sail craft might be constructed with a total areal density of around 0.02 g/m2, giving it a lightness number of 67 and ac of about 400 mm/s2. Magnesium and beryllium are also potential materials for high-performance sails. These three metals can be alloyed with each other and with aluminium. Reflection and emissivity layers Aluminium is the common choice for the reflection layer. It typically has a thickness of at least 20 nm, with a reflectivity of 0.88 to 0.90. Chromium is a good choice for the emission layer on the face away from the Sun. It can readily provide emissivity values of 0.63 to 0.73 for thicknesses from 5 to 20 nm on plastic film.
Usable emissivity values are empirical because thin-film effects dominate; bulk emissivity values do not hold up in these cases because material thickness is much thinner than the emitted wavelengths. Fabrication Sails are fabricated on Earth on long tables where ribbons are unrolled and joined to create the sails. Sail material needed to have as little weight as possible because it would require the use of the shuttle to carry the craft into orbit. Thus, these sails are packed, launched, and unfurled in space. In the future, fabrication could take place in orbit inside large frames that support the sail. This would result in lower mass sails and elimination of the risk of deployment failure. Operations Changing orbits Sailing operations are simplest in interplanetary orbits, where altitude changes are done at low rates. For outward bound trajectories, the sail force vector is oriented forward of the Sun line, which increases orbital energy and angular momentum, resulting in the craft moving farther from the Sun. For inward trajectories, the sail force vector is oriented behind the Sun line, which decreases orbital energy and angular momentum, resulting in the craft moving in toward the Sun. It is worth noting that only the Sun's gravity pulls the craft toward the Sun—there is no analog to a sailboat's tacking to windward. To change orbital inclination, the force vector is turned out of the plane of the velocity vector. In orbits around planets or other bodies, the sail is oriented so that its force vector has a component along the velocity vector, either in the direction of motion for an outward spiral, or against the direction of motion for an inward spiral. Trajectory optimizations can often require intervals of reduced or zero thrust. This can be achieved by rolling the craft around the Sun line with the sail set at an appropriate angle to reduce or remove the thrust. Swing-by maneuvers A close solar passage can be used to increase a craft's energy. The increased radiation pressure combines with the efficacy of being deep in the Sun's gravity well to substantially increase the energy for runs to the outer Solar System. The optimal approach to the Sun is done by increasing the orbital eccentricity while keeping the energy level as high as practical. The minimum approach distance is a function of sail angle, thermal properties of the sail and other structure, load effects on structure, and sail optical characteristics (reflectivity and emissivity). A close passage can result in substantial optical degradation. Required turn rates can increase substantially for a close passage. A sail craft arriving at a star can use a close passage to reduce energy, which also applies to a sail craft on a return trip from the outer Solar System. A lunar swing-by can have important benefits for trajectories leaving from or arriving at Earth. This can reduce trip times, especially in cases where the sail is heavily loaded. A swing-by can also be used to obtain favorable departure or arrival directions relative to Earth. A planetary swing-by could also be employed similar to what is done with coasting spacecraft, but good alignments might not exist due to the requirements for overall optimization of the trajectory. Laser powered The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L. 
Laser powered
Several example concepts using beamed laser propulsion were proposed by the physicist Robert L. Forward. Design notes for such lightsail concepts include the use of photogravitational assists for a full stop at the destination; successive assists at α Cen A and B could allow travel times of about 75 years to both stars. A nominal graphene-class lightsail would have a mass-to-surface ratio (σnom) of 8.6×10−4 g/m2 and an area of about 10^5 m2, i.e. (316 m)2, reaching velocities of up to 37,300 km/s (12.5% of c).

Projects operating or completed

Attitude (orientation) control
Both the Mariner 10 mission, which flew by the planets Mercury and Venus, and the MESSENGER mission to Mercury demonstrated the use of solar pressure as a method of attitude control in order to conserve attitude-control propellant. Hayabusa also used solar pressure on its solar paddles as a method of attitude control, to compensate for its broken reaction wheels and chemical thrusters. MTSAT-1R (Multi-Functional Transport Satellite)'s solar sail counteracts the torque produced by sunlight pressure on the solar array. The trim tab on the solar array makes small adjustments to the torque balance.

Ground deployment tests
NASA has successfully tested deployment technologies on small-scale sails in vacuum chambers. In 1999, a full-scale deployment of a solar sail was tested on the ground at DLR/ESA in Cologne.

Suborbital tests
Cosmos 1, a joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences, attempted to launch a suborbital prototype vehicle in 2005, which was destroyed due to a rocket failure. A 15-meter-diameter solar sail (SSP, solar sail sub payload, soraseiru sabupeiro-do) was launched together with ASTRO-F on an M-V rocket on February 21, 2006, and made it to orbit. It deployed from the stage, but opened incompletely. On August 9, 2004, the Japanese ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-shaped sail was deployed at 122 km altitude and a fan-shaped sail was deployed at 169 km altitude. Both sails used 7.5-micrometer film. The experiment purely tested the deployment mechanisms, not propulsion.

Znamya 2
On February 4, 1993, Znamya 2, a 20-meter-wide aluminized-mylar reflector, was successfully deployed from the Russian Mir space station. It was the first thin-film reflector of its type successfully deployed in space, using a mechanism based on centrifugal force. Although the deployment succeeded, propulsion was not demonstrated. A second test in 1999, Znamya 2.5, failed to deploy properly.

IKAROS 2010
On 21 May 2010, the Japan Aerospace Exploration Agency (JAXA) launched the world's first interplanetary solar sail spacecraft, IKAROS (Interplanetary Kite-craft Accelerated by Radiation Of the Sun), to Venus. Using a new solar-photon propulsion method, it was the first true solar sail spacecraft fully propelled by sunlight, and was the first spacecraft to succeed in solar sail flight. The goal was to deploy and control the sail and, for the first time, to determine the minute orbit perturbations caused by light pressure. Orbit determination was done by the nearby AKATSUKI probe, from which IKAROS detached after both had been brought into a transfer orbit to Venus. The total effect over the six-month flight was 100 m/s. Until IKAROS, no solar sails had been successfully used in space as primary propulsion systems; the spacecraft deployed its 200 m2 polyimide experimental solar sail on June 10.
In July, the next phase of the demonstration, acceleration by radiation, began. On 9 July 2010, orbit determination using newly calculated range and range-rate (RARR) measurements, combined with the Doppler-based measurements of the spacecraft's velocity relative to Earth that had been collected since launch, verified that IKAROS had been collecting radiation from the Sun and undergoing photon acceleration. The data showed that IKAROS appears to have been solar-sailing since 3 June, when it deployed the sail. IKAROS has a diagonal spinning square sail of 14×14 m (196 m2) made of a thin polyimide sheet with a mass of about 10 grams per square metre. A thin-film solar array is embedded in the sail, along with eight LCD panels whose reflectance can be adjusted for attitude control. IKAROS spent six months traveling to Venus, and then began a three-year journey to the far side of the Sun.

NanoSail-D 2010
A team from the NASA Marshall Space Flight Center (Marshall), along with a team from the NASA Ames Research Center, developed a solar sail mission called NanoSail-D, which was lost in a launch failure aboard a Falcon 1 rocket on 3 August 2008. The second backup version, NanoSail-D2, also sometimes called simply NanoSail-D, was launched with FASTSAT on a Minotaur IV on November 19, 2010, becoming NASA's first solar sail deployed in low Earth orbit. The objectives of the mission were to test sail-deployment technologies and to gather data about the use of solar sails as a simple, "passive" means of de-orbiting dead satellites and space debris. The NanoSail-D structure was made of aluminium and plastic, with the spacecraft massing less than . The sail has about of light-catching surface. Ejection from the FASTSAT microsatellite was planned for December 6, 2010, but deployment only occurred on January 20, 2011. After some initial problems with deployment, the solar sail was deployed and, over the course of its 240-day mission, reportedly produced a "wealth of data" concerning the use of solar sails as passive deorbit devices.

Planetary Society LightSail Projects
On June 21, 2005, a joint private project between the Planetary Society, Cosmos Studios and the Russian Academy of Sciences launched a prototype sail, Cosmos 1, from a submarine in the Barents Sea, but the Volna rocket failed and the spacecraft failed to reach orbit. They had intended to use the sail to gradually raise the spacecraft to a higher Earth orbit over a mission duration of one month. Despite the failed launch attempt, The Planetary Society received applause for its efforts from the space community, and, according to Louis Friedman, the attempt rekindled public interest in solar sail technology. On Carl Sagan's 75th birthday (November 9, 2009) the Planetary Society announced plans to make three further attempts, dubbed LightSail-1, -2, and -3. The new design would use a 32 m2 Mylar sail, deployed in four triangular segments like NanoSail-D. The launch configuration was a 3U CubeSat format, and as of 2015, it was scheduled as a secondary payload for a 2016 launch on the first SpaceX Falcon Heavy. "LightSail-1" was launched on 20 May 2015. The purpose of the test was to allow a full checkout of the satellite's systems in advance of LightSail-2.
Its deployment orbit was not high enough to escape Earth's atmospheric drag and demonstrate true solar sailing. "LightSail-2" was launched on 25 June 2019 and deployed into a much higher low Earth orbit. Its solar sails were deployed on 23 July 2019. It reentered the atmosphere on 17 November 2022.

NEA Scout
The Near-Earth Asteroid Scout (NEA Scout) was a mission jointly developed by NASA's Marshall Space Flight Center (MSFC) and the Jet Propulsion Laboratory (JPL), consisting of a controllable low-cost CubeSat solar sail spacecraft capable of encountering near-Earth asteroids (NEA). Four booms were to deploy, unfurling the aluminized polyimide solar sail. In 2015, NASA announced it had selected NEA Scout to launch as one of several secondary payloads aboard Artemis 1, the first flight of the agency's heavy-lift SLS launch vehicle. However, the craft was considered lost after it failed to establish communications shortly after launch in 2022.

Advanced Composite Solar Sail System (ACS3)
The NASA Advanced Composite Solar Sail System (ACS3) is a technology demonstration of solar sail technology for future small spacecraft. It was selected in 2019 by NASA's CubeSat Launch Initiative (CSLI) to be launched as part of the ELaNa program. ACS3 consists of a 12U (unit) CubeSat small satellite (23 cm x 23 cm x 34 cm; 16 kg) that unfolds a square solar sail consisting of a polyethylene naphthalate film coated on one side with aluminum for reflectivity and on the other side with chromium to increase thermal emissivity. The sail is held by a novel unfolding system of four long carbon-fiber-reinforced-polymer booms that roll up for storage. ACS3 was launched on 23 April 2024 on the Electron "Beginning Of The Swarm" mission. The ACS3 successfully made contact with ground stations following deployment in early May. The solar sail was confirmed as successfully operational by mission operators on 29 August 2024. On 25 October 2024 it was reported that "... a bent support arm has made it (ACS3) lose direction and spin out of control in space."

Projects proposed or cancelled or not selected
Despite the losses of Cosmos 1 and NanoSail-D, which were due to failures of their launchers, scientists and engineers around the world remain encouraged and continue to work on solar sails. While most direct applications created so far intend to use the sails as inexpensive modes of cargo transport, some scientists are investigating the possibility of using solar sails as a means of transporting humans. This goal is strongly related to the management of very large (i.e. well above 1 km2) surfaces in space and to advances in sail making. Development of solar sails for crewed space flight is still in its infancy.

Sunjammer 2015
A technology demonstration sail craft, dubbed Sunjammer, was in development with the intent to prove the viability and value of sailing technology. Sunjammer had a square sail, wide on each side, giving it an effective area of . It would have traveled from Earth toward the Sun-Earth Lagrangian point, to a distance of . The demonstration was expected to launch on a Falcon 9 in January 2015. It would have been a secondary payload, released after the placement of the DSCOVR climate satellite at the L1 point. Citing a lack of confidence in the ability of its contractor L'Garde to deliver, NASA cancelled the mission in October 2014.
OKEANOS
OKEANOS (Outsized Kite-craft for Exploration and Astronautics in the Outer Solar System) was a mission concept proposed by Japan's JAXA to Jupiter's Trojan asteroids, using a hybrid solar sail for propulsion; the sail would have been covered with thin solar panels to power an ion engine. In-situ analysis of the collected samples would have been performed by either direct contact or using a lander carrying a high-resolution mass spectrometer. A lander and a sample return to Earth were options under study. The OKEANOS Jupiter Trojan Asteroid Explorer was a finalist for Japan's ISAS 2nd Large-class mission, to be launched in the late 2020s. However, it was not selected.

Solar Cruiser
In August 2019, NASA awarded the Solar Cruiser team $400,000 for nine-month mission concept studies. The spacecraft would have had a solar sail and would have orbited the Sun in a polar orbit, while its coronagraph instrument would have enabled simultaneous measurements of the Sun's magnetic field structure and the velocity of coronal mass ejections. If selected for further development, it would have launched in 2025. However, Solar Cruiser was not approved to advance to phase C of its development cycle and was subsequently discontinued.

Projects still in development or unknown status

Gossamer deorbit sail
The European Space Agency (ESA) has proposed a deorbit sail, named "Gossamer", intended to be used to accelerate the deorbiting of small (less than ) artificial satellites from low Earth orbits. The launch mass is with a launch volume of only . Once deployed, the sail would expand to and would use a combination of solar pressure on the sail and increased atmospheric drag to accelerate satellite reentry.

Breakthrough Starshot
The well-funded Breakthrough Starshot project, announced on April 12, 2016, aims to develop a fleet of 1000 light sail nanocraft carrying miniature cameras, propelled by ground-based lasers, and to send them to Alpha Centauri at 20% of the speed of light. The trip would take 20 years.

In popular culture
Cordwainer Smith gives a description of solar-sail-powered spaceships in "The Lady Who Sailed The Soul", published first in April 1960. Jack Vance wrote a short story about a training mission on a solar-sail-powered spaceship in "Sail 25", published in 1961. Arthur C. Clarke and Poul Anderson (writing as Winston P. Sanders) independently published stories featuring solar sails, both titled "Sunjammer", in 1964. Clarke retitled his story "The Wind from the Sun" when it was reprinted, in order to avoid confusion. In Larry Niven and Jerry Pournelle's 1974 novel The Mote in God's Eye, aliens are discovered when their laser-sail-propelled probe enters human space. A similar technology was the theme of the Star Trek: Deep Space Nine episode "Explorers", in which lightships are described as an ancient technology used by Bajorans to travel beyond their solar system, using light from the Bajoran sun and specially constructed sails to propel them through space. In the 2002 Star Wars film Attack of the Clones, the main villain Count Dooku was seen using a spacecraft with solar sails. In the 2009 film Avatar, the spacecraft which transports the protagonist Jake Sully to the Alpha Centauri system, the ISV Venture Star, uses solar sails as a means of propulsion to accelerate the vehicle away from the Earth towards Alpha Centauri.
In the third season of Apple TV+'s alternate-history TV show For All Mankind, the fictional NASA spaceship Sojourner 1 utilises solar sails for additional propulsion on its way to Mars. In the final episode of the first season of the 2024 Netflix TV show 3 Body Problem, one of the protagonists, Will Downing, has his cryogenically frozen brain launched into space toward the oncoming Trisolaran spaceship, using solar sails and nuclear pulse propulsion to accelerate it to a fraction of the speed of light.

See also

References

Bibliography
G. Vulpetti, Fast Solar Sailing: Astrodynamics of Special Sailcraft Trajectories, Space Technology Library Vol. 30, Springer, August 2012 (hardcover: https://www.springer.com/engineering/mechanical+engineering/book/978-94-007-4776-0; Kindle edition ASIN B00A9YGY4I)
G. Vulpetti, L. Johnson, G. L. Matloff, Solar Sails: A Novel Approach to Interplanetary Flight, Springer, August 2015
J. L. Wright, Space Sailing, Gordon and Breach Science Publishers, London, 1992; Wright was involved with JPL's effort to use a solar sail for a rendezvous with Halley's Comet
NASA/CR-2002-211730, Chapter IV, presents an optimized escape trajectory via the H-reversal sailing mode
G. Vulpetti, "The Sailcraft Splitting Concept", JBIS, Vol. 59, pp. 48–53, February 2006
G. L. Matloff, Deep-Space Probes: To the Outer Solar System and Beyond, 2nd ed., Springer-Praxis, UK, 2005
T. Taylor, D. Robinson, T. Moton, T. C. Powell, G. Matloff, and J. Hall, "Solar Sail Propulsion Systems Integration and Analysis (for Option Period)", Final Report for NASA/MSFC, Contract No. H-35191D Option Period, Teledyne Brown Engineering Inc., Huntsville, AL, May 11, 2004
G. Vulpetti, "Sailcraft Trajectory Options for the Interstellar Probe: Mathematical Theory and Numerical Results", Chapter IV of NASA/CR-2002-211730, The Interstellar Probe (ISP): Pre-Perihelion Trajectories and Application of Holography, June 2002
G. Vulpetti, "Sailcraft-Based Mission to the Solar Gravitational Lens", STAIF-2000, Albuquerque (New Mexico, USA), 30 January – 3 February 2000
G. Vulpetti, "General 3D H-Reversal Trajectories for High-Speed Sailcraft", Acta Astronautica, Vol. 44, No. 1, pp. 67–73, 1999
C. R. McInnes, Solar Sailing: Technology, Dynamics, and Mission Applications, Springer-Praxis Publishing Ltd, Chichester, UK, 1999
G. Genta and E. Brusa, "The AURORA Project: A New Sail Layout", Acta Astronautica, Vol. 44, No. 2–4, pp. 141–146, 1999
S. Scaglione and G. Vulpetti, "The Aurora Project: Removal of Plastic Substrate to Obtain an All-Metal Solar Sail", special issue of Acta Astronautica, Vol. 44, No. 2–4, pp. 147–150, 1999

External links
"Deflecting Asteroids" by Gregory L. Matloff, IEEE Spectrum, April 2012
Planetary Society's solar sailing project
The Solar Photon Sail Comes of Age by Gregory L. Matloff
NASA Mission Site for NanoSail-D
NanoSail-D mission: Dana Coulter, "NASA to Attempt Historic Solar Sail Deployment", NASA, June 28, 2008
Far-out Pathways to Space: Solar Sails, from NASA
Solar Sails – comprehensive collection of solar sail information and references, maintained by Benjamin Diedrich, with good diagrams showing how light sailors must tack
U3P – multilingual site with news and flight simulators
ISAS Deployed Solar Sail Film in Space
Suggestion of a solar sail with roller reefing, hybrid propulsion and a central docking and payload station
Interview with NASA's JPL about solar sail technology and missions
Website with technical PDF files about solar sailing, including NASA reports and lectures at the Aerospace Engineering School of Rome University
Advanced Solar- and Laser-pushed Lightsail Concepts
www.aibep.org: official site of the American Institute of Beamed Energy Propulsion
Space Sailing – sailing ship concepts, operations, and history of the concept
Bernd Dachwald's website – broad information on sail propulsion and missions

Spacecraft attitude control Spacecraft propulsion Spacecraft components Interstellar travel Microwave technology Photonics Japanese inventions
Solar sail
[ "Astronomy" ]
11,157
[ "Astronomical hypotheses", "Interstellar travel" ]
29,952
https://en.wikipedia.org/wiki/Thermodynamics
Thermodynamics is a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics, which convey a quantitative description using measurable macroscopic physical quantities, but may be explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry, biochemistry, chemical engineering and mechanical engineering, but also to other complex fields such as meteorology. Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of French physicist Sadi Carnot (1824), who believed that engine efficiency was the key that could help France win the Napoleonic Wars. Scots-Irish physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics, in 1854, which stated, "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency." German physicist and mathematician Rudolf Clausius restated Carnot's principle, known as the Carnot cycle, and gave the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which he applied to heat. The initial application of thermodynamics to mechanical heat engines was quickly extended to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the role of entropy in chemical reactions and has provided the bulk of the field's expansion and body of knowledge. Other formulations of thermodynamics emerged. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in an axiomatic formulation, a description often referred to as geometrical thermodynamics.

Introduction
A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be transferred between physical systems as heat, as work, and with transfer of matter. The second law defines the existence of a quantity called entropy, which describes the direction, thermodynamically, in which a system can evolve, quantifies the state of order of a system, and can be used to quantify the useful work that can be extracted from the system. In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles, whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes. With these tools, thermodynamics can be used to describe how systems respond to changes in their environment.
This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few. This article is focused mainly on classical thermodynamics, which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field.

History
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the Anglo-Irish physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that, for a fixed quantity of gas at constant temperature, pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated. Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first steam engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. The fundamental concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt was employed as an instrument maker. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser, which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The book outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science. The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin). The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
Clausius, who first stated the basic ideas of the second law in his paper "On the Moving Force of Heat", published in 1850, and is called "one of the founding fathers of thermodynamics", introduced the concept of entropy in 1865. During the years 1873–76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, temperature and pressure of the thermodynamic system in such a manner, one can determine whether a process would occur spontaneously. Pierre Duhem also wrote about chemical thermodynamics in the 19th century. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim applied the mathematical methods of Gibbs to the analysis of chemical processes.

Etymology
Thermodynamics has an intricate etymology. By a surface-level analysis, the word consists of two parts that can be traced back to Ancient Greek. Firstly, ("of heat"; used in words such as thermometer) can be traced back to the root θέρμη therme, meaning "heat". Secondly, the word ("science of force [or power]") can be traced back to the root δύναμις dynamis, meaning "power". In 1849, the adjective thermo-dynamic was used by William Thomson. In 1854, the noun thermo-dynamics was used by Thomson and William Rankine to represent the science of generalized heat engines. Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power; however, Joule never used that term, instead using the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology.

Branches of thermodynamics
The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.

Classical thermodynamics
Classical thermodynamics is the description of the states of thermodynamic systems at near-equilibrium, using macroscopic, measurable properties. It is used to model exchanges of energy, work and heat based on the laws of thermodynamics. The qualifier classical reflects the fact that it represents the first level of understanding of the subject as it developed in the 19th century and describes the changes of a system in terms of macroscopic empirical (large-scale, and measurable) parameters. A microscopic interpretation of these concepts was later provided by the development of statistical mechanics.

Statistical mechanics
Statistical mechanics, also known as statistical thermodynamics, emerged with the development of atomic and molecular theories in the late 19th century and early 20th century, and supplemented classical thermodynamics with an interpretation of the microscopic interactions between individual particles or quantum-mechanical states. This field relates the microscopic properties of individual atoms and molecules to the macroscopic, bulk properties of materials that can be observed on the human scale, thereby explaining classical thermodynamics as a natural result of statistics, classical mechanics, and quantum theory at the microscopic level.
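As a minimal illustration of the statistical-mechanics viewpoint described above, the Python sketch below computes Boltzmann occupation probabilities and the Gibbs entropy for a toy two-level system. The energy gap and temperatures are arbitrary example values, not taken from the article.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probabilities(energies_j, temperature_k):
    """Occupation probabilities p_i proportional to exp(-E_i / kT)."""
    weights = [math.exp(-e / (K_B * temperature_k)) for e in energies_j]
    z = sum(weights)                      # partition function
    return [w / z for w in weights]

def gibbs_entropy(probs):
    """S = -k_B * sum(p_i * ln p_i)."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

# Toy two-level system: ground state and one excited state (arbitrary gap)
levels = [0.0, 1.0e-21]                   # energies in joules
for t in (10.0, 100.0, 1000.0):
    p = boltzmann_probabilities(levels, t)
    print(f"T={t:6.1f} K  p0={p[0]:.3f}  p1={p[1]:.3f}  "
          f"S={gibbs_entropy(p):.3e} J/K")
```

At low temperature the ground state dominates and the entropy tends toward zero, while at high temperature the two states become nearly equally likely and the entropy approaches k_B ln 2; this is the sense in which a macroscopic quantity like entropy emerges from microscopic statistics.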
Chemical thermodynamics
Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics. The primary objective of chemical thermodynamics is determining the spontaneity of a given transformation.

Equilibrium thermodynamics
Equilibrium thermodynamics is the study of transfers of matter and energy in systems or bodies that, by agencies in their surroundings, can be driven from one state of thermodynamic equilibrium to another. The term 'thermodynamic equilibrium' indicates a state of balance, in which all macroscopic flows are zero; in the case of the simplest systems or bodies, their intensive properties are homogeneous, and their pressures are perpendicular to their boundaries. In an equilibrium state there are no unbalanced potentials, or driving forces, between macroscopically distinct parts of the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial equilibrium state, and given its surroundings, and given its constitutive walls, to calculate what will be the final equilibrium state of the system after a specified thermodynamic operation has changed its walls or surroundings.

Non-equilibrium thermodynamics
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium, because they are not in stationary states and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems remain beyond the scope of currently known macroscopic thermodynamic methods.

Laws of thermodynamics
Thermodynamics is principally based on a set of four laws, which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following.

Zeroth law
The zeroth law of thermodynamics states: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other. This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in equilibrium if the small, random exchanges between them (e.g. Brownian motion) do not lead to a net change in energy. This law is tacitly assumed in every measurement of temperature. Thus, if one seeks to decide whether two bodies are at the same temperature, it is not necessary to bring them into contact and measure any changes of their observable properties in time. The law provides an empirical definition of temperature, and justification for the construction of practical thermometers. The zeroth law was not initially recognized as a separate law of thermodynamics, as its basis in thermodynamical equilibrium was implied in the other laws. The first, second, and third laws had been explicitly stated already, and found common acceptance in the physics community, before the importance of the zeroth law for the definition of temperature was realized. As it was impractical to renumber the other laws, it was named the zeroth law.
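The zeroth law's claim that thermal equilibrium is an equivalence relation can be made concrete with a union-find sketch: pairwise equilibrium observations merge, by transitivity, into classes of bodies sharing one temperature. The bodies and observations below are invented for illustration.

```python
def equilibrium_classes(bodies, observations):
    """Group bodies into thermal-equilibrium classes via union-find.

    `observations` is a list of pairs (a, b) meaning "a was observed to be
    in thermal equilibrium with b"; the zeroth law (transitivity) justifies
    merging their classes.
    """
    parent = {b: b for b in bodies}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in observations:
        parent[find(a)] = find(b)          # merge the two classes

    classes = {}
    for b in bodies:
        classes.setdefault(find(b), []).append(b)
    return list(classes.values())

# Hypothetical bodies: a thermometer, two water baths, and a metal block
print(equilibrium_classes(
    ["thermometer", "bath1", "bath2", "block"],
    [("thermometer", "bath1"), ("thermometer", "bath2")],
))
# -> [['thermometer', 'bath1', 'bath2'], ['block']]
```

This is exactly why a thermometer works: if it equilibrates separately with two baths, the baths must share a temperature even though they never touch.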
First law
The first law of thermodynamics states: In a process without transfer of matter, the change in internal energy, $\Delta U$, of a thermodynamic system is equal to the energy gained as heat, $Q$, less the thermodynamic work, $W$, done by the system on its surroundings:

$\Delta U = Q - W$,

where $\Delta U$ denotes the change in the internal energy of a closed system (for which heat or work through the system boundary are possible, but matter transfer is not possible), $Q$ denotes the quantity of energy supplied to the system as heat, and $W$ denotes the amount of thermodynamic work done by the system on its surroundings. For example, if a system absorbs 100 J as heat while doing 40 J of work on its surroundings, its internal energy increases by 60 J. An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system (so that $U$ is recovered) to make the system work continuously. For processes that include transfer of matter, a further statement is needed: With due account of the respective fiducial reference states of the systems, when two systems, which may be of different chemical compositions, initially separated only by an impermeable wall, and otherwise isolated, are combined into a new system by the thermodynamic operation of removal of the wall, then $U_0 = U_1 + U_2$, where $U_0$ denotes the internal energy of the combined system, and $U_1$ and $U_2$ denote the internal energies of the respective separated systems. Adapted for thermodynamics, this law is an expression of the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed. Internal energy is a principal property of the thermodynamic state, while heat and work are modes of energy transfer by which a process may change this state. A change of internal energy of a system may be achieved by any combination of heat added or removed and work performed on or by the system. As a function of state, the internal energy does not depend on the manner, or on the path through intermediate steps, by which the system arrived at its state.

Second law
A traditional version of the second law of thermodynamics states: Heat does not spontaneously flow from a colder body to a hotter body. The second law refers to a system of matter and radiation, initially with inhomogeneities in temperature, pressure, chemical potential, and other intensive properties, that are due to internal 'constraints', or impermeable rigid walls, within it, or to externally imposed forces. The law observes that, when the system is isolated from the outside world and from those forces, there is a definite thermodynamic quantity, its entropy, that increases as the constraints are removed, eventually reaching a maximum value at thermodynamic equilibrium, when the inhomogeneities practically vanish. For systems that are initially far from thermodynamic equilibrium, there is no known general physical principle that determines the rates of approach to thermodynamic equilibrium, though several have been proposed, and thermodynamics does not deal with such rates. The many versions of the second law all express the general irreversibility of the transitions involved in systems approaching thermodynamic equilibrium.
In macroscopic thermodynamics, the second law is a basic observation applicable to any actual thermodynamic process; in statistical thermodynamics, the second law is postulated to be a consequence of molecular chaos.

Third law
The third law of thermodynamics states: As the temperature of a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value. This law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions include "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes". Absolute zero, at which all activity would stop if it were possible to achieve, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin), or 0 °R (degrees Rankine).

System models
An important concept in thermodynamics is the thermodynamic system, which is a precisely defined region of the universe under study. Everything in the universe except the system is called the surroundings. A system is separated from the remainder of the universe by a boundary, which may be physical or notional but serves to confine the system to a finite volume. Segments of the boundary are often described as walls; they have respective defined 'permeabilities'. Transfers of energy as work, or as heat, or of matter, between the system and the surroundings, take place through the walls, according to their respective permeabilities. Matter or energy that passes across the boundary so as to effect a change in the internal energy of the system needs to be accounted for in the energy balance equation. The volume contained by the walls can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. The system could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. When a looser viewpoint is adopted, and the requirement of thermodynamic equilibrium is dropped, the system can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics, or the event horizon of a black hole. Boundaries are of four types: fixed, movable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position, within which a constant-volume process might occur. If the piston is allowed to move, that boundary is movable, while the cylinder and cylinder head boundaries are fixed. For closed systems, boundaries are real, while for open systems boundaries are often imaginary. In the case of a jet engine, a fixed imaginary boundary might be assumed at the intake of the engine, fixed boundaries along the surface of the case, and a second fixed imaginary boundary across the exhaust nozzle. Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries: isolated systems, which exchange neither matter nor energy with their surroundings; closed systems, which exchange energy but not matter; and open systems, which exchange both. As time passes in an isolated system, internal differences of pressures, densities, and temperatures tend to even out. A system in which all equalizing processes have gone to completion is said to be in a state of thermodynamic equilibrium.
Once in thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analysing a dynamic thermodynamic process, the simplifying assumption is made that each intermediate state in the process is at equilibrium, producing thermodynamic processes which develop so slowly as to allow each intermediate step to be an equilibrium state; such processes are said to be reversible.

States and processes
When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant. A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair. Several commonly studied thermodynamic processes are:

Adiabatic process: occurs without loss or gain of energy by heat
Isenthalpic process: occurs at a constant enthalpy
Isentropic process: a reversible adiabatic process, occurs at a constant entropy
Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Steady state process: occurs without a change in the internal energy

Instrumentation
There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law $pV = nRT$, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer, may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system. A thermodynamic reservoir is a system which is so large that its state parameters are not appreciably altered when it is brought into contact with the system of interest. When the reservoir is brought into contact with the system, the system is brought into equilibrium with the reservoir.
For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon the system to which it is mechanically connected. The Earth's atmosphere is often used as a pressure reservoir. The ocean can act as a temperature reservoir when used to cool power plants.

Conjugate variables
The central concept of thermodynamics is that of energy, the ability to do work. By the first law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement. Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement", and the product of the two equaling the amount of energy transferred. The common conjugate variables are:

Pressure-volume (the mechanical parameters)
Temperature-entropy (thermal parameters)
Chemical potential-particle number (material parameters)

Potentials
Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. Thermodynamic potentials cannot be measured in laboratories, but can be computed using molecular thermodynamics. The five most well-known potentials are:

Internal energy: $U$
Helmholtz free energy: $F = U - TS$
Enthalpy: $H = U + pV$
Gibbs free energy: $G = U + pV - TS$
Landau potential (grand potential): $\Omega = U - TS - \sum_i \mu_i N_i$

where $T$ is the temperature, $S$ the entropy, $p$ the pressure, $V$ the volume, $\mu_i$ the chemical potential, $N_i$ the number of particles in the system, and $i$ the count of particle types in the system. Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.

Axiomatic thermodynamics
Axiomatic thermodynamics is a mathematical discipline that aims to describe thermodynamics in terms of rigorous axioms, for example by finding a mathematically rigorous way to express the familiar laws of thermodynamics. The first attempt at an axiomatic theory of thermodynamics was Constantin Carathéodory's 1909 work Investigations on the Foundations of Thermodynamics, which made use of Pfaffian systems and the concept of adiabatic accessibility, a notion that was introduced by Carathéodory himself. In this formulation, thermodynamic concepts such as heat, entropy, and temperature are derived from quantities that are more directly measurable. Theories that came after differed in the sense that they made assumptions regarding thermodynamic processes with arbitrary initial and final states, as opposed to considering only neighboring states.
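Once the state variables are known, the potential definitions above are plain arithmetic. The Python sketch below evaluates them for a single particle species; the numeric state is an arbitrary example roughly resembling one mole of monatomic ideal gas at room temperature, with the entropy and chemical potential simply posited rather than computed.

```python
from dataclasses import dataclass

@dataclass
class State:
    U: float    # internal energy, J
    T: float    # temperature, K
    S: float    # entropy, J/K
    p: float    # pressure, Pa
    V: float    # volume, m^3
    mu: float   # chemical potential, J per particle
    N: float    # number of particles

def potentials(s: State) -> dict:
    """The five common thermodynamic potentials, for one particle species."""
    return {
        "internal energy U": s.U,
        "Helmholtz F = U - TS": s.U - s.T * s.S,
        "enthalpy H = U + pV": s.U + s.p * s.V,
        "Gibbs G = U + pV - TS": s.U + s.p * s.V - s.T * s.S,
        "grand potential Omega = U - TS - mu*N": s.U - s.T * s.S - s.mu * s.N,
    }

# Arbitrary illustrative state: ~1 mol of monatomic ideal gas at 300 K,
# U = (3/2) n R T and V = n R T / p; S and mu are posited example values.
state = State(U=3741.0, T=300.0, S=150.0, p=101325.0, V=0.0246,
              mu=-5.0e-20, N=6.022e23)
for name, value in potentials(state).items():
    print(f"{name}: {value:.1f} J")
```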
Applied fields

See also
Thermodynamic process path

Lists and timelines
List of important publications in thermodynamics
List of textbooks on thermodynamics and statistical mechanics
List of thermal conductivities
List of thermodynamic properties
Table of thermodynamic equations
Timeline of thermodynamics
Thermodynamic equations

Notes

References

Further reading

External links
Thermodynamics Data & Property Calculation Websites
Thermodynamics Educational Websites
Biochemistry Thermodynamics
Thermodynamics and Statistical Mechanics
Engineering Thermodynamics – A Graphical Approach
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick

Energy Chemical engineering
Thermodynamics
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
5,823
[ "Physical quantities", "Chemical engineering", "Energy (physics)", "Energy", "Thermodynamics", "nan", "Dynamical systems" ]
29,954
https://en.wikipedia.org/wiki/Topology
Topology (from the Greek words τόπος, "place", and λόγος, "study") is the branch of mathematics concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of topological spaces, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. The following are basic examples of topological properties: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Wilhelm Leibniz, who in the 17th century envisioned the geometria situs and the analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed.

Motivation
The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the way they are put together. For example, the square and the circle have many properties in common: they are both one-dimensional objects (from a topological point of view) and both separate the plane into two parts, the part inside and the part outside. In one of the first papers in topology, Leonhard Euler demonstrated that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges or on their distance from one another, but only on connectivity properties: which bridges connect to which islands or riverbanks. This Seven Bridges of Königsberg problem led to the branch of mathematics known as graph theory. Similarly, the hairy ball theorem of algebraic topology says that "one cannot comb the hair flat on a hairy ball without creating a cowlick." This fact is immediately convincing to most people, even though they might not recognize the more formal statement of the theorem, that there is no nonvanishing continuous tangent vector field on the sphere. As with the Bridges of Königsberg, the result does not depend on the shape of the sphere; it applies to any kind of smooth blob, as long as it has no holes. To deal with these problems that do not rely on the exact shape of the objects, one must be clear about just what properties these problems do rely on. From this need arises the notion of homeomorphism. The impossibility of crossing each bridge just once applies to any arrangement of bridges homeomorphic to those in Königsberg, and the hairy ball theorem applies to any space homeomorphic to a sphere. Intuitively, two spaces are homeomorphic if one can be deformed into the other without cutting or gluing.
A famous example, known as the "Topologist's Breakfast", is that a topologist cannot distinguish a coffee mug from a doughnut; a sufficiently pliable doughnut could be reshaped to a coffee cup by creating a dimple and progressively enlarging it, while shrinking the hole into a handle. Homeomorphism can be considered the most basic topological equivalence. Another is homotopy equivalence. This is harder to describe without getting technical, but the essential notion is that two objects are homotopy equivalent if they both result from "squishing" some larger object.

History
Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler. His 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology. On 14 November 1750, Euler wrote to a friend that he had realized the importance of the edges of a polyhedron. This led to his polyhedron formula, $V - E + F = 2$ (where $V$, $E$, and $F$ respectively indicate the number of vertices, edges, and faces of the polyhedron). Some authorities regard this analysis as the first theorem, signaling the birth of topology. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term "Topologie" in Vorstudien zur Topologie, written in his native German, in 1847, having used the word for ten years in correspondence before its first appearance in print. The English form "topology" was used in 1883 in Listing's obituary in the journal Nature to distinguish "qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated". Their work was corrected, consolidated and greatly extended by Henri Poincaré. In 1895, he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space, with any given topological space potentially giving rise to many distinct metric spaces. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space. Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski. Modern topology depends strongly on the ideas of set theory, developed by Georg Cantor in the later part of the 19th century. In addition to establishing the basic ideas of set theory, Cantor considered point sets in Euclidean space as part of his study of Fourier series. For further developments, see point-set topology and algebraic topology. The 2022 Abel Prize was awarded to Dennis Sullivan "for his groundbreaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects".

Concepts

Topologies on sets
The term topology also refers to a specific mathematical idea central to the area of mathematics called topology. Informally, a topology describes how elements of a set relate spatially to each other. The same set can have different topologies.
For instance, the real line, the complex plane, and the Cantor set can be thought of as the same set with different topologies. Formally, let $X$ be a set and let $\tau$ be a family of subsets of $X$. Then $\tau$ is called a topology on $X$ if:

Both the empty set and $X$ are elements of $\tau$.
Any union of elements of $\tau$ is an element of $\tau$.
Any intersection of finitely many elements of $\tau$ is an element of $\tau$.

If $\tau$ is a topology on $X$, then the pair $(X, \tau)$ is called a topological space. The notation $X_\tau$ may be used to denote a set $X$ endowed with the particular topology $\tau$. By definition, every topology is a π-system. The members of $\tau$ are called open sets in $X$. A subset of $X$ is said to be closed if its complement is in $\tau$ (that is, its complement is open). A subset of $X$ may be open, closed, both (a clopen set), or neither. The empty set and $X$ itself are always both closed and open. An open subset of $X$ which contains a point $x$ is called an open neighborhood of $x$.

Continuous functions and homeomorphisms
A function or map from one topological space to another is called continuous if the inverse image of any open set is open. If the function maps the real numbers to the real numbers (both spaces with the standard topology), then this definition of continuous is equivalent to the definition of continuous in calculus. If a continuous function is one-to-one and onto, and if the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function is said to be homeomorphic to the range. Another way of saying this is that the function has a natural extension to the topology. If two spaces are homeomorphic, they have identical topological properties, and are considered topologically the same. The cube and the sphere are homeomorphic, as are the coffee cup and the doughnut. However, the sphere is not homeomorphic to the doughnut.

Manifolds
While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an $n$-dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension $n$. Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds).

Topics

General topology
General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology. The basic object of study is topological spaces, which are sets equipped with a topology, that is, a family of subsets, called open sets, which is closed under finite intersections and (finite or infinite) unions. The fundamental concepts of topology, such as continuity, compactness, and connectedness, can be defined in terms of open sets. Intuitively, continuous functions take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart.
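The axioms for a topology given earlier can be checked mechanically on a finite set. The following sketch is a literal transcription of the definition into Python, not a practical tool; for a finite family, closure under pairwise unions and intersections already implies closure under arbitrary unions and finite intersections.

```python
def is_topology(X, tau):
    """Check the topology axioms for a family `tau` of subsets of `X`.

    For a finite family it suffices to check the two distinguished members
    plus closure under pairwise unions and intersections.
    """
    X = frozenset(X)
    tau = {frozenset(s) for s in tau}
    if frozenset() not in tau or X not in tau:
        return False                   # axiom 1: empty set and X belong to tau
    for a in tau:
        for b in tau:
            if a | b not in tau:       # axiom 2: closed under unions
                return False
            if a & b not in tau:       # axiom 3: closed under finite intersections
                return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True: a valid topology on X
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} | {2} is missing
```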
Several topologies can be defined on a given space. Changing a topology consists of changing the collection of open sets. This changes which functions are continuous and which subsets are compact or connected.

Metric spaces are an important class of topological spaces where the distance between any two points is defined by a function called a metric. In a metric space, an open set is a union of open disks, where an open disk of radius r centered at x is the set of all points whose distance to x is less than r. Many common spaces are topological spaces whose topology can be defined by a metric. This is the case of the real line, the complex plane, real and complex vector spaces and Euclidean spaces. Having a metric simplifies many proofs.

Algebraic topology
Algebraic topology is a branch of mathematics that uses tools from algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. The most important of these invariants are homotopy groups, homology, and cohomology. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group.

Differential topology
Differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds. More specifically, differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are "softer" than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold – that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume.

Geometric topology
Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (that is, spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology. Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem. In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory.

Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, and negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of eight possible geometries.
2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure.

Generalizations
Occasionally, one needs to use the tools of topology but a "set of points" is not available. In pointless topology one considers instead the lattice of open sets as the basic notion of the theory, while Grothendieck topologies are structures defined on arbitrary categories that allow the definition of sheaves on those categories, and with that the definition of general cohomology theories.

Applications

Biology
Topology has been used to study various biological systems including molecules and nanostructures (e.g., membranous objects). In particular, circuit topology and knot theory have been extensively applied to classify and compare the topology of folded proteins and nucleic acids. Circuit topology classifies folded molecular chains based on the pairwise arrangement of their intra-chain contacts and chain crossings. Knot theory, a branch of topology, is used in biology to study the effects of certain enzymes on DNA. These enzymes cut, twist, and reconnect the DNA, causing knotting with observable effects such as slower electrophoresis.

Computer science
Topological data analysis uses techniques from algebraic topology to determine the large scale structure of a set (for instance, determining if a cloud of points is spherical or toroidal). The main method used by topological data analysis is to:
Replace a set of data points with a family of simplicial complexes, indexed by a proximity parameter.
Analyse these topological complexes via algebraic topology – specifically, via the theory of persistent homology.
Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode.

Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties.

Physics
Topology is relevant to physics in areas such as condensed matter physics, quantum field theory and physical cosmology. The topological dependence of mechanical properties in solids is of interest in disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials. The compressive strength of crumpled topologies is studied in attempts to understand the high strength-to-weight ratio of such structures, which are mostly empty space. Topology is of further significance in contact mechanics, where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest, with applications in multi-body physics. A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants.
Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory, the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory. The topological classification of Calabi–Yau manifolds has important implications in string theory, as different manifolds can sustain different kinds of strings.

In cosmology, topology can be used to describe the overall shape of the universe. This area of research is commonly known as spacetime topology. In condensed matter, a relevant application of topological physics comes from the possibility of obtaining one-way current, which is a current protected from backscattering. It was first discovered in electronics with the famous quantum Hall effect, and was then generalized to other areas of physics, for instance in photonics by F. D. M. Haldane.

Robotics
The possible positions of a robot can be described by a manifold called configuration space. In the area of motion planning, one finds paths between two points in configuration space. These paths represent a motion of the robot's joints and other parts into the desired pose.

Games and puzzles
Disentanglement puzzles are based on topological aspects of the puzzle's shapes and components.

Fiber art
In order to create a continuous join of pieces in a modular construction, it is necessary to create an unbroken path in an order which surrounds each piece and traverses each edge only once. This process is an application of the Eulerian path.

Resources and research

Major journals
Geometry & Topology – a mathematics research journal focused on geometry and topology, and their applications, published by Mathematical Sciences Publishers.
Journal of Topology – a scientific journal which publishes papers of high quality and significance in topology, geometry, and adjacent areas of mathematics.

Major books
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Willard, Stephen (2016). General Topology. Dover Books on Mathematics. Mineola, NY: Dover Publications.
Armstrong, M. A. (1983). Basic Topology. Undergraduate Texts in Mathematics. New York: Springer-Verlag.

See also
Characterizations of the category of topological spaces
Equivariant topology
List of algebraic topology topics
List of examples in general topology
List of general topology topics
List of geometric topology topics
List of topology topics
Publications in topology
Topoisomer
Topology glossary
Topological Galois theory
Topological geometry
Topological order

References

Further reading
Ryszard Engelking, General Topology, Heldermann Verlag, Sigma Series in Pure Mathematics, December 1989.
Bourbaki, Elements of Mathematics: General Topology, Addison–Wesley (1966).
Wacław Sierpiński, General Topology, Dover Publications, 2000. (Provides a popular introduction to topology and geometry.)

External links
Elementary Topology: A First Course – Viro, Ivanov, Netsvetaev, Kharlamov.
The Topological Zoo at The Geometry Center.
Topology Atlas
Topology Course Lecture Notes – Aisling McCluskey and Brian McMaster, Topology Atlas.
Topology Glossary
Moscow 1935: Topology moving towards America, a historical essay by Hassler Whitney.
Mathematical structures
Topology
[ "Physics", "Mathematics" ]
4,093
[ "Mathematical structures", "Mathematical objects", "Topology", "Space", "Geometry", "Spacetime" ]
29,965
https://en.wikipedia.org/wiki/Tensor
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.

Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".

Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.

Definition
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.

As multidimensional arrays
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an n-dimensional space is represented by a one-dimensional array of n components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor T could be denoted $T_{ij}$, where i and j are indices running from 1 to n, or also by $T_i{}^j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T_i{}^j$ can both be expressed as n-by-n matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.

The total number of indices (m) required to identify each component uniquely is equal to the dimension or the number of ways of an array, which is why a tensor is sometimes referred to as an m-dimensional array or an m-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors.
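This array picture maps directly onto multidimensional-array types in software. A minimal NumPy sketch (the shapes and names are illustrative only): the order of the tensor corresponds to the number of array axes, i.e. the number of indices needed to address one component.

```python
import numpy as np

n = 3
v = np.zeros(n)            # order-1 tensor (vector): 1 index, n components
A = np.zeros((n, n))       # order-2 tensor (e.g. a linear operator): 2 indices
T = np.zeros((n, n, n))    # order-3 tensor: 3 indices, an n-by-n-by-n array

# ndim gives the "number of ways" of the array, i.e. the order:
for t in (v, A, T):
    print(t.ndim, t.shape)
```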
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis (see Covariance and contravariance of vectors), where the new basis vectors $\hat{e}_i$ are expressed in terms of the old basis vectors $e_j$ as

$\hat{e}_i = \sum_j e_j R^j_i = e_j R^j_i.$

Here $R^j_i$ are the entries of the change of basis matrix R, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components $v^i$ of a column vector v transform with the inverse of the matrix R,

$\hat{v}^i = (R^{-1})^i_j \, v^j,$

where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components $w_i$ of a covector (or row vector) w transform with the matrix R itself,

$\hat{w}_i = w_j \, R^j_i.$

This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index (subscript).

As a simple example, the matrix A of a linear operator with respect to a basis is a rectangular array that transforms under a change of basis matrix R by $\hat{A} = R^{-1} A R$. For the individual matrix entries, this transformation law has the form

$\hat{A}^{i'}_{j'} = (R^{-1})^{i'}_{i} \, A^i_j \, R^j_{j'},$

so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1).

Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:

$v = \hat{v}^i \hat{e}_i = \left((R^{-1})^i_j v^j\right)\left(e_k R^k_i\right) = \left((R^{-1})^i_j R^k_i\right) v^j e_k = \delta^k_j v^j e_k = v^k e_k = v^i e_i,$

where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices (j into k in this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like $v^i e_i$ can immediately be seen to be geometrically identical in all coordinate systems.

Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components are given by $(Av)^i = A^i_j v^j$. These components transform contravariantly, since $\widehat{Av} = \hat{A}\hat{v} = (R^{-1} A R)(R^{-1} v) = R^{-1}(Av)$.
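The two component laws can be illustrated directly. A minimal NumPy sketch (illustrative values; R stands for any invertible change-of-basis matrix), verifying that the pairing of a covector with a vector, a scalar, is basis-independent:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(3, 3))          # change-of-basis matrix (assumed invertible)
R_inv = np.linalg.inv(R)

v = rng.normal(size=3)               # contravariant components of a vector
w = rng.normal(size=3)               # covariant components of a covector

v_hat = R_inv @ v                    # contravariant law: v̂ = R⁻¹ v
w_hat = w @ R                        # covariant law:     ŵ = w R

# The scalar w_i v^i must come out the same in either basis:
print(np.allclose(w @ v, w_hat @ v_hat))   # True
```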
The transformation law for an order p + q tensor with p contravariant indices and q covariant indices is thus given as

$\hat{T}^{i'_1, \dots, i'_p}_{j'_1, \dots, j'_q} = (R^{-1})^{i'_1}_{i_1} \cdots (R^{-1})^{i'_p}_{i_p} \, T^{i_1, \dots, i_p}_{j_1, \dots, j_q} \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q}.$

Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type (p, q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions), p + q in the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type (p, q) is also called a (p, q)-tensor for short.

This discussion motivates the following formal definition: a tensor of type (p, q) is an assignment of a multidimensional array of components to each basis of an n-dimensional vector space such that, under the change of basis by an invertible matrix R, the array obeys the transformation law above. The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.

An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an n-dimensional vector space. If $\mathbf{f} = (\mathbf{f}_1, \dots, \mathbf{f}_n)$ is an ordered basis, and $R = (R^i_j)$ is an invertible n × n matrix, then the action is given by

$\mathbf{f}R = (\mathbf{f}_i R^i_1, \dots, \mathbf{f}_i R^i_n).$

Let F be the set of all ordered bases. Then F is a principal homogeneous space for GL(n). Let W be a vector space and let ρ be a representation of GL(n) on W (that is, a group homomorphism ρ: GL(n) → GL(W)). Then a tensor of type ρ is an equivariant map T: F → W. Equivariance here means that

$T(FR) = \rho(R^{-1}) \, T(F).$

When ρ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups.

As multilinear maps
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed (finite-dimensional) vector space V, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type (p, q) tensor T is defined as a multilinear map

$T: \underbrace{V^* \times \cdots \times V^*}_{p} \times \underbrace{V \times \cdots \times V}_{q} \to \mathbf{R},$

where V∗ is the corresponding dual space of covectors, and T is linear in each of its arguments. The above assumes V is a vector space over the real numbers. More generally, V can be taken over any field F (e.g. the complex numbers), with F replacing the reals as the codomain of the multilinear maps.

By applying a multilinear map T of type (p, q) to a basis {e_j} for V and a canonical cobasis {ε^i} for V∗,

$T^{i_1 \dots i_p}_{j_1 \dots j_q} \equiv T(\varepsilon^{i_1}, \dots, \varepsilon^{i_p}, e_{j_1}, \dots, e_{j_q}),$

a (p + q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because T is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of T thus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map T. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
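Returning to the multidimensional-array picture, the type-(1,1) case of the transformation law above can be checked numerically. A minimal NumPy sketch (illustrative values), verifying that the operator's action is itself contravariant and that fully contracted scalars such as the trace are invariant:

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.normal(size=(3, 3))              # change of basis (assumed invertible)
R_inv = np.linalg.inv(R)

A = rng.normal(size=(3, 3))              # components of a type-(1,1) tensor
v = rng.normal(size=3)

# Type-(1,1) law: one inverse factor (upper index), one direct factor (lower).
A_hat = R_inv @ A @ R
v_hat = R_inv @ v

# The action of the operator on the vector transforms contravariantly:
print(np.allclose(A_hat @ v_hat, R_inv @ (A @ v)))    # True

# The trace, a full contraction, is the same in both bases:
print(np.allclose(np.trace(A_hat), np.trace(A)))      # True
```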
In viewing a tensor as a multilinear map, it is conventional to identify the double dual V∗∗ of the vector space V, i.e., the space of linear functionals on the dual vector space V∗, with the vector space V. There is always a natural linear map from V to its double dual, given by evaluating a linear form in V∗ against a vector in V. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify V with its double dual. Using tensor products For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements of tensor products of vector spaces, which in turn are defined through a universal property as explained here and here. A type tensor is defined in this context as an element of the tensor product of vector spaces, A basis of and basis of naturally induce a basis of the tensor product . The components of a tensor are the coefficients of the tensor with respect to the basis obtained from a basis for and its dual basis , i.e. Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type tensor. Moreover, the universal property of the tensor product gives a one-to-one correspondence between tensors defined in this way and tensors defined as multilinear maps. This 1 to 1 correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual: The last line is using the universal property of the tensor product, that there is a 1 to 1 correspondence between maps from and . Tensor products can be defined in great generality – for example, involving arbitrary modules over a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of any number of copies of a single vector space and its dual, as above. Tensors in infinite dimensions This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions are naturally isomorphic. Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, to vector bundles or coherent sheaves. For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (see topological tensor product). In some applications, it is the tensor product of Hilbert spaces that is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as a symmetric monoidal category that encodes their most important properties, rather than the specific models of those categories. Tensor fields In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called a tensor field, often referred to simply as a tensor. In this context, a coordinate basis is often chosen for the tangent vector space. 
The transformation law may then be expressed in terms of partial derivatives of the coordinate functions, defining a coordinate transformation, History The concepts of later tensor analysis arose from the work of Carl Friedrich Gauss in differential geometry, and the formulation was much influenced by the theory of algebraic forms and invariants developed during the middle of the nineteenth century. The word "tensor" itself was introduced in 1846 by William Rowan Hamilton to describe something different from what is now meant by a tensor. Gibbs introduced dyadics and polyadic algebra, which are also tensors in the modern sense. The contemporary usage was introduced by Woldemar Voigt in 1898. Tensor calculus was developed around 1890 by Gregorio Ricci-Curbastro under the title absolute differential calculus, and originally presented in 1892. It was made accessible to many mathematicians by the publication of Ricci-Curbastro and Tullio Levi-Civita's 1900 classic text Méthodes de calcul différentiel absolu et leurs applications (Methods of absolute differential calculus and their applications). In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense. In the 20th century, the subject came to be known as tensor analysis, and achieved broader acceptance with the introduction of Albert Einstein's theory of general relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometer Marcel Grossmann. Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect: Tensors and tensor fields were also found to be useful in other fields such as continuum mechanics. Some well-known examples of tensors in differential geometry are quadratic forms such as metric tensors, and the Riemann curvature tensor. The exterior algebra of Hermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory of differential forms, as naturally unified with tensor calculus. The work of Élie Cartan made differential forms one of the basic kinds of tensors used in mathematics, and Hassler Whitney popularized the tensor product. From about the 1920s onwards, it was realised that tensors play a basic role in algebraic topology (for example in the Künneth theorem). Correspondingly there are types of tensors at work in many branches of abstract algebra, particularly in homological algebra and representation theory. Multilinear algebra can be developed in greater generality than for scalars coming from a field. For example, scalars can come from a ring. But the theory is then less geometric and computations more technical and less algorithmic. Tensors are generalized within category theory by means of the concept of monoidal category, from the 1960s. Examples An elementary example of a mapping describable as a tensor is the dot product, which maps two vectors to a scalar. 
A more complex example is the Cauchy stress tensor T, which takes a directional unit vector v as input and maps it to the stress vector T(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal to v against the material on the positive side of the plane, thus expressing a relationship between these two vectors, shown in the figure (right).

The cross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. The totally anti-symmetric symbol $\varepsilon_{ijk}$ nevertheless allows a convenient handling of the cross product in equally oriented three-dimensional coordinate systems.

This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type (n, m), where n is the number of contravariant indices, m is the number of covariant indices, and n + m gives the total order of the tensor. For example, a bilinear form is the same thing as a (0, 2)-tensor; an inner product is an example of a (0, 2)-tensor, but not all (0, 2)-tensors are inner products. In the (0, M)-entry of the table, M denotes the dimensionality of the underlying vector space or manifold, because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.

Raising an index on an (n, m)-tensor produces an (n + 1, m − 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table. Contraction of an upper with a lower index of an (n, m)-tensor produces an (n − 1, m − 1)-tensor; this corresponds to moving diagonally up and to the left on the table.

Properties
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows one to define tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing $\varepsilon_{ijk}$, which is not a tensor, because of its sign change under transformations that change the orientation.

Because the components of vectors and their duals transform differently under the change of their dual bases, there is a covariant and/or contravariant transformation law that relates the arrays which represent the tensor with respect to one basis and with respect to the other one. The numbers of, respectively, vectors: n (contravariant indices) and dual vectors: m (covariant indices) in the input and output of a tensor determine the type (or valence) of the tensor, a pair of natural numbers (n, m), which determine the precise form of the transformation law. The order of a tensor is the sum of these two numbers. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array.

For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. This way the tensor representing the scalar product, taking two vectors and resulting in a scalar, has order 2 + 0 = 2, the same as the stress tensor, taking one vector and returning another: 1 + 1 = 2. The $\varepsilon$-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.
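As a concrete instance of the order-2 case, the Cauchy stress tensor described above maps a plane's unit normal to the traction (stress) vector across that plane. A minimal NumPy sketch with purely illustrative component values:

```python
import numpy as np

# Components of a symmetric Cauchy stress tensor in a fixed frame (Pa);
# the numbers are illustrative, not for any particular material.
sigma = np.array([[5.0e6, 1.0e6, 0.0],
                  [1.0e6, 3.0e6, 0.0],
                  [0.0,   0.0,   2.0e6]])

# Unit normal of a cutting plane.
n = np.array([1.0, 1.0, 0.0])
n /= np.linalg.norm(n)

# The stress tensor maps the direction n linearly to the traction vector
# T(n) = sigma · n, the force per unit area across that plane.
t = sigma @ n
print(t)
```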
The collection of tensors on a vector space and its dual forms a tensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order 2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.

Notation
There are several notational systems that are used to describe tensors and perform calculations involving them.

Ricci calculus
Ricci calculus is the modern formalism and notation for tensor indices: indicating inner and outer products, covariance and contravariance, summations of tensor components, symmetry and antisymmetry, and partial and covariant derivatives.

Einstein summation convention
The Einstein summation convention dispenses with writing summation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the index i is used twice in a given term of a tensor expression, it means that the term is to be summed for all i. Several distinct pairs of indices may be summed this way.

Penrose graphical notation
Penrose graphical notation is a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.

Abstract index notation
The abstract index notation is a way to write tensors such that the indices are no longer thought of as numerical, but rather are indeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.

Component-free notation
A component-free treatment of tensors uses notation that emphasises that tensors do not rely on any basis, and is defined in terms of the tensor product of vector spaces.

Operations
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.

Tensor product
The tensor product takes two tensors, S and T, and produces a new tensor, S ⊗ T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,

$(S \otimes T)(v_1, \dots, v_k, w_1, \dots, w_m) = S(v_1, \dots, v_k) \, T(w_1, \dots, w_m),$

which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,

$(S \otimes T)^{i_1 \dots i_l \, k_1 \dots k_n}_{j_1 \dots j_k \, l_1 \dots l_m} = S^{i_1 \dots i_l}_{j_1 \dots j_k} \, T^{k_1 \dots k_n}_{l_1 \dots l_m}.$

If S is of type (l, k) and T is of type (n, m), then the tensor product S ⊗ T has type (l + n, k + m).
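Both the tensor product and the summation convention are directly expressible with numpy.einsum, whose subscript strings mirror index notation. A minimal sketch (illustrative shapes and values):

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.normal(size=(3, 3))        # say, a type-(1,1) tensor S^i_k
T = rng.normal(size=3)             # a type-(1,0) tensor T^j

# Tensor product: components multiply pairwise; the orders add, so
# S ⊗ T has type (2,1) and is stored as a 3-way array.
ST = np.einsum('ik,j->ijk', S, T)
print(ST.shape)                    # (3, 3, 3)

# Einstein summation: a repeated index is summed over, e.g. the
# matrix-vector action A^i_j v^j and the trace A^i_i:
A, v = rng.normal(size=(3, 3)), rng.normal(size=3)
print(np.allclose(np.einsum('ij,j->i', A, v), A @ v))    # True
print(np.allclose(np.einsum('ii->', A), np.trace(A)))    # True
```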
Contraction
Tensor contraction is an operation that reduces a type (n, m) tensor to a type (n − 1, m − 1) tensor, of which the trace is a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a (1, 1)-tensor $T^i_j$ can be contracted to a scalar through $T^i_i$, where the summation is again implied. When the (1, 1)-tensor is interpreted as a linear map, this operation is known as the trace.

The contraction is often used in conjunction with the tensor product to contract an index from each tensor.

The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the space V with the space V∗ by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor from V∗ to a factor from V. For example, a tensor $T \in V \otimes V \otimes V^*$ can be written as a linear combination

$T = v_1 \otimes w_1 \otimes \alpha_1 + v_2 \otimes w_2 \otimes \alpha_2 + \cdots + v_N \otimes w_N \otimes \alpha_N.$

The contraction of T on the first and last slots is then the vector

$\alpha_1(v_1)\, w_1 + \alpha_2(v_2)\, w_2 + \cdots + \alpha_N(v_N)\, w_N.$

In a vector space with an inner product (also known as a metric) g, the term contraction is used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a (0, 2)-tensor $T_{ij}$ can be contracted to a scalar through $T_{ij}\, g^{ij}$ (yet again assuming the summation convention).

Raising or lowering an index
When a vector space is equipped with a nondegenerate bilinear form (or metric tensor as it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with a lower index generally shown in the same position as the contracted upper index. This operation is quite graphically known as lowering an index.

Conversely, the inverse operation can be defined, and is called raising an index. This is equivalent to a similar contraction on the product with a (2, 0)-tensor. This inverse metric tensor has components that are the matrix inverse of those of the metric tensor.
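Raising, lowering, and metric contraction are again one-liners in index notation. A minimal NumPy sketch (the metric is an arbitrary symmetric perturbation of the identity, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# A nondegenerate, symmetric metric g_ij and its inverse g^ij.
B = rng.normal(size=(3, 3))
g = np.eye(3) + 0.1 * (B + B.T)
g_inv = np.linalg.inv(g)

v = rng.normal(size=3)                       # contravariant components v^i

# Lowering an index: v_i = g_ij v^j ; raising it back: v^i = g^ij v_j.
v_low = np.einsum('ij,j->i', g, v)
v_back = np.einsum('ij,j->i', g_inv, v_low)
print(np.allclose(v_back, v))                # True: the operations are inverse

# Metric contraction of a (0,2)-tensor to a scalar: T_ij g^ij.
T = rng.normal(size=(3, 3))
print(np.einsum('ij,ij->', T, g_inv))
```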
Applications

Continuum mechanics
Important examples are provided by continuum mechanics. The stresses inside a solid body or fluid are described by a tensor field. The stress tensor and strain tensor are both second-order tensor fields, and are related in a general linear elastic material by a fourth-order elasticity tensor field. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.

If a particular surface element inside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor of type (2, 0), in linear elasticity, or more precisely by a tensor field of type (2, 0), since the stresses may vary from point to point.

Other examples from physics
Common applications include:
Electromagnetic tensor (or Faraday tensor) in electromagnetism
Finite deformation tensors for describing deformations and strain tensor for strain in continuum mechanics
Permittivity and electric susceptibility are tensors in anisotropic media
Four-tensors in general relativity (e.g. stress–energy tensor), used to represent momentum fluxes
Spherical tensor operators are the eigenfunctions of the quantum angular momentum operator in spherical coordinates
Diffusion tensors, the basis of diffusion tensor imaging, represent rates of diffusion in biological environments
Quantum mechanics and quantum computing utilize tensor products for combination of quantum states

Computer vision and optics
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field of computer vision, with the trifocal tensor generalizing the fundamental matrix.

The field of nonlinear optics studies the changes to material polarization density under extreme electric fields. The polarization waves generated are related to the generating electric fields through the nonlinear susceptibility tensor. If the polarization P is not linearly proportional to the electric field E, the medium is termed nonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is given by a Taylor series in E whose coefficients are the nonlinear susceptibilities:

$\frac{P_i}{\varepsilon_0} = \sum_j \chi^{(1)}_{ij} E_j + \sum_{jk} \chi^{(2)}_{ijk} E_j E_k + \sum_{jk\ell} \chi^{(3)}_{ijk\ell} E_j E_k E_\ell + \cdots$

Here $\chi^{(1)}$ is the linear susceptibility, $\chi^{(2)}$ gives the Pockels effect and second harmonic generation, and $\chi^{(3)}$ gives the Kerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
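The expansion above is a natural fit for einsum, with one susceptibility tensor of each order contracted against repeated copies of the field. A minimal sketch; the susceptibility values and magnitudes are entirely illustrative (real materials have symmetry-constrained coefficients), and only the vacuum permittivity is a physical constant:

```python
import numpy as np

rng = np.random.default_rng(4)
E = np.array([1.0e6, 0.0, 0.0])            # electric field (V/m), illustrative

# Illustrative susceptibility tensors of orders 2, 3, and 4.
chi1 = 1e-1 * rng.normal(size=(3, 3))
chi2 = 1e-13 * rng.normal(size=(3, 3, 3))
chi3 = 1e-24 * rng.normal(size=(3, 3, 3, 3))

eps0 = 8.8541878128e-12                    # vacuum permittivity (F/m)

# P_i = eps0 * (chi1_ij E_j + chi2_ijk E_j E_k + chi3_ijkl E_j E_k E_l + ...)
P = eps0 * (np.einsum('ij,j->i', chi1, E)
            + np.einsum('ijk,j,k->i', chi2, E, E)
            + np.einsum('ijkl,j,k,l->i', chi3, E, E, E))
print(P)
```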
Machine learning
The properties of tensors, especially tensor decomposition, have enabled their use in machine learning to embed higher dimensional data in artificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.

Generalizations

Tensor products of vector spaces
The vector spaces of a tensor product need not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product space V ⊗ W is a second-order "tensor" in this more general sense, and an order-d tensor may likewise be defined as an element of a tensor product of d different vector spaces. A type (n, m) tensor, in the sense defined previously, is also a tensor of order n + m in this more general sense. The concept of tensor product can be extended to arbitrary modules over a ring.

Tensors in infinite dimensions
The notion of a tensor can be generalized in a variety of ways to infinite dimensions. One, for instance, is via the tensor product of Hilbert spaces. Another way of generalizing the idea of tensor, common in nonlinear analysis, is via the multilinear maps definition where instead of using finite-dimensional vector spaces and their algebraic duals, one uses infinite-dimensional Banach spaces and their continuous dual. Tensors thus live naturally on Banach manifolds and Fréchet manifolds.

Tensor densities
Suppose that a homogeneous medium fills R³, so that the density of the medium is described by a single scalar value ρ in kg⋅m⁻³. The mass, in kg, of a region Ω is obtained by multiplying ρ by the volume of the region Ω, or equivalently integrating the constant ρ over the region:

$m = \int_\Omega \rho \, dx \, dy \, dz,$

where the Cartesian coordinates x, y, z are measured in meters. If the units of length are changed into centimeters, then the numerical values of the coordinate functions must be rescaled by a factor of 100:

$x' = 100x, \quad y' = 100y, \quad z' = 100z.$

The numerical value of the density ρ must then also transform by $100^{-3}$ to compensate, so that the numerical value of the mass in kg is still given by the integral of $\rho' \, dx' \, dy' \, dz'$. Thus $\rho' = 100^{-3}\rho$ (in units of kg⋅cm⁻³).

More generally, if the Cartesian coordinates x, y, z undergo a linear transformation, then the numerical value of the density ρ must change by a factor of the reciprocal of the absolute value of the determinant of the coordinate transformation, so that the integral remains invariant, by the change of variables formula for integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called a scalar density. To model a non-constant density, ρ is a function of the variables x, y, z (a scalar field), and under a curvilinear change of coordinates, it transforms by the reciprocal of the Jacobian of the coordinate change. For more on the intrinsic meaning, see Density on a manifold.

A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition; for a type (1, 1) density of weight w, schematically,

$\hat{T}^{i'}_{j'} = \left|\det R\right|^{-w} (R^{-1})^{i'}_{i} \, T^{i}_{j} \, R^{j}_{j'}.$

Here w is called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor. An example of a tensor density is the current density of electromagnetism. Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from the rational representations of the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are still semisimple representations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation, consisting of pairs (x, y) with the transformation law

$(x, y) \mapsto (x + y \log \left|\det R\right|, \; y).$

Geometric objects
The transformation law for a tensor behaves as a functor on the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such as local diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes. Examples of objects obeying more general kinds of transformation laws are jets and, more generally still, natural bundles.

Spinors
When changing from one orthonormal basis (called a frame) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is not simply connected (see orientation entanglement and plate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other.
It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1. A spinor is an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant. Spinors are elements of the spin representation of the rotation group, while tensors are elements of its tensor representations. Other classical groups have tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well. See also Array data type, for tensor storage and manipulation Foundational Cartesian tensor Fibre bundle Glossary of tensor theory Multilinear projection One-form Tensor product of modules Applications Application of tensor theory in engineering Continuum mechanics Covariant derivative Curvature Diffusion tensor MRI Einstein field equations Fluid mechanics Gravity Multilinear subspace learning Riemannian geometry Structure tensor Tensor Contraction Engine Tensor decomposition Tensor derivative Tensor software Explanatory notes References Specific General Chapter six gives a "from scratch" introduction to covariant tensors. External links A discussion of the various approaches to teaching tensors, and recommendations of textbooks Concepts in physics Continuum mechanics Tensors
Tensor
[ "Physics", "Engineering" ]
7,234
[ "Tensors", "Classical mechanics", "nan", "Continuum mechanics" ]
30,040
https://en.wikipedia.org/wiki/Titanium
Titanium is a chemical element; it has symbol Ti and atomic number 22. Found in nature only as an oxide, it can be reduced to produce a lustrous transition metal with a silver color, low density, and high strength, resistant to corrosion in sea water, aqua regia, and chlorine.

Titanium was discovered in Cornwall, Great Britain, by William Gregor in 1791 and was named by Martin Heinrich Klaproth after the Titans of Greek mythology. The element occurs within a number of minerals, principally rutile and ilmenite, which are widely distributed in the Earth's crust and lithosphere; it is found in almost all living things, as well as bodies of water, rocks, and soils. The metal is extracted from its principal mineral ores by the Kroll and Hunter processes. The most common compound, titanium dioxide, is a popular photocatalyst and is used in the manufacture of white pigments. Other compounds include titanium tetrachloride (TiCl4), a component of smoke screens and catalysts; and titanium trichloride (TiCl3), which is used as a catalyst in the production of polypropylene.

Titanium can be alloyed with iron, aluminium, vanadium, and molybdenum, among other elements. The resulting titanium alloys are strong, lightweight, and versatile, with applications including aerospace (jet engines, missiles, and spacecraft), military, industrial processes (chemicals and petrochemicals, desalination plants, pulp, and paper), automotive, agriculture (farming), sporting goods, jewelry, and consumer electronics. Titanium is also considered one of the most biocompatible metals, leading to a range of medical applications including prostheses, orthopedic implants, dental implants, and surgical instruments.

The two most useful properties of the metal are corrosion resistance and strength-to-density ratio, the highest of any metallic element. In its unalloyed condition, titanium is as strong as some steels, but less dense. There are two allotropic forms and five naturally occurring isotopes of this element, 46Ti through 50Ti, with 48Ti being the most abundant (73.8%).

Characteristics

Physical properties
As a metal, titanium is recognized for its high strength-to-weight ratio. It is a strong metal with low density that is quite ductile (especially in an oxygen-free environment), lustrous, and metallic-white in color. Due to its relatively high melting point (1,668 °C or 3,034 °F), it has sometimes been described as a refractory metal, but this is not the case. It is paramagnetic and has fairly low electrical and thermal conductivity compared to other metals. Titanium is superconducting when cooled below its critical temperature of 0.49 K.

Commercially pure (99.2% pure) grades of titanium have an ultimate tensile strength of about 434 MPa (63,000 psi), equal to that of common, low-grade steel alloys, but are less dense. Titanium is 60% denser than aluminium, but more than twice as strong as the most commonly used 6061-T6 aluminium alloy. Certain titanium alloys (e.g., Beta C) achieve tensile strengths of over 1,400 MPa (200,000 psi). However, titanium loses strength when heated above 430 °C (806 °F).

Titanium is not as hard as some grades of heat-treated steel; it is non-magnetic and a poor conductor of heat and electricity. Machining requires precautions, because the material can gall unless sharp tools and proper cooling methods are used. Like steel structures, those made from titanium have a fatigue limit that guarantees longevity in some applications.
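The strength-to-density comparison can be made concrete with the figures quoted above. A minimal Python sketch; the densities are assumed handbook values (not taken from this article), and the steel is taken at the same tensile strength that the article states for commercially pure titanium:

```python
# Specific strength = ultimate tensile strength / density.
uts = {"commercially pure Ti": 434e6, "low-grade steel": 434e6}   # Pa
rho = {"commercially pure Ti": 4506,  "low-grade steel": 7850}    # kg/m^3 (assumed)

for name in uts:
    specific = uts[name] / rho[name]          # N·m/kg
    print(f"{name:22s} {specific / 1000:6.1f} kN·m/kg")

# Equal strength at roughly 57% of the density gives titanium
# the markedly higher strength-to-density ratio.
```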
The metal is a dimorphic allotrope of a hexagonal close-packed α form that changes into a body-centered cubic (lattice) β form at 882 °C (1,620 °F). The specific heat of the α form increases dramatically as it is heated to this transition temperature but then falls and remains fairly constant for the β form regardless of temperature.

Chemical properties
Like aluminium and magnesium, the surface of titanium metal and its alloys oxidizes immediately upon exposure to air to form a thin non-porous passivation layer that protects the bulk metal from further oxidation or corrosion. When it first forms, this protective layer is only 1–2 nm thick but it continues to grow slowly, reaching a thickness of 25 nm in four years. This layer gives titanium excellent resistance to corrosion against oxidizing acids, but it will dissolve in dilute hydrofluoric acid, hot hydrochloric acid, and hot sulfuric acid. Titanium is capable of withstanding attack by dilute sulfuric and hydrochloric acids at room temperature, chloride solutions, and most organic acids. However, titanium is corroded by concentrated acids.

Titanium is a very reactive metal that burns in normal air at temperatures lower than its melting point. Melting is possible only in an inert atmosphere or vacuum. At 550 °C (1,022 °F), it combines with chlorine. It also reacts with the other halogens and absorbs hydrogen. Titanium readily reacts with oxygen at 1,200 °C (2,190 °F) in air, and at 610 °C (1,130 °F) in pure oxygen, forming titanium dioxide. Titanium is one of the few elements that burns in pure nitrogen gas, reacting at 800 °C (1,470 °F) to form titanium nitride, which causes embrittlement. Because of its high reactivity with oxygen, nitrogen, and many other gases, titanium that is evaporated from filaments is the basis for titanium sublimation pumps, in which titanium serves as a scavenger for these gases by chemically binding to them. Such pumps inexpensively produce extremely low pressures in ultra-high vacuum systems.

Occurrence
Titanium is the ninth-most abundant element in Earth's crust (0.63% by mass) and the seventh-most abundant metal. It is present as oxides in most igneous rocks, in sediments derived from them, in living things, and in natural bodies of water. Of the 801 types of igneous rocks analyzed by the United States Geological Survey, 784 contained titanium. Its proportion in soils is approximately 0.5–1.5%. Common titanium-containing minerals are anatase, brookite, ilmenite, perovskite, rutile, and titanite (sphene). Akaogiite is an extremely rare mineral consisting of titanium dioxide. Of these minerals, only rutile and ilmenite have economic importance, yet even they are difficult to find in high concentrations. About 6.0 and 0.7 million tonnes of those minerals were mined in 2011, respectively. Significant titanium-bearing ilmenite deposits exist in Australia, Canada, China, India, Mozambique, New Zealand, Norway, Sierra Leone, South Africa, and Ukraine. About 210,000 tonnes of titanium metal sponge were produced in 2020, mostly in China (110,000 t), Japan (50,000 t), Russia (33,000 t) and Kazakhstan (15,000 t). Total reserves of anatase, ilmenite, and rutile are estimated to exceed 2 billion tonnes.

The concentration of titanium in the ocean is about 4 picomolar. At 100 °C, the concentration of titanium in water is estimated to be less than 10⁻⁷ M at pH 7. The identity of titanium species in aqueous solution remains unknown because of its low solubility and the lack of sensitive spectroscopic methods, although only the 4+ oxidation state is stable in air.
No evidence exists for a biological role, although rare organisms are known to accumulate high concentrations of titanium. Titanium is contained in meteorites, and it has been detected in the Sun and in M-type stars (the coolest type), which have a surface temperature of about 3,200 °C (5,790 °F). Rocks brought back from the Moon during the Apollo 17 mission are composed of 12.1% TiO2. Native titanium (pure metallic) is very rare.

Isotopes
Naturally occurring titanium is composed of five stable isotopes: 46Ti, 47Ti, 48Ti, 49Ti, and 50Ti, with 48Ti being the most abundant (73.8% natural abundance). At least 21 radioisotopes have been characterized, the most stable of which are 44Ti with a half-life of 63 years; 45Ti, 184.8 minutes; 51Ti, 5.76 minutes; and 52Ti, 1.7 minutes. All other radioactive isotopes have half-lives less than 33 seconds, with the majority less than half a second. The known isotopes of titanium range in mass number from 39 (39Ti) to 64 (64Ti). The primary decay mode for isotopes lighter than 46Ti is positron emission (with the exception of 44Ti, which undergoes electron capture), leading to isotopes of scandium, and the primary mode for isotopes heavier than 50Ti is beta emission, leading to isotopes of vanadium. Titanium becomes radioactive upon bombardment with deuterons, emitting mainly positrons and hard gamma rays.

Compounds
The +4 oxidation state dominates titanium chemistry, but compounds in the +3 oxidation state are also numerous. Commonly, titanium adopts an octahedral coordination geometry in its complexes, but tetrahedral TiCl4 is a notable exception. Because of its high oxidation state, titanium(IV) compounds exhibit a high degree of covalent bonding.

Oxides, sulfides, and alkoxides
The most important oxide is TiO2, which exists in three important polymorphs: anatase, brookite, and rutile. All three are white diamagnetic solids, although mineral samples can appear dark (see rutile). They adopt polymeric structures in which Ti is surrounded by six oxide ligands that link to other Ti centers. The term titanates usually refers to titanium(IV) compounds, as represented by barium titanate (BaTiO3). With a perovskite structure, this material exhibits piezoelectric properties and is used as a transducer in the interconversion of sound and electricity. Many minerals are titanates, such as ilmenite (FeTiO3). Star sapphires and rubies get their asterism (star-forming shine) from the presence of titanium dioxide impurities.

A variety of reduced oxides (suboxides) of titanium are known, mainly reduced stoichiometries of titanium dioxide obtained by atmospheric plasma spraying. Ti3O5, described as a Ti(IV)-Ti(III) species, is a purple semiconductor produced by reduction of TiO2 with hydrogen at high temperatures, and is used industrially when surfaces need to be vapor-coated with titanium dioxide: it evaporates as pure TiO, whereas TiO2 evaporates as a mixture of oxides and deposits coatings with variable refractive index. Also known are Ti2O3, with the corundum structure, and TiO, with the rock salt structure, although often nonstoichiometric.

The alkoxides of titanium(IV), prepared by treating TiCl4 with alcohols, are colorless compounds that convert to the dioxide on reaction with water. They are industrially useful for depositing solid TiO2 via the sol-gel process. Titanium isopropoxide is used in the synthesis of chiral organic compounds via the Sharpless epoxidation.

Titanium forms a variety of sulfides, but only TiS2 has attracted significant interest.
It adopts a layered structure and was used as a cathode in the development of lithium batteries. Because Ti(IV) is a "hard cation", the sulfides of titanium are unstable and tend to hydrolyze to the oxide with release of hydrogen sulfide. Nitrides and carbides Titanium nitride (TiN) is a refractory solid exhibiting extreme hardness, thermal/electrical conductivity, and a high melting point. TiN has a hardness equivalent to sapphire and carborundum (9.0 on the Mohs scale), and is often used to coat cutting tools, such as drill bits. It is also used as a gold-colored decorative finish and as a barrier layer in semiconductor fabrication. Titanium carbide (TiC), which is also very hard, is found in cutting tools and coatings. Halides Titanium tetrachloride (titanium(IV) chloride, TiCl4) is a colorless volatile liquid (commercial samples are yellowish) that, in air, hydrolyzes with spectacular emission of white clouds. Via the Kroll process, TiCl4 is used in the conversion of titanium ores to titanium metal. Titanium tetrachloride is also used to make titanium dioxide, e.g., for use in white paint. It is widely used in organic chemistry as a Lewis acid, for example in the Mukaiyama aldol condensation. In the van Arkel–de Boer process, titanium tetraiodide (TiI4) is generated in the production of high-purity titanium metal. Titanium(III) and titanium(II) also form stable chlorides. A notable example is titanium(III) chloride (TiCl3), which is used as a catalyst for the production of polyolefins (see Ziegler–Natta catalyst) and as a reducing agent in organic chemistry. Organometallic complexes Owing to the important role of titanium compounds as polymerization catalysts, compounds with Ti-C bonds have been intensively studied. The most common organotitanium complex is titanocene dichloride ((C5H5)2TiCl2). Related compounds include Tebbe's reagent and Petasis reagent. Titanium forms carbonyl complexes, e.g. (C5H5)2Ti(CO)2. Anticancer therapy studies Following the success of platinum-based chemotherapy, titanium(IV) complexes were among the first non-platinum compounds to be tested for cancer treatment. The advantage of titanium compounds lies in their high efficacy and low toxicity in vivo. In biological environments, hydrolysis leads to the safe and inert titanium dioxide. Despite these advantages, the first candidate compounds failed clinical trials due to insufficient efficacy-to-toxicity ratios and formulation complications. Further development resulted in the creation of potentially effective, selective, and stable titanium-based drugs. History Titanium was discovered in 1791 by the clergyman and geologist William Gregor as an inclusion of a mineral in Cornwall, Great Britain. Gregor recognized the presence of a new element in ilmenite when he found black sand by a stream and noticed the sand was attracted by a magnet. Analyzing the sand, he determined the presence of two metal oxides: iron oxide (explaining the attraction to the magnet) and 45.25% of a white metallic oxide he could not identify. Realizing that the unidentified oxide contained a metal that did not match any known element, in 1791 Gregor reported his findings in both German and French science journals: Crell's Annalen and Observations et Mémoires sur la Physique. He named this oxide manaccanite. Around the same time, Franz-Joseph Müller von Reichenstein produced a similar substance, but could not identify it.
The oxide was independently rediscovered in 1795 by Prussian chemist Martin Heinrich Klaproth in rutile from Boinik (the German name of Bajmócska), a village in Hungary (now Bojničky in Slovakia). Klaproth found that it contained a new element and named it for the Titans of Greek mythology. After hearing about Gregor's earlier discovery, he obtained a sample of manaccanite and confirmed that it contained titanium. The currently known processes for extracting titanium from its various ores are laborious and costly; it is not possible to reduce the ore by heating with carbon (as in iron smelting) because titanium combines with the carbon to produce titanium carbide. An extraction of 95% pure titanium was achieved by Lars Fredrik Nilson and Otto Pettersson. To achieve this, they chlorinated titanium oxide in a carbon monoxide atmosphere with chlorine gas before reducing it to titanium metal by the use of sodium. Pure metallic titanium (99.9%) was first prepared in 1910 by Matthew A. Hunter at Rensselaer Polytechnic Institute by heating TiCl4 with sodium at 700–800 °C under great pressure in a batch process known as the Hunter process. Titanium metal was not used outside the laboratory until 1932, when William Justin Kroll produced it by reducing titanium tetrachloride (TiCl4) with calcium. Eight years later he refined this process with magnesium and with sodium in what became known as the Kroll process. Although research continues to seek cheaper and more efficient routes, such as the FFC Cambridge process, the Kroll process is still predominantly used for commercial production. Titanium of very high purity was made in small quantities when Anton Eduard van Arkel and Jan Hendrik de Boer discovered the iodide process in 1925, by reacting titanium with iodine and decomposing the formed vapors over a hot filament to pure metal. In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications (Alfa class and Mike class) as part of programs related to the Cold War. Starting in the early 1950s, titanium came into extensive use in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and the Lockheed A-12 and SR-71. Throughout the Cold War period, titanium was considered a strategic material by the U.S. government, and a large stockpile of titanium sponge (a porous form of the pure metal) was maintained by the Defense National Stockpile Center, until the stockpile was dispersed in the 2000s. As of 2021, the four leading producers of titanium sponge were China (52%), Japan (24%), Russia (16%) and Kazakhstan (7%). Production Mineral beneficiation processes The Becher process is an industrial process used to produce synthetic rutile, a form of titanium dioxide, from the ore ilmenite. The chloride process, in which titanium-bearing ore is carbochlorinated to crude TiCl4, which is then purified by distillation and oxidized at high temperature to pure TiO2, with the chlorine recycled. The sulfate process: "relies on sulfuric acid (H2SO4) to leach titanium from ilmenite ore (FeTiO3). The resulting reaction produces titanyl sulfate (TiOSO4). A secondary hydrolysis stage is used to break the titanyl sulfate into hydrated TiO2 and H2SO4. Finally, heat is used to remove the water and create the end product - pure TiO2." Purification processes Hunter process The Hunter process was the first industrial process to produce pure metallic titanium. It was invented in 1910 by Matthew A. Hunter, a chemist born in New Zealand who worked in the United States. The process involves reducing titanium tetrachloride (TiCl4) with sodium (Na) in a batch reactor with an inert atmosphere at a temperature of 1,000 °C.
Dilute hydrochloric acid is then used to leach the salt from the product. TiCl4(g) + 4 Na(l) → 4 NaCl(l) + Ti(s) Kroll process The processing of titanium metal occurs in four major steps: reduction of titanium ore into "sponge", a porous form; melting of sponge, or sponge plus a master alloy, to form an ingot; primary fabrication, where an ingot is converted into general mill products such as billet, bar, plate, sheet, strip, and tube; and secondary fabrication of finished shapes from mill products. Because it cannot be readily produced by reduction of titanium dioxide, titanium metal is obtained by reduction of titanium tetrachloride (TiCl4) with magnesium metal in the Kroll process. The complexity of this batch production in the Kroll process explains the relatively high market value of titanium, despite the Kroll process being less expensive than the Hunter process. To produce the TiCl4 required by the Kroll process, the dioxide is subjected to carbothermic reduction in the presence of chlorine. In this process, the chlorine gas is passed over a red-hot mixture of rutile or ilmenite in the presence of carbon. After extensive purification by fractional distillation, the TiCl4 is reduced with molten magnesium in an argon atmosphere. 2 FeTiO3 + 7 Cl2 + 6 C → 2 FeCl3 + 2 TiCl4 + 6 CO (at 900 °C) TiCl4 + 2 Mg → Ti + 2 MgCl2 (at 1,100 °C) van Arkel–de Boer process The van Arkel–de Boer process was the first semi-industrial process for pure titanium. It involves thermal decomposition of titanium tetraiodide. Armstrong process Titanium powder is manufactured using a flow production process known as the Armstrong process, which is similar to the batch production Hunter process. A stream of titanium tetrachloride gas is added to a stream of molten sodium; the products (sodium chloride salt and titanium particles) are filtered from the excess sodium. Titanium is then separated from the salt by water washing. Both sodium and chlorine are recycled to produce and process more titanium tetrachloride. Pilot plants Methods for electrolytic production of Ti metal using molten salt electrolytes have been researched and tested at laboratory and small pilot plant scales. The lead author of a review published in 2017 considered his own process "ready for scaling up." A 2023 review "discusses the electrochemical principles involved in the recovery of metals from aqueous solutions and fused salt electrolytes", with particular attention paid to titanium. While some metals such as nickel and copper can be refined by electrowinning at room temperature, titanium must be in the molten state, and "there is a strong chance of attack of the refractory lining by molten titanium." Zhang et al. concluded in their 2017 Perspective on Thermochemical and Electrochemical Processes for Titanium Metal Production that "Even though there are strong interests in the industry for finding a better method to produce Ti metal, and a large number of new concepts and improvements have been investigated at the laboratory or even at pilot plant scales, there is no new process to date that can replace the Kroll process commercially." The hydrogen-assisted magnesiothermic reduction (HAMR) process uses titanium dihydride. Fabrication All welding of titanium must be done in an inert atmosphere of argon or helium to shield it from contamination with atmospheric gases (oxygen, nitrogen, and hydrogen). Contamination causes a variety of conditions, such as embrittlement, which reduce the integrity of the assembly welds and lead to joint failure.
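As a quick check on the Kroll chemistry shown above, the following sketch computes the ideal feed masses per kilogram of titanium produced. It assumes perfect stoichiometry (no losses, no excess reagent, no recycle streams); the molar masses are standard values, not figures from this article.

```python
# A rough mass balance for the Kroll reduction above: TiCl4 + 2 Mg -> Ti + 2 MgCl2.
M = {"Ti": 47.87, "Cl": 35.45, "Mg": 24.31}   # g/mol, standard molar masses

m_ticl4 = M["Ti"] + 4 * M["Cl"]               # molar mass of TiCl4
per_kg_ti_ticl4 = m_ticl4 / M["Ti"]           # kg of TiCl4 needed per kg of Ti
per_kg_ti_mg = 2 * M["Mg"] / M["Ti"]          # kg of Mg consumed per kg of Ti

print(f"TiCl4 feed per kg Ti:   {per_kg_ti_ticl4:.2f} kg")   # ~3.96 kg
print(f"Mg consumed per kg Ti:  {per_kg_ti_mg:.2f} kg")      # ~1.02 kg
```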
Titanium is very difficult to solder directly, and hence a solderable metal or alloy such as steel is coated on titanium prior to soldering. Titanium metal can be machined with the same equipment and the same processes as stainless steel. Titanium alloys Common titanium alloys are made by reduction. For example, cuprotitanium (rutile with copper added), ferrocarbon titanium (ilmenite reduced with coke in an electric furnace), and manganotitanium (rutile with manganese or manganese oxides) are reduced. About fifty grades of titanium alloys have been designed and are currently used, although only a couple of dozen are readily available commercially. ASTM International recognizes 31 grades of titanium metal and alloys, of which grades one through four are commercially pure (unalloyed). Those four vary in tensile strength as a function of oxygen content, with grade 1 being the most ductile (lowest tensile strength, with an oxygen content of 0.18%) and grade 4 the least ductile (highest tensile strength, with an oxygen content of 0.40%). The remaining grades are alloys, each designed for specific properties of ductility, strength, hardness, electrical resistivity, creep resistance, specific corrosion resistance, and combinations thereof. In addition to the ASTM specifications, titanium alloys are also produced to meet aerospace and military specifications (SAE-AMS, MIL-T), ISO standards, and country-specific specifications, as well as proprietary end-user specifications for aerospace, military, medical, and industrial applications. Forming and forging Commercially pure flat product (sheet, plate) can be formed readily, but processing must take into account the tendency of the metal to spring back. This is especially true of certain high-strength alloys. Exposure to the oxygen in air at the elevated temperatures used in forging results in formation of a brittle oxygen-rich metallic surface layer called "alpha case" that worsens the fatigue properties, so it must be removed by milling, etching, or electrochemical treatment. The working of titanium is very complicated, and may include friction welding, cryo-forging, and vacuum arc remelting. Applications Titanium is used in steel as an alloying element (ferro-titanium) to reduce grain size and as a deoxidizer, and in stainless steel to reduce carbon content. Titanium is often alloyed with aluminium (to refine grain size), vanadium, copper (to harden), iron, manganese, molybdenum, and other metals. Titanium mill products (sheet, plate, bar, wire, forgings, castings) find application in industrial, aerospace, recreational, and emerging markets. Powdered titanium is used in pyrotechnics as a source of bright-burning particles. Pigments, additives, and coatings About 95% of all titanium ore is destined for refinement into titanium dioxide (TiO2), an intensely white permanent pigment used in paints, paper, toothpaste, and plastics. It is also used in cement, in gemstones, and as an optical opacifier in paper. TiO2 pigment is chemically inert, resists fading in sunlight, and is very opaque: it imparts a pure and brilliant white color to the brown or grey chemicals that form the majority of household plastics. In nature, this compound is found in the minerals anatase, brookite, and rutile. Paint made with titanium dioxide does well in severe temperatures and marine environments. Pure titanium dioxide has a very high index of refraction and an optical dispersion higher than diamond. Titanium dioxide is used in sunscreens because it reflects and absorbs UV light.
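The link between refractive index and pigment performance mentioned just above can be illustrated with the normal-incidence Fresnel formula. The refractive indices used below are typical literature values (rutile about 2.61, diamond about 2.42), not figures from this article; real pigment opacity also depends on particle size and scattering, which this sketch ignores.

```python
# Normal-incidence Fresnel reflectance R = ((n - 1) / (n + 1))^2, a rough
# proxy for why a high refractive index makes TiO2 such a strong white pigment.
def reflectance(n: float) -> float:
    """Fraction of light reflected at an air interface at normal incidence."""
    return ((n - 1) / (n + 1)) ** 2

for name, n in [("rutile TiO2", 2.61), ("diamond", 2.42), ("window glass", 1.52)]:
    print(f"{name:12s} n = {n:.2f} -> R = {reflectance(n):.1%}")
# rutile ~20%, diamond ~17%, glass ~4%: higher index, stronger scattering.
```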
Aerospace and marine Because titanium alloys have a high tensile-strength-to-density ratio, high corrosion resistance, fatigue resistance, high crack resistance, and the ability to withstand moderately high temperatures without creeping, they are used in aircraft, armor plating, naval ships, spacecraft, and missiles. For these applications, titanium is alloyed with aluminium, zirconium, nickel, vanadium, and other elements to manufacture a variety of components including critical structural parts, landing gear, firewalls, exhaust ducts (helicopters), and hydraulic systems. In fact, about two thirds of all titanium metal produced is used in aircraft engines and frames. The titanium 6AL-4V alloy accounts for almost 50% of all alloys used in aircraft applications. The Lockheed A-12 and the SR-71 "Blackbird" were two of the first aircraft frames where titanium was used, paving the way for much wider use in modern military and commercial aircraft. A large amount of titanium mill products is used in the production of many aircraft (the following values are amounts of raw mill product used; only a fraction of this ends up in the finished aircraft): 116 metric tons are used in the Boeing 787, 77 in the Airbus A380, 59 in the Boeing 777, 45 in the Boeing 747, 32 in the Airbus A340, 18 in the Boeing 737, 18 in the Airbus A330, and 12 in the Airbus A320. In aero engine applications, titanium is used for rotors, compressor blades, hydraulic system components, and nacelles. An early use in jet engines was for the Orenda Iroquois in the 1950s. Because titanium is resistant to corrosion by sea water, it is used to make propeller shafts, rigging, heat exchangers in desalination plants, heater-chillers for salt water aquariums, fishing line and leader, and divers' knives. Titanium is used in the housings and components of ocean-deployed surveillance and monitoring devices for science and the military. The former Soviet Union developed techniques for making submarines with hulls of titanium alloys, forging titanium in huge vacuum tubes. Industrial Welded titanium pipe and process equipment (heat exchangers, tanks, process vessels, valves) are used in the chemical and petrochemical industries, primarily for corrosion resistance. Specific alloys are used in oil and gas downhole applications and in nickel hydrometallurgy for their high strength (e.g., titanium beta C alloy), corrosion resistance, or both. The pulp and paper industry uses titanium in process equipment exposed to corrosive media, such as sodium hypochlorite or wet chlorine gas (in the bleachery). Other applications include ultrasonic welding, wave soldering, and sputtering targets. Titanium tetrachloride (TiCl4), a colorless liquid, is important as an intermediate in the process of making TiO2 and is also used to produce the Ziegler–Natta catalyst. Titanium tetrachloride is also used to iridize glass and, because it fumes strongly in moist air, it is used to make smoke screens. Consumer and architectural Titanium metal is used in automotive applications, particularly in automobile and motorcycle racing, where low weight and high strength and rigidity are critical. The metal is generally too expensive for the general consumer market, though some late-model Corvettes have been manufactured with titanium exhausts, and a Corvette Z06's LT4 supercharged engine uses lightweight, solid titanium intake valves for greater strength and resistance to heat.
Titanium is used in many sporting goods: tennis rackets, golf clubs, lacrosse stick shafts; cricket, hockey, lacrosse, and football helmet grills; and bicycle frames and components. Although not a mainstream material for bicycle production, titanium bikes have been used by racing teams and adventure cyclists. Titanium alloys are used in spectacle frames that are rather expensive but highly durable, long-lasting, lightweight, and free of skin allergens. Titanium is a common material for backpacking cookware and eating utensils. Though more expensive than traditional steel or aluminium alternatives, titanium products can be significantly lighter without compromising strength. Titanium horseshoes are preferred to steel by farriers because they are lighter and more durable. Titanium has occasionally been used in architecture. The Monument to Yuri Gagarin, the first man to travel in space, as well as the Monument to the Conquerors of Space on top of the Cosmonaut Museum in Moscow, are made of titanium for the metal's attractive color and association with rocketry. The Guggenheim Museum Bilbao and the Cerritos Millennium Library were the first buildings in Europe and North America, respectively, to be sheathed in titanium panels. Titanium sheathing was used in the Frederic C. Hamilton Building in Denver, Colorado. Because of titanium's superior strength and light weight relative to other metals (steel, stainless steel, and aluminium), and because of recent advances in metalworking techniques, its use has become more widespread in the manufacture of firearms. Primary uses include pistol frames and revolver cylinders. For the same reasons, it is used in the body of some laptop computers (for example, in Apple's PowerBook G4). In 2023, Apple launched the iPhone 15 Pro, which uses a titanium enclosure. Some upmarket lightweight and corrosion-resistant tools, such as shovels, knife handles and flashlights, are made of titanium or titanium alloys. Jewelry Because of its durability, titanium has become more popular for designer jewelry (particularly, titanium rings). Its inertness makes it a good choice for those with allergies or those who will be wearing the jewelry in environments such as swimming pools. Titanium is also alloyed with gold to produce an alloy that can be marketed as 24-karat gold, because the 1% of alloyed Ti is insufficient to require a lesser mark. The resulting alloy is roughly the hardness of 14-karat gold and is more durable than pure 24-karat gold. Titanium's durability, light weight, and dent and corrosion resistance make it useful for watch cases. Some artists work with titanium to produce sculptures, decorative objects and furniture. Titanium may be anodized to vary the thickness of the surface oxide layer, causing optical interference fringes and a variety of bright colors. With this coloration and chemical inertness, titanium is a popular metal for body piercing. Titanium has a minor use in dedicated non-circulating coins and medals. In 1999, Gibraltar released the world's first titanium coin for the millennium celebration. The Gold Coast Titans, an Australian rugby league team, award a medal of pure titanium to their player of the year. Medical Because titanium is biocompatible (non-toxic and not rejected by the body), it has many medical uses, including surgical implements and implants, such as hip balls and sockets (joint replacement) and dental implants that can stay in place for up to 20 years.
For implants, titanium is often alloyed with about 4% aluminium, or with 6% aluminium and 4% vanadium. Titanium has the inherent ability to osseointegrate, enabling use in dental implants that can last for over 30 years. This property is also useful for orthopedic implant applications. These benefit from titanium's lower modulus of elasticity (Young's modulus), which more closely matches that of the bone that such devices are intended to repair. As a result, skeletal loads are more evenly shared between bone and implant, leading to a lower incidence of bone degradation due to stress shielding and of periprosthetic bone fractures, which occur at the boundaries of orthopedic implants. However, titanium alloys' stiffness is still more than twice that of bone, so adjacent bone bears a greatly reduced load and may deteriorate. Because titanium is non-ferromagnetic, patients with titanium implants can be safely examined with magnetic resonance imaging (convenient for long-term implants). Preparing titanium for implantation in the body involves subjecting it to a high-temperature plasma arc, which removes the surface atoms, exposing fresh titanium that is instantly oxidized. Modern advancements in additive manufacturing techniques have increased the potential for titanium use in orthopedic implant applications. Complex implant scaffold designs can be 3D-printed using titanium alloys, which allows for more patient-specific applications and increased implant osseointegration. Titanium is used for the surgical instruments used in image-guided surgery, as well as wheelchairs, crutches, and any other products where high strength and low weight are desirable. Titanium dioxide nanoparticles are widely used in electronics and the delivery of pharmaceuticals and cosmetics. Nuclear waste storage Because of its corrosion resistance, containers made of titanium have been studied for the long-term storage of nuclear waste. Containers lasting more than 100,000 years are thought possible with manufacturing conditions that minimize material defects. A titanium "drip shield" could also be installed over containers of other types to enhance their longevity. Precautions Titanium is non-toxic even in large doses and does not play any natural role inside the human body. Humans ingest an estimated 0.8 milligrams of titanium each day, but most passes through without being absorbed in the tissues. It does, however, sometimes bio-accumulate in tissues that contain silica. One study indicates a possible connection between titanium and yellow nail syndrome. As a powder or in the form of metal shavings, titanium metal poses a significant fire hazard and, when heated in air, an explosion hazard. Water and carbon dioxide are ineffective for extinguishing a titanium fire; Class D dry powder agents must be used instead. When used in the production or handling of chlorine, titanium should not be exposed to dry chlorine gas because it may result in a titanium–chlorine fire. Titanium can catch fire when a fresh, non-oxidized surface comes in contact with liquid oxygen. Function in plants An unknown mechanism in plants may use titanium to stimulate the production of carbohydrates and encourage growth. This may explain why most plants contain about 1 part per million (ppm) of titanium, food plants have about 2 ppm, and horsetail and nettle contain up to 80 ppm.
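The load-sharing effect behind stress shielding, discussed under Medical above, can be illustrated with a toy model in which bone and implant deform together and split the load in proportion to stiffness. The moduli below are typical literature values (bone roughly 18 GPa, Ti-6Al-4V roughly 110 GPa, stainless steel roughly 200 GPa), and the equal cross-sectional areas are an assumption for illustration only, not data from this article.

```python
# Toy parallel-load model: members strained together carry load in
# proportion to their axial stiffness E * A.
def bone_share(e_bone: float, e_implant: float,
               a_bone: float = 1.0, a_implant: float = 1.0) -> float:
    """Fraction of the total load carried by the bone."""
    k_bone, k_implant = e_bone * a_bone, e_implant * a_implant
    return k_bone / (k_bone + k_implant)

for name, e in [("Ti-6Al-4V", 110.0), ("stainless steel", 200.0)]:
    print(f"{name:15s}: bone carries {bone_share(18.0, e):.0%} of the load")
# The lower-modulus titanium alloy leaves the bone a larger share of the
# load (~14% vs ~8%), which is why it reduces, but does not eliminate,
# stress shielding.
```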
See also Titanium alloys Suboxide Titanium in zircon geothermometry Titanium Man Footnotes References Bibliography External links "Titanium: Our Next Major Metal" in Popular Science (October 1950), one of the first detailed general-audience articles on titanium Titanium at Periodic Videos (University of Nottingham) Titanium.org: official website of the International Titanium Association, an industry association Metallurgy of Titanium and its Alloys - slide presentations, movies, and other material from Harshad Bhadeshia and other Cambridge University metallurgists Aerospace materials Biomaterials Chemical elements with hexagonal close-packed structure Chemical elements Native element minerals Pyrotechnic fuels Transition metals
Titanium
[ "Physics", "Engineering", "Biology" ]
7,628
[ "Biomaterials", "Chemical elements", "Aerospace materials", "Materials", "Aerospace engineering", "Atoms", "Matter", "Medical technology" ]
30,041
https://en.wikipedia.org/wiki/Technetium
Technetium is a chemical element; it has symbol Tc and atomic number 43. It is the lightest element whose isotopes are all radioactive. Technetium and promethium are the only radioactive elements whose neighbours in the sense of atomic number are both stable. All available technetium is produced as a synthetic element. Naturally occurring technetium is a spontaneous fission product in uranium ore and thorium ore (the most common source), or the product of neutron capture in molybdenum ores. This silvery gray, crystalline transition metal lies between manganese and rhenium in group 7 of the periodic table, and its chemical properties are intermediate between those of both adjacent elements. The most common naturally occurring isotope is 99Tc, in traces only. Many of technetium's properties had been predicted by Dmitri Mendeleev before it was discovered; Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name ekamanganese (Em). In 1937, technetium became the first predominantly artificial element to be produced, hence its name (from the Greek τεχνητός, 'artificial'). One short-lived gamma ray–emitting nuclear isomer, technetium-99m, is used in nuclear medicine for a wide variety of tests, such as bone cancer diagnoses. The ground state of the nuclide technetium-99 is used as a gamma ray–free source of beta particles. Long-lived technetium isotopes produced commercially are byproducts of the fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because even the longest-lived isotope of technetium has a relatively short half-life (4.21 million years), the 1952 detection of technetium in red giants helped to prove that stars can produce heavier elements. History Early assumptions From the 1860s through 1871, early forms of the periodic table proposed by Dmitri Mendeleev contained a gap between molybdenum (element 42) and ruthenium (element 44). In 1871, Mendeleev predicted this missing element would occupy the empty place below manganese and have similar chemical properties. Mendeleev gave it the provisional name eka-manganese (from eka, the Sanskrit word for one) because it was one place down from the known element manganese. Early misidentifications Many early researchers, both before and after the periodic table was published, were eager to be the first to discover and name the missing element. Its location in the table suggested that it should be easier to find than other undiscovered elements. This turned out not to be the case, due to technetium's radioactivity. Irreproducible results German chemists Walter Noddack, Otto Berg, and Ida Tacke reported the discovery of element 75 and element 43 in 1925, and named element 43 masurium (after Masuria in eastern Prussia, now in Poland, the region where Walter Noddack's family originated). This name caused significant resentment in the scientific community, because it was interpreted as referring to a series of victories of the German army over the Russian army in the Masuria region during World War I; as the Noddacks remained in their academic positions while the Nazis were in power, suspicions of and hostility against their claim for discovering element 43 continued. The group bombarded columbite with a beam of electrons and deduced element 43 was present by examining X-ray emission spectrograms. The wavelength of the X-rays produced is related to the atomic number by a formula derived by Henry Moseley in 1913.
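The Moseley relation just mentioned can be used to estimate where element 43's Kα X-ray line should appear. A rough sketch follows, using the standard Kα form of Moseley's law; the constants are textbook values, not figures from this article.

```python
# Estimate the K-alpha X-ray wavelength of element 43 from Moseley's law:
# nu = (3/4) * R * c * (Z - 1)^2 for the K-alpha transition.
R_C = 3.29e15        # Rydberg frequency R * c, in Hz
C = 2.998e8          # speed of light, m/s

def k_alpha_wavelength(z: int) -> float:
    """Approximate K-alpha wavelength (metres) for atomic number z."""
    nu = 0.75 * R_C * (z - 1) ** 2
    return C / nu

print(f"Element 43 K-alpha: ~{k_alpha_wavelength(43) * 1e12:.0f} pm")
# ~69 pm, close to the measured Tc K-alpha line (roughly 67 pm).
```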
The team claimed to detect a faint X-ray signal at a wavelength produced by element 43. Later experimenters could not replicate the discovery, and it was dismissed as an error. Still, in 1933, a series of articles on the discovery of elements quoted the name masurium for element 43. Some more recent attempts have been made to rehabilitate the Noddacks' claims, but they are disproved by Paul Kuroda's study of the amount of technetium that could have been present in the ores they studied: it could not have exceeded a minute trace per kilogram of ore, and thus would have been undetectable by the Noddacks' methods. Official discovery and later history The discovery of element 43 was finally confirmed in a 1937 experiment at the University of Palermo in Sicily by Carlo Perrier and Emilio Segrè. In mid-1936, Segrè visited the United States, first Columbia University in New York and then the Lawrence Berkeley National Laboratory in California. He persuaded cyclotron inventor Ernest Lawrence to let him take back some discarded cyclotron parts that had become radioactive. Lawrence mailed him a molybdenum foil that had been part of the deflector in the cyclotron. Segrè enlisted his colleague Perrier to attempt to prove, through comparative chemistry, that the molybdenum activity was indeed from an element with the atomic number 43. In 1937, they succeeded in isolating the isotopes technetium-95m and technetium-97. University of Palermo officials wanted them to name their discovery panormium, after the Latin name for Palermo, Panormus. In 1947, element 43 was named after the Greek word τεχνητός (technetos), meaning 'artificial', since it was the first element to be artificially produced. Segrè returned to Berkeley and met Glenn T. Seaborg. They isolated the metastable isotope technetium-99m, which is now used in some ten million medical diagnostic procedures annually. In 1952, the astronomer Paul W. Merrill in California detected the spectral signature of technetium (specifically wavelengths of 403.1 nm, 423.8 nm, 426.2 nm, and 429.7 nm) in light from S-type red giants. The stars were near the end of their lives but were rich in the short-lived element, which indicated that it was being produced in the stars by nuclear reactions. That evidence bolstered the hypothesis that heavier elements are the product of nucleosynthesis in stars. More recently, such observations provided evidence that elements are formed by neutron capture in the s-process. Since that discovery, there have been many searches in terrestrial materials for natural sources of technetium. In 1962, technetium-99 was isolated and identified in pitchblende from the Belgian Congo in very small quantities (about 0.2 ng/kg), where it originates as a spontaneous fission product of uranium-238. The natural nuclear fission reactor in Oklo contains evidence that significant amounts of technetium-99 were produced and have since decayed into ruthenium-99. Characteristics Physical properties Technetium is a silvery-gray radioactive metal with an appearance similar to platinum, commonly obtained as a gray powder. The crystal structure of the bulk pure metal is hexagonal close-packed, and crystal structures of the nanodisperse pure metal are cubic. Nanodisperse technetium does not have a split NMR spectrum, while hexagonal bulk technetium has its Tc-99 NMR spectrum split into 9 satellites. Atomic technetium has characteristic emission lines at wavelengths of 363.3 nm, 403.1 nm, 426.2 nm, 429.7 nm, and 485.3 nm.
The unit cell parameters of orthorhombic Tc metal have been reported for Tc contaminated with carbon (a = 0.2805(4), b = 0.4958(8), c = 0.4474(5) nm for Tc–C with 1.38 wt% C, and a = 0.2815(4), b = 0.4963(8), c = 0.4482(5) nm for Tc–C with 1.96 wt% C). The metal form is slightly paramagnetic, meaning its magnetic dipoles align with external magnetic fields but assume random orientations once the field is removed. Pure, metallic, single-crystal technetium becomes a type-II superconductor at temperatures below 7.46 K. Below this temperature, technetium has a very high magnetic penetration depth, greater than any other element except niobium. Chemical properties Technetium is located in group 7 of the periodic table, between rhenium and manganese. As predicted by the periodic law, its chemical properties are between those two elements. Of the two, technetium more closely resembles rhenium, particularly in its chemical inertness and tendency to form covalent bonds. This is consistent with the tendency of period 5 elements to resemble their counterparts in period 6 more than period 4 due to the lanthanide contraction. Unlike manganese, technetium does not readily form cations (ions with net positive charge). Technetium exhibits nine oxidation states from −1 to +7, with +4, +5, and +7 being the most common. Technetium dissolves in aqua regia, nitric acid, and concentrated sulfuric acid, but not in hydrochloric acid of any concentration. Metallic technetium slowly tarnishes in moist air and, in powder form, burns in oxygen. When reacting with hydrogen at high pressure, it forms the hydride TcH, and while reacting with carbon it forms TcC, with cell parameter 0.398 nm, as well as the nanodisperse low-carbon-content carbide with parameter 0.402 nm. Technetium can catalyse the destruction of hydrazine by nitric acid, a property due to its multiplicity of valencies. This caused a problem in the separation of plutonium from uranium in nuclear fuel processing, where hydrazine is used as a protective reductant to keep plutonium in the trivalent rather than the more stable tetravalent state. The problem was exacerbated by the mutually enhanced solvent extraction of technetium and zirconium at the previous stage, and required a process modification. Compounds Pertechnetate and other derivatives The most prevalent form of technetium that is easily accessible is sodium pertechnetate, Na[TcO4]. The majority of this material is produced by radioactive decay from [99MoO4]2−: [99MoO4]2− → [99TcO4]− + β−. Pertechnetate (TcO4−) is only weakly hydrated in aqueous solutions, and it behaves analogously to the perchlorate anion, both of which are tetrahedral. Unlike permanganate (MnO4−), it is only a weak oxidizing agent. Related to pertechnetate is technetium heptoxide, Tc2O7. This pale-yellow, volatile solid is produced by oxidation of Tc metal and related precursors. It is a molecular metal oxide, analogous to manganese heptoxide. It adopts a centrosymmetric structure with two types of Tc−O bonds with 167 and 184 pm bond lengths. Technetium heptoxide hydrolyzes to pertechnetate or pertechnetic acid, depending on the pH; HTcO4 is a strong acid. In concentrated sulfuric acid, [TcO4]− converts to the octahedral form TcO3(OH)(H2O)2, the conjugate base of the hypothetical triaquo complex [TcO3(H2O)3]+. Other chalcogenide derivatives Technetium forms a dioxide, disulfide, diselenide, and ditelluride. An ill-defined Tc2S7 forms upon treating pertechnetate with hydrogen sulfide. It thermally decomposes into the disulfide and elemental sulfur.
Similarly, the dioxide can be produced by reduction of Tc2O7. Unlike the case for rhenium, a trioxide has not been isolated for technetium. However, TcO3 has been identified in the gas phase using mass spectrometry. Simple hydride and halide complexes Technetium forms the complex [TcH9]2−. The potassium salt is isostructural with K2[ReH9]. At high pressure, formation of TcH1.3 from the elements has also been reported. The following binary (containing only two elements) technetium halides are known: TcF6, TcF5, TcCl4, TcBr4, TcBr3, α-TcCl3, β-TcCl3, TcI3, α-TcCl2, and β-TcCl2. The oxidation states range from Tc(VI) to Tc(II). Technetium halides exhibit different structure types, such as molecular octahedral complexes, extended chains, layered sheets, and metal clusters arranged in a three-dimensional network. These compounds are produced by combining the metal and halogen or by less direct reactions. TcCl4 is obtained by chlorination of Tc metal or Tc2O7. Upon heating, TcCl4 gives the corresponding Tc(III) and Tc(II) chlorides. The structure of TcCl4 is composed of infinite zigzag chains of edge-sharing TcCl6 octahedra. It is isomorphous to the transition metal tetrachlorides of zirconium, hafnium, and platinum. Two polymorphs of technetium trichloride exist, α- and β-TcCl3. The α polymorph is also denoted as Tc3Cl9. It adopts a confacial bioctahedral structure. It is prepared by treating the chloro-acetate Tc2(O2CCH3)4Cl2 with HCl. Like Re3Cl9, the structure of the α-polymorph consists of triangles with short M–M distances. β-TcCl3 features octahedral Tc centers, which are organized in pairs, as seen also for molybdenum trichloride. TcBr3 does not adopt the structure of either trichloride phase. Instead, it has the structure of molybdenum tribromide, consisting of chains of confacial octahedra with alternating short and long Tc—Tc contacts. TcI3 has the same structure as the high-temperature phase of TiI3, featuring chains of confacial octahedra with equal Tc—Tc contacts. Several anionic technetium halides are known. The binary tetrahalides can be converted to the hexahalides [TcX6]2− (X = F, Cl, Br, I), which adopt octahedral molecular geometry. More reduced halides form anionic clusters with Tc–Tc bonds. The situation is similar for the related elements Mo, W, and Re. These clusters have the nuclearity Tc4, Tc6, Tc8, and Tc13. The more stable Tc6 and Tc8 clusters have prism shapes where vertical pairs of Tc atoms are connected by triple bonds and the planar atoms by single bonds. Every technetium atom makes six bonds, and the remaining valence electrons can be saturated by one axial and two bridging ligand halogen atoms such as chlorine or bromine. Coordination and organometallic complexes Technetium forms a variety of coordination complexes with organic ligands. Many have been well investigated because of their relevance to nuclear medicine. Technetium forms a variety of compounds with Tc–C bonds, i.e. organotechnetium complexes. Prominent members of this class are complexes with CO, arene, and cyclopentadienyl ligands. The binary carbonyl Tc2(CO)10 is a white volatile solid. In this molecule, two technetium atoms are bound to each other; each atom is surrounded by octahedra of five carbonyl ligands. The bond length between technetium atoms, 303 pm, is significantly larger than the distance between two atoms in metallic technetium (272 pm). Similar carbonyls are formed by technetium's congeners, manganese and rhenium.
Interest in organotechnetium compounds has also been motivated by applications in nuclear medicine. Technetium also forms aquo-carbonyl complexes, one prominent example being [Tc(CO)3(H2O)3]+, which are unusual compared to other metal carbonyls. Isotopes Technetium, with atomic number Z = 43, is the lowest-numbered element in the periodic table for which all isotopes are radioactive. The second-lightest exclusively radioactive element, promethium, has atomic number 61. Atomic nuclei with an odd number of protons are less stable than those with even numbers, even when the total number of nucleons (protons + neutrons) is even, and odd-numbered elements have fewer stable isotopes. The most stable radioactive isotopes are technetium-97 with a half-life of 4.21 million years and technetium-98 with 4.2 million years; current measurements of their half-lives give overlapping confidence intervals corresponding to one standard deviation and therefore do not allow a definite assignment of technetium's most stable isotope. The next most stable isotope is technetium-99, which has a half-life of 211,100 years. Thirty-four other radioisotopes have been characterized with mass numbers ranging from 86 to 122. Most of these have half-lives that are less than an hour, the exceptions being technetium-93 (2.73 hours), technetium-94 (4.88 hours), technetium-95 (20 hours), and technetium-96 (4.3 days). The primary decay mode for isotopes lighter than technetium-98 (98Tc) is electron capture, producing molybdenum (Z = 42). For technetium-98 and heavier isotopes, the primary mode is beta emission (the emission of an electron or positron), producing ruthenium (Z = 44), with the exception that technetium-100 can decay both by beta emission and electron capture. Technetium also has numerous nuclear isomers, which are isotopes with one or more excited nucleons. Technetium-97m (97mTc; "m" stands for metastability) is the most stable, with a half-life of 91 days and an excitation energy of 0.0965 MeV. This is followed by technetium-95m (61 days, 0.03 MeV) and technetium-99m (6.01 hours, 0.142 MeV). Technetium-99 (99Tc) is a major product of the fission of uranium-235 (235U), making it the most common and most readily available isotope of technetium. One gram of technetium-99 produces 6.2×10⁸ disintegrations per second (in other words, the specific activity of 99Tc is 0.62 GBq/g). Occurrence and production Technetium occurs naturally in the Earth's crust in minute concentrations of about 0.003 parts per trillion. Technetium is so rare because the half-lives of 97Tc and 98Tc are only about 4.2 million years. More than a thousand such periods have passed since the formation of the Earth, so the probability of survival of even one atom of primordial technetium is effectively zero. However, small amounts exist as spontaneous fission products in uranium ores. A kilogram of uranium contains an estimated 1 nanogram of technetium, equivalent to about ten trillion atoms. Some red giant stars with the spectral types S, M, and N display a spectral absorption line indicating the presence of technetium. These red giants are known informally as technetium stars. Fission waste product In contrast to its rare natural occurrence, bulk quantities of technetium-99 are produced each year from spent nuclear fuel rods, which contain various fission products. The fission of a gram of uranium-235 in nuclear reactors yields 27 mg of technetium-99, giving technetium a fission product yield of 6.1%.
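The two figures just quoted (roughly 27 mg of Tc-99 per gram of U-235 fissioned, and a specific activity of 0.62 GBq/g) can be sanity-checked from first principles. The sketch below assumes only standard constants (Avogadro's number, nominal molar masses of 99 and 235 g/mol) plus the half-life and yield given in the text.

```python
# Cross-checking the fission-yield mass and the specific activity of Tc-99.
import math

N_A = 6.022e23       # Avogadro's number
YEAR = 3.156e7       # seconds per year

# 1. Fission yield: 6.1% of fissions of 1 g of U-235 end as Tc-99.
fissions = N_A / 235.0                       # atoms in 1 g of U-235
tc_atoms = 0.061 * fissions                  # Tc-99 atoms produced
tc_mass_mg = tc_atoms * 99.0 / N_A * 1e3
print(f"Tc-99 per gram of U-235 fissioned: {tc_mass_mg:.0f} mg")  # ~26 mg

# 2. Specific activity: A = lambda * N = (ln 2 / t_half) * (atoms per gram).
t_half = 211_100 * YEAR
activity = math.log(2) / t_half * (N_A / 99.0)
print(f"Specific activity: {activity / 1e9:.2f} GBq/g")           # ~0.63 GBq/g
```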
Other fissile isotopes produce similar yields of technetium, such as 4.9% from uranium-233 and 6.21% from plutonium-239. An estimated 49,000 TBq (78 metric tons) of technetium was produced in nuclear reactors between 1983 and 1994, by far the dominant source of terrestrial technetium. Only a fraction of the production is used commercially. Technetium-99 is produced by the nuclear fission of both uranium-235 and plutonium-239. It is therefore present in radioactive waste and in the nuclear fallout of fission bomb explosions. Its decay, measured in becquerels per amount of spent fuel, is the dominant contributor to nuclear waste radioactivity from roughly 10,000 to 1,000,000 years after the creation of the nuclear waste. Between 1945 and 1994, an estimated 160 TBq (about 250 kg) of technetium-99 was released into the environment during atmospheric nuclear tests. The amount of technetium-99 from nuclear reactors released into the environment up to 1986 is on the order of 1000 TBq (about 1600 kg), primarily by nuclear fuel reprocessing; most of this was discharged into the sea. Reprocessing methods have reduced emissions since then, but as of 2005 the primary release of technetium-99 into the environment was by the Sellafield plant, which released an estimated 550 TBq (about 900 kg) from 1995 to 1999 into the Irish Sea. From 2000 onwards the amount has been limited by regulation to 90 TBq (about 140 kg) per year. Discharge of technetium into the sea resulted in contamination of some seafood with minuscule quantities of this element. For example, European lobster and fish from west Cumbria contain about 1 Bq/kg of technetium. Fission product for commercial use The metastable isotope technetium-99m is continuously produced as a fission product from the fission of uranium or plutonium in nuclear reactors. One such route starts from the spontaneous fission of uranium-238, 238U → 137I + 99Y + 2 n, with 99Y then beta-decaying along the chain 99Y (1.47 s) → 99Zr (2.1 s) → 99Nb (15.0 s) → 99Mo (65.94 h) → 99Tc (211,100 y) → 99Ru. Because used fuel is allowed to stand for several years before reprocessing, all molybdenum-99 and technetium-99m has decayed by the time that the fission products are separated from the major actinides in conventional nuclear reprocessing. The liquid left after plutonium–uranium extraction (PUREX) contains a high concentration of technetium as TcO4−, but almost all of this is technetium-99, not technetium-99m. The vast majority of the technetium-99m used in medical work is produced by irradiating dedicated highly enriched uranium targets in a reactor, extracting molybdenum-99 from the targets in reprocessing facilities, and recovering at the diagnostic center the technetium-99m produced upon decay of molybdenum-99. Molybdenum-99 in the form of molybdate is adsorbed onto acid alumina (Al2O3) in a shielded column chromatograph inside a technetium-99m generator ("technetium cow", also occasionally called a "molybdenum cow"). Molybdenum-99 has a half-life of 67 hours, so short-lived technetium-99m (half-life: 6 hours), which results from its decay, is being constantly produced. The soluble pertechnetate can then be chemically extracted by elution using a saline solution. A drawback of this process is that it requires targets containing uranium-235, which are subject to the security precautions of fissile materials.
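The regrowth of technetium-99m inside the generator just described follows the standard Bateman equation for a parent–daughter decay pair. The sketch below uses the half-lives given in the text (67 h for Mo-99, 6 h for Tc-99m); the ~87.5% branching of Mo-99 decays into the metastable state is an outside assumption, not a figure from this article.

```python
# Minimal Bateman-equation sketch of the Mo-99 -> Tc-99m "cow".
import math

L_MO = math.log(2) / 67.0   # Mo-99 decay constant, per hour
L_TC = math.log(2) / 6.0    # Tc-99m decay constant, per hour
BRANCH = 0.875              # assumed fraction of Mo-99 decays feeding Tc-99m

def tc99m_activity(a_mo0: float, t: float) -> float:
    """Tc-99m activity at t hours after a full elution, given the parent
    Mo-99 activity a_mo0 at t = 0 (same units as a_mo0)."""
    growth = L_TC / (L_TC - L_MO) * (math.exp(-L_MO * t) - math.exp(-L_TC * t))
    return BRANCH * a_mo0 * growth

for t in (6, 12, 24):       # activity builds back up between daily elutions
    print(f"t = {t:2d} h: Tc-99m activity = {tc99m_activity(1.0, t):.2f} x A_Mo(0)")
```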
Almost two-thirds of the world's supply comes from two reactors: the National Research Universal Reactor at Chalk River Laboratories in Ontario, Canada, and the High Flux Reactor at the Nuclear Research and Consultancy Group in Petten, Netherlands. All major reactors that produce technetium-99m were built in the 1960s and are close to the end of their life. The two new Canadian Multipurpose Applied Physics Lattice Experiment reactors, planned and built to produce 200% of the demand for technetium-99m, relieved all other producers of the need to build their own reactors. With the cancellation of the already tested reactors in 2008, the future supply of technetium-99m became problematic. Waste disposal The long half-life of technetium-99 and its potential to form anionic species create a major concern for the long-term disposal of radioactive waste. Many of the processes designed to remove fission products in reprocessing plants aim at cationic species such as caesium (e.g., caesium-137) and strontium (e.g., strontium-90). Hence the pertechnetate escapes through those processes. Current disposal options favor burial in continental, geologically stable rock. The primary danger with such practice is the likelihood that the waste will contact water, which could leach radioactive contamination into the environment. The anionic pertechnetate and iodide tend not to adsorb onto the surfaces of minerals, and are likely to be washed away. By comparison, plutonium, uranium, and caesium tend to bind to soil particles. Technetium could be immobilized by some environments, such as microbial activity in lake bottom sediments, and the environmental chemistry of technetium is an area of active research. An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. In this process, the technetium (technetium-99 as a metal target) is bombarded with neutrons to form the short-lived technetium-100 (half-life = 16 seconds), which decays by beta decay to stable ruthenium-100. If recovery of usable ruthenium is a goal, an extremely pure technetium target is needed; if small traces of the minor actinides such as americium and curium are present in the target, they are likely to undergo fission and form more fission products which increase the radioactivity of the irradiated target. The formation of ruthenium-106 (half-life 374 days) from the 'fresh fission' is likely to increase the activity of the final ruthenium metal, which will then require a longer cooling time after irradiation before the ruthenium can be used. The actual separation of technetium-99 from spent nuclear fuel is a long process. During fuel reprocessing, it comes out as a component of the highly radioactive waste liquid. After sitting for several years, the radioactivity decreases to a level at which extraction of the long-lived isotopes, including technetium-99, becomes feasible. A series of chemical processes yields technetium-99 metal of high purity. Neutron activation Molybdenum-99, which decays to form technetium-99m, can be formed by the neutron activation of molybdenum-98. When needed, other technetium isotopes are not produced in significant quantities by fission, but are manufactured by neutron irradiation of parent isotopes (for example, technetium-97 can be made by neutron irradiation of ruthenium-96). Particle accelerators The feasibility of technetium-99m production with the 22-MeV proton bombardment of a molybdenum-100 target in medical cyclotrons, following the reaction 100Mo(p,2n)99mTc, was demonstrated in 1971.
The recent shortages of medical technetium-99m reignited interest in its production by proton bombardment of isotopically enriched (>99.5%) molybdenum-100 targets. Other techniques are being investigated for obtaining molybdenum-99 from molybdenum-100 via (n,2n) or (γ,n) reactions in particle accelerators. Applications Nuclear medicine and biology Technetium-99m ("m" indicates that this is a metastable nuclear isomer) is used in radioactive isotope medical tests. For example, technetium-99m is a radioactive tracer that medical imaging equipment tracks in the human body. It is well suited to the role because it emits readily detectable 140 keV gamma rays, and its half-life is 6.01 hours (meaning that about 94% of it decays to technetium-99 in 24 hours). The chemistry of technetium allows it to be bound to a variety of biochemical compounds, each of which determines how it is metabolized and deposited in the body, and this single isotope can be used for a multitude of diagnostic tests. More than 50 common radiopharmaceuticals are based on technetium-99m for imaging and functional studies of the brain, heart muscle, thyroid, lungs, liver, gall bladder, kidneys, skeleton, blood, and tumors. The longer-lived isotope technetium-95m, with a half-life of 61 days, is used as a radioactive tracer to study the movement of technetium in the environment and in plant and animal systems. Industrial and chemical Technetium-99 decays almost entirely by beta decay, emitting beta particles with consistent low energies and no accompanying gamma rays. Moreover, its long half-life means that this emission decreases very slowly with time. It can also be extracted to a high chemical and isotopic purity from radioactive waste. For these reasons, it is a National Institute of Standards and Technology (NIST) standard beta emitter, and is used for equipment calibration. Technetium-99 has also been proposed for optoelectronic devices and nanoscale nuclear batteries. Like rhenium and palladium, technetium can serve as a catalyst. In processes such as the dehydrogenation of isopropyl alcohol, it is a far more effective catalyst than either rhenium or palladium. However, its radioactivity is a major problem in safe catalytic applications. When steel is immersed in water, adding a small concentration (55 ppm) of potassium pertechnetate(VII) to the water protects the steel from corrosion, even if the temperature is raised to 250 °C. For this reason, pertechnetate has been used as an anodic corrosion inhibitor for steel, although technetium's radioactivity poses problems that limit this application to self-contained systems. While chromate, for example, can also inhibit corrosion, it requires a concentration ten times as high. In one experiment, a specimen of carbon steel was kept in an aqueous solution of pertechnetate for 20 years and was still uncorroded. The mechanism by which pertechnetate prevents corrosion is not well understood, but seems to involve the reversible formation of a thin surface layer (passivation). One theory holds that the pertechnetate reacts with the steel surface to form a layer of technetium dioxide which prevents further corrosion; the same effect explains how iron powder can be used to remove pertechnetate from water. The effect disappears rapidly if the concentration of pertechnetate falls below the minimum concentration or if too high a concentration of other ions is added.
As noted, the radioactive nature of technetium (3 MBq/L at the concentrations required) makes this corrosion protection impractical in almost all situations. Nevertheless, corrosion protection by pertechnetate ions was proposed (but never adopted) for use in boiling water reactors. Precautions Technetium plays no natural biological role and is not normally found in the human body. Technetium is produced in quantity by nuclear fission, and spreads more readily than many radionuclides. It appears to have low chemical toxicity. For example, no significant change in blood formula, body and organ weights, or food consumption could be detected in rats which ingested up to 15 μg of technetium-99 per gram of food for several weeks. In the body, technetium is quickly converted to the stable pertechnetate ion, which is highly water-soluble and quickly excreted. The radiological toxicity of technetium (per unit of mass) is a function of the compound, the type of radiation for the isotope in question, and the isotope's half-life. All isotopes of technetium must be handled carefully. The most common isotope, technetium-99, is a weak beta emitter; such radiation is stopped by the walls of laboratory glassware. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. For most work, careful handling in a fume hood is sufficient, and a glove box is not needed. Notes References Bibliography Further reading External links Chemical elements Transition metals Synthetic elements Chemical elements predicted by Dmitri Mendeleev Chemical elements with hexagonal close-packed structure
Technetium
[ "Physics", "Chemistry" ]
7,002
[ "Periodic table", "Chemical elements", "Synthetic materials", "Synthetic elements", "Radioactivity", "Atoms", "Matter", "Chemical elements predicted by Dmitri Mendeleev" ]
30,325
https://en.wikipedia.org/wiki/Transcendental%20number
In mathematics, a transcendental number is a real or complex number that is not algebraic: that is, not the root of a non-zero polynomial with integer (or, equivalently, rational) coefficients. The best-known transcendental numbers are π and e. The quality of a number being transcendental is called transcendence. Though only a few classes of transcendental numbers are known, partly because it can be extremely difficult to show that a given number is transcendental, transcendental numbers are not rare: indeed, almost all real and complex numbers are transcendental, since the algebraic numbers form a countable set, while the set of real numbers and the set of complex numbers are both uncountable sets, and therefore larger than any countable set. All transcendental real numbers (also known as real transcendental numbers or transcendental irrational numbers) are irrational numbers, since all rational numbers are algebraic. The converse is not true: not all irrational numbers are transcendental. Hence, the set of real numbers consists of non-overlapping sets of rational, algebraic irrational, and transcendental real numbers. For example, the square root of 2 is an irrational number, but it is not a transcendental number as it is a root of the polynomial equation x^2 − 2 = 0. The golden ratio (denoted φ or ϕ) is another irrational number that is not transcendental, as it is a root of the polynomial equation x^2 − x − 1 = 0. History The name "transcendental" comes from the Latin trānscendere, 'to climb over or beyond, surmount', and was first used for the mathematical concept in Leibniz's 1682 paper in which he proved that sin x is not an algebraic function of x. Euler, in the eighteenth century, was probably the first person to define transcendental numbers in the modern sense. Johann Heinrich Lambert conjectured that e and π were both transcendental numbers in his 1768 paper proving the number π is irrational, and proposed a tentative sketch proof that π is transcendental. Joseph Liouville first proved the existence of transcendental numbers in 1844, and in 1851 gave the first decimal examples, such as the Liouville constant, in which the nth digit after the decimal point is 1 if n is equal to k! (k factorial) for some k and 0 otherwise. In other words, the nth digit of this number is 1 only if n is one of the numbers 1! = 1, 2! = 2, 3! = 6, 4! = 24, etc. Liouville showed that this number belongs to a class of transcendental numbers that can be more closely approximated by rational numbers than can any irrational algebraic number, and this class of numbers is called the Liouville numbers, named in his honour. Liouville showed that all Liouville numbers are transcendental. The first number to be proven transcendental without having been specifically constructed for the purpose of proving transcendental numbers' existence was e, by Charles Hermite in 1873. In 1874 Georg Cantor proved that the algebraic numbers are countable and the real numbers are uncountable. He also gave a new method for constructing transcendental numbers. Although this was already implied by his proof of the countability of the algebraic numbers, Cantor also published a construction that proves there are as many transcendental numbers as there are real numbers. Cantor's work established the ubiquity of transcendental numbers. In 1882 Ferdinand von Lindemann published the first complete proof that π is transcendental. He first proved that e^a is transcendental if a is a non-zero algebraic number. Then, since e^(iπ) = −1 is algebraic (see Euler's identity), iπ must be transcendental. But since i is algebraic, π must therefore be transcendental.
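Returning to Liouville's construction described above, its digits are easy to reproduce numerically. A minimal sketch using Python's exact decimal arithmetic (the precision setting is an arbitrary choice, large enough to reach the digit at position 5! = 120):

```python
# Digits of the Liouville constant sum_{k>=1} 10^(-k!), computed exactly.
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 130                     # enough digits to show 5! = 120

liouville = sum(Decimal(10) ** -factorial(k) for k in range(1, 6))
print(liouville)
# 0.110001000000000000000001000...1  -- ones exactly at positions 1, 2, 6, 24, 120
```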
This approach was generalized by Karl Weierstrass to what is now known as the Lindemann–Weierstrass theorem. The transcendence of π implies that geometric constructions involving compass and straightedge only cannot produce certain results, for example squaring the circle.

In 1900 David Hilbert posed a question about transcendental numbers, Hilbert's seventh problem: If a is an algebraic number that is not zero or one, and b is an irrational algebraic number, is a^b necessarily transcendental? The affirmative answer was provided in 1934 by the Gelfond–Schneider theorem. This work was extended by Alan Baker in the 1960s in his work on lower bounds for linear forms in any number of logarithms (of algebraic numbers).

Properties
A transcendental number is a (possibly complex) number that is not the root of any integer polynomial. Every real transcendental number must also be irrational, since a rational number is the root of an integer polynomial of degree one. The set of transcendental numbers is uncountably infinite. Since the polynomials with rational coefficients are countable, and since each such polynomial has a finite number of zeroes, the algebraic numbers must also be countable. However, Cantor's diagonal argument proves that the real numbers (and therefore also the complex numbers) are uncountable. Since the real numbers are the union of algebraic and transcendental numbers, it is impossible for both subsets to be countable. This makes the transcendental numbers uncountable.

No rational number is transcendental and all real transcendental numbers are irrational. The irrational numbers contain all the real transcendental numbers and a subset of the algebraic numbers, including the quadratic irrationals and other forms of algebraic irrationals.

Applying any non-constant single-variable algebraic function to a transcendental argument yields a transcendental value. For example, from knowing that π is transcendental, it can be immediately deduced that numbers such as 5π, (π − 3)/√2, and (√π − √3)⁸ are transcendental as well. However, an algebraic function of several variables may yield an algebraic number when applied to transcendental numbers if these numbers are not algebraically independent. For example, π and 1 − π are both transcendental, but π + (1 − π) = 1 is obviously not. It is unknown whether π + e, for example, is transcendental, though at least one of π + e and πe must be transcendental. More generally, for any two transcendental numbers a and b, at least one of a + b and ab must be transcendental. To see this, consider the polynomial (x − a)(x − b) = x² − (a + b)x + ab. If (a + b) and ab were both algebraic, then this would be a polynomial with algebraic coefficients. Because algebraic numbers form an algebraically closed field, this would imply that the roots of the polynomial, a and b, must be algebraic. But this is a contradiction, and thus it must be the case that at least one of the coefficients is transcendental.

The non-computable numbers are a strict subset of the transcendental numbers. All Liouville numbers are transcendental, but not vice versa. Any Liouville number must have unbounded partial quotients in its simple continued fraction expansion. Using a counting argument one can show that there exist transcendental numbers which have bounded partial quotients and hence are not Liouville numbers. Using the explicit continued fraction expansion of e, one can show that e is not a Liouville number (although the partial quotients in its continued fraction expansion are unbounded). Kurt Mahler showed in 1953 that π is also not a Liouville number.
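The regular pattern in the continued fraction expansion of e mentioned above can be seen numerically. The sketch below (assuming the mpmath library for high-precision arithmetic) recovers the first partial quotients of e, which follow the known pattern [2; 1, 2, 1, 1, 4, 1, 1, 6, …]:

```python
from mpmath import mp, floor

mp.dps = 60  # plenty of precision for ~15 partial quotients

def partial_quotients(x, n):
    """First n partial quotients of the simple continued fraction of x."""
    quotients = []
    for _ in range(n):
        a = int(floor(x))
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return quotients

print(partial_quotients(mp.e, 15))
# [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10]
```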
It is conjectured that all infinite continued fractions with bounded terms, that have a "simple" structure, and that are not eventually periodic are transcendental (in other words, algebraic irrational roots of polynomials of at least third degree do not have an apparent pattern in their continued fraction expansions, since eventually periodic continued fractions correspond to quadratic irrationals; see Hermite's problem).

Numbers proven to be transcendental
Numbers proven to be transcendental:
π (by the Lindemann–Weierstrass theorem).
e^a if a is algebraic and nonzero (by the Lindemann–Weierstrass theorem), in particular Euler's number e.
e^(π√d) where d is a positive integer; in particular Gelfond's constant e^π (by the Gelfond–Schneider theorem).
Algebraic combinations of π and e^π such as π + e^π and πe^π (following from their algebraic independence).
a^b where a is algebraic but not 0 or 1, and b is irrational algebraic, in particular the Gelfond–Schneider constant 2^√2 (by the Gelfond–Schneider theorem).
The natural logarithm ln a if a is algebraic and not equal to 0 or 1, for any branch of the logarithm function (by the Lindemann–Weierstrass theorem).
log_b a if a and b are positive integers not both powers of the same integer, and a is not equal to 1 (by the Gelfond–Schneider theorem).
All nonzero numbers of the form β₁ ln(a₁) + ⋯ + βₙ ln(aₙ), where the βᵢ are algebraic for all i and the aᵢ are non-zero algebraic for all i (by Baker's theorem).
The trigonometric functions sin a, cos a, tan a and their hyperbolic counterparts, for any nonzero algebraic number a, expressed in radians (by the Lindemann–Weierstrass theorem).
Non-zero results of the inverse trigonometric functions and their hyperbolic counterparts, for any algebraic number a (by the Lindemann–Weierstrass theorem).
, for rational such that .
The fixed point of the cosine function (also referred to as the Dottie number d) – the unique real solution to the equation cos x = x, where x is in radians (by the Lindemann–Weierstrass theorem).
W(a) if a is algebraic and nonzero, for any branch of the Lambert W function (by the Lindemann–Weierstrass theorem), in particular the omega constant Ω.
if both and the order are algebraic such that , for any branch of the generalized Lambert W function.
The square super-root of any natural number n (the solution of x^x = n) is either an integer or transcendental (by the Gelfond–Schneider theorem).
Values of the gamma function of rational numbers that are of the form n/4 or n/6.
Algebraic combinations of π and Γ(1/3) or of π and Γ(1/4) such as the lemniscate constant ϖ (following from their respective algebraic independences).
The values of the Beta function B(a, b) if a and b are non-integer rational numbers.
The Bessel function of the first kind J_ν(x), its first derivative, and the quotient J′_ν(x)/J_ν(x) are transcendental when ν is rational and x is algebraic and nonzero, and all nonzero roots of J_ν(x) and J′_ν(x) are transcendental when ν is rational.
The number πY₀(2)/(2J₀(2)) − γ, where Y₀ and J₀ are Bessel functions and γ is the Euler–Mascheroni constant.
Any Liouville number, in particular: Liouville's constant.
Numbers with large irrationality measure, such as the Champernowne constant (by Roth's theorem).
Numbers artificially constructed not to be algebraic periods.
Any non-computable number, in particular: Chaitin's constant.
Constructed irrational numbers which are not simply normal in any base.
Any number for which the digits with respect to some fixed base form a Sturmian word.
The Prouhet–Thue–Morse constant and the related rabbit constant.
The Komornik–Loreti constant.
The paperfolding constant (also named the "Gaussian Liouville number").
The values of the infinite series with fast convergence rate as defined by Y. Gao and J. Gao, such as .
Numbers of the form and for , where ⌊ ⌋ is the floor function.
Any number of the form (where , are polynomials in variables and , is algebraic and , is any integer greater than 1).
The numbers and with only two different decimal digits whose nonzero digit positions are given by the Moser–de Bruijn sequence and its double.
The values of the Rogers–Ramanujan continued fraction R(q) where q is algebraic and 0 < |q| < 1. The lemniscatic values of the theta function (under the same conditions for q) are also transcendental.
j(q) where q is algebraic but not imaginary quadratic (i.e., the exceptional set of this function is the number field whose degree of extension over ℚ is 2).
The constants and in the formula for the first index of occurrence of Gijswijt's sequence, where k is any integer greater than 1.

Conjectured transcendental numbers
Numbers which have yet to be proven to be either transcendental or algebraic:
Most nontrivial combinations of two or more transcendental numbers are themselves not known to be transcendental or even irrational: π + e, π − e, πe, π/e, π^π, e^e, π^e. It has been shown that both π + e and πe do not satisfy any polynomial equation of degree at most 8 and integer coefficients of average size 10⁹.
At least one of the numbers and is transcendental.
Schanuel's conjecture would imply that all of the above numbers are transcendental and algebraically independent.
The Euler–Mascheroni constant γ: In 2010 it was shown that an infinite list of Euler–Lehmer constants (which includes γ/4) contains at most one algebraic number. In 2012 it was shown that at least one of γ and the Gompertz constant is transcendental.
The values of the Riemann zeta function ζ(n) at odd positive integers n ≥ 3; in particular Apéry's constant ζ(3), which is known to be irrational. For the other numbers even this is not known.
The values of the Dirichlet beta function β(n) at even positive integers n; in particular Catalan's constant β(2). (None of them are known to be irrational.)
Values of the Gamma function Γ(1/5) and Γ(1/7) for positive integers are not known to be irrational, let alone transcendental.
For at least one of the numbers and is transcendental.
Any number given by some kind of limit that is not obviously algebraic.

Proofs for specific numbers
A proof that e is transcendental
The first proof that the base of the natural logarithms, e, is transcendental dates from 1873. We will now follow the strategy of David Hilbert (1862–1943) who gave a simplification of the original proof of Charles Hermite. The idea is the following:

Assume, for purpose of finding a contradiction, that e is algebraic. Then there exists a finite set of integer coefficients c₀, c₁, …, cₙ satisfying the equation:
c₀ + c₁e + c₂e² + ⋯ + cₙeⁿ = 0, with c₀ and cₙ nonzero.
It is difficult to make use of the integer status of these coefficients when multiplied by a power of the irrational e, but we can absorb those powers into an integral which "mostly" will assume integer values. For a positive integer k, define the polynomial
f_k(x) = x^k [(x − 1)(x − 2)⋯(x − n)]^(k+1)
and multiply both sides of the above equation by
∫₀^∞ f_k(x) e^(−x) dx
to arrive at the equation:
c₀ ∫₀^∞ f_k(x) e^(−x) dx + c₁e ∫₀^∞ f_k(x) e^(−x) dx + ⋯ + cₙeⁿ ∫₀^∞ f_k(x) e^(−x) dx = 0.
By splitting respective domains of integration, this equation can be written in the form
P + Q = 0,
where
P = c₀ ∫₀^∞ f_k(x) e^(−x) dx + c₁e ∫₁^∞ f_k(x) e^(−x) dx + ⋯ + cₙeⁿ ∫ₙ^∞ f_k(x) e^(−x) dx,
Q = c₁e ∫₀¹ f_k(x) e^(−x) dx + c₂e² ∫₀² f_k(x) e^(−x) dx + ⋯ + cₙeⁿ ∫₀ⁿ f_k(x) e^(−x) dx.
Here P will turn out to be an integer, but more importantly it grows quickly with k.

Lemma 1
There are arbitrarily large k such that P/k! is a non-zero integer.

Proof. Recall the standard integral (case of the Gamma function)
∫₀^∞ t^j e^(−t) dt = j!,
valid for any natural number j. More generally, if g(t) = g₀ + g₁t + ⋯ + g_m t^m then ∫₀^∞ g(t) e^(−t) dt = g₀0! + g₁1! + ⋯ + g_m m!.

This would allow us to compute P exactly, because any term of P can be rewritten as
c_j e^j ∫_j^∞ f_k(x) e^(−x) dx = c_j ∫_j^∞ f_k(x) e^(−(x−j)) dx = c_j ∫₀^∞ f_k(t + j) e^(−t) dt
through a change of variables. Hence
P = Σ_(j=0)^n c_j ∫₀^∞ f_k(t + j) e^(−t) dt = ∫₀^∞ (Σ_(j=0)^n c_j f_k(t + j)) e^(−t) dt.
That latter sum is a polynomial in t with integer coefficients, i.e., it is a linear combination of powers t^j with integer coefficients.
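For reference, the two integral facts the computation above leans on can be restated in display form (a worked restatement in the proof's own notation, not additional content):

```latex
% Standard Gamma-function integral used in Lemma 1 (j a natural number);
% integration by parts reduces the exponent j to j - 1, and induction gives j!:
\int_0^\infty t^j e^{-t}\,dt \;=\; j\int_0^\infty t^{j-1} e^{-t}\,dt \;=\; \cdots \;=\; j!

% Shifting the domain of integration (the change of variables t = x - j):
e^{j}\int_j^\infty f_k(x)\,e^{-x}\,dx
  \;=\; \int_j^\infty f_k(x)\,e^{-(x-j)}\,dx
  \;=\; \int_0^\infty f_k(t+j)\,e^{-t}\,dt
```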
Hence the number P is a linear combination (with those same integer coefficients) of factorials; in particular P is an integer. Smaller factorials divide larger factorials, so the smallest factorial occurring in that linear combination will also divide the whole of P. We get that factorial from the lowest power term t^j appearing with a nonzero coefficient in the polynomial above, but this smallest exponent is also the multiplicity of 0 as a root of this polynomial. f_k is chosen to have multiplicity k of the root 0 and multiplicity k + 1 of the roots 1, …, n, so that the smallest exponent is k for f_k(t) and k + 1 for f_k(t + j) with j ≥ 1. Therefore k! divides P.

To establish the last claim in the lemma, that P is nonzero, it is sufficient to prove that (k + 1)! does not divide P. To that end, let k + 1 be any prime larger than n and |c₀|. We know from the above that (k + 1)! divides each of c_j ∫₀^∞ f_k(t + j) e^(−t) dt for 1 ≤ j ≤ n, so in particular all of those are divisible by k + 1. It comes down to the first term c₀ ∫₀^∞ f_k(t) e^(−t) dt. We have (see falling and rising factorials)
f_k(t) = t^k [(t − 1)⋯(t − n)]^(k+1) = [(−1)ⁿ n!]^(k+1) t^k + (higher degree terms),
and those higher degree terms all give rise to factorials (k + 1)! or larger. Hence
c₀ ∫₀^∞ f_k(t) e^(−t) dt ≡ c₀ [(−1)ⁿ n!]^(k+1) k!  (mod (k + 1)!).
That right hand side is a product of nonzero integer factors less than the prime k + 1, therefore that product is not divisible by k + 1, and the same holds for P/k!; in particular P cannot be zero.

Lemma 2
For sufficiently large k, |Q/k!| < 1.

Proof. Note that
f_k(x) e^(−x) = u(x) · (v(x))^k,
where u(x) = (x − 1)⋯(x − n) e^(−x) and v(x) = x(x − 1)⋯(x − n) are continuous functions of x for all x, so are bounded on the interval [0, n]. That is, there are constants G, H > 0 such that
|f_k(x) e^(−x)| ≤ |u(x)| |v(x)|^k < G H^k for 0 ≤ x ≤ n.
So each of those integrals composing Q is bounded, the worst case being
|∫₀ⁿ f_k(x) e^(−x) dx| ≤ n G H^k.
It is now possible to bound the sum Q as well:
|Q| < G H^k (|c₁|e + |c₂|e² + ⋯ + |cₙ|eⁿ) = G H^k M,
where M is a constant not depending on k. It follows that
|Q/k!| < M G H^k / k! → 0 as k → ∞,
finishing the proof of this lemma.

Conclusion
Choosing a value of k that satisfies both lemmas leads to a non-zero integer (P/k!) added to a vanishingly small quantity (Q/k!) being equal to zero: an impossibility. It follows that the original assumption, that e can satisfy a polynomial equation with integer coefficients, is also impossible; that is, e is transcendental.

The transcendence of π
A similar strategy, different from Lindemann's original approach, can be used to show that the number π is transcendental. Besides the gamma-function and some estimates as in the proof for e, facts about symmetric polynomials play a vital role in the proof.

For detailed information concerning the proofs of the transcendence of e and π, see the references and external links.

See also
Transcendental number theory, the study of questions related to transcendental numbers
Transcendental element, generalization of transcendental numbers in abstract algebra
Gelfond–Schneider theorem
Diophantine approximation
Periods, a countable set of numbers (including all algebraic and some transcendental numbers) which may be defined by integral equations.

Notes
References
Sources
External links
— Proof that is transcendental, in German.
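As a numeric companion to the lists above (a sketch only, assuming the mpmath library), the following evaluates a few of the constants mentioned: Gelfond's constant e^π and the Gelfond–Schneider constant 2^√2 (both proven transcendental), and π + e and πe (conjectured but unresolved).

```python
from mpmath import mp, exp, pi, e, sqrt, power

mp.dps = 30  # 30 significant digits

print(exp(pi))           # Gelfond's constant, ≈ 23.1406926327792690...
print(power(2, sqrt(2))) # Gelfond–Schneider constant, ≈ 2.66514414269022518...
print(pi + e)            # not known to be transcendental (or even irrational)
print(pi * e)            # likewise unresolved; at least one of the two is transcendental
```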
Transcendental number
[ "Mathematics" ]
3,571
[ "Articles containing proofs" ]
30,364
https://en.wikipedia.org/wiki/Transition%20metal
In chemistry, a transition metal (or transition element) is a chemical element in the d-block of the periodic table (groups 3 to 12), though the elements of group 12 (and less often group 3) are sometimes excluded. The lanthanide and actinide elements (the f-block) are called inner transition metals and are sometimes considered to be transition metals as well.

Since they are metals, they are lustrous and have good electrical and thermal conductivity. Most (with the exception of group 11 and group 12) are hard and strong, and have high melting and boiling temperatures. They form compounds in any of two or more different oxidation states and bind to a variety of ligands to form coordination complexes that are often coloured. They form many useful alloys and are often employed as catalysts in elemental form or in compounds such as coordination complexes and oxides. Most are strongly paramagnetic because of their unpaired d electrons, as are many of their compounds. All of the elements that are ferromagnetic near room temperature are transition metals (iron, cobalt and nickel) or inner transition metals (gadolinium).

English chemist Charles Rugeley Bury (1890–1968) first used the word transition in this context in 1921, when he referred to a transition series of elements during the change of an inner layer of electrons (for example n = 3 in the 4th row of the periodic table) from a stable group of 8 to one of 18, or from 18 to 32. These elements are now known as the d-block.

Definition and classification
The 2011 IUPAC Principles of Chemical Nomenclature describe a "transition metal" as any element in groups 3 to 12 on the periodic table. This corresponds exactly to the d-block elements, and many scientists use this definition. In actual practice, the f-block lanthanide and actinide series are called "inner transition metals". The 2005 Red Book allows for the group 12 elements to be excluded, but not the 2011 Principles. The IUPAC Gold Book defines a transition metal as "an element whose atom has a partially filled d sub-shell, or which can give rise to cations with an incomplete d sub-shell", but this definition is taken from an old edition of the Red Book and is no longer present in the current edition. In the d-block, the atoms of the elements have between zero and ten d electrons.

Published texts and periodic tables show variation regarding the heavier members of group 3. The common placement of lanthanum and actinium in these positions is not supported by physical, chemical, and electronic evidence, which overwhelmingly favour putting lutetium and lawrencium in those places. Some authors prefer to leave the spaces below yttrium blank as a third option, but there is confusion on whether this format implies that group 3 contains only scandium and yttrium, or if it also contains all the lanthanides and actinides; additionally, it creates a 15-element-wide f-block, when quantum mechanics dictates that the f-block should only be 14 elements wide. The form with lutetium and lawrencium in group 3 is supported by a 1988 IUPAC report on physical, chemical, and electronic grounds, and again by a 2021 IUPAC preliminary report, as it is the only form that allows simultaneous (1) preservation of the sequence of increasing atomic numbers, (2) a 14-element-wide f-block, and (3) avoidance of the split in the d-block.
Argumentation can still be found in the contemporary literature purporting to defend the form with lanthanum and actinium in group 3, but many authors consider it to be logically inconsistent (a particular point of contention being the differing treatment of actinium and thorium, which both can use 5f as a valence orbital but have no 5f occupancy as single atoms); the majority of investigators considering the problem agree with the updated form with lutetium and lawrencium.

The group 12 elements zinc, cadmium, and mercury are sometimes excluded from the transition metals. This is because they have the electronic configuration (n − 1)d¹⁰ns², where the d shell is complete, and they still have a complete d shell in all their known oxidation states. The group 12 elements Zn, Cd and Hg may therefore, under certain criteria, be classed as post-transition metals in this case. However, it is often convenient to include these elements in a discussion of the transition elements. For example, when discussing the crystal field stabilization energy of first-row transition elements, it is convenient to also include the elements calcium and zinc, as both Ca²⁺ and Zn²⁺ have a value of zero, against which the value for other transition metal ions may be compared. Another example occurs in the Irving–Williams series of stability constants of complexes. Moreover, Zn, Cd, and Hg can use their d orbitals for bonding even though they are not known in oxidation states that would formally require breaking open the d-subshell, which sets them apart from the p-block elements.

The 2007 (though disputed and so far not reproduced independently) synthesis of mercury(IV) fluoride (HgF₄) has been taken by some to reinforce the view that the group 12 elements should be considered transition metals, but some authors still consider this compound to be exceptional. Copernicium is expected to be able to use its d electrons for chemistry, as its 6d subshell is destabilised by strong relativistic effects due to its very high atomic number, and as such is expected to have transition-metal-like behaviour and show higher oxidation states than +2 (which are not definitely known for the lighter group 12 elements). Even in bare dications, Cn²⁺ is predicted to be 6d⁸7s², unlike Hg²⁺ which is 5d¹⁰6s⁰.

Although meitnerium, darmstadtium, and roentgenium are within the d-block and are expected to behave as transition metals analogous to their lighter congeners iridium, platinum, and gold, this has not yet been experimentally confirmed. Whether copernicium behaves more like mercury or has properties more similar to those of the noble gas radon is not clear. Relative inertness of Cn would come from the relativistically expanded 7s–7p1/2 energy gap, which is already adumbrated in the 6s–6p1/2 gap for Hg, weakening metallic bonding and causing its well-known low melting and boiling points.

Transition metals with lower or higher group numbers are described as 'earlier' or 'later', respectively. When described in a two-way classification scheme, early transition metals are on the left side of the d-block, from group 3 to group 7. Late transition metals are on the right side of the d-block, from group 8 to 11 (or 12, if they are counted as transition metals). In an alternative three-way scheme, groups 3, 4, and 5 are classified as early transition metals, 6, 7, and 8 are classified as middle transition metals, and 9, 10, and 11 (and sometimes group 12) are classified as late transition metals.
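The two classification schemes just described can be codified directly. The sketch below is purely illustrative (the function name and API are invented for this example), and simply mirrors the group ranges given above:

```python
def classify(group, scheme="two-way", include_group_12=False):
    """Classify a d-block group number per the schemes described above."""
    last = 12 if include_group_12 else 11
    if not 3 <= group <= last:
        raise ValueError("not counted as a transition-metal group here")
    if scheme == "two-way":
        return "early" if group <= 7 else "late"     # early: 3-7, late: 8-11/12
    if scheme == "three-way":
        if group <= 5:
            return "early"                            # groups 3-5
        if group <= 8:
            return "middle"                           # groups 6-8
        return "late"                                 # groups 9-11/12
    raise ValueError("unknown scheme")

print(classify(4))                       # early
print(classify(9, scheme="three-way"))   # late
```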
The heavy group 2 elements calcium, strontium, and barium do not have filled d-orbitals as single atoms, but are known to have d-orbital bonding participation in some compounds, and for that reason have been called "honorary" transition metals. Probably the same is true of radium. The f-block elements La–Yb and Ac–No have chemical activity of the (n−1)d shell, but importantly also have chemical activity of the (n−2)f shell that is absent in d-block elements. Hence they are often treated separately as inner transition elements.

Electronic configuration
The general electronic configuration of the d-block atoms is [noble gas](n − 1)d⁰⁻¹⁰ns⁰⁻²np⁰⁻¹. Here "[noble gas]" is the electronic configuration of the last noble gas preceding the atom in question, and n is the highest principal quantum number of an occupied orbital in that atom. For example, Ti (Z = 22) is in period 4 so that n = 4, the first 18 electrons have the same configuration as Ar at the end of period 3, and the overall configuration is [Ar]3d²4s². The period 6 and 7 transition metals also add core (n − 2)f¹⁴ electrons, which are omitted from the tables below. The p orbitals are almost never filled in free atoms (the one exception being lawrencium due to relativistic effects that become important at such high Z), but they can contribute to the chemical bonding in transition metal compounds.

The Madelung rule predicts that the inner d orbital is filled after the valence-shell s orbital. The typical electronic structure of transition metal atoms is then written as [noble gas]ns²(n − 1)dᵐ. This rule is approximate, but holds for most of the transition metals. Even when it fails for the neutral ground state, it accurately describes a low-lying excited state.

The d subshell is the next-to-last subshell and is denoted as the (n − 1)d subshell. The number of s electrons in the outermost s subshell is generally one or two, except palladium (Pd), with no electron in that s subshell in its ground state. The s subshell in the valence shell is represented as the ns subshell, e.g. 4s. In the periodic table, the transition metals are present in ten groups (3 to 12).

The elements in group 3 have an ns²(n − 1)d¹ configuration, except for lawrencium (Lr): its 7s²7p¹ configuration exceptionally does not fill the 6d orbitals at all. The first transition series is present in the 4th period, and starts after Ca (Z = 20) of group 2 with the configuration [Ar]4s², or scandium (Sc), the first element of group 3 with atomic number Z = 21 and configuration [Ar]4s²3d¹, depending on the definition used. As we move from left to right, electrons are added to the same d subshell till it is complete. Since the electrons added fill the (n − 1)d orbitals, the properties of the d-block elements are quite different from those of s and p block elements in which the filling occurs either in s or in p orbitals of the valence shell. The electronic configuration of the individual elements present in all the d-block series is given below:

A careful look at the electronic configuration of the elements reveals that there are certain exceptions to the Madelung rule. For Cr as an example the rule predicts the configuration 3d⁴4s², but the observed atomic spectra show that the real ground state is 3d⁵4s¹. To explain such exceptions, it is necessary to consider the effects of increasing nuclear charge on the orbital energies, as well as the electron–electron interactions including both Coulomb repulsion and exchange energy.
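A minimal sketch of the Madelung (aufbau) ordering described above, in plain Python (an illustration only; element data and the filling are simplified, and real exceptions such as Cr and Cu are deliberately not handled):

```python
# Order subshells by the Madelung rule: increasing n + l, ties broken by smaller n.
L_SYMBOLS = "spdf"
CAPACITY = {l: 2 * (2 * l + 1) for l in range(4)}  # s=2, p=6, d=10, f=14

def madelung_order(max_n=7):
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def aufbau_configuration(z):
    """Naive ground-state configuration for atomic number z (ignores exceptions)."""
    config, remaining = [], z
    for n, l in madelung_order():
        if remaining == 0:
            break
        fill = min(CAPACITY[l], remaining)
        config.append(f"{n}{L_SYMBOLS[l]}{fill}")
        remaining -= fill
    return " ".join(config)

print(aufbau_configuration(22))  # Ti: ... 3p6 4s2 3d2, i.e. [Ar]3d2 4s2
print(aufbau_configuration(24))  # Cr: predicted ... 4s2 3d4, but observed is 3d5 4s1
```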
The exceptions are in any case not very relevant for chemistry because the energy difference between them and the expected configuration is always quite low. The (n − 1)d orbitals that are involved in the transition metals are very significant because they influence such properties as magnetic character, variable oxidation states, formation of coloured compounds etc. The valence s and p orbitals (ns and np) have very little contribution in this regard since they hardly change in moving from left to right in a transition series. In transition metals, there are greater horizontal similarities in the properties of the elements in a period in comparison to the periods in which the d orbitals are not involved. This is because in a transition series, the valence shell electronic configuration of the elements does not change. However, there are some group similarities as well.

Characteristic properties
There are a number of properties shared by the transition elements that are not found in other elements, which results from the partially filled d shell. These include:
the formation of compounds whose colour is due to d–d electronic transitions
the formation of compounds in many oxidation states, due to the relatively low energy gap between different possible oxidation states
the formation of many paramagnetic compounds due to the presence of unpaired d electrons. A few compounds of main-group elements are also paramagnetic (e.g. nitric oxide, oxygen)
Most transition metals can be bound to a variety of ligands, allowing for a wide variety of transition metal complexes.

Coloured compounds
Colour in transition-series metal compounds is generally due to electronic transitions of two principal types.

Charge transfer transitions. An electron may jump from a predominantly ligand orbital to a predominantly metal orbital, giving rise to a ligand-to-metal charge-transfer (LMCT) transition. These can most easily occur when the metal is in a high oxidation state. For example, the colour of chromate, dichromate and permanganate ions is due to LMCT transitions. Another example is that mercuric iodide, HgI2, is red because of a LMCT transition. A metal-to-ligand charge transfer (MLCT) transition will be most likely when the metal is in a low oxidation state and the ligand is easily reduced. In general, charge transfer transitions result in more intense colours than d–d transitions.

d–d transitions. An electron jumps from one d orbital to another. In complexes of the transition metals the d orbitals do not all have the same energy. The pattern of splitting of the d orbitals can be calculated using crystal field theory. The extent of the splitting depends on the particular metal, its oxidation state and the nature of the ligands. The actual energy levels are shown on Tanabe–Sugano diagrams. In centrosymmetric complexes, such as octahedral complexes, d–d transitions are forbidden by the Laporte rule and only occur because of vibronic coupling in which a molecular vibration occurs together with a d–d transition. Tetrahedral complexes have somewhat more intense colour because mixing d and p orbitals is possible when there is no centre of symmetry, so transitions are not pure d–d transitions. The molar absorptivity (ε) of bands caused by d–d transitions is relatively low, roughly in the range 5–500 M⁻¹cm⁻¹ (where M = mol dm⁻³). Some d–d transitions are spin forbidden.
An example occurs in octahedral, high-spin complexes of manganese(II), which has a d⁵ configuration in which all five electrons have parallel spins; the colour of such complexes is much weaker than in complexes with spin-allowed transitions. Many compounds of manganese(II) appear almost colourless. The spectrum of [Mn(H₂O)₆]²⁺ shows a maximum molar absorptivity of about 0.04 M⁻¹cm⁻¹ in the visible spectrum.

Oxidation states
A characteristic of transition metals is that they exhibit two or more oxidation states, usually differing by one. For example, compounds of vanadium are known in all oxidation states between −1, such as [V(CO)₆]⁻, and +5, such as VO₄³⁻.

Main-group elements in groups 13 to 18 also exhibit multiple oxidation states. The "common" oxidation states of these elements typically differ by two instead of one. For example, compounds of gallium in oxidation states +1 and +3 exist in which there is a single gallium atom. Compounds of Ga(II) would have an unpaired electron and would behave as a free radical and generally be destroyed rapidly, but some stable radicals of Ga(II) are known. Gallium also has a formal oxidation state of +2 in dimeric compounds, such as [Ga₂Cl₆]²⁻, which contain a Ga–Ga bond formed from the unpaired electron on each Ga atom. Thus the main difference in oxidation states, between transition elements and other elements, is that oxidation states are known in which there is a single atom of the element and one or more unpaired electrons.

The maximum oxidation state in the first row transition metals is equal to the number of valence electrons from titanium (+4) up to manganese (+7), but decreases in the later elements. In the second row, the maximum occurs with ruthenium (+8), and in the third row, the maximum occurs with iridium (+9). In compounds such as [MnO₄]⁻ and RuO₄, the elements achieve a stable configuration by covalent bonding.

The lowest oxidation states are exhibited in metal carbonyl complexes such as Cr(CO)₆ (oxidation state zero) and [Fe(CO)₄]²⁻ (oxidation state −2) in which the 18-electron rule is obeyed. These complexes are also covalent. Ionic compounds are mostly formed with oxidation states +2 and +3. In aqueous solution, the ions are hydrated by (usually) six water molecules arranged octahedrally.

Magnetism
Transition metal compounds are paramagnetic when they have one or more unpaired d electrons. In octahedral complexes with between four and seven d electrons, both high spin and low spin states are possible. Tetrahedral transition metal complexes such as [FeCl₄]²⁻ are high spin because the crystal field splitting is small, so that the energy to be gained by virtue of the electrons being in lower energy orbitals is always less than the energy needed to pair up the spins. Some compounds are diamagnetic. These include octahedral, low-spin, d⁶ and square-planar d⁸ complexes. In these cases, crystal field splitting is such that all the electrons are paired up.

Ferromagnetism occurs when individual atoms are paramagnetic and the spin vectors are aligned parallel to each other in a crystalline material. Metallic iron and the alloy alnico are examples of ferromagnetic materials involving transition metals. Antiferromagnetism is another example of a magnetic property arising from a particular alignment of individual spins in the solid state.
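The link between oxidation state, d-electron count, and paramagnetism sketched above can be made concrete. The snippet below is an illustrative sketch only: the spin-only formula μ ≈ √(n(n+2)) μB is a standard textbook approximation that is not quoted in the text above, and only first-row, high-spin ions are considered.

```python
import math

# Valence (3d + 4s) electron counts for the neutral first-row atoms Sc..Zn.
VALENCE = {"Sc": 3, "Ti": 4, "V": 5, "Cr": 6, "Mn": 7,
           "Fe": 8, "Co": 9, "Ni": 10, "Cu": 11, "Zn": 12}

def d_count(element, charge):
    """d-electron count of a first-row M^charge+ ion (4s empties before 3d)."""
    return VALENCE[element] - charge

def unpaired_high_spin(n_d):
    """Unpaired electrons for a high-spin d^n ion (Hund filling of five d orbitals)."""
    return n_d if n_d <= 5 else 10 - n_d

def spin_only_moment(n_unpaired):
    """Spin-only magnetic moment in Bohr magnetons: sqrt(n(n+2))."""
    return math.sqrt(n_unpaired * (n_unpaired + 2))

for elem, q in [("Mn", 2), ("Fe", 3), ("Ni", 2), ("Zn", 2)]:
    n_d = d_count(elem, q)
    n_up = unpaired_high_spin(n_d)
    print(f"{elem}{q}+: d{n_d}, {n_up} unpaired, mu ~ {spin_only_moment(n_up):.2f} BM")
# Mn2+: d5, 5 unpaired, mu ~ 5.92 BM; Zn2+: d10, 0 unpaired, mu ~ 0.00 BM (diamagnetic)
```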
Catalytic properties
The transition metals and their compounds are known for their homogeneous and heterogeneous catalytic activity. This activity is ascribed to their ability to adopt multiple oxidation states and to form complexes. Vanadium(V) oxide (in the contact process), finely divided iron (in the Haber process), and nickel (in catalytic hydrogenation) are some of the examples. Catalysts at a solid surface (nanomaterial-based catalysts) involve the formation of bonds between reactant molecules and atoms of the surface of the catalyst (first row transition metals utilize 3d and 4s electrons for bonding). This has the effect of increasing the concentration of the reactants at the catalyst surface and also weakening the bonds in the reacting molecules (the activation energy is lowered). Also, because the transition metal ions can change their oxidation states, they become more effective as catalysts.

An interesting type of catalysis occurs when the products of a reaction catalyse the reaction producing more catalyst (autocatalysis). One example is the reaction of oxalic acid with acidified potassium permanganate (or manganate(VII)). Once a little Mn²⁺ has been produced, it can react with MnO₄⁻, forming Mn³⁺. This then reacts with C₂O₄²⁻ ions, forming Mn²⁺ again.

Physical properties
As implied by the name, all transition metals are metals and thus conductors of electricity. In general, transition metals possess a high density and high melting points and boiling points. These properties are due to metallic bonding by delocalized d electrons, leading to cohesion which increases with the number of shared electrons. However, the group 12 metals have much lower melting and boiling points, since their full d subshells prevent d–d bonding, which again tends to differentiate them from the accepted transition metals. Mercury has a melting point of −38.83 °C and is a liquid at room temperature.

See also
Inner transition element, a name given to any member of the f-block
Main-group element, an element other than a transition metal
Ligand field theory, a development of crystal field theory taking covalency into account
Crystal field theory, a model that describes the breaking of degeneracies of electronic orbital states
Post-transition metal, a metallic element to the right of the transition metals in the periodic table

References
Transition metal
[ "Chemistry" ]
4,194
[ "Periodic table" ]
30,367
https://en.wikipedia.org/wiki/Trigonometric%20functions
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis.

The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent functions. Their reciprocals are respectively the cosecant, the secant, and the cotangent functions, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions.

The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed.

Notation
Conventionally, an abbreviation of each trigonometric function's name is used as its symbol in formulas. Today, the most common versions of these abbreviations are "sin" for sine, "cos" for cosine, "tan" or "tg" for tangent, "sec" for secant, "csc" or "cosec" for cosecant, and "cot" or "ctg" for cotangent. Historically, these abbreviations were first used in prose sentences to indicate particular line segments or their lengths related to an arc of an arbitrary circle, and later to indicate ratios of lengths, but as the function concept developed in the 17th–18th century, they began to be considered as functions of real-number-valued angle measures, and written with functional notation, for example sin(x). Parentheses are still often omitted to reduce clutter, but are sometimes necessary; for example the expression sin x + y would typically be interpreted to mean sin(x) + y, so parentheses are required to express sin(x + y).

A positive integer appearing as a superscript after the symbol of the function denotes exponentiation, not function composition. For example sin²x and sin²(x) denote sin(x) · sin(x), not sin(sin x). This differs from the (historically later) general functional notation in which f²(x) = (f ∘ f)(x) = f(f(x)). However, the exponent −1 is commonly used to denote the inverse function, not the reciprocal. For example sin⁻¹x and sin⁻¹(x) denote the inverse trigonometric function alternatively written arcsin x. The equation θ = sin⁻¹x implies sin θ = x, not θ = (sin x)⁻¹. In this case, the superscript could be considered as denoting a composed or iterated function, but negative superscripts other than −1 are not in common use.

Right-angled triangle definitions
If the acute angle θ is given, then any right triangles that have an angle of θ are similar to each other. This means that the ratio of any two side lengths depends only on θ. Thus these six ratios define six functions of θ, which are the trigonometric functions.
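As a quick numeric illustration of the ratio definitions spelled out just below (a sketch only; the 3–4–5 right triangle is an arbitrary choice), one can check the ratios against Python's math library:

```python
import math

# A 3-4-5 right triangle: opposite = 3, adjacent = 4, hypotenuse = 5.
opposite, adjacent, hypotenuse = 3.0, 4.0, 5.0
theta = math.atan2(opposite, adjacent)  # the acute angle between adjacent and hypotenuse

print(math.isclose(math.sin(theta), opposite / hypotenuse))  # True: sin = opp/hyp
print(math.isclose(math.cos(theta), adjacent / hypotenuse))  # True: cos = adj/hyp
print(math.isclose(math.tan(theta), opposite / adjacent))    # True: tan = opp/adj
print(math.isclose(math.sin(theta) ** 2 + math.cos(theta) ** 2, 1.0))  # Pythagorean identity
```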
In the following definitions, the hypotenuse is the length of the side opposite the right angle, opposite represents the side opposite the given angle θ, and adjacent represents the side between the angle θ and the right angle. Various mnemonics can be used to remember these definitions.

In a right-angled triangle, the sum of the two acute angles is a right angle, that is, 90° or π/2 radians. Therefore sin(θ) and cos(90° − θ) represent the same ratio, and thus are equal. This identity and analogous relationships between the other trigonometric functions are summarized in the following table.

Radians versus degrees
In geometric applications, the argument of a trigonometric function is generally the measure of an angle. For this purpose, any angular unit is convenient. One common unit is degrees, in which a right angle is 90° and a complete turn is 360° (particularly in elementary mathematics).

However, in calculus and mathematical analysis, the trigonometric functions are generally regarded more abstractly as functions of real or complex numbers, rather than angles. In fact, the functions sin and cos can be defined for all complex numbers in terms of the exponential function, via power series, or as solutions to differential equations given particular initial values (see below), without reference to any geometric notions. The other four trigonometric functions (tan, cot, sec, csc) can be defined as quotients and reciprocals of sin and cos, except where zero occurs in the denominator. It can be proved, for real arguments, that these definitions coincide with elementary geometric definitions if the argument is regarded as an angle in radians. Moreover, these definitions result in simple expressions for the derivatives and indefinite integrals for the trigonometric functions. Thus, in settings beyond elementary geometry, radians are regarded as the mathematically natural unit for describing angle measures.

When radians (rad) are employed, the angle is given as the length of the arc of the unit circle subtended by it: the angle that subtends an arc of length 1 on the unit circle is 1 rad (≈ 57.3°), and a complete turn (360°) is an angle of 2π (≈ 6.28) rad. For real number x, the notation sin x, cos x, etc. refers to the value of the trigonometric functions evaluated at an angle of x rad. If units of degrees are intended, the degree sign must be explicitly shown (sin x°, cos x°, etc.). Using this standard notation, the argument x for the trigonometric functions satisfies the relationship x = (180x/π)°, so that, for example, sin π = sin 180° when we take x = π. In this way, the degree symbol can be regarded as a mathematical constant such that 1° = π/180 ≈ 0.0175.
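A short sketch of the convention just described, using Python's math module (which, like most math libraries, works in radians):

```python
import math

print(math.sin(math.pi / 6))       # 0.4999999..., i.e. sin of pi/6 rad = sin 30°
print(math.sin(math.radians(30)))  # same value via an explicit degree conversion

DEGREE = math.pi / 180             # the degree symbol as a constant, 1° ≈ 0.0175
print(math.sin(30 * DEGREE))       # 0.4999999... again
```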
Unit-circle definitions
The six trigonometric functions can be defined as coordinate values of points on the Euclidean plane that are related to the unit circle, which is the circle of radius one centered at the origin of this coordinate system. While right-angled triangle definitions allow for the definition of the trigonometric functions for angles between 0 and π/2 radians, the unit circle definitions allow the domain of trigonometric functions to be extended to all positive and negative real numbers.

Let L be the ray obtained by rotating by an angle θ the positive half of the x-axis (counterclockwise rotation for θ > 0, and clockwise rotation for θ < 0). This ray intersects the unit circle at the point A = (x_A, y_A). The ray, extended to a line if necessary, intersects the line of equation x = 1 at a point B, and the line of equation y = 1 at a point C. The tangent line to the unit circle at the point A is perpendicular to L, and intersects the x- and y-axes at points D and E. The coordinates of these points give the values of all trigonometric functions for any arbitrary real value of θ in the following manner.

The trigonometric functions cos θ and sin θ are defined, respectively, as the x- and y-coordinate values of point A. That is,
cos θ = x_A and sin θ = y_A.
In the range 0 ≤ θ ≤ π/2, this definition coincides with the right-angled triangle definition, by taking the right-angled triangle to have the unit radius as hypotenuse. And since the equation x² + y² = 1 holds for all points on the unit circle, this definition of cosine and sine also satisfies the Pythagorean identity.

The other trigonometric functions can be found along the unit circle as coordinates of the points B, C, D and E, and by applying the Pythagorean identity and geometric proof methods, these definitions can readily be shown to coincide with the definitions of tangent, cotangent, secant and cosecant in terms of sine and cosine, that is
tan θ = sin θ / cos θ, cot θ = cos θ / sin θ, sec θ = 1/cos θ, csc θ = 1/sin θ.

Since a rotation of an angle of ±2π does not change the position or size of a shape, the points A, B, C, D, and E are the same for two angles whose difference is an integer multiple of 2π. Thus trigonometric functions are periodic functions with period 2π. That is, the equalities
sin θ = sin(θ + 2kπ) and cos θ = cos(θ + 2kπ)
hold for any angle θ and any integer k. The same is true for the four other trigonometric functions. By observing the sign and the monotonicity of the functions sine, cosine, cosecant, and secant in the four quadrants, one can show that 2π is the smallest value for which they are periodic (i.e., 2π is the fundamental period of these functions). However, after a rotation by an angle π, the points B and C already return to their original position, so that the tangent function and the cotangent function have a fundamental period of π. That is, the equalities
tan θ = tan(θ + kπ) and cot θ = cot(θ + kπ)
hold for any angle θ and any integer k.

Algebraic values
The algebraic expressions for the most important angles are as follows:
sin 0 = sin 0° = √0/2 = 0 (zero angle)
sin(π/6) = sin 30° = √1/2 = 1/2
sin(π/4) = sin 45° = √2/2
sin(π/3) = sin 60° = √3/2
sin(π/2) = sin 90° = √4/2 = 1 (right angle)
Writing the numerators as square roots of consecutive non-negative integers, with a denominator of 2, provides an easy way to remember the values.

Such simple expressions generally do not exist for other angles which are rational multiples of a right angle. For an angle which, measured in degrees, is a multiple of three, the exact trigonometric values of the sine and the cosine may be expressed in terms of square roots. These values of the sine and the cosine may thus be constructed by ruler and compass. For an angle of an integer number of degrees, the sine and the cosine may be expressed in terms of square roots and the cube root of a non-real complex number. Galois theory allows a proof that, if the angle is not a multiple of 3°, non-real cube roots are unavoidable.

For an angle which, expressed in degrees, is a rational number, the sine and the cosine are algebraic numbers, which may be expressed in terms of nth roots. This results from the fact that the Galois groups of the cyclotomic polynomials are cyclic. For an angle which, expressed in degrees, is not a rational number, then either the angle or both the sine and the cosine are transcendental numbers. This is a corollary of Baker's theorem, proved in 1966.

Simple algebraic values
The following table lists the sines, cosines, and tangents of multiples of 15 degrees from 0 to 90 degrees.

Definitions in analysis
G. H. Hardy noted in his 1908 work A Course of Pure Mathematics that the definition of the trigonometric functions in terms of the unit circle is not satisfactory, because it depends implicitly on a notion of angle that can be measured by a real number. Thus in modern analysis, trigonometric functions are usually constructed without reference to geometry. Various ways exist in the literature for defining the trigonometric functions in a manner suitable for analysis; they include:
Using the "geometry" of the unit circle, which requires formulating the arc length of a circle (or area of a sector) analytically.
By a power series, which is particularly well-suited to complex variables.
By using an infinite product expansion.
By inverting the inverse trigonometric functions, which can be defined as integrals of algebraic or rational functions.
As solutions of a differential equation.

Definition by differential equations
Sine and cosine can be defined as the unique solution to the initial value problem:
sin′ x = cos x, cos′ x = −sin x, sin 0 = 0, cos 0 = 1.
Differentiating again, sin″ x = cos′ x = −sin x and cos″ x = −sin′ x = −cos x, so both sine and cosine are solutions of the same ordinary differential equation
y″ + y = 0.
Sine is the unique solution with y(0) = 0 and y′(0) = 1; cosine is the unique solution with y(0) = 1 and y′(0) = 0. One can then prove, as a theorem, that solutions are periodic, having the same period. Writing this period as 2π is then a definition of the real number π which is independent of geometry.

Applying the quotient rule to the tangent tan x = sin x / cos x gives
tan′ x = 1 + tan²x,
so the tangent function satisfies the ordinary differential equation
y′ = 1 + y².
It is the unique solution with y(0) = 0.

Power series expansion
The basic trigonometric functions can be defined by the following power series expansions. These series are also known as the Taylor series or Maclaurin series of these trigonometric functions:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯
The radius of convergence of these series is infinite. Therefore, the sine and the cosine can be extended to entire functions (also called "sine" and "cosine"), which are (by definition) complex-valued functions that are defined and holomorphic on the whole complex plane.

Term-by-term differentiation shows that the sine and cosine defined by the series obey the differential equation discussed previously, and conversely one can obtain these series from elementary recursion relations derived from the differential equation.

Being defined as fractions of entire functions, the other trigonometric functions may be extended to meromorphic functions, that is functions that are holomorphic in the whole complex plane, except some isolated points called poles. Here, the poles are the numbers of the form (2k + 1)π/2 for the tangent and the secant, or kπ for the cotangent and the cosecant, where k is an arbitrary integer.

Recurrence relations may also be computed for the coefficients of the Taylor series of the other trigonometric functions. These series have a finite radius of convergence. Their coefficients have a combinatorial interpretation: they enumerate alternating permutations of finite sets. More precisely, defining Uₙ as the nth up/down number, Bₙ as the nth Bernoulli number, and Eₙ as the nth Euler number, one has series expansions for the tangent, cosecant, secant and cotangent in terms of these numbers.

Continued fraction expansion
The following continued fraction, due to Lambert, is valid in the whole complex plane:
tan x = x / (1 − x² / (3 − x² / (5 − x² / (7 − ⋯)))).
It was used in the historically first proof that π is irrational.
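Returning to the power series above, a sketch of the truncated Maclaurin series for the sine (pure Python, no dependencies), showing how quickly the partial sums converge to the library value:

```python
import math

def sin_series(x, terms=10):
    """Partial sum of the Maclaurin series sin x = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

x = 1.0
for terms in (1, 2, 3, 5, 10):
    print(terms, sin_series(x, terms))
print("math.sin:", math.sin(x))  # the 10-term sum already agrees to machine precision
```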
Partial fraction expansion
There is a series representation as partial fraction expansion where just translated reciprocal functions are summed up, such that the poles of the cotangent function and the reciprocal functions match:
π cot(πx) = lim_(N→∞) Σ_(n=−N)^(N) 1/(x + n).
This identity can be proved with the Herglotz trick. Combining the (−n)th with the nth term leads to absolutely convergent series:
π cot(πx) = 1/x + Σ_(n=1)^∞ 2x/(x² − n²).
Similarly, one can find a partial fraction expansion for the secant, cosecant and tangent functions:
π csc(πx) = 1/x + Σ_(n=1)^∞ (−1)ⁿ 2x/(x² − n²),
π sec(πx) = Σ_(n=0)^∞ (−1)ⁿ (2n + 1)/((n + 1/2)² − x²),
π tan(πx) = Σ_(n=0)^∞ 2x/((n + 1/2)² − x²).

Infinite product expansion
The following infinite product for the sine is due to Leonhard Euler, and is of great importance in complex analysis:
sin z = z ∏_(n=1)^∞ (1 − z²/(n²π²)).
This may be obtained from the partial fraction decomposition of cot z given above, which is the logarithmic derivative of sin z. From this, it can be deduced also that
cos z = ∏_(n=1)^∞ (1 − z²/((n − 1/2)²π²)).

Euler's formula and the exponential function
Euler's formula relates sine and cosine to the exponential function:
e^(ix) = cos x + i sin x.
This formula is commonly considered for real values of x, but it remains true for all complex values.

Proof: Let f₁(x) = cos x + i sin x and f₂(x) = e^(ix). One has df_j(x)/dx = i f_j(x) for j = 1, 2. The quotient rule implies thus that d/dx (f₁(x)/f₂(x)) = 0. Therefore, f₁/f₂ is a constant function, which equals 1, as f₁(0) = f₂(0) = 1. This proves the formula.

One has
e^(ix) = cos x + i sin x and e^(−ix) = cos x − i sin x.
Solving this linear system in sine and cosine, one can express them in terms of the exponential function:
sin x = (e^(ix) − e^(−ix))/(2i) and cos x = (e^(ix) + e^(−ix))/2.
When x is real, this may be rewritten as
cos x = Re(e^(ix)) and sin x = Im(e^(ix)).

Most trigonometric identities can be proved by expressing trigonometric functions in terms of the complex exponential function by using the above formulas, and then using the identity e^(a+b) = e^a e^b for simplifying the result.

Euler's formula can also be used to define the basic trigonometric functions directly, as follows, using the language of topological groups. The set of complex numbers of unit modulus is a compact and connected topological group, which has a neighborhood of the identity that is homeomorphic to the real line. Therefore, it is isomorphic as a topological group to the one-dimensional torus group ℝ/ℤ, via an isomorphism e. In pedestrian terms e(t) = exp(2πit), and this isomorphism is unique up to taking complex conjugates.

For a nonzero real number a (the base), the function t ↦ e(t/a) defines an isomorphism of the group ℝ/aℤ. The real and imaginary parts of e(t/a) are the cosine and sine, where a is used as the base for measuring angles. For example, when a = 2π, we get the measure in radians, and the usual trigonometric functions. When a = 360, we get the sine and cosine of angles measured in degrees. Note that a = 2π is the unique value at which the derivative d/dt e(t/a) becomes a unit vector with positive imaginary part at t = 0. This fact can, in turn, be used to define the constant 2π.

Definition via integration
Another way to define the trigonometric functions in analysis is using integration. For a real number x, put
arctan x = ∫₀^x dt/(1 + t²),
where this defines this inverse tangent function. Also, π is defined by
π = 2 ∫₀^∞ dt/(1 + t²),
a definition that goes back to Karl Weierstrass.

On the interval −π/2 < θ < π/2, the trigonometric functions are defined by inverting the relation θ = arctan t. Thus we define the trigonometric functions by
tan θ = t, cos θ = 1/√(1 + t²), sin θ = t/√(1 + t²),
where the point (t, θ) is on the graph of θ = arctan t and the positive square root is taken. This defines the trigonometric functions on (−π/2, π/2). The definition can be extended to all real numbers by first observing that, as θ → π/2, t → ∞, and so cos θ → 0 and sin θ → 1. Thus cos θ and sin θ are extended continuously so that cos(π/2) = 0 and sin(π/2) = 1. Now the conditions
cos(θ + π) = −cos θ and sin(θ + π) = −sin θ
define the sine and cosine as periodic functions with period 2π, for all real numbers.

Proving the basic properties of sine and cosine, including the fact that sine and cosine are analytic, one may first establish the addition formulae. First,
arctan x + arctan y = arctan((x + y)/(1 − xy))
holds, provided arctan x + arctan y ∈ (−π/2, π/2), since this follows from the defining integral after the substitution t ↦ (t + y)/(1 − ty). In particular, the limiting case as y → ∞ gives
arctan x + π/2 = arctan(−1/x) + π, for x > 0.
Thus we have
sin(θ + π/2) = cos θ and cos(θ + π/2) = −sin θ.
So the sine and cosine functions are related by translation over a quarter period π/2.

Definitions using functional equations
One can also define the trigonometric functions using various functional equations.
For example, the sine and the cosine form the unique pair of continuous functions that satisfy the difference formula
sin(x − y) = sin x cos y − cos x sin y
and the added condition
0 < x cos x < sin x < x for 0 < x < 1.

In the complex plane
The sine and cosine of a complex number z = x + iy can be expressed in terms of real sines, cosines, and hyperbolic functions as follows:
sin z = sin x cosh y + i cos x sinh y,
cos z = cos x cosh y − i sin x sinh y.
By taking advantage of domain coloring, it is possible to graph the trigonometric functions as complex-valued functions. Various features unique to the complex functions can be seen from the graph; for example, the sine and cosine functions can be seen to be unbounded as the imaginary part of z becomes larger (since the color white represents infinity), and the fact that the functions contain simple zeros or poles is apparent from the fact that the hue cycles around each zero or pole exactly once. Comparing these graphs with those of the corresponding hyperbolic functions highlights the relationships between the two.

Periodicity and asymptotes
The cosine and sine functions are periodic, with period 2π, which is the smallest positive period:
cos(x + 2π) = cos x and sin(x + 2π) = sin x.
Consequently, the secant and cosecant also have 2π as their period. The functions sine and cosine also have semiperiods π, and
cos(x + π) = −cos x and sin(x + π) = −sin x,
and consequently
tan(x + π) = tan x and cot(x + π) = cot x.
Also,
cos(x + π/2) = −sin x and sin(x + π/2) = cos x.

The sine function has a unique zero (at x = 0) in the strip −π < Re(x) < π. The cosine function has the pair of zeros x = ±π/2 in the same strip. Because of the periodicity, the zeros of sine are
kπ, k an integer.
The zeros of cosine are
π/2 + kπ, k an integer.
All of the zeros are simple zeros, and both functions have derivative ±1 at each of the zeros.

The tangent function has a simple zero at x = 0 and vertical asymptotes at x = π/2 + kπ, where it has a simple pole of residue −1. Again, owing to the periodicity, the zeros are all the integer multiples of π and the poles are odd multiples of π/2, all having the same residue. The poles correspond to vertical asymptotes
tan x → +∞ as x → (π/2)⁻ and tan x → −∞ as x → (π/2)⁺.
The cotangent function has a simple pole of residue 1 at the integer multiples of π and simple zeros at odd multiples of π/2. The poles correspond to vertical asymptotes
cot x → −∞ as x → 0⁻ and cot x → +∞ as x → 0⁺.

Basic identities
Many identities interrelate the trigonometric functions. This section contains the most basic ones; for more identities, see List of trigonometric identities. These identities may be proved geometrically from the unit-circle definitions or the right-angled-triangle definitions (although, for the latter definitions, care must be taken for angles that are not in the interval [0, π/2]; see Proofs of trigonometric identities). For non-geometrical proofs using only tools of calculus, one may use directly the differential equations, in a way that is similar to that of the above proof of Euler's identity. One can also use Euler's identity for expressing all trigonometric functions in terms of complex exponentials and using properties of the exponential function.

Parity
The cosine and the secant are even functions; the other trigonometric functions are odd functions. That is:
sin(−x) = −sin x, cos(−x) = cos x, tan(−x) = −tan x, cot(−x) = −cot x, csc(−x) = −csc x, sec(−x) = sec x.

Periods
All trigonometric functions are periodic functions of period 2π. This is the smallest period, except for the tangent and the cotangent, which have π as smallest period. This means that, for every integer k, one has
sin(x + 2kπ) = sin x, cos(x + 2kπ) = cos x, tan(x + kπ) = tan x, cot(x + kπ) = cot x, sec(x + 2kπ) = sec x, csc(x + 2kπ) = csc x.

Pythagorean identity
The Pythagorean identity is the expression of the Pythagorean theorem in terms of trigonometric functions. It is
sin²x + cos²x = 1.
Dividing through by either cos²x or sin²x gives
tan²x + 1 = sec²x and 1 + cot²x = csc²x.

Sum and difference formulas
The sum and difference formulas allow expanding the sine, the cosine, and the tangent of a sum or a difference of two angles in terms of sines and cosines and tangents of the angles themselves. These can be derived geometrically, using arguments that date to Ptolemy.
One can also produce them algebraically using Euler's formula.

Sum
sin(x + y) = sin x cos y + cos x sin y,
cos(x + y) = cos x cos y − sin x sin y,
tan(x + y) = (tan x + tan y)/(1 − tan x tan y).

Difference
sin(x − y) = sin x cos y − cos x sin y,
cos(x − y) = cos x cos y + sin x sin y,
tan(x − y) = (tan x − tan y)/(1 + tan x tan y).

When the two angles are equal, the sum formulas reduce to simpler equations known as the double-angle formulae. These identities can be used to derive the product-to-sum identities. By setting t = tan(x/2), all trigonometric functions of x can be expressed as rational fractions of t:
sin x = 2t/(1 + t²), cos x = (1 − t²)/(1 + t²), tan x = 2t/(1 − t²).
Together with
dx = 2 dt/(1 + t²),
this is the tangent half-angle substitution, which reduces the computation of integrals and antiderivatives of trigonometric functions to that of rational fractions.

Derivatives and antiderivatives
The derivatives of trigonometric functions result from those of sine and cosine by applying the quotient rule. The values given for the antiderivatives in the following table can be verified by differentiating them. The number C is a constant of integration:
d/dx sin x = cos x, ∫ sin x dx = −cos x + C
d/dx cos x = −sin x, ∫ cos x dx = sin x + C
d/dx tan x = sec²x, ∫ tan x dx = ln|sec x| + C
d/dx cot x = −csc²x, ∫ cot x dx = ln|sin x| + C
d/dx sec x = sec x tan x, ∫ sec x dx = ln|sec x + tan x| + C
d/dx csc x = −csc x cot x, ∫ csc x dx = −ln|csc x + cot x| + C
Note: For 0 < x < π the integral of csc x can also be written as −arsinh(cot x), and the integral of sec x for −π/2 < x < π/2 as arsinh(tan x), where arsinh is the inverse hyperbolic sine.

Alternatively, the derivatives of the 'co-functions' can be obtained using trigonometric identities and the chain rule:
d/dx cos x = d/dx sin(π/2 − x) = −cos(π/2 − x) = −sin x,
d/dx csc x = d/dx sec(π/2 − x) = −sec(π/2 − x) tan(π/2 − x) = −csc x cot x,
d/dx cot x = d/dx tan(π/2 − x) = −sec²(π/2 − x) = −csc²x.

Inverse functions
The trigonometric functions are periodic, and hence not injective, so strictly speaking, they do not have an inverse function. However, on each interval on which a trigonometric function is monotonic, one can define an inverse function, and this defines inverse trigonometric functions as multivalued functions. To define a true inverse function, one must restrict the domain to an interval where the function is monotonic, and is thus bijective from this interval to its image by the function. The common choice for this interval, called the set of principal values, is given in the following table. As usual, the inverse trigonometric functions are denoted with the prefix "arc" before the name or its abbreviation of the function.

The notations sin⁻¹, cos⁻¹, etc. are often used for arcsin and arccos, etc. When this notation is used, inverse functions could be confused with multiplicative inverses. The notation with the "arc" prefix avoids such a confusion, though "arcsec" for arcsecant can be confused with "arcsecond". Just like the sine and cosine, the inverse trigonometric functions can also be expressed in terms of infinite series. They can also be expressed in terms of complex logarithms.

Applications
Angles and sides of a triangle
In this section A, B, C denote the three (interior) angles of a triangle, and a, b, c denote the lengths of the respective opposite edges. They are related by various formulas, which are named by the trigonometric functions they involve.

Law of sines
The law of sines states that for an arbitrary triangle with sides a, b, and c and angles opposite those sides A, B and C:
a/sin A = b/sin B = c/sin C = abc/(2Δ),
where Δ is the area of the triangle, or, equivalently,
a/sin A = b/sin B = c/sin C = 2R,
where R is the triangle's circumradius. It can be proved by dividing the triangle into two right ones and using the above definition of sine. The law of sines is useful for computing the lengths of the unknown sides in a triangle if two angles and one side are known. This is a common situation occurring in triangulation, a technique to determine unknown distances by measuring two angles and an accessible enclosed distance.

Law of cosines
The law of cosines (also known as the cosine formula or cosine rule) is an extension of the Pythagorean theorem:
c² = a² + b² − 2ab cos C,
or equivalently,
cos C = (a² + b² − c²)/(2ab).
In this formula the angle at C is opposite to the side c. This theorem can be proved by dividing the triangle into two right ones and using the Pythagorean theorem. The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosines of an angle (and consequently the angles themselves) if the lengths of all the sides are known.
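A small sketch combining the two laws above to solve an SAS triangle (the sides a, b and the included angle C are chosen arbitrarily for illustration):

```python
import math

# Given two sides and the included angle (SAS), solve the triangle.
a, b, C = 3.0, 4.0, math.radians(60)

c = math.sqrt(a**2 + b**2 - 2*a*b*math.cos(C))  # law of cosines
A = math.asin(a * math.sin(C) / c)              # law of sines (A is acute here)
B = math.pi - A - C                             # the angles sum to pi

print(f"c = {c:.4f}")                           # c = 3.6056
print(f"A = {math.degrees(A):.2f} deg, B = {math.degrees(B):.2f} deg")
```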
The law of cosines can be used to determine a side of a triangle if two sides and the angle between them are known. It can also be used to find the cosine of an angle (and consequently the angle itself) if the lengths of all the sides are known. Law of tangents The law of tangents says that: (a − b) / (a + b) = tan((A − B)/2) / tan((A + B)/2). Law of cotangents If s is the triangle's semiperimeter, (a + b + c)/2, and r is the radius of the triangle's incircle, then rs is the triangle's area. Therefore Heron's formula implies that: r = √((s − a)(s − b)(s − c) / s). The law of cotangents says that: cot(A/2) = (s − a) / r. It follows that cot(A/2) + cot(B/2) + cot(C/2) = s / r. Periodic functions The trigonometric functions are also important in physics. The sine and the cosine functions, for example, are used to describe simple harmonic motion, which models many natural phenomena, such as the movement of a mass attached to a spring and, for small angles, the pendular motion of a mass hanging by a string. The sine and cosine functions are one-dimensional projections of uniform circular motion. Trigonometric functions also prove to be useful in the study of general periodic functions. The characteristic wave patterns of periodic functions are useful for modeling recurring phenomena such as sound or light waves. Under rather general conditions, a periodic function f(t) can be expressed as a sum of sine waves or cosine waves in a Fourier series. Denoting the sine or cosine basis functions by φₖ, the expansion of the periodic function takes the form: f(t) = c₁φ₁(t) + c₂φ₂(t) + ⋯. For example, the square wave can be written as the Fourier series (4/π)(sin t + sin 3t/3 + sin 5t/5 + ⋯). In the animation of a square wave at top right it can be seen that just a few terms already produce a fairly good approximation. The superposition of several terms in the expansion of a sawtooth wave is shown underneath. History While the early study of trigonometry can be traced to antiquity, the trigonometric functions as they are in use today were developed in the medieval period. The chord function was discovered by Hipparchus of Nicaea (180–125 BCE) and Ptolemy of Roman Egypt (90–165 CE). The functions of sine and versine (1 − cosine) can be traced back to the jyā and koti-jyā functions used in Gupta period Indian astronomy (Aryabhatiya, Surya Siddhanta), via translation from Sanskrit to Arabic and then from Arabic to Latin. (See Aryabhata's sine table.) All six trigonometric functions in current use were known in Islamic mathematics by the 9th century, as was the law of sines, used in solving triangles. With the exception of the sine (which was adopted from Indian mathematics), the other five modern trigonometric functions were discovered by Persian and Arab mathematicians, including the cosine, tangent, cotangent, secant and cosecant. Al-Khwārizmī (c. 780–850) produced tables of sines, cosines and tangents. Circa 830, Habash al-Hasib al-Marwazi discovered the cotangent, and produced tables of tangents and cotangents. Muhammad ibn Jābir al-Harrānī al-Battānī (853–929) discovered the reciprocal functions of secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°. The trigonometric functions were later studied by mathematicians including Omar Khayyám, Bhāskara II, Nasir al-Din al-Tusi, Jamshīd al-Kāshī (14th century), Ulugh Beg (14th century), Regiomontanus (1464), Rheticus, and Rheticus' student Valentinus Otho. Madhava of Sangamagrama (c. 1400) made early strides in the analysis of trigonometric functions in terms of infinite series. (See Madhava series and Madhava's sine table.)
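As a numerical aside to the square-wave expansion given above, the following Python sketch sums the first few terms of the series at a sample point; the evaluation point and term counts are arbitrary choices for illustration.

import math

def square_wave_partial_sum(t, n_terms):
    # Partial sum of (4/pi) * (sin t + sin 3t/3 + sin 5t/5 + ...)
    return (4 / math.pi) * sum(
        math.sin((2*k - 1) * t) / (2*k - 1) for k in range(1, n_terms + 1)
    )

for n in (1, 3, 10, 100):
    print(n, round(square_wave_partial_sum(math.pi / 2, n), 4))
# At t = pi/2 the square wave equals 1; the partial sums approach 1 as n grows.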
The tangent function was brought to Europe by Giovanni Bianchini in 1467 in trigonometry tables he created to support the calculation of stellar coordinates. The terms tangent and secant were first introduced by the Danish mathematician Thomas Fincke in his book Geometria rotundi (1583). The 17th century French mathematician Albert Girard made the first published use of the abbreviations sin, cos, and tan in his book Trigonométrie. In a paper published in 1682, Gottfried Leibniz proved that sin x is not an algebraic function of x. Though introduced as ratios of sides of a right triangle, and thus appearing to be rational functions, Leibniz's result established that they are actually transcendental functions of their argument. The task of assimilating circular functions into algebraic expressions was accomplished by Euler in his Introduction to the Analysis of the Infinite (1748). His method was to show that the sine and cosine functions are alternating series formed from the even and odd terms respectively of the exponential series. He presented "Euler's formula", as well as near-modern abbreviations (sin., cos., tang., cot., sec., and cosec.). A few functions were common historically, but are now seldom used, such as the chord, versine (which appeared in the earliest tables), haversine, coversine, half-tangent (tangent of half an angle), and exsecant. List of trigonometric identities shows more relations between these functions. Historically, trigonometric functions were often combined with logarithms in compound functions like the logarithmic sine, logarithmic cosine, logarithmic secant, logarithmic cosecant, logarithmic tangent and logarithmic cotangent. Etymology The word sine derives from Latin sinus, meaning "bend; bay", and more specifically "the hanging fold of the upper part of a toga", "the bosom of a garment", which was chosen as the translation of what was interpreted as the Arabic word jaib, meaning "pocket" or "fold" in the twelfth-century translations of works by Al-Battani and al-Khwārizmī into Medieval Latin. The choice was based on a misreading of the Arabic written form j-y-b, which itself originated as a transliteration from Sanskrit jīvā, which along with its synonym jyā (the standard Sanskrit term for the sine) translates to "bowstring", being in turn adopted from Ancient Greek χορδή "string". The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans—"cutting"—since the line cuts the circle. The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation for the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly. See also Bhāskara I's sine approximation formula Small-angle approximation Differentiation of trigonometric functions Generalized trigonometry Generating trigonometric tables List of integrals of trigonometric functions List of periodic functions Polar sine – a generalization to vertex angles Notes References Lars Ahlfors, Complex Analysis: an introduction to the theory of analytic functions of one complex variable, second edition, McGraw-Hill Book Company, New York, 1966. Boyer, Carl B., A History of Mathematics, John Wiley & Sons, Inc., 2nd edition (1991). Gal, Shmuel and Bachelis, Boris. An accurate elementary mathematical library for the IEEE floating point standard, ACM Transactions on Mathematical Software (1991).
Joseph, George G., The Crest of the Peacock: Non-European Roots of Mathematics, 2nd ed., Penguin Books, London (2000). Kantabutra, Vitit, "On hardware for computing exponential and trigonometric functions," IEEE Trans. Computers 45 (3), 328–339 (1996). Maor, Eli, Trigonometric Delights, Princeton Univ. Press (1998); reprint edition (2002). Needham, Tristan, "Preface" to Visual Complex Analysis, Oxford University Press (1999). O'Connor, J. J., and E. F. Robertson, "Trigonometric functions", MacTutor History of Mathematics archive (1996). O'Connor, J. J., and E. F. Robertson, "Madhava of Sangamagramma", MacTutor History of Mathematics archive (2000). Pearce, Ian G., "Madhava of Sangamagramma", MacTutor History of Mathematics archive (2002). Weisstein, Eric W., "Tangent" from MathWorld, accessed 21 January 2006. External links Visionlearning Module on Wave Mathematics GonioLab Visualization of the unit circle, trigonometric and hyperbolic functions q-Sine Article about the q-analog of sin at MathWorld q-Cosine Article about the q-analog of cos at MathWorld Analytic functions Angle Ratios
Trigonometric functions
[ "Physics", "Mathematics" ]
7,011
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Arithmetic", "Wikipedia categories named after physical quantities", "Angle", "Ratios" ]
30,369
https://en.wikipedia.org/wiki/Thermochemistry
Thermochemistry is the study of the heat energy which is associated with chemical reactions and/or phase changes such as melting and boiling. A reaction may release or absorb energy, and a phase change may do the same. Thermochemistry focuses on the energy exchange between a system and its surroundings in the form of heat. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction. In combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable. Endothermic reactions absorb heat, while exothermic reactions release heat. Thermochemistry coalesces the concepts of thermodynamics with the concept of energy in the form of chemical bonds. The subject commonly includes calculations of such quantities as heat capacity, heat of combustion, heat of formation, enthalpy, entropy, and free energy. Thermochemistry is one part of the broader field of chemical thermodynamics, which deals with the exchange of all forms of energy between system and surroundings, including not only heat but also various forms of work, as well as the exchange of matter. When all forms of energy are considered, the concepts of exothermic and endothermic reactions are generalized to exergonic reactions and endergonic reactions. History Thermochemistry rests on two generalizations. Stated in modern terms, they are as follows: Lavoisier and Laplace's law (1780): The energy change accompanying any transformation is equal and opposite to the energy change accompanying the reverse process. Hess' law of constant heat summation (1840): The energy change accompanying any transformation is the same whether the process occurs in one step or many. These statements preceded the first law of thermodynamics (1845) and helped in its formulation. Thermochemistry also involves the measurement of the latent heat of phase transitions. Joseph Black had already introduced the concept of latent heat in 1761, based on the observation that heating ice at its melting point did not raise the temperature but instead caused some ice to melt. Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction is given by the difference in heat capacity between products and reactants: dΔH / dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature. Calorimetry The measurement of heat changes is performed using calorimetry, usually with an enclosed chamber within which the change to be examined occurs. The temperature of the chamber is monitored either using a thermometer or thermocouple, and the temperature is plotted against time to give a graph from which fundamental quantities can be calculated. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information, one example being the differential scanning calorimeter. Systems Several thermodynamic definitions are very useful in thermochemistry. A system is the specific portion of the universe that is being studied. Everything outside the system is considered the surroundings or environment.
A system may be: a (completely) isolated system which can exchange neither energy nor matter with the surroundings, such as an insulated bomb calorimeter a thermally isolated system which can exchange mechanical work but not heat or matter, such as an insulated closed piston or balloon a mechanically isolated system which can exchange heat but not mechanical work or matter, such as an uninsulated bomb calorimeter a closed system which can exchange energy but not matter, such as an uninsulated closed piston or balloon an open system which can exchange both matter and energy with the surroundings, such as a pot of boiling water Processes A system undergoes a process when one or more of its properties changes. A process relates to the change of state. An isothermal (same-temperature) process occurs when the temperature of the system remains constant. An isobaric (same-pressure) process occurs when the pressure of the system remains constant. A process is adiabatic when no heat exchange occurs. See also Calorimetry Chemical kinetics Cryochemistry Differential scanning calorimetry Isodesmic reaction Important publications in thermochemistry Photoelectron photoion coincidence spectroscopy Principle of maximum work Reaction Calorimeter Thermodynamic databases for pure substances Thermodynamics Thomsen-Berthelot principle Julius Thomsen References External links Physical chemistry Branches of thermodynamics
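As a worked example of Kirchhoff's relation dΔH / dT = ΔCp described above, the following Python sketch integrates it under the simplifying assumption that ΔCp is constant over the temperature interval; the reaction data are hypothetical values chosen only for illustration.

# Assumed (hypothetical) data for some reaction:
dH_ref = -92.2e3    # heat of reaction at the reference temperature, J/mol
T_ref = 298.15      # reference temperature, K
dCp = -45.0         # Cp(products) - Cp(reactants), J/(mol*K), assumed constant

def heat_of_reaction(T):
    # Integrating d(dH)/dT = dCp with constant dCp gives
    # dH(T) = dH(T_ref) + dCp * (T - T_ref)
    return dH_ref + dCp * (T - T_ref)

print(heat_of_reaction(400.0))  # heat of reaction at 400 K, J/mol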
Thermochemistry
[ "Physics", "Chemistry" ]
926
[ "Applied and interdisciplinary physics", "Thermochemistry", "Thermodynamics", "nan", "Branches of thermodynamics", "Physical chemistry" ]
30,400
https://en.wikipedia.org/wiki/Torque
In physics and mechanics, torque is the rotational analogue of linear force. It is also referred to as the moment of force (also abbreviated to moment). The symbol for torque is typically τ, the lowercase Greek letter tau. When being referred to as moment of force, it is commonly denoted by M. Just as a linear force is a push or a pull applied to a body, a torque can be thought of as a twist applied to an object with respect to a chosen point; for example, driving a screw uses torque, which is applied by the screwdriver rotating around its axis to the drive on the screw's head. History The term torque (from Latin torquere, 'to twist') is said to have been suggested by James Thomson and appeared in print in April, 1884. Usage is attested the same year by Silvanus P. Thompson in the first edition of Dynamo-Electric Machinery. Thompson motivates the term as follows: Today, torque is referred to using different vocabulary depending on geographical location and field of study. This article follows the definition used in US physics in its usage of the word torque. In the UK and in US mechanical engineering, torque is referred to as moment of force, usually shortened to moment. This terminology can be traced back to at least 1811 in Siméon Denis Poisson's Traité de mécanique. An English translation of Poisson's work appears in 1842. Definition and relation to other physical quantities A force applied perpendicularly to a lever multiplied by its distance from the lever's fulcrum (the length of the lever arm) is its torque. Therefore, torque is defined as the product of the magnitude of the perpendicular component of the force and the distance of the line of action of a force from the point around which it is being determined. In three dimensions, the torque is a pseudovector; for point particles, it is given by the cross product of the displacement vector and the force vector. The direction of the torque can be determined by using the right hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque. It follows that the torque vector is perpendicular to both the position and force vectors and defines the plane in which the two vectors lie. The resulting torque vector direction is determined by the right-hand rule. Therefore any force directed parallel to the particle's position vector does not produce a torque. The magnitude of torque applied to a rigid body depends on three quantities: the force applied, the lever arm vector connecting the point about which the torque is being measured to the point of force application, and the angle between the force and lever arm vectors. In symbols: τ = r × F, with magnitude τ = rF sin θ = rF⊥, where τ is the torque vector and τ is the magnitude of the torque, r is the position vector (a vector from the point about which the torque is being measured to the point where the force is applied) and r is the magnitude of the position vector, F is the force vector, F is the magnitude of the force vector and F⊥ is the amount of force directed perpendicularly to the position of the particle, × denotes the cross product, which produces a vector that is perpendicular both to r and to F following the right-hand rule, and θ is the angle between the force vector and the lever arm vector. The SI unit for torque is the newton-metre (N⋅m). For more on the units of torque, see the Units section below.
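A minimal numeric sketch of the cross-product definition, in Python; the lever arm and force values are hypothetical and chosen to mirror the perpendicular-force discussion above.

def cross(r, F):
    # Cross product r x F of two 3-vectors: the torque pseudovector.
    return (r[1]*F[2] - r[2]*F[1],
            r[2]*F[0] - r[0]*F[2],
            r[0]*F[1] - r[1]*F[0])

r = (0.5, 0.0, 0.0)   # lever arm along x, metres
F = (0.0, 10.0, 0.0)  # 10 N force along y, perpendicular to the lever

print(cross(r, F))    # (0.0, 0.0, 5.0): 5 N·m about the z-axis (right-hand rule)

# A force parallel to the position vector produces no torque:
print(cross(r, (10.0, 0.0, 0.0)))  # (0.0, 0.0, 0.0)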
Relationship with the angular momentum The net torque on a body determines the rate of change of the body's angular momentum: τ = dL/dt, where L is the angular momentum vector and t is time. For the motion of a point particle, L = Iω, where I is the moment of inertia and ω is the orbital angular velocity pseudovector. It follows, using the derivative of a vector, that τ = dL/dt = d(Iω)/dt. This equation is the rotational analogue of Newton's second law for point particles, and is valid for any type of trajectory. In some simple cases, like a rotating disc where the moment of inertia about the rotating axis is constant, the rotational Newton's second law can be written τ = Iα, where α = dω/dt is the angular acceleration. Proof of the equivalence of definitions The definition of angular momentum for a single point particle is: L = r × p, where p is the particle's linear momentum and r is the position vector from the origin. The time-derivative of this is: dL/dt = r × dp/dt + (dr/dt) × p. This result can easily be proven by splitting the vectors into components and applying the product rule. But because the rate of change of linear momentum is force F and the rate of change of position is velocity v, dL/dt = r × F + v × p. The cross product of momentum with its associated velocity is zero because velocity and momentum are parallel, so the second term vanishes. Therefore, torque on a particle is equal to the first derivative of its angular momentum with respect to time. If multiple forces are applied, according to Newton's second law it follows that dL/dt = r × F_net = τ_net. This is a general proof for point particles, but it can be generalized to a system of point particles by applying the above proof to each of the point particles and then summing over all the point particles. Similarly, the proof can be generalized to a continuous mass by applying the above proof to each point within the mass, and then integrating over the entire mass. Derivatives of torque In physics, rotatum is the derivative of torque with respect to time: P = dτ/dt, where τ is torque. This word is derived from the Latin rotātum, meaning 'to rotate'; the term rotatum is not universally recognized but is commonly used. There is not a universally accepted lexicon to indicate the successive derivatives of rotatum, even if sometimes various proposals have been made. Using the cross product definition of torque, an alternative expression for rotatum is: P = r × dF/dt + (dr/dt) × F. Because the rate of change of force is yank Y and the rate of change of position is velocity v, the expression can be further simplified to: P = r × Y + v × F. Relationship with power and energy The law of conservation of energy can also be used to understand torque. If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through an angular displacement, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, the work W can be expressed as W = ∫ τ dθ, integrated from θ₁ to θ₂, where τ is torque, and θ₁ and θ₂ represent (respectively) the initial and final angular positions of the body. It follows from the work–energy principle that W also represents the change in the rotational kinetic energy Er of the body, given by Er = ½Iω², where I is the moment of inertia of the body and ω is its angular speed. Power is the work per unit time, given by P = τ · ω, where P is power, τ is torque, ω is the angular velocity, and · represents the scalar product. Algebraically, the equation may be rearranged to compute torque for a given angular speed and power output.
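A quick numerical check of the work–energy relation just stated, in Python, assuming a constant torque and a fixed moment of inertia (all values hypothetical):

I = 2.0        # moment of inertia, kg*m^2
tau = 4.0      # constant torque, N*m
omega1 = 1.0   # initial angular speed, rad/s
dtheta = 3.0   # angular displacement, rad

alpha = tau / I                             # tau = I * alpha
omega2_sq = omega1**2 + 2 * alpha * dtheta  # kinematics for constant alpha
work = tau * dtheta                         # W = integral of tau dtheta = tau * dtheta
dE_rot = 0.5 * I * (omega2_sq - omega1**2)  # change in rotational kinetic energy

print(work, dE_rot)  # both 12.0 J, as the work-energy principle requires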
The power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any). Proof The work done by a variable force acting over a finite linear displacement is given by integrating the force with respect to an elemental linear displacement: W = ∫ F · ds. However, the infinitesimal linear displacement ds is related to a corresponding angular displacement dθ and the radius vector r as ds = dθ × r. Substitution in the above expression for work gives W = ∫ F · (dθ × r). The expression inside the integral is a scalar triple product F · (dθ × r) = (r × F) · dθ, but as per the definition of torque, r × F = τ, and since the parameter of integration has been changed from linear displacement to angular displacement, the equation becomes W = ∫ τ · dθ. If the torque and the angular displacement are in the same direction, then the scalar product reduces to a product of magnitudes; i.e., τ · dθ = τ dθ, giving W = ∫ τ dθ. Principle of moments The principle of moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name) states that the resultant torque due to several forces applied about a point is equal to the sum of the contributing torques: τ = Σ τᵢ. From this it follows that the torques resulting from N number of forces acting around a pivot on an object are balanced when Σ τᵢ = 0. Units Torque has the dimension of force times distance, symbolically L²MT⁻², and those fundamental dimensions are the same as that for energy or work. Official SI literature indicates the newton-metre, properly denoted N⋅m, as the unit for torque; although this is dimensionally equivalent to the joule, the joule is not used for torque. In the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. This means that the dimensional equivalence of the newton-metre and the joule may be applied in the former but not in the latter case. This problem is addressed in orientational analysis, which treats the radian as a base unit rather than as a dimensionless unit. The traditional imperial units for torque are the pound foot (lbf-ft), or, for small values, the pound inch (lbf-in). In the US, torque is most commonly referred to as the foot-pound (denoted as either lb-ft or ft-lb) and the inch-pound (denoted as in-lb). Practitioners depend on context and the hyphen in the abbreviation to know that these refer to torque and not to energy or moment of mass (as the symbolism ft-lb would properly imply). Conversion to other units A conversion factor may be necessary when using different units of power or torque. For example, if rotational speed (unit: revolution per minute or second) is used in place of angular speed (unit: radian per second), we must multiply by 2π radians per revolution. In the following formulas, P is power, τ is torque, and ν (Greek letter nu) is rotational speed. Showing units: P = τ · 2π · ν. Dividing by 60 seconds per minute gives us the following: P = τ · 2π · ν / 60, where rotational speed is in revolutions per minute (rpm, rev/min). Some people (e.g., American automotive engineers) use horsepower (mechanical) for power, foot-pounds (lbf⋅ft) for torque and rpm for rotational speed. This results in the formula changing to: P (hp) = τ (lbf⋅ft) × ν (rpm) / 5,252. The constant below (in foot-pounds per minute) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550.
The use of other units (e.g., BTU per hour for power) would require a different custom conversion factor. Derivation For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered. That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time. By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power: power = force × linear speed = (torque ÷ radius) × (radius × angular speed) = torque × angular speed. The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit of time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give: power = torque × 2π × rotational speed. If torque is in newton-metres and rotational speed in revolutions per second, the above equation gives power in newton-metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft⋅lbf/min per horsepower: power (hp) = torque × 2π × rotational speed ÷ 33,000, because 33,000 ft⋅lbf/min = 1 horsepower. Special cases and other facts Moment arm formula A very useful special case, often given as the definition of torque in fields other than physics, is as follows: torque = (moment arm) × force. The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force: torque = (distance to centre) × force. For example, if a person places a force of 10 N at the terminal end of a wrench that is 0.5 m long (or a force of 10 N acting 0.5 m from the twist point of a wrench of any length), the torque will be 5 N⋅m – assuming that the person moves the wrench by applying force in the plane of movement and perpendicular to the wrench. Static equilibrium For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems in two-dimensions, three equations are used. Net force versus torque When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of the point of reference. If the net force F is not zero, and τ₁ is the torque measured from r₁, then the torque measured from r₂ is τ₂ = τ₁ + (r₁ − r₂) × F. Machine torque Torque forms part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by the angular speed of the drive shaft.
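The torque–power relations above can be checked numerically; a short Python sketch, with hypothetical torque and speed values:

import math

def power_watts(torque_nm, rpm):
    # P = tau * omega, converting rev/min to rad/s (multiply by 2*pi/60)
    return torque_nm * rpm * 2 * math.pi / 60.0

def power_hp(torque_lbf_ft, rpm):
    # Imperial form: hp = tau * 2*pi * rpm / 33,000
    return torque_lbf_ft * 2 * math.pi * rpm / 33000.0

print(power_watts(250.0, 3000.0))  # ~78,540 W for 250 N·m at 3000 rpm
print(power_hp(300.0, 5252.0))     # ~300 hp: torque and horsepower curves cross near 5252 rpm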
Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). One can measure the varying torque output over that range with a dynamometer, and show it as a torque curve. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam-engines and electric motors can start heavy loads from zero rpm without a clutch. In practice, the relationship between power and torque can be observed in bicycles: Bicycles are typically composed of two road wheels, front and rear gears (referred to as sprockets) meshing with a chain, and a derailleur mechanism if the bicycle's transmission system allows multiple gear ratios to be used (i.e. multi-speed bicycle), all of which are attached to the frame. A cyclist, the person who rides the bicycle, provides the input power by turning pedals, thereby cranking the front sprocket (commonly referred to as chainring). The input power provided by the cyclist is equal to the product of angular speed (i.e. the number of pedal revolutions per minute times 2π) and the torque at the spindle of the bicycle's crankset. The bicycle's drivetrain transmits the input power to the road wheel, which in turn conveys the received power to the road as the output power of the bicycle. Depending on the gear ratio of the bicycle, a (torque, angular speed)input pair is converted to a (torque, angular speed)output pair. By using a larger rear gear, or by switching to a lower gear in multi-speed bicycles, angular speed of the road wheels is decreased while the torque is increased, product of which (i.e. power) does not change. Torque multiplier Torque can be multiplied via three methods: by locating the fulcrum such that the length of a lever is increased; by using a longer lever; or by the use of a speed-reducing gearset or gear box. Such a mechanism multiplies torque, as rotation rate is reduced. See also References External links "Horsepower and Torque" An article showing how power, torque, and gearing affect a vehicle's performance. Torque and Angular Momentum in Circular Motion on Project PHYSNET. An interactive simulation of torque Torque Unit Converter A feel for torque An order-of-magnitude interactive. Mechanical quantities Rotation Force Moment (physics)
Torque
[ "Physics", "Mathematics" ]
3,337
[ "Torque", "Physical phenomena", "Force", "Mechanical quantities", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics", "Wikipedia categories named after physical quantities", "Matter", "Moment (physics)" ]
30,403
https://en.wikipedia.org/wiki/Turing%20machine
A Turing machine is a mathematical model of computation describing an abstract machine that manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, it is capable of implementing any computer algorithm. The machine operates on an infinite memory tape divided into discrete cells, each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right, or halts the computation. The choice of which replacement symbol to write, which direction to move the head, and whether to halt is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read. As with a real computer program, it is possible for a Turing machine to go into an infinite loop which will never halt. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review. With this model, Turing was able to answer two questions in the negative: Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)? Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol? Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem, or 'decision problem' (whether every mathematical statement is provable or disprovable). Turing machines proved the existence of fundamental limitations on the power of mechanical computation. While they can express arbitrary computations, their minimalist design makes them too slow for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory. Turing completeness is the ability for a computational model or a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored. Overview A Turing machine is an idealised model of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. Typically, the sequential memory is represented as a tape of infinite length on which the machine can perform read and write operations. In the context of formal language theory, a Turing machine (automaton) is capable of enumerating some arbitrary subset of valid strings of an alphabet. A set of strings which can be enumerated in this manner is called a recursively enumerable language. The Turing machine can equivalently be defined as a model that recognises valid input strings, rather than enumerating output strings. 
Given a Turing machine M and an arbitrary string s, it is generally not possible to decide whether M will eventually produce s. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). Another mathematical formalism, lambda calculus, with a similar "universal" nature was introduced by Alonzo Church. Church's work intertwined with Turing's to form the basis for the Church–Turing thesis. This thesis states that Turing machines, lambda calculus, and other similar formalisms of computation do indeed capture the informal notion of effective methods in logic and mathematics and thus provide a model through which one can reason about an algorithm or "mechanical procedure" in a mathematically precise way without being tied to any particular formalism. Studying the abstract properties of Turing machines has yielded many insights into computer science, computability theory, and complexity theory. Physical description In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consists of: Description The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On Computable Numbers, with an Application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner"). More explicitly, a Turing machine consists of: A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendable to the left and to the right, so that the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol; the tape extends or is indefinitely extensible to the right. A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time. In some models the head moves and the tape is stationary. A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialised. These states, writes Turing, replace the "state of mind" a person performing computations would ordinarily be in. 
A finite table of instructions that, given the state (qi) the machine is currently in and the symbol (aj) it is reading on the tape (the symbol currently under the head), tells the machine to do the following in sequence (for the 5-tuple models): Either erase or write a symbol (replacing aj with aj1). Move the head (which is described by dk and can have values: 'L' for one step left or 'R' for one step right or 'N' for staying in the same place). Assume the same or a new state as prescribed (go to state qi1). In the 4-tuple models, erasing or writing a symbol (aj1) and moving the head left or right (dk) are specified as separate instructions. The table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled. Every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space. Formal definition Following Hopcroft & Ullman (1979), a (one-tape) Turing machine can be formally defined as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q0, F⟩ where Γ is a finite, non-empty set of tape alphabet symbols; b ∈ Γ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation); Σ ⊆ Γ \ {b} is the set of input symbols, that is, the set of symbols allowed to appear in the initial tape contents; Q is a finite, non-empty set of states; q0 ∈ Q is the initial state; F ⊆ Q is the set of final states or accepting states. The initial tape contents is said to be accepted by M if it eventually halts in a state from F. δ : (Q \ F) × Γ → Q × Γ × {L, R} is a partial function called the transition function, where L is left shift, R is right shift. If δ is not defined on the current state and the current tape symbol, then the machine halts; intuitively, the transition function specifies the next state transited from the current state, which symbol to overwrite the current symbol pointed by the head, and the next head movement. A variant allows "no shift", say N, as a third element of the set of directions {L, R}. The 7-tuple for the 3-state busy beaver looks like this (see more about this busy beaver at Turing machine examples): Q = {A, B, C, HALT} (states); Γ = {0, 1} (tape alphabet symbols); b = 0 (blank symbol); Σ = {1} (input symbols); q0 = A (initial state); F = {HALT} (final states); δ = see state-table below (transition function). Initially all tape cells are marked with 0. Additional details required to visualise or implement Turing machines In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like." For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely. The shift left and shift right operations may shift the tape head across the tape, but when actually building a Turing machine it is more practical to make the tape slide back and forth under the head instead.
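As one concrete way of settling such implementation details, here is a minimal Python sketch of the 7-tuple definition above, using a dictionary as an unbounded tape (unwritten cells default to the blank symbol 0) and a commonly cited transition table for the 3-state busy beaver:

# delta: (state, scanned symbol) -> (symbol to write, head move, next state)
delta = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'HALT'),
}

tape, head, state, steps = {}, 0, 'A', 0
while state != 'HALT':
    symbol = tape.get(head, 0)                   # blank cells read as 0
    write, move, state = delta[(state, symbol)]  # look up the transition
    tape[head] = write                           # overwrite the scanned cell
    head += move                                 # move the head one cell
    steps += 1

print(steps, sum(tape.values()))  # 13 transitions; six 1s left on the tape

Under this convention the machine executes 13 transitions and leaves six 1s on the tape; published step counts for this machine differ slightly depending on how the halting move is counted.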
The tape can be finite, and automatically extended with blanks as needed (which is closest to the mathematical definition), but it is more common to think of it as stretching infinitely at one or both ends and being pre-filled with blanks except on the explicitly given finite fragment the tape head is on (this is, of course, not implementable in practice). The tape cannot be fixed in length, since that would not correspond to the given definition and would seriously limit the range of computations the machine can perform to those of a linear bounded automaton if the tape was proportional to the input size, or finite-state machine if it was strictly fixed-length. Alternative definitions Definitions in literature sometimes differ slightly, to make arguments or proofs easier or clearer, but this is always done in such a way that the resulting machine has the same computational power. For example, the set of directions could be changed from {L, R} to {L, R, N}, where N ("None" or "No-operation") would allow the machine to stay on the same tape cell instead of moving left or right. This would not increase the machine's computational power. The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in The Undecidable, p. 126–127 and Davis (2000) p. 152): (definition 1): (qi, Sj, Sk/E/N, L/R/N, qm) ( current state qi , symbol scanned Sj , print symbol Sk/erase E/none N , move_tape_one_square left L/right R/none N , new state qm ) Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with new state qm listed immediately after the scanned symbol Sj: (definition 2): (qi, Sj, qm, Sk/E/N, L/R/N) ( current state qi , symbol scanned Sj , new state qm , print symbol Sk/erase E/none N , move_tape_one_square left L/right R/none N ) For the remainder of this article "definition 1" (the Turing/Davis convention) will be used. In the following table, Turing's original model allowed only the first three lines that he called N1, N2, N3 (cf. Turing in The Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf. footnote 12 in Post (1947), The Undecidable, p. 300). The abbreviations are Turing's (The Undecidable, p. 119). Subsequent to Turing's original paper in 1936–1937, machine-models have allowed all nine possible types of five-tuples: Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples. Less frequently the use of 4-tuples is encountered: these represent a further atomization of the Turing instructions (cf. Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); also see more at Post–Turing machine. The "state" The word "state" used in the context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed—i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration", and the machine's (or person's) "state of progress" through the computation—the current state of the total system.
What Turing called "the state formula" includes both the current instruction and all the symbols on the tape: Earlier in his paper Turing carried this even further: he gives an example where he placed a symbol of the current "m-configuration"—the instruction's label—beneath the scanned square, together with all the symbols on the tape (The Undecidable, p. 121); this he calls "the complete configuration" (The Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol. A variant of this is seen in Kleene (1952) where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374–375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149), that is, the instantaneous description is the composite of non-blank symbols to the left, state of the machine, the current symbol scanned by the head, and the non-blank symbols to the right. Example: total state of 3-state 2-symbol busy beaver after 3 "moves" (taken from example "run" in the figure below): 1A1 This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B. "State" in the context of Turing machines should be clarified as to which is being described: the current instruction, or the list of symbols on the tape together with the current instruction, or the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol. Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion. "State" diagrams To the right: the above table as expressed as a "state transition" diagram. Usually large tables are better left as tables (Booth, p. 74). They are more readily simulated by computer in tabular form (Booth, p. 74). However, certain concepts—e.g. machines with "reset" states and machines with repeating patterns (cf. Hill and Peterson p. 244ff)—can be more readily seen when viewed as a drawing. Whether a drawing represents an improvement on its table must be decided by the reader for the particular context. The reader should again be cautioned that such diagrams represent a snapshot of their table frozen in time, not the course ("trajectory") of a computation through time and space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters". The diagram "progress of the computation" shows the three-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft–Ullman "instantaneous description") at each step. 
If the machine were to be stopped and cleared to blank both the "state register" and entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf. Turing (1936) The Undecidable, pp. 139–140). Equivalent models Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesises this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.) A Turing machine is equivalent to a single-stack pushdown automaton (PDA) that has been made more flexible and concise by relaxing the last-in-first-out (LIFO) requirement of its stack. In addition, a Turing machine is also equivalent to a two-stack PDA with standard LIFO semantics, by using one stack to model the tape left of the head and the other stack for the tape to the right. At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model. Common equivalent models are the multi-tape Turing machine, multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM) as opposed to the deterministic Turing machine (DTM) for which the action table has at most one entry for each combination of symbol and state. Read-only, right-moving Turing machines are equivalent to DFAs (as well as NFAs by conversion using the NFA to DFA conversion algorithm). For practical and didactic intentions, the equivalent register machine can be used as a usual assembly programming language. A relevant question is whether or not the computation model represented by concrete programming languages is Turing equivalent. While the computation of a real computer is based on finite states and thus not capable to simulate a Turing machine, programming languages themselves do not necessarily have this limitation. Kirner et al., 2009 have shown that among the general-purpose programming languages some are Turing complete while others are not. For example, ANSI C is not Turing complete, as all instantiations of ANSI C (different instantiations are possible as the standard deliberately leaves certain behaviour undefined for legacy reasons) imply a finite-space memory. This is because the size of memory reference data types, called pointers, is accessible inside the language. However, other programming languages like Pascal do not have this feature, which allows them to be Turing complete in principle. It is just Turing complete in principle, as memory allocation in a programming language is allowed to fail, which means the programming language can be Turing complete when ignoring failed memory allocations, but the compiled programs executable on a real computer cannot. Choice c-machines, oracle o-machines Early in his paper (1936) Turing makes a distinction between an "automatic machine"—its "motion ... completely determined by the configuration" and a "choice machine": Turing (1936) does not elaborate further except in a footnote in which he describes how to use an a-machine to "find all the provable formulae of the [Hilbert] calculus" rather than use a choice machine. 
He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number 2n + i12n-1 + i22n-2 + ... +in completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote ‡, The Undecidable, p. 138) This is indeed the technique by which a deterministic (i.e., a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration. An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle"—an entity unspecified by Turing "apart from saying that it cannot be a machine" (Turing (1939), The Undecidable, p. 166–168). Universal Turing machines As Turing wrote in The Undecidable, p. 128 (italics added): This finding is now taken for granted, but at the time (1936) it was considered astonishing. The model of computation that Turing called his "universal machine"—"U" for short—is considered by some (cf. Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of the stored-program computer. In terms of computational complexity, a multi-tape universal Turing machine need only be slower by logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9) Comparison with real machines Turing machines are more powerful than some other kinds of automata, such as finite-state machines and pushdown automata. According to the Church–Turing thesis, they are as powerful as real machines, and are able to execute any operation that a real program can. What is neglected in this statement is that, because a real machine can only have a finite number of configurations, it is nothing but a finite-state machine, whereas a Turing machine has an unlimited amount of storage space available for its computations. There are a number of ways to explain why Turing machines are useful models of real computers: Anything a real computer can compute, a Turing machine can also compute. For example: "A Turing machine can simulate any type of subroutine found in programming languages, including recursive procedures and any of the known parameter-passing mechanisms" (Hopcroft and Ullman p. 157). A large enough FSA can also model any real computer, disregarding IO. Thus, a statement about the limitations of Turing machines will also apply to real computers. The difference lies only with the ability of a Turing machine to manipulate an unbounded amount of data. However, given a finite amount of time, a Turing machine (like a real machine) can only manipulate a finite amount of data. Like a Turing machine, a real machine can have its storage space enlarged as needed, by acquiring more disks or other storage media. Descriptions of real machine programs using simpler abstract models are often much more complex than descriptions using Turing machines. For example, a Turing machine describing an algorithm may have a few hundred states, while the equivalent deterministic finite automaton (DFA) on a given real machine has quadrillions. This makes the DFA representation infeasible to analyze. Turing machines describe algorithms independent of how much memory they use. 
There is a limit to the memory possessed by any current machine, but this limit can rise arbitrarily in time. Turing machines allow us to make statements about algorithms which will (theoretically) hold forever, regardless of advances in conventional computing machine architecture. Algorithms running on Turing-equivalent abstract machines can have arbitrary-precision data types available and never have to deal with unexpected conditions (including, but not limited to, running out of memory). Limitations Computational complexity theory A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers"—memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook-Rechow (1973); references at random-access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g., the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register-sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model. Interaction In the early days of computing, computer use was typically limited to batch processing, i.e., non-interactive tasks, each producing output data from given input data. Computability theory, which studies computability of functions from inputs to outputs, and for which Turing machines were invented, reflects this practice. Since the 1970s, interactive use of computers became much more common. In principle, it is possible to model this by having an external agent read from the tape and write to it at the same time as a Turing machine, but this rarely matches how interaction actually happens; therefore, when describing interactivity, alternatives such as I/O automata are usually preferred. Comparison with the arithmetic model of computation The arithmetic model of computation differs from the Turing model in two aspects: In the arithmetic model, every real number requires a single memory cell, whereas in the Turing model the storage size of a real number depends on the number of bits required to represent it. In the arithmetic model, every basic arithmetic operation on real numbers (addition, subtraction, multiplication and division) can be done in a single step, whereas in the Turing model the run-time of each arithmetic operation depends on the length of the operands. Some algorithms run in polynomial time in one model but not in the other one. 
For example: The Euclidean algorithm runs in polynomial time in the Turing model, but not in the arithmetic model. The algorithm that reads n numbers and then computes 2^(2^n) by repeated squaring runs in polynomial time in the arithmetic model, but not in the Turing model. This is because the number of bits required to represent the outcome is exponential in the input size. However, if an algorithm runs in polynomial time in the arithmetic model, and in addition, the binary length of all involved numbers is polynomial in the length of the input, then it is always polynomial-time in the Turing model. Such an algorithm is said to run in strongly polynomial time. History Historical background: computational machinery Robin Gandy (1919–1995)—a student of Alan Turing (1912–1954), and his lifelong friend—traces the lineage of the notion of "calculating machine" back to Charles Babbage (circa 1834) and actually proposes "Babbage's Thesis": Gandy's analysis of Babbage's analytical engine describes the following five operations (cf. p. 52–53): The arithmetic functions +, −, ×, where − indicates "proper" subtraction: x − y = 0 if y ≥ x. Any sequence of operations is an operation. Iteration of an operation (repeating n times an operation P). Conditional iteration (repeating n times an operation P conditional on the "success" of test T). Conditional transfer (i.e., conditional "goto"). Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable." (p. 53). He cites other proposals for "universal calculating machines" including those of Percy Ludgate (1909), Leonardo Torres Quevedo (1914), Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), Howard Aiken (1937). However: The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900 With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for No. 10 is as follows: By 1922, this notion of "Entscheidungsproblem" had developed a bit, and H. Behmann stated that By the 1928 international congress of mathematicians, Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121). The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third—the Entscheidungsproblem—had to wait until the mid-1930s. The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable" and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post. 
While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions. And Post had only proposed a definition of calculability and criticised Church's "definition", but had proved nothing. Alan Turing's a-machine In the spring of 1935, Turing as a young Master's student at King's College, Cambridge, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical' ... In his obituary of Turing 1955 Newman writes: Gandy states that: While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function": Alan Turing invented the "a-machine" (automatic machine) in 1936. Turing submitted his paper on 31 May 1936 to the London Mathematical Society for its Proceedings (cf. Hodges 1983:112), but it was published in early 1937 and offprints were available in February 1937 (cf. Hodges 1983:129). It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review. With this model, Turing was able to answer two questions in the negative: Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)? Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol? Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem'). When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine): "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937): Turing's example (his second proof): If one is to ask for a general procedure to tell us: "Does this machine ever print 0", the question is "undecidable". 1937–1970: The "digital computer", the birth of "computer science" In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138). 
While Turing might have been just initially curious and experimenting, quite-earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges p. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), and Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models—but basically all are just multi-tape Turing machines with an arithmetic-like instruction set. 1970–present: as a model of computation Today, the counter, register and random-access machines and their sire the Turing machine continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine. See also Arithmetical hierarchy Bekenstein bound, showing the impossibility of infinite-tape Turing machines of finite size and bounded energy BlooP and FlooP Chaitin's constant or Omega (computer science) for information relating to the halting problem Chinese room Conway's Game of Life, a Turing-complete cellular automaton Digital infinity The Emperor's New Mind Enumerator (in theoretical computer science) Genetix Gödel, Escher, Bach: An Eternal Golden Braid, a famous book that discusses, among other topics, the Church–Turing thesis Halting problem, for more references Harvard architecture Imperative programming Langton's ant and Turmites, simple two-dimensional analogues of the Turing machine List of things named after Alan Turing Modified Harvard architecture Quantum Turing machine Claude Shannon, another leading thinker in information theory Turing machine examples Turing tarpit, any computing system or language that, despite being Turing complete, is generally considered useless for practical computing Unorganised machine, for Turing's very early ideas on neural networks Von Neumann architecture Notes References Primary literature, reprints, and compilations B. Jack Copeland ed. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, Clarendon Press (Oxford University Press), Oxford UK. Contains the Turing papers plus a draft letter to Emil Post re his criticism of "Turing's convention", and Donald W. Davies' Corrections to Turing's Universal Computing Machine Martin Davis (ed.) (1965), The Undecidable, Raven Press, Hewlett, NY. Emil Post (1936), "Finite Combinatory Processes—Formulation 1", Journal of Symbolic Logic, 1, 103–105, 1936. Reprinted in The Undecidable, pp. 289ff. Emil Post (1947), "Recursive Unsolvability of a Problem of Thue", Journal of Symbolic Logic, vol. 12, pp. 1–11. Reprinted in The Undecidable, pp. 293ff. 
In the Appendix of this paper Post comments on and gives corrections to Turing's paper of 1936–1937. In particular see footnote 11, with corrections to the universal computing machine coding, and footnote 14, with comments on Turing's first and second proofs. Alan Turing (1936–1937), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, Series 2, Volume 42, pp. 230–265. Reprinted in The Undecidable, pp. 115–154. Alan Turing, 1948, "Intelligent Machinery." Reprinted in "Cybernetics: Key Papers." Ed. C.R. Evans and A.D.J. Robertson. Baltimore: University Park Press, 1968. p. 31. F. C. Hennie and R. E. Stearns, "Two-tape simulation of multitape Turing machines", JACM, 13(4):533–546, 1966. Computability theory Some parts have been significantly rewritten by Burgess. Presentation of Turing machines in context of Lambek "abacus machines" (cf. Register machine) and recursive functions, showing their equivalence. Taylor L. Booth (1967), Sequential Machines and Automata Theory, John Wiley and Sons, Inc., New York. Graduate level engineering text; ranges over a wide variety of topics, Chapter IX Turing Machines includes some recursion theory. . On pages 12–20 he gives examples of 5-tuple tables for Addition, The Successor Function, Subtraction (x ≥ y), Proper Subtraction (0 if x < y), The Identity Function and various identity functions, and Multiplication. . On pages 90–103 Hennie discusses the UTM with examples and flow-charts, but no actual 'code'. Centered around the issues of machine-interpretation of "languages", NP-completeness, etc. Stephen Kleene (1952), Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam Netherlands, 10th impression (with corrections of 6th reprint 1971). Graduate level text; most of Chapter XIII Computable functions is on Turing machine proofs of computability of recursive functions, etc. . With reference to the role of Turing machines in the development of computation (both hardware and software) see 1.4.5 History and Bibliography pp. 225ff and 2.6 History and Bibliography pp. 456ff. Zohar Manna, 1974, Mathematical Theory of Computation. Reprinted, Dover, 2003. Marvin Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Inc., N.J., 1967. See Chapter 8, Section 8.2 "Unsolvability of the Halting Problem." Chapter 2: Turing machines, pp. 19–56. Hartley Rogers, Jr., Theory of Recursive Functions and Effective Computability, The MIT Press, Cambridge MA, paperback edition 1987, original McGraw-Hill edition 1967 (pbk.) Chapter 3: The Church–Turing Thesis, pp. 125–149. Peter van Emde Boas 1990, Machine Models and Simulations, pp. 3–66, in Jan van Leeuwen, ed., Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, [place?], (Volume A). QA76.H279 1990. Church's thesis Small Turing machines Rogozhin, Yurii, 1998, "A Universal Turing Machine with 22 States and 2 Symbols", Romanian Journal of Information Science and Technology, 1(3), 259–265, 1998. (surveys known results about small universal Turing machines) Stephen Wolfram, 2002, A New Kind of Science, Wolfram Media. Brumfiel, Geoff, "Student snags maths prize", Nature, October 24, 2007. Jim Giles (2007), "Simplest 'universal computer' wins student $25,000", New Scientist, October 24, 2007. Alex Smith, Universality of Wolfram's 2, 3 Turing Machine, Submission for the Wolfram 2, 3 Turing Machine Research Prize. Vaughan Pratt, 2007, "Simple Turing machines, Universality, Encodings, etc.", FOM email list, October 29, 2007. Martin Davis, 2007, "Smallest universal machine" and "Definition of universal Turing machine", FOM email list, October 26–27, 2007. 
Alasdair Urquhart, 2007, "Smallest universal machine", FOM email list, October 26, 2007. Hector Zenil (Wolfram Research), 2007, "smallest universal machine", FOM email list, October 29, 2007. Todd Rowland, 2007, "Confusion on FOM", Wolfram Science message board, October 30, 2007. Olivier and Marc Raynaud, 2014, "A programmable prototype to achieve Turing machines", LIMOS Laboratory of Blaise Pascal University (Clermont-Ferrand, France). Other Robin Gandy, "The Confluence of Ideas in 1936", pp. 51–102 in Rolf Herken, see below. Stephen Hawking (editor), 2005, God Created the Integers: The Mathematical Breakthroughs that Changed History, Running Press, Philadelphia. Includes Turing's 1936–1937 paper, with brief commentary and biography of Turing as written by Hawking. Andrew Hodges (1983), Alan Turing: The Enigma, Simon and Schuster, New York. Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof. Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press, Oxford and New York, 1989 (1990 corrections). Hao Wang, "A variant to Turing's theory of computing machines", Journal of the Association for Computing Machinery (JACM) 4, 63–92 (1957). Charles Petzold, The Annotated Turing, John Wiley & Sons, Inc. Arora, Sanjeev; Barak, Boaz, "Complexity Theory: A Modern Approach", Cambridge University Press, 2009, section 1.4, "Machines as strings and the universal Turing machine" and 1.7, "Proof of theorem 1.9" Kirner, Raimund; Zimmermann, Wolf; Richter, Dirk: "On Undecidability Results of Real Programming Languages", In 15. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS'09), Maria Taferl, Austria, Oct. 2009. External links Turing Machine – Stanford Encyclopedia of Philosophy Turing Machine Causal Networks by Enrique Zeleny as part of the Wolfram Demonstrations Project. 1936 in computing 1937 in computing Educational abstract machines Theoretical computer science Alan Turing Models of computation Formal methods Computability theory English inventions Automata (computation) Formal languages Abstract machines
Turing machine
[ "Mathematics", "Engineering" ]
9,856
[ "Theoretical computer science", "Applied mathematics", "Formal languages", "Mathematical logic", "Software engineering", "Computability theory", "Formal methods" ]
30,426
https://en.wikipedia.org/wiki/Total%20internal%20reflection
In physics, total internal reflection (TIR) is the phenomenon in which waves arriving at the interface (boundary) from one medium to another (e.g., from water to air) are not refracted into the second ("external") medium, but completely reflected back into the first ("internal") medium. It occurs when the second medium has a higher wave speed (i.e., lower refractive index) than the first, and the waves are incident at a sufficiently oblique angle on the interface. For example, the water-to-air surface in a typical fish tank, when viewed obliquely from below, reflects the underwater scene like a mirror with no loss of brightness (Fig.1). TIR occurs not only with electromagnetic waves such as light and microwaves, but also with other types of waves, including sound and water waves. If the waves are capable of forming a narrow beam (Fig.2), the reflection tends to be described in terms of "rays" rather than waves; in a medium whose properties are independent of direction, such as air, water or glass, the "rays" are perpendicular to associated wavefronts. As explained below, total internal reflection occurs when the angle of incidence exceeds a threshold called the critical angle. Refraction is generally accompanied by partial reflection. When waves are refracted from a medium of lower propagation speed (higher refractive index) to a medium of higher propagation speed (lower refractive index)—e.g., from water to air—the angle of refraction (between the outgoing ray and the surface normal) is greater than the angle of incidence (between the incoming ray and the normal). As the angle of incidence approaches the critical angle, the angle of refraction approaches 90°, at which the refracted ray becomes parallel to the boundary surface. As the angle of incidence increases beyond the critical angle, the conditions of refraction can no longer be satisfied, so there is no refracted ray, and the partial reflection becomes total. For visible light, the critical angle is about 49° for incidence from water to air, and about 42° for incidence from common glass to air. Details of the mechanism of TIR give rise to more subtle phenomena. While total reflection, by definition, involves no continuing flow of power across the interface between the two media, the external medium carries a so-called evanescent wave, which travels along the interface with an amplitude that falls off exponentially with distance from the interface. The "total" reflection is indeed total if the external medium is lossless (perfectly transparent), continuous, and of infinite extent, but can be conspicuously less than total if the evanescent wave is absorbed by a lossy external medium ("attenuated total reflectance"), or diverted by the outer boundary of the external medium or by objects embedded in that medium ("frustrated" TIR). Unlike partial reflection between transparent media, total internal reflection is accompanied by a non-trivial phase shift (not just zero or 180°) for each component of polarization (perpendicular or parallel to the plane of incidence), and the shifts vary with the angle of incidence. The explanation of this effect by Augustin-Jean Fresnel, in 1823, added to the evidence in favor of the wave theory of light. The phase shifts are used by Fresnel's invention, the Fresnel rhomb, to modify polarization. 
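As a quick numerical check of the figures just quoted, the following short Python sketch (an illustration added for this discussion, not part of the source) evaluates the critical angle θc = arcsin(n2/n1), a relation derived in the "Critical angle" section below, using common textbook values for the refractive indices:

```python
import math

def critical_angle_deg(n1: float, n2: float) -> float:
    """Critical angle for incidence from a medium of index n1 into one of index n2 (n2 < n1)."""
    return math.degrees(math.asin(n2 / n1))

print(f"water -> air: {critical_angle_deg(1.333, 1.0):.1f} degrees")  # ~48.6, i.e. about 49
print(f"glass -> air: {critical_angle_deg(1.50, 1.0):.1f} degrees")   # ~41.8, i.e. about 42
```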
The efficiency of total internal reflection is exploited by optical fibers (used in telecommunications cables and in image-forming fiberscopes), and by reflective prisms, such as image-erecting Porro/roof prisms for monoculars and binoculars. Optical description Although total internal reflection can occur with any kind of wave that can be said to have oblique incidence, including (e.g.) microwaves and sound waves, it is most familiar in the case of light waves. Total internal reflection of light can be demonstrated using a semicircular-cylindrical block of common glass or acrylic glass. In Fig.3, a "ray box" projects a narrow beam of light (a "ray") radially inward. The semicircular cross-section of the glass allows the incoming ray to remain perpendicular to the curved portion of the air/glass surface, and hence to continue in a straight line towards the flat part of the surface, although its angle with the flat part varies. Where the ray meets the flat glass-to-air interface, the angle between the ray and the normal (perpendicular) to the interface is called the angle of incidence. If this angle is sufficiently small, the ray is partly reflected but mostly transmitted, and the transmitted portion is refracted away from the normal, so that the angle of refraction (between the refracted ray and the normal to the interface) is greater than the angle of incidence. For the moment, let us call the angle of incidence θ and the angle of refraction θt (where t is for transmitted, reserving r for reflected). As θ increases and approaches a certain "critical angle", denoted by θc (or sometimes θcr), the angle of refraction approaches 90° (that is, the refracted ray approaches a tangent to the interface), and the refracted ray becomes fainter while the reflected ray becomes brighter. As θ increases beyond θc, the refracted ray disappears and only the reflected ray remains, so that all of the energy of the incident ray is reflected; this is total internal reflection (TIR). In brief: If θ < θc, the incident ray is split, being partly reflected and partly refracted; If θ > θc, the incident ray suffers total internal reflection (TIR); none of it is transmitted. Critical angle The critical angle is the smallest angle of incidence that yields total reflection, or equivalently the largest angle for which a refracted ray exists. For light waves incident from an "internal" medium with a single refractive index n1, to an "external" medium with a single refractive index n2, the critical angle is given by θc = arcsin(n2/n1), and is defined if n2/n1 ≤ 1. For some other types of waves, it is more convenient to think in terms of propagation velocities rather than refractive indices. The explanation of the critical angle in terms of velocities is more general and will therefore be discussed first. When a wavefront is refracted from one medium to another, the incident (incoming) and refracted (outgoing) portions of the wavefront meet at a common line on the refracting surface (interface). Let this line, denoted by L, move at velocity u across the surface, where u is measured normal to L (Fig.4). Let the incident and refracted wavefronts propagate with normal velocities v1 and v2 respectively, and let them make the dihedral angles θ1 and θ2 respectively with the interface. 
From the geometry, u sin θ1 is the component of u in the direction normal to the incident wavefront, so that v1 = u sin θ1. Similarly, v2 = u sin θ2. Solving each equation for 1/u and equating the results, we obtain the general law of refraction for waves: sin θ1 / sin θ2 = v1 / v2. But the dihedral angle between two planes is also the angle between their normals. So θ1 is the angle between the normal to the incident wavefront and the normal to the interface, while θ2 is the angle between the normal to the refracted wavefront and the normal to the interface; and Eq.() tells us that the sines of these angles are in the same ratio as the respective velocities. This result has the form of "Snell's law", except that we have not yet said that the ratio of velocities is constant, nor identified θ1 and θ2 with the angles of incidence and refraction (called θi and θt above). However, if we now suppose that the properties of the media are isotropic (independent of direction), two further conclusions follow: first, the two velocities, and hence their ratio, are independent of their directions; and second, the wave-normal directions coincide with the ray directions, so that θ1 and θ2 coincide with the angles of incidence and refraction as defined above. Obviously the angle of refraction cannot exceed 90°. In the limiting case, we put θ2 = 90° and θ1 = θc in Eq.(), and solve for the critical angle: sin θc = v1 / v2. In deriving this result, we retain the assumption of isotropic media in order to identify θ1 and θ2 with the angles of incidence and refraction. For electromagnetic waves, and especially for light, it is customary to express the above results in terms of refractive indices. The refractive index of a medium with normal velocity v is defined as n = c / v, where c is the speed of light in vacuum. Hence v1 = c / n1. Similarly, v2 = c / n2. Making these substitutions in Eqs.() and (), we obtain n1 sin θ1 = n2 sin θ2 and sin θc = n2 / n1, i.e. θc = arcsin(n2 / n1). Eq.() is the law of refraction for general media, in terms of refractive indices, provided that θ1 and θ2 are taken as the dihedral angles; but if the media are isotropic, then n1 and n2 become independent of direction, while θ1 and θ2 may be taken as the angles of incidence and refraction for the rays, and Eq.() follows. So, for isotropic media, Eqs.() and () together describe the behavior in Fig.5. According to Eq.(), for incidence from water (n1 ≈ 1.333) to air (n2 ≈ 1), we have θc ≈ 48.6°, whereas for incidence from common or acrylic glass (n1 ≈ 1.50) to air, we have θc ≈ 41.8°. The arcsin function yielding θc is defined only if n2 ≤ n1 (v2 ≥ v1). Hence, for isotropic media, total internal reflection cannot occur if the second medium has a higher refractive index (lower normal velocity) than the first. For example, there cannot be TIR for incidence from air to water; rather, the critical angle for incidence from water to air is the angle of refraction at grazing incidence from air to water (Fig.6). The medium with the higher refractive index is commonly described as optically denser, and the one with the lower refractive index as optically rarer. Hence it is said that total internal reflection is possible for "dense-to-rare" incidence, but not for "rare-to-dense" incidence. Everyday examples When standing beside an aquarium with one's eyes below the water level, one is likely to see fish or submerged objects reflected in the water-air surface (Fig.1). The brightness of the reflected image – just as bright as the "direct" view – can be startling. A similar effect can be observed by opening one's eyes while swimming just below the water's surface. If the water is calm, the surface outside the critical angle (measured from the vertical) appears mirror-like, reflecting objects below. 
The region above the water cannot be seen except overhead, where the hemispherical field of view is compressed into a conical field known as Snell's window, whose angular diameter is twice the critical angle (cf. Fig.6). The field of view above the water is theoretically 180° across, but seems less because as we look closer to the horizon, the vertical dimension is more strongly compressed by the refraction; e.g., by Eq.(), for air-to-water incident angles of 90°, 80°, and 70°, the corresponding angles of refraction are 48.6° (θcr in Fig.6), 47.6°, and 44.8°, indicating that the image of a point 20° above the horizon is 3.8° from the edge of Snell's window while the image of a point 10° above the horizon is only 1° from the edge. Fig.7, for example, is a photograph taken near the bottom of the shallow end of a swimming pool. What looks like a broad horizontal stripe on the right-hand wall consists of the lower edges of a row of orange tiles, and their reflections; this marks the water level, which can then be traced across the other wall. The swimmer has disturbed the surface above her, scrambling the lower half of her reflection, and distorting the reflection of the ladder (to the right). But most of the surface is still calm, giving a clear reflection of the tiled bottom of the pool. The space above the water is not visible except at the top of the frame, where the handles of the ladder are just discernible above the edge of Snell's window – within which the reflection of the bottom of the pool is only partial, but still noticeable in the photograph. One can even discern the color-fringing of the edge of Snell's window, due to variation of the refractive index, hence of the critical angle, with wavelength (see Dispersion). The critical angle influences the angles at which gemstones are cut. The round "brilliant" cut, for example, is designed to refract light incident on the front facets, reflect it twice by TIR off the back facets, and transmit it out again through the front facets, so that the stone looks bright. Diamond (Fig.8) is especially suitable for this treatment, because its high refractive index (about 2.42) and consequently small critical angle (about 24.5°) yield the desired behavior over a wide range of viewing angles. Cheaper materials that are similarly amenable to this treatment include cubic zirconia (index ≈ 2.15) and moissanite (non-isotropic, hence doubly refractive, with an index ranging from about 2.65 to 2.69, depending on direction and polarization); both of these are therefore popular as diamond simulants. Evanescent wave Mathematically, waves are described in terms of time-varying fields, a "field" being a function of location in space. A propagating wave requires an "effort" field and a "flow" field, the latter being a vector (if we are working in two or three dimensions). The product of effort and flow is related to power (see System equivalence). For example, for sound waves in a non-viscous fluid, we might take the effort field as the pressure (a scalar), and the flow field as the fluid velocity (a vector). The product of these two is intensity (power per unit area). For electromagnetic waves, we shall take the effort field as the electric field E and the flow field as the magnetizing field H. Both of these are vectors, and their vector product is again the intensity (see Poynting vector). 
When a wave in (say) medium 1 is reflected off the interface between medium 1 and medium 2, the flow field in medium 1 is the vector sum of the flow fields due to the incident and reflected waves. If the reflection is oblique, the incident and reflected fields are not in opposite directions and therefore cannot cancel out at the interface; even if the reflection is total, either the normal component or the tangential component of the combined field (as a function of location and time) must be non-zero adjacent to the interface. Furthermore, the physical laws governing the fields will generally imply that one of the two components is continuous across the interface (that is, it does not suddenly change as we cross the interface); for example, for electromagnetic waves, one of the interface conditions is that the tangential component of H is continuous if there is no surface current. Hence, even if the reflection is total, there must be some penetration of the flow field into medium 2; and this, in combination with the laws relating the effort and flow fields, implies that there will also be some penetration of the effort field. The same continuity condition implies that the variation ("waviness") of the field in medium 2 will be synchronized with that of the incident and reflected waves in medium 1. But, if the reflection is total, the spatial penetration of the fields into medium 2 must be limited somehow, or else the total extent and hence the total energy of those fields would continue to increase, draining power from medium 1. Total reflection of a continuing wavetrain permits some energy to be stored in medium 2, but does not permit a continuing transfer of power from medium 1 to medium 2. Thus, using mostly qualitative reasoning, we can conclude that total internal reflection must be accompanied by a wavelike field in the "external" medium, traveling along the interface in synchronism with the incident and reflected waves, but with some sort of limited spatial penetration into the "external" medium; such a field may be called an evanescent wave. Fig.9 shows the basic idea. The incident wave is assumed to be plane and sinusoidal. The reflected wave, for simplicity, is not shown. The evanescent wave travels to the right in lock-step with the incident and reflected waves, but its amplitude falls off with increasing distance from the interface. (Two features of the evanescent wave in Fig.9 are to be explained later: first, that the evanescent wave crests are perpendicular to the interface; and second, that the evanescent wave is slightly ahead of the incident wave.) Frustrated total internal reflection (FTIR) If the internal reflection is to be total, there must be no diversion of the evanescent wave. Suppose, for example, that electromagnetic waves incident from glass (with a higher refractive index) to air (with a lower refractive index) at a certain angle of incidence are subject to TIR. And suppose that we have a third medium (often identical to the first) whose refractive index is sufficiently high that, if the third medium were to replace the second, we would get a standard transmitted wavetrain for the same angle of incidence. 
Then, if the third medium is brought within a distance of a few wavelengths from the surface of the first medium, where the evanescent wave has significant amplitude in the second medium, the evanescent wave is effectively refracted into the third medium, giving non-zero transmission into the third medium, and therefore less than total reflection back into the first medium. As the amplitude of the evanescent wave decays across the air gap, the transmitted waves are attenuated, so that there is less transmission, and therefore more reflection, than there would be with no gap; but as long as there is some transmission, the reflection is less than total. This phenomenon is called frustrated total internal reflection (where "frustrated" negates "total"), abbreviated "frustrated TIR" or "FTIR". Frustrated TIR can be observed by looking into the top of a glass of water held in one's hand (Fig.10). If the glass is held loosely, contact may not be sufficiently close and widespread to produce a noticeable effect. But if it is held more tightly, the ridges of one's fingerprints interact strongly with the evanescent waves, allowing the ridges to be seen through the otherwise totally reflecting glass-air surface. The same effect can be demonstrated with microwaves, using paraffin wax as the "internal" medium (where the incident and reflected waves exist). In this case the permitted gap width might be (e.g.) 1 cm or several cm, which is easily observable and adjustable. The term frustrated TIR also applies to the case in which the evanescent wave is scattered by an object sufficiently close to the reflecting interface. This effect, together with the strong dependence of the amount of scattered light on the distance from the interface, is exploited in total internal reflection microscopy. The mechanism of FTIR is called evanescent-wave coupling, and is a good analog to visualize quantum tunneling. Due to the wave nature of matter, an electron has a non-zero probability of "tunneling" through a barrier, even if classical mechanics would say that its energy is insufficient. Similarly, due to the wave nature of light, a photon has a non-zero probability of crossing a gap, even if ray optics would say that its approach is too oblique. Another reason why internal reflection may be less than total, even beyond the critical angle, is that the external medium may be "lossy" (less than perfectly transparent), in which case the external medium will absorb energy from the evanescent wave, so that the maintenance of the evanescent wave will draw power from the incident wave. The consequent less-than-total reflection is called attenuated total reflectance (ATR). This effect, and especially the frequency-dependence of the absorption, can be used to study the composition of an unknown external medium. Derivation of evanescent wave In a uniform plane sinusoidal electromagnetic wave, the electric field has the form Ek exp[i(k⋅r − ωt)], where Ek is the (constant) complex amplitude vector, i is the imaginary unit, k is the wave vector (whose magnitude is the angular wavenumber), r is the position vector, ω is the angular frequency, t is time, and it is understood that the real part of the expression is the physical field. The magnetizing field has the same form with the same k and ω. The value of the expression is unchanged if the position r varies in a direction normal to k; hence k is normal to the wavefronts. 
If ℓ is the component of r in the direction of k, the field () can be written Ek exp[i(kℓ − ωt)]. If the argument of the exponential is to be constant, ℓ must increase at the velocity ω/k, known as the phase velocity. This in turn is equal to c/n, where c is the phase velocity in the reference medium (taken as vacuum), and n is the local refractive index w.r.t. the reference medium. Solving for k gives k = nω/c, i.e. k = nk0, where k0 = ω/c is the wavenumber in vacuum. From (), the electric field in the "external" medium has the form Et exp[i(kt⋅r − ωt)], where kt is the wave vector for the transmitted wave (we assume isotropic media, but the transmitted wave is not yet assumed to be evanescent). In Cartesian coordinates (x, y, z), let the region y < 0 have refractive index n1 and let the region y > 0 have refractive index n2. Then the xz plane is the interface, and the y axis is normal to the interface (Fig.11). Let i and j be the unit vectors in the x and y directions respectively. Let the plane of incidence (containing the incident wave-normal and the normal to the interface) be the xy plane (the plane of the page), with the angle of incidence θi measured from j towards i. Let the angle of refraction, measured in the same sense, be θt ("t" for transmitted, reserving "r" for reflected). From (), the transmitted wave vector kt has magnitude n2k0. Hence, from the geometry, kt = n2k0 (i sin θt + j cos θt) = k0 (i n1 sin θi + j n2 cos θt), where the last step uses Snell's law. Taking the dot product with the position vector, we get kt⋅r = k0 (n1 x sin θi + n2 y cos θt), so that Eq.() becomes Et exp{i[k0 (n1 x sin θi + n2 y cos θt) − ωt]}. In the case of TIR, the angle θt does not exist in the usual sense. But we can still interpret () for the transmitted (evanescent) wave by allowing cos θt to be complex. This becomes necessary when we write cos θt in terms of sin θt, and thence in terms of sin θi using Snell's law: cos θt = √(1 − sin²θt) = √(1 − (n1/n2)² sin²θi). For θi greater than the critical angle, the value under the square-root symbol is negative, so that cos θt = ±i√((n1/n2)² sin²θi − 1). To determine which sign is applicable, we substitute () into (), obtaining Et exp(∓κy) exp[i(n1 k0 x sin θi − ωt)], where κ = k0 √(n1² sin²θi − n2²) and the undetermined sign is the opposite of that in (). For an evanescent transmitted wave (that is, one whose amplitude decays as y increases), the undetermined sign in () must be minus, so the undetermined sign in () must be plus. With the correct sign, the result () can be abbreviated Et exp(−κy) exp[i(kx x − ωt)], where kx = n1 k0 sin θi, and k0 = ω/c = 2π/λ0 is the wavenumber in vacuum. So the evanescent wave is a plane sinewave traveling in the x direction, with an amplitude that decays exponentially in the y direction (Fig.9). It is evident that the energy stored in this wave likewise travels in the x direction and does not cross the interface. Hence the Poynting vector generally has a component in the x direction, but its y component averages to zero (although its instantaneous y component is not identically zero). Eq.() indicates that the amplitude of the evanescent wave falls off by a factor 1/e as the y coordinate (measured from the interface) increases by the distance d = 1/κ, commonly called the "penetration depth" of the evanescent wave. Taking reciprocals of the first equation of (), we find that the penetration depth is d = λ0 / (2π √(n1² sin²θi − n2²)), where λ0 is the wavelength in vacuum, i.e. λ0 = 2πc/ω. Dividing the numerator and denominator by n2 yields d = λ2 / (2π √((n1/n2)² sin²θi − 1)), where λ2 = λ0/n2 is the wavelength in the second (external) medium. Hence we can plot d in units of λ2 as a function of the angle of incidence, for various values of n1/n2 (Fig.12). As θi decreases towards the critical angle, the denominator approaches zero, so that d increases without limit, as is to be expected, because as soon as θi is less than critical, uniform plane waves are permitted in the external medium. As θi approaches 90° (grazing incidence), d approaches a minimum dmin = λ2 / (2π √((n1/n2)² − 1)). For incidence from water to air, or common glass to air, dmin is not much different from λ2/(2π). 
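The behaviour of the penetration depth is easy to tabulate. A minimal Python sketch (illustrative only; the relative index 1.5 for glass-to-air and the sample angles are assumed values) evaluates d in units of λ2 from the formula above:

```python
import math

def penetration_depth(theta_i_deg: float, n_rel: float) -> float:
    """Penetration depth d in units of lambda_2; n_rel = n1/n2, theta_i above the critical angle."""
    s = n_rel * math.sin(math.radians(theta_i_deg))
    return 1.0 / (2.0 * math.pi * math.sqrt(s * s - 1.0))

# Glass to air (n1/n2 = 1.5): the critical angle is about 41.8 degrees.
for theta in (42.0, 45.0, 60.0, 89.9):
    print(f"theta_i = {theta:5.1f} deg: d = {penetration_depth(theta, 1.5):.3f} * lambda_2")
```

Just above the critical angle, d comes out near 1.9 λ2, falling towards the grazing-incidence minimum of about 0.14 λ2.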
But d is larger at smaller angles of incidence (Fig.12), and the amplitude may still be significant at distances of several times d; for example, because exp(−4.6) is just greater than 0.01, the evanescent wave amplitude within a distance 4.6d of the interface is at least 1% of its value at the interface. Hence, speaking loosely, we tend to say that the evanescent wave amplitude is significant within "a few wavelengths" of the interface. Phase shifts Between 1817 and 1823, Augustin-Jean Fresnel discovered that total internal reflection is accompanied by a non-trivial phase shift (that is, a phase shift that is not restricted to 0° or 180°), as the Fresnel reflection coefficient acquires a non-zero imaginary part. We shall now explain this effect for electromagnetic waves in the case of linear, homogeneous, isotropic, non-magnetic media. The phase shift turns out to be an advance, which grows as the incidence angle increases beyond the critical angle, but which depends on the polarization of the incident wave. In equations (), (), (), (), and (), we advance the phase by the angle ϕ if we replace ωt by ωt + ϕ (that is, if we replace −ωt by −(ωt + ϕ)), with the result that the (complex) field is multiplied by exp(−iϕ). So a phase advance is equivalent to multiplication by a complex constant with a negative argument. This becomes more obvious when (e.g.) the field () is factored as Ek exp(ik⋅r) exp(−iωt), where the last factor contains the time dependence. To represent the polarization of the incident, reflected, or transmitted wave, the electric field adjacent to an interface can be resolved into two perpendicular components, known as the s and p components, which are parallel to the surface and the plane of incidence respectively; in other words, the s and p components are respectively perpendicular and parallel to the plane of incidence. For each component of polarization, the incident, reflected, or transmitted electric field (Ek in Eq.()) has a certain direction and can be represented by its (complex) scalar component in that direction. The reflection or transmission coefficient can then be defined as a ratio of complex components at the same point, or at infinitesimally separated points on opposite sides of the interface. But, in order to fix the signs of the coefficients, we must choose positive senses for the "directions". For the s components, the obvious choice is to say that the positive directions of the incident, reflected, and transmitted fields are all the same (e.g., the z direction in Fig.11). For the p components, this article adopts the convention that the positive directions of the incident, reflected, and transmitted fields are inclined towards the same medium (that is, towards the same side of the interface, e.g. like the red arrows in Fig.11). But the reader should be warned that some books use a different convention for the p components, causing a different sign in the resulting formula for the reflection coefficient. For the s polarization, let the reflection and transmission coefficients be rs and ts respectively. For the p polarization, let the corresponding coefficients be rp and tp. Then, for linear, homogeneous, isotropic, non-magnetic media, the coefficients are given by rs = (n1 cos θi − n2 cos θt) / (n1 cos θi + n2 cos θt), ts = 2n1 cos θi / (n1 cos θi + n2 cos θt), rp = (n2 cos θi − n1 cos θt) / (n2 cos θi + n1 cos θt), and tp = 2n1 cos θi / (n2 cos θi + n1 cos θt). (For a derivation of the above, see .) Now we suppose that the transmitted wave is evanescent. With the correct sign (+), substituting () into () gives rs = (n cos θi − i√(n² sin²θi − 1)) / (n cos θi + i√(n² sin²θi − 1)), where n = n1/n2; that is, n is the index of the "internal" medium relative to the "external" one, or the index of the internal medium if the external one is vacuum. 
So the magnitude of rs is 1, and the argument of rs is −2 arctan(√(n² sin²θi − 1) / (n cos θi)), which gives a phase advance of δs = 2 arctan(√(n² sin²θi − 1) / (n cos θi)). Making the same substitution in (), we find that ts has the same denominator as rs with a positive real numerator (instead of a complex conjugate numerator) and therefore has half the argument of rs, so that the phase advance of the evanescent wave is half that of the reflected wave. With the same choice of sign, substituting () into () gives rp = (cos θi − in√(n² sin²θi − 1)) / (cos θi + in√(n² sin²θi − 1)), whose magnitude is 1, and whose argument is −2 arctan(n√(n² sin²θi − 1) / cos θi), which gives a phase advance of δp = 2 arctan(n√(n² sin²θi − 1) / cos θi). Making the same substitution in (), we again find that the phase advance of the evanescent wave is half that of the reflected wave. Equations () and () apply when θc ≤ θi < 90°, where θi is the angle of incidence, and θc is the critical angle arcsin(1/n). These equations show that each phase advance is zero at the critical angle (for which the numerator is zero); each phase advance approaches 180° as θi → 90°; and δp > δs at intermediate values of θi (because the factor n is in the numerator of () and the denominator of ()). For θi ≤ θc, the reflection coefficients are given by equations () and () and are real, so that the phase shift is either 0° (if the coefficient is positive) or 180° (if the coefficient is negative). In (), if we put n2 = n1 sin θi / sin θt (Snell's law) and multiply the numerator and denominator by (1/n1) sin θt, we obtain rs = sin(θt − θi) / sin(θt + θi), which is positive for all angles of incidence with a transmitted ray (since θt > θi), giving a phase shift of zero. If we do likewise with (), the result is easily shown to be equivalent to rp = tan(θi − θt) / tan(θi + θt), which is negative for small angles (that is, near normal incidence), but changes sign at Brewster's angle, where θi and θt are complementary. Thus the phase shift is 180° for small θi but switches to 0° at Brewster's angle. Combining the complementarity with Snell's law yields θi = arctan(1/n) as Brewster's angle for dense-to-rare incidence. (Equations () and () are known as Fresnel's sine law and Fresnel's tangent law. Both reduce to 0/0 at normal incidence, but yield the correct results in the limit as θi → 0. That they have opposite signs as we approach normal incidence is an obvious disadvantage of the sign convention used in this article; the corresponding advantage is that they have the same signs at grazing incidence.) That completes the information needed to plot δs and δp for all angles of incidence. This is done in Fig.13, with δp in red and δs in blue, for three refractive indices. On the angle-of-incidence scale (horizontal axis), Brewster's angle is where δp (red) falls from 180° to 0°, and the critical angle is where both δp and δs (red and blue) start to rise again. To the left of the critical angle is the region of partial reflection, where both reflection coefficients are real (phase 0° or 180°) with magnitudes less than 1. To the right of the critical angle is the region of total reflection, where both reflection coefficients are complex with magnitudes equal to 1. In that region, the black curves show the phase advance of the p component relative to the s component: δ = δp − δs. It can be seen that a refractive index of 1.45 is not enough to give a 45° phase difference, whereas a refractive index of 1.5 is enough (by a slim margin) to give a 45° phase difference at two angles of incidence: about 50.2° and 53.3°. This 45° relative shift is employed in Fresnel's invention, now known as the Fresnel rhomb, in which the angles of incidence are chosen such that the two internal reflections cause a total relative phase shift of 90° between the two polarizations of an incident wave. 
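The two angles just cited can be recovered numerically from the phase-advance formulas above. A short Python sketch (an added illustration, not from the source):

```python
import math

def phase_advances_deg(theta_i_deg: float, n: float) -> tuple[float, float]:
    """Phase advances (in degrees) of the s and p reflections under TIR; n = n1/n2."""
    th = math.radians(theta_i_deg)
    root = math.sqrt((n * math.sin(th)) ** 2 - 1.0)
    delta_s = 2.0 * math.degrees(math.atan2(root, n * math.cos(th)))
    delta_p = 2.0 * math.degrees(math.atan2(n * root, math.cos(th)))
    return delta_s, delta_p

for theta in (50.2, 53.3):     # the two angles cited for n = 1.5
    ds, dp = phase_advances_deg(theta, 1.5)
    print(f"theta_i = {theta} deg: delta_p - delta_s = {dp - ds:.2f} deg")  # ~45 in each case
```

Two such reflections at either angle therefore accumulate the 90° relative shift that the rhomb is designed to produce.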
This device performs the same function as a birefringent quarter-wave plate, but is more achromatic (that is, the phase shift of the rhomb is less sensitive to wavelength). Either device may be used, for instance, to transform linear polarization to circular polarization (which Fresnel also discovered) and conversely. In Fig.13, δ is computed by a final subtraction; but there are other ways of expressing it. Fresnel himself, in 1823, gave a formula for it. Born and Wolf (1970, p. 50) derive an expression for δ and find its maximum analytically. For TIR of a beam with finite width, the variation in the phase shift with the angle of incidence gives rise to the Goos–Hänchen effect, which is a lateral shift of the reflected beam within the plane of incidence. This effect applies to linear polarization in the s or p direction. The Imbert–Fedorov effect is an analogous effect for circular or elliptical polarization and produces a shift perpendicular to the plane of incidence. Applications Optical fibers exploit total internal reflection to carry signals over long distances with little attenuation. They are used in telecommunication cables, and in image-forming fiberscopes such as colonoscopes. In the catadioptric Fresnel lens, invented by Augustin-Jean Fresnel for use in lighthouses, the outer prisms use TIR to deflect light from the lamp through a greater angle than would be possible with purely refractive prisms, but with less absorption of light (and less risk of tarnishing) than with conventional mirrors. Other reflecting prisms that use TIR include the following (with some overlap between the categories): Image-erecting prisms for binoculars and spotting scopes include paired 45°-90°-45° Porro prisms (Fig.14), the Porro–Abbe prism, the inline Koenig and Abbe–Koenig prisms, and the compact inline Schmidt–Pechan prism. (The last consists of two components, of which one is a kind of Bauernfeind prism, which requires a reflective coating on one of its two reflecting faces, due to a sub-critical angle of incidence.) These prisms have the additional function of folding the optical path from the objective lens to the prime focus, reducing the overall length for a given primary focal length. A prismatic star diagonal for an astronomical telescope may consist of a single Porro prism (configured for a single reflection, giving a mirror-reversed image) or an Amici roof prism (which gives a non-reversed image). Roof prisms use TIR at two faces meeting at a sharp 90° angle. This category includes the Koenig, Abbe–Koenig, Schmidt–Pechan, and Amici types (already mentioned), and the roof pentaprism used in SLR cameras; the last of these requires a reflective coating on one face. A prismatic corner reflector uses three total internal reflections to reverse the direction of incoming light. The Dove prism gives an inline view with mirror-reversal. Polarizing prisms: Although the Fresnel rhomb, which converts between linear and elliptical polarization, is not birefringent (doubly refractive), there are other kinds of prisms that combine birefringence with TIR in such a way that light of a particular polarization is totally reflected while light of the orthogonal polarization is at least partly transmitted. Examples include the Nicol prism, Glan–Thompson prism, Glan–Foucault prism (or "Foucault prism"), and Glan–Taylor prism. Refractometers, which measure refractive indices, often use the critical angle. 
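For the refractometer application just mentioned, the arithmetic is simply the critical-angle relation run backwards: the sample index is n_prism sin θc. A minimal Python sketch (the prism index and the angle reading here are hypothetical values, chosen only for illustration):

```python
import math

def index_from_critical_angle(theta_c_deg: float, n_prism: float) -> float:
    """Sample index from the observed critical angle inside a prism of known (higher) index."""
    return n_prism * math.sin(math.radians(theta_c_deg))

# Hypothetical reading: shadow edge observed at 61.2 degrees in a prism of index 1.72.
print(f"sample index = {index_from_critical_angle(61.2, 1.72):.3f}")  # ~1.507
```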
Rain sensors for automatic windscreen/windshield wipers have been implemented using the principle that total internal reflection will guide an infrared beam from a source to a detector if the outer surface of the windshield is dry, but any water drops on the surface will divert some of the light. Edge-lit LED panels, used (e.g.) for backlighting of LCD computer monitors, exploit TIR to confine the LED light to the acrylic glass pane, except that some of the light is scattered by etchings on one side of the pane, giving an approximately uniform luminous emittance. Total internal reflection microscopy (TIRM) uses the evanescent wave to illuminate small objects close to the reflecting interface. The consequent scattering of the evanescent wave (a form of frustrated TIR), makes the objects appear bright when viewed from the "external" side. In the total internal reflection fluorescence microscope (TIRFM), instead of relying on simple scattering, we choose an evanescent wavelength short enough to cause fluorescence (Fig.15). The high sensitivity of the illumination to the distance from the interface allows measurement of extremely small displacements and forces. A beam-splitter cube uses frustrated TIR to divide the power of the incoming beam between the transmitted and reflected beams. The width of the air gap (or low-refractive-index gap) between the two prisms can be made adjustable, giving higher transmission and lower reflection for a narrower gap, or higher reflection and lower transmission for a wider gap. Optical modulation can be accomplished by means of frustrated TIR with a rapidly variable gap. As the transmission coefficient is highly sensitive to the gap width (the function being approximately exponential until the gap is almost closed), this technique can achieve a large dynamic range. Optical fingerprinting devices have used frustrated TIR to record images of persons' fingerprints without the use of ink (cf. Fig.11). Gait analysis can be performed by using frustrated TIR with a high-speed camera, to capture and analyze footprints. A gonioscope, used in optometry and ophthalmology for the diagnosis of glaucoma, suppresses TIR in order to look into the angle between the iris and the cornea. This view is usually blocked by TIR at the cornea-air interface. The gonioscope replaces the air with a higher-index medium, allowing transmission at oblique incidence, typically followed by reflection in a "mirror", which itself may be implemented using TIR. Some multi-touch interactive tables and whiteboards utilise FTIR to detect fingers touching the screen. An infrared camera is placed behind the screen surface, which is edge-lit by infrared LEDs; when touching the surface FTIR causes some of the infrared light to escape the screen plane, and the camera sees this as bright areas. Computer vision software is then used to translate this into a series of coordinates and gestures. History Discovery The surprisingly comprehensive and largely correct explanations of the rainbow by Theodoric of Freiberg (written between 1304 and 1310) and Kamāl al-Dīn al-Fārisī (completed by 1309), although sometimes mentioned in connection with total internal reflection (TIR), are of dubious relevance because the internal reflection of sunlight in a spherical raindrop is not total. But, according to Carl Benjamin Boyer, Theodoric's treatise on the rainbow also classified optical phenomena under five causes, the last of which was "a total reflection at the boundary of two transparent media". 
Theodoric's work was forgotten until it was rediscovered by Giovanni Battista Venturi in 1814. Theodoric having fallen into obscurity, the discovery of TIR was generally attributed to Johannes Kepler, who published his findings in his Dioptrice in 1611. Although Kepler failed to find the true law of refraction, he showed by experiment that for air-to-glass incidence, the incident and refracted rays rotated in the same sense about the point of incidence, and that as the angle of incidence varied through ±90°, the angle of refraction (as we now call it) varied through ±42°. He was also aware that the incident and refracted rays were interchangeable. But these observations did not cover the case of a ray incident from glass to air at an angle beyond 42°, and Kepler promptly concluded that such a ray could only be reflected. René Descartes rediscovered the law of refraction and published it in his Dioptrique of 1637. In the same work he mentioned the senses of rotation of the incident and refracted rays and the condition of TIR. But he neglected to discuss the limiting case, and consequently failed to give an expression for the critical angle, although he could easily have done so. Huygens and Newton: Rival explanations Christiaan Huygens, in his Treatise on Light (1690), paid much attention to the threshold at which the incident ray is "unable to penetrate into the other transparent substance". Although he gave neither a name nor an algebraic expression for the critical angle, he gave numerical examples for glass-to-air and water-to-air incidence, noted the large change in the angle of refraction for a small change in the angle of incidence near the critical angle, and cited this as the cause of the rapid increase in brightness of the reflected ray as the refracted ray approaches the tangent to the interface. Huygens' insight is confirmed by modern theory: in Eqs.() and () above, there is nothing to say that the reflection coefficients increase exceptionally steeply as θt approaches 90°, except that, according to Snell's law, θt itself is an increasingly steep function of θi. Huygens offered an explanation of TIR within the same framework as his explanations of the laws of rectilinear propagation, reflection, ordinary refraction, and even the extraordinary refraction of "Iceland crystal" (calcite). That framework rested on two premises: first, every point crossed by a propagating wavefront becomes a source of secondary wavefronts ("Huygens' principle"); and second, given an initial wavefront, any subsequent position of the wavefront is the envelope (common tangent surface) of all the secondary wavefronts emitted from the initial position. All cases of reflection or refraction by a surface are then explained simply by considering the secondary waves emitted from that surface. In the case of refraction from a medium of slower propagation to a medium of faster propagation, there is a certain obliquity of incidence beyond which it is impossible for the secondary wavefronts to form a common tangent in the second medium; this is what we now call the critical angle. As the incident wavefront approaches this critical obliquity, the refracted wavefront becomes concentrated against the refracting surface, augmenting the secondary waves that produce the reflection back into the first medium. Huygens' system even accommodated partial reflection at the interface between different media, albeit vaguely, by analogy with the laws of collisions between particles of different sizes. 
However, as long as the wave theory continued to assume longitudinal waves, it had no chance of accommodating polarization, hence no chance of explaining the polarization-dependence of extraordinary refraction, or of the partial reflection coefficient, or of the phase shift in TIR. Isaac Newton rejected the wave explanation of rectilinear propagation, believing that if light consisted of waves, it would "bend and spread every way" into the shadows. His corpuscular theory of light explained rectilinear propagation more simply, and it accounted for the ordinary laws of refraction and reflection, including TIR, on the hypothesis that the corpuscles of light were subject to a force acting perpendicular to the interface. In this model, for dense-to-rare incidence, the force was an attraction back towards the denser medium, and the critical angle was the angle of incidence at which the normal velocity of the approaching corpuscle was just enough to reach the far side of the force field; at more oblique incidence, the corpuscle would be turned back. Newton gave what amounts to a formula for the critical angle, albeit in words: "as the Sines are which measure the Refraction, so is the Sine of Incidence at which the total Reflexion begins, to the Radius of the Circle". Newton went beyond Huygens in two ways. First, not surprisingly, Newton pointed out the relationship between TIR and dispersion: when a beam of white light approaches a glass-to-air interface at increasing obliquity, the most strongly-refracted rays (violet) are the first to be "taken out" by "total Reflexion", followed by the less-refracted rays. Second, he observed that total reflection could be frustrated (as we now say) by laying together two prisms, one plane and the other slightly convex; and he explained this simply by noting that the corpuscles would be attracted not only to the first prism, but also to the second. In two other ways, however, Newton's system was less coherent. First, his explanation of partial reflection depended not only on the supposed forces of attraction between corpuscles and media, but also on the more nebulous hypothesis of "Fits of easy Reflexion" and "Fits of easy Transmission". Second, although his corpuscles could conceivably have "sides" or "poles", whose orientations could conceivably determine whether the corpuscles suffered ordinary or extraordinary refraction in "Island-Crystal", his geometric description of the extraordinary refraction was theoretically unsupported and empirically inaccurate. Laplace, Malus, and attenuated total reflectance (ATR) William Hyde Wollaston, in the first of a pair of papers read to the Royal Society of London in 1802, reported his invention of a refractometer based on the critical angle of incidence from an internal medium of known "refractive power" (refractive index) to an external medium whose index was to be measured. With this device, Wollaston measured the "refractive powers" of numerous materials, some of which were too opaque to permit direct measurement of an angle of refraction. Translations of his papers were published in France in 1803, and apparently came to the attention of Pierre-Simon Laplace. According to Laplace's elaboration of Newton's theory of refraction, a corpuscle incident on a plane interface between two homogeneous isotropic media was subject to a force field that was symmetrical about the interface. 
If both media were transparent, total reflection would occur if the corpuscle were turned back before it exited the field in the second medium. But if the second medium were opaque, reflection would not be total unless the corpuscle were turned back before it left the first medium; this required a larger critical angle than the one given by Snell's law, and consequently impugned the validity of Wollaston's method for opaque media. Laplace combined the two cases into a single formula for the relative refractive index in terms of the critical angle (minimum angle of incidence for TIR). The formula contained a parameter which took one value for a transparent external medium and another value for an opaque external medium. Laplace's theory further predicted a relationship between refractive index and density for a given substance. In 1807, Laplace's theory was tested experimentally by his protégé, Étienne-Louis Malus. Taking Laplace's formula for the refractive index as given, and using it to measure the refractive index of beeswax in the liquid (transparent) state and the solid (opaque) state at various temperatures (hence various densities), Malus verified Laplace's relationship between refractive index and density. But Laplace's theory implied that if the angle of incidence exceeded his modified critical angle, the reflection would be total even if the external medium was absorbent. Clearly this was wrong: in the modern theory there is no threshold value of the angle θi beyond which the evanescent decay constant κ becomes infinite; so the penetration depth of the evanescent wave (1/κ) is always non-zero, and the external medium, if it is at all lossy, will attenuate the reflection. As to why Malus apparently observed such an angle for opaque wax, we must infer that there was a certain angle beyond which the attenuation of the reflection was so small that ATR was visually indistinguishable from TIR. Fresnel and the phase shift Fresnel came to the study of total internal reflection through his research on polarization. In 1811, François Arago discovered that polarized light was apparently "depolarized" in an orientation-dependent and color-dependent manner when passed through a slice of doubly-refractive crystal: the emerging light showed colors when viewed through an analyzer (second polarizer). Chromatic polarization, as this phenomenon came to be called, was more thoroughly investigated in 1812 by Jean-Baptiste Biot. In 1813, Biot established that one case studied by Arago, namely quartz cut perpendicular to its optic axis, was actually a gradual rotation of the plane of polarization with distance. In 1816, Fresnel offered his first attempt at a wave-based theory of chromatic polarization. Without (yet) explicitly invoking transverse waves, his theory treated the light as consisting of two perpendicularly polarized components. In 1817 he noticed that plane-polarized light seemed to be partly depolarized by total internal reflection, if initially polarized at an acute angle to the plane of incidence. By including total internal reflection in a chromatic-polarization experiment, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them. Choosing an appropriate angle of incidence (not yet exactly specified) gave a phase difference of 1/8 of a cycle. Two such reflections from the "parallel faces" of "two coupled prisms" gave a phase difference of 1/4 of a cycle.
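The penetration depth 1/κ invoked in the attenuated-total-reflectance argument above can be made concrete: beyond the critical angle, κ = (2π/λ)·sqrt(n1² sin²θi − n2²). A short sketch in Python with illustrative glass-to-air values (the 60° angle and 633 nm wavelength are arbitrary choices):

```python
import math

def penetration_depth(wavelength, n1, n2, theta_i_deg):
    """Depth 1/kappa at which the evanescent field falls to 1/e of its
    interface value; finite for every angle of incidence beyond critical."""
    s = n1 * math.sin(math.radians(theta_i_deg))
    kappa = (2 * math.pi / wavelength) * math.sqrt(s ** 2 - n2 ** 2)
    return 1 / kappa

# Glass (n = 1.5) to air at 60 deg incidence, 633 nm light (critical ~41.8 deg):
d = penetration_depth(633e-9, 1.5, 1.0, 60.0)
print(f"1/kappa = {d * 1e9:.0f} nm")   # ~121 nm: an absorbing outer medium
                                       # within this range always attenuates
```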
With this pair of reflections, if the light was initially polarized at 45° to the plane of incidence and reflection, it appeared to be completely depolarized after the two reflections. These findings were reported in a memoir submitted and read to the French Academy of Sciences in November 1817. In 1821, Fresnel derived formulae equivalent to his sine and tangent laws by modeling light waves as transverse elastic waves with vibrations perpendicular to what had previously been called the plane of polarization. Using old experimental data, he promptly confirmed that the equations correctly predicted the direction of polarization of the reflected beam when the incident beam was polarized at 45° to the plane of incidence, for light incident from air onto glass or water. The experimental confirmation was reported in a "postscript" to the work in which Fresnel expounded his mature theory of chromatic polarization, introducing transverse waves. Details of the derivation were given later, in a memoir read to the academy in January 1823. The derivation combined conservation of energy with continuity of the tangential vibration at the interface, but failed to allow for any condition on the normal component of vibration. Meanwhile, in a memoir submitted in December 1822, Fresnel coined the terms linear polarization, circular polarization, and elliptical polarization. For circular polarization, the two perpendicular components were a quarter-cycle (±90°) out of phase. The new terminology was useful in the memoir of January 1823, containing the detailed derivations of the sine and tangent laws: in that same memoir, Fresnel found that for angles of incidence greater than the critical angle, the resulting reflection coefficients were complex with unit magnitude. Noting that the magnitude represented the amplitude ratio as usual, he guessed that the argument represented the phase shift, and verified the hypothesis by experiment. The verification involved calculating the angle of incidence that would introduce a total phase difference of 90° between the s and p components, for various numbers of total internal reflections at that angle (generally there were two solutions), subjecting light to that number of total internal reflections at that angle of incidence, with an initial linear polarization at 45° to the plane of incidence, and checking that the final polarization was circular. This procedure was necessary because, with the technology of the time, one could not measure the s and p phase-shifts directly, and one could not measure an arbitrary degree of ellipticality of polarization, such as might be caused by the difference between the phase shifts. But one could verify that the polarization was circular, because the brightness of the light was then insensitive to the orientation of the analyzer. For glass with a refractive index of 1.51, Fresnel calculated that a 45° phase difference between the two reflection coefficients (hence a 90° difference after two reflections) required an angle of incidence of 48°37' or 54°37'. He cut a rhomb to the latter angle and found that it performed as expected. Thus the specification of the Fresnel rhomb was completed. Similarly, Fresnel calculated and verified the angle of incidence that would give a 90° phase difference after three reflections at the same angle, and four reflections at the same angle.
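Fresnel's rhomb angles can be checked against the modern expression for the s–p phase difference in TIR, tan(δ/2) = cos θi · sqrt(sin²θi − n²) / sin²θi with n = n2/n1. A sketch in Python locating the two solutions for δ = 45° at n = 1/1.51 (the bracketing intervals are chosen by inspection of the curve):

```python
import math

def phase_difference(theta_deg, n):
    """s-p phase difference (degrees) in TIR, relative index n = n2/n1 < 1."""
    t = math.radians(theta_deg)
    s2 = math.sin(t) ** 2
    return 2 * math.degrees(math.atan(math.cos(t) * math.sqrt(s2 - n ** 2) / s2))

n = 1 / 1.51                     # Fresnel's glass-to-air case
# delta is 0 at the critical angle (~41.5 deg), peaks near 45.9 deg around
# 51 deg incidence, then falls back toward 0, so 45 deg is crossed twice.
for lo, hi, rising in [(42.0, 51.0, True), (51.0, 89.0, False)]:
    while hi - lo > 1e-9:        # bisect for phase_difference == 45
        mid = (lo + hi) / 2
        if (phase_difference(mid, n) < 45.0) == rising:
            lo = mid
        else:
            hi = mid
    d = int(lo)
    print(f"{d} deg {round((lo - d) * 60)} min")  # 48 deg 37 min; 54 deg 37 min
```

Both roots agree with the 48°37' and 54°37' that Fresnel reported.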
In each case there were two solutions, and in each case he reported that the larger angle of incidence gave an accurate circular polarization (for an initial linear polarization at 45° to the plane of reflection). For the case of three reflections he also tested the smaller angle, but found that it gave some coloration due to the proximity of the critical angle and its slight dependence on wavelength. (Compare Fig.13 above, which shows that the phase difference is more sensitive to the refractive index for smaller angles of incidence.) For added confidence, Fresnel predicted and verified that four total internal reflections at 68°27' would give an accurate circular polarization if two of the reflections had water as the external medium while the other two had air, but not if the reflecting surfaces were all wet or all dry. Fresnel's deduction of the phase shift in TIR is thought to have been the first occasion on which a physical meaning was attached to the argument of a complex number. Although this reasoning was applied without the benefit of knowing that light waves were electromagnetic, it passed the test of experiment, and survived remarkably intact after James Clerk Maxwell changed the presumed nature of the waves. Meanwhile, Fresnel's success inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. The imaginary part of the complex index represents absorption. The term critical angle, used for convenience in the above narrative, is anachronistic: it apparently dates from 1873. In the 20th century, quantum electrodynamics reinterpreted the amplitude of an electromagnetic wave in terms of the probability of finding a photon. In this framework, partial transmission and frustrated TIR concern the probability of a photon crossing a boundary, and attenuated total reflectance concerns the probability of a photon being absorbed on the other side. Research into the more subtle aspects of the phase shift in TIR, including the Goos–Hänchen and Imbert–Fedorov effects and their quantum interpretations, has continued into the 21st century. Gallery See also Notes References Bibliography S. Bochner (June 1963), "The significance of some basic mathematical conceptions for physics", Isis, 54 (2): 179–205. M. Born and E. Wolf, 1970, Principles of Optics, 4th Ed., Oxford: Pergamon Press. C.B. Boyer, 1959, The Rainbow: From Myth to Mathematics, New York: Thomas Yoseloff. J.Z. Buchwald (December 1980), "Experimental investigations of double refraction from Huygens to Malus", Archive for History of Exact Sciences, 21 (4): 311–373. J.Z. Buchwald, 1989, The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century, University of Chicago Press. O. Darrigol, 2012, A History of Optics: From Greek Antiquity to the Nineteenth Century, Oxford. R. Fitzpatrick, 2013, Oscillations and Waves: An Introduction, Boca Raton, FL: CRC Press. R. Fitzpatrick, 2013a, "Total Internal Reflection", University of Texas at Austin, accessed 14 March 2018. A. Fresnel, 1866 (ed. H. de Senarmont, E. Verdet, and L. Fresnel), Oeuvres complètes d'Augustin Fresnel, Paris: Imprimerie Impériale (3 vols., 1866–70), vol.1 (1866). E. Hecht, 2017, Optics, 5th Ed., Pearson Education. C. Huygens, 1690, Traité de la Lumière (Leiden: Van der Aa), translated by S.P. Thompson as Treatise on Light, University of Chicago Press, 1912; Project Gutenberg, 2005.
(Cited page numbers match the 1912 edition and the Gutenberg HTML edition.) F.A. Jenkins and H.E. White, 1976, Fundamentals of Optics, 4th Ed., New York: McGraw-Hill. T.H. Levitt, 2013, A Short Bright Flash: Augustin Fresnel and the Birth of the Modern Lighthouse, New York: W.W. Norton. H. Lloyd, 1834, "Report on the progress and present state of physical optics", Report of the Fourth Meeting of the British Association for the Advancement of Science (held at Edinburgh in 1834), London: J. Murray, 1835, pp.295–413. I. Newton, 1730, Opticks: or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light, 4th Ed. (London: William Innys, 1730; Project Gutenberg, 2010); republished with foreword by A. Einstein and Introduction by E.T. Whittaker (London: George Bell & Sons, 1931); reprinted with additional Preface by I.B. Cohen and Analytical Table of Contents by D.H.D. Roller, Mineola, NY: Dover, 1952, 1979 (with revised preface), 2012. (Cited page numbers match the Gutenberg HTML edition and the Dover editions.) H.G.J. Rutten and M.A.M. van Venrooij, 1988 (fifth printing, 2002), Telescope Optics: A Comprehensive Manual for Amateur Astronomers, Richmond, VA: Willmann-Bell. J.A. Stratton, 1941, Electromagnetic Theory, New York: McGraw-Hill. W. Whewell, 1857, History of the Inductive Sciences: From the Earliest to the Present Time, 3rd Ed., London: J.W. Parker & Son, vol.2. E.T. Whittaker, 1910, A History of the Theories of Aether and Electricity: From the Age of Descartes to the Close of the Nineteenth Century, London: Longmans, Green, & Co. (https://archive.org/details/historyoftheorie00whitrich). External links Mr. Mangiacapre, "Fluorescence in a Liquid" (video), uploaded 13 March 2012. (Fluorescence and TIR of a violet laser beam in quinine water.) PhysicsatUVM, "Frustrated Total Internal Reflection" (video, 37s), uploaded 21 November 2011. ("A laser beam undergoes total internal reflection in a fogged piece of plexiglass...") SMUPhysics, "Internal Reflection" (video, 12s), uploaded 20 May 2010. (Transition from refraction through critical angle to TIR in a 45°-90°-45° prism.) Light Waves Physical phenomena Optical phenomena Physical optics Geometrical optics Glass physics History of physics Lighthouses Dimensionless numbers of physics
Total internal reflection
[ "Physics", "Materials_science", "Engineering" ]
12,536
[ "Glass engineering and science", "Physical phenomena", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Optical phenomena", "Waves", "Glass physics", "Motion (physics)", "Light", "Condensed matter physics" ]
30,436
https://en.wikipedia.org/wiki/Theory%20of%20everything
A theory of everything (TOE), final theory, ultimate theory, unified field theory, or master theory is a hypothetical singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all aspects of the universe. Finding a theory of everything is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a theory of everything. These two theories upon which all modern physics rests are general relativity and quantum mechanics. General relativity is a theoretical framework that only focuses on gravity for understanding the universe in regions of both large scale and high mass: planets, stars, galaxies, clusters of galaxies, etc. On the other hand, quantum mechanics is a theoretical framework that focuses primarily on three non-gravitational forces for understanding the universe in regions of both very small scale and low mass: subatomic particles, atoms, and molecules. Quantum mechanics successfully implemented the Standard Model that describes the three non-gravitational forces: strong nuclear, weak nuclear, and electromagnetic force – as well as all observed elementary particles. General relativity and quantum mechanics have been repeatedly validated in their separate fields of relevance. Since the usual domains of applicability of general relativity and quantum mechanics are so different, most situations require that only one of the two theories be used. The two theories are considered incompatible in regions of extremely small scale – the Planck scale – such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the incompatibility, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of general relativity and quantum mechanics into a seamless whole: a theory of everything may be defined as a comprehensive theory that, in principle, would be capable of describing all physical phenomena in the universe. In pursuit of this goal, quantum gravity has become one area of active research. One example is string theory, which evolved into a candidate for the theory of everything, but not without drawbacks (most notably, its apparent lack of currently testable predictions) and controversy. String theory posits that at the beginning of the universe (up to 10^−43 seconds after the Big Bang), the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most ultramicroscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth). String theory/M-theory proposes six or seven dimensions of spacetime in addition to the four common dimensions for a ten- or eleven-dimensional spacetime. Name Initially, the term theory of everything was used with an ironic reference to various overgeneralized theories.
For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the "General Theory of Everything". Physicist Harald Fritzsch used the term in his 1977 lectures in Varenna. Physicist John Ellis claims to have introduced the acronym "TOE" into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of theoretical physics research. Historical antecedents Antiquity to 19th century Astronomers in many ancient cultures, notably Babylon and India, studied the pattern of the Seven Sacred Luminaires/Classical Planets against the background of stars. Their interest was to relate celestial movement to human events (astrology), and their method was to record events against a time measure and then look for recurrent patterns in order to predict future ones. The debate over whether the universe has a beginning or undergoes eternal cycles can be traced back to ancient Babylonia. Hindu cosmology posits that time is infinite with a cyclic universe, where the current universe was preceded and will be followed by an infinite number of universes. Time scales mentioned in Hindu cosmology correspond to those of modern scientific cosmology. Its cycles run from our ordinary day and night to a day and night of Brahma, 8.64 billion years long. The natural philosophy of atomism appeared in several ancient traditions. In ancient Greek philosophy, the pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of 'atom' proposed by Democritus was an early philosophical attempt to unify phenomena observed in nature. The concept of 'atom' also appeared in the Nyaya-Vaisheshika school of ancient Indian philosophy. Archimedes was possibly the first philosopher to have described nature with axioms (or principles) and then deduce new results from them. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them. Following earlier atomistic thought, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles. In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation. Newton achieved the first great unification in physics, and he is further credited with laying the foundations of future endeavors for a grand unified theory. In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time: Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything.
Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics. Even ignoring quantum mechanics, chaos theory is sufficient to guarantee that the future of any sufficiently complex mechanical or astronomical system is unpredictable. In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865, in James Clerk Maxwell's theory of electromagnetism, which achieved the second great unification in physics. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter. In his experiments of 1849–1850, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism. However, he found no connection. In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything. Early 20th century In the late 1920s, the then-new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known". After 1915, when Albert Einstein published the theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with a renewed interest. In Einstein's day, the strong and the weak forces had not yet been discovered, and he found the prospect that the two known forces, gravity and electromagnetism, might share a common origin far more alluring. This launched his 40-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand, underlying principle. During the last few decades of his life, this ambition alienated Einstein from the mainstream of physics, as the mainstream was instead far more excited about the emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert, Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein searched in earnest for, but ultimately failed to find, a unifying theory (see Einstein–Maxwell–Dirac equations). Late 20th century and the nuclear interactions In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.
Gravity and electromagnetism are able to coexist as entries in a list of classical forces, but for many years it seemed that gravity could not be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the 20th century, focused on understanding the three forces described by quantum mechanics: electromagnetism and the weak and strong forces. The first two were combined in 1967–1968 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the electroweak force. Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses (about 80.4 GeV/c^2 and 91.2 GeV/c^2, respectively), whereas the photon, which carries the electromagnetic force, is massless. At higher energies W bosons and Z bosons can be created easily and the unified nature of the force becomes apparent. While the strong and electroweak forces coexist under the Standard Model of particle physics, they remain distinct. Thus, the pursuit of a theory of everything remained unsuccessful: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – nor a unification of these forces with gravitation had been achieved. Modern physics Conventional sequence of theories A theory of everything would unify all the fundamental interactions of nature: gravitation, the strong interaction, the weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the theory of everything should also predict all the different kinds of particles possible. The usual assumed path of theories is given in the following graph, where each unification step leads one level up on the graph. In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10^16 GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10^19 GeV. Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10^16 GeV, far greater than could be reached by any currently feasible particle accelerator. Although the simplest grand unified theories have been experimentally ruled out, the idea of a grand unified theory, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric grand unified theories seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to grand unified theory physics (although it does not seem to form an inevitable part of the theory). Yet grand unified theories are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies. The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity.
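The 10^19 GeV figure quoted above is the Planck energy, E_P = sqrt(ħc^5/G), where quantum-gravitational effects are expected to become unavoidable. A quick check in Python using standard values of the constants:

```python
import math

hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
c    = 2.997_924_58e8      # speed of light, m/s
G    = 6.674_30e-11        # gravitational constant, m^3 kg^-1 s^-2
eV   = 1.602_176_634e-19   # joules per electronvolt

E_planck = math.sqrt(hbar * c ** 5 / G)    # ~1.96e9 J
print(f"{E_planck / eV / 1e9:.2e} GeV")    # ~1.22e+19 GeV
```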
Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no accepted theory of everything, has emerged with observational evidence. It is usually assumed that the theory of everything will also solve the remaining problems of grand unified theories. In addition to explaining the forces listed in the graph, a theory of everything may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven. String theory and M-theory Since the 1990s, some physicists such as Edward Witten believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric eleven-dimensional supergravity, is the theory of everything. There is no widespread consensus on this issue. One remarkable property of string/M-theory is that seven extra dimensions are required for the theory's consistency, on top of the four dimensions in our universe. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a 5-dimensional universe, with one space dimension small and curled up, looks from the 4-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a 4-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms. Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory or (sometimes equivalently) in F-theory. String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations. On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes and allowing for topology-changing processes. It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality. In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible 4-dimensional universes is incredibly large. 
The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10^500), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape. One proposed solution is that many or all of these possibilities are realized in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led some to criticize string theory, arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions, and to regard it as a pseudoscience or a philosophy. Others disagree, and string theory remains an active topic of investigation in theoretical physics. Loop quantum gravity Current research on loop quantum gravity may eventually play a fundamental role in a theory of everything, but that is not its primary aim. Loop quantum gravity also introduces a lower bound on the possible length scales. There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks) with correct parity properties have been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks. However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Use of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations. This model leads to an interpretation of electric and color charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge). Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, color, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin. Other attempts Among other attempts to develop a theory of everything is the theory of causal fermion systems, giving the two current physical theories (general relativity and quantum field theory) as limiting cases. Another theory is called Causal Sets. Like some of the approaches mentioned above, its direct goal isn't necessarily to achieve a theory of everything but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a theory of everything. Its founding principle is that spacetime is fundamentally discrete and that the spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between relative past and future distinguishing spacetime events.
Causal dynamical triangulation does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. Another attempt may be related to ER=EPR, a conjecture in physics stating that entangled particles are connected by a wormhole (or Einstein–Rosen bridge). Present status At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity and that, at the same time, is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcomes of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – are needed in order to provide further input for a theory of everything. Arguments against In parallel to the intense search for a theory of everything, various scholars have debated the possibility of its discovery. Gödel's incompleteness theorem A number of scholars claim that Gödel's incompleteness theorem suggests that attempts to construct a theory of everything are bound to fail. Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory. Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because a "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything. Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them." Stephen Hawking was originally a believer in the Theory of Everything, but after considering Gödel's Theorem, he concluded that one was not obtainable. "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind." Jürgen Schmidhuber (1997) has argued against this view; he asserts that Gödel's theorems are irrelevant for computable physics. In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not prevent formal theories of everything describable by very few bits of information. Related critique was offered by Solomon Feferman and others. Douglas S. Robertson offers Conway's game of life as an example: The underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws.
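Robertson's example is easy to make concrete: the complete rule set of Life fits in a few lines of Python (a sketch using SciPy's convolution; the glider pattern is just an illustration), yet questions such as whether an arbitrary starting pattern ever dies out are undecidable in general:

```python
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def step(grid):
    """One generation of Conway's Game of Life - the *entire* rule set:
    a cell is alive next step iff it has 3 live neighbors, or is alive
    and has exactly 2 live neighbors."""
    neighbors = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A glider on a small toroidal board; after 4 steps it reappears, shifted.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = step(grid)
```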
Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a theory of everything cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question. This definitional discrepancy may explain some of the disagreement among researchers. Fundamental limits in accuracy No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasion. Definition of fundamental laws There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe. One view is the hard reductionist position that the theory of everything is the fundamental law and that all other theories that apply within the universe are a consequence of the theory of everything. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems, are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a theory of everything. The debates do not make the point at issue clear. Possibly the only issue at stake is the right to apply the high-status term "fundamental" to the respective subjects of research. A well-known debate over this took place between Steven Weinberg and Philip Anderson. Impossibility of calculation Weinberg points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a theory of everything must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics. Difficulties in creating a theory of everything often begin to appear when combining quantum mechanics with the theory of general relativity, as the equations of quantum mechanics begin to falter when the force of gravity is applied to them.
See also Superfluid vacuum theory (SVT) References Bibliography Pais, Abraham (1982), Subtle is the Lord: The Science and the Life of Albert Einstein, Oxford University Press, Oxford, ch. 17. Weinberg, Steven (1993), Dreams of a Final Theory: The Search for the Fundamental Laws of Nature, Hutchinson Radius, London. Powell, Corey S. (2015), "Relativity versus quantum mechanics: the battle for the universe", The Guardian. External links The Elegant Universe, Nova episode about the search for the theory of everything and string theory. Theory of Everything, freeview video by the Vega Science Trust, BBC and Open University. The Theory of Everything: Are we getting closer, or is a final theory of matter and the universe impossible? Debate between John Ellis (physicist), Frank Close and Nicholas Maxwell. Why The World Exists, a discussion between physicist Laura Mersini-Houghton, cosmologist George Francis Rayner Ellis and philosopher David Wallace about dark matter, parallel universes and explaining why these and the present Universe exist. Theories of Everything, BBC Radio 4 discussion with Brian Greene, John Barrow & Val Gibson (In Our Time, March 25, 2004). Physics beyond the Standard Model Theories of gravity
Theory of everything
[ "Physics" ]
5,554
[ "Theoretical physics", "Unsolved problems in physics", "Particle physics", "Theories of gravity", "Physics beyond the Standard Model" ]
30,462
https://en.wikipedia.org/wiki/Triple%20point
In thermodynamics, the triple point of a substance is the temperature and pressure at which the three phases (gas, liquid, and solid) of that substance coexist in thermodynamic equilibrium. It is that temperature and pressure at which the sublimation, fusion, and vaporisation curves meet. For example, the triple point of mercury occurs at a temperature of −38.8344 °C and a pressure of 0.165 mPa. In addition to the triple point for solid, liquid, and gas phases, a triple point may involve more than one solid phase, for substances with multiple polymorphs. Helium-4 is unusual in that it has no sublimation/deposition curve and therefore no triple points where its solid phase meets its gas phase. Instead, it has a vapor-liquid-superfluid point, a solid-liquid-superfluid point, a solid-solid-liquid point, and a solid-solid-superfluid point. None of these should be confused with the lambda point, which is not any kind of triple point. The term "triple point" was coined in 1873 by James Thomson, brother of Lord Kelvin. The triple points of several substances are used to define points in the ITS-90 international temperature scale, ranging from the triple point of hydrogen (13.8033 K) to the triple point of water (273.16 K, 0.01 °C, or 32.018 °F). Before 2019, the triple point of water was used to define the kelvin, the base unit of thermodynamic temperature in the International System of Units (SI). The kelvin was defined so that the triple point of water is exactly 273.16 K, but that changed with the 2019 revision of the SI, where the kelvin was redefined so that the Boltzmann constant is exactly 1.380649 × 10^−23 J/K, and the triple point of water became an experimentally measured constant. Triple point of water Gas–liquid–solid triple point Following the 2019 revision of the SI, the value of the triple point of water is no longer used as a defining point. However, its empirical value remains important: the unique combination of pressure and temperature at which liquid water, solid ice, and water vapor coexist in a stable equilibrium is approximately 273.16 K (0.01 °C) and a vapor pressure of 611.657 pascals (about 6.12 mbar). Liquid water can only exist at pressures equal to or greater than the triple-point pressure. Below this, in the vacuum of outer space, solid ice sublimates, transitioning directly into water vapor when heated at a constant pressure. Conversely, above the triple point, solid ice first melts into liquid water upon heating at a constant pressure, then evaporates or boils to form vapor at a higher temperature. For most substances, the gas–liquid–solid triple point is the minimum temperature where the liquid can exist. For water, this is not the case. The melting point of ordinary ice decreases with pressure, as shown by the phase diagram's dashed green line. Just below the triple point, compression at a constant temperature transforms water vapor first to solid and then to liquid. Historically, during the Mariner 9 mission to Mars, the triple point pressure of water was used to define "sea level". Now, laser altimetry and gravitational measurements are preferred to define Martian elevation. High-pressure phases At high pressures, water has a complex phase diagram with 15 known phases of ice and several triple points, including 10 whose coordinates are shown in the diagram. For example, the triple point at 251 K (−22 °C) and 210 MPa (2070 atm) corresponds to the conditions for the coexistence of ice Ih (ordinary ice), ice III and liquid water, all at equilibrium.
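The sublimation behavior below the triple point can be illustrated numerically: integrating the Clausius–Clapeyron relation from the triple point, with a constant molar enthalpy of sublimation (an approximation; the 51 kJ/mol value used here is a typical figure for ice near 0 °C), predicts the vapor pressure of ice at lower temperatures. A sketch in Python:

```python
import math

R   = 8.314       # gas constant, J/(mol*K)
T3  = 273.16      # triple-point temperature of water, K
p3  = 611.657     # triple-point pressure, Pa
L_s = 51_000.0    # enthalpy of sublimation, J/mol (assumed constant)

def ice_vapor_pressure(T):
    """Clausius-Clapeyron estimate of ice's sublimation pressure in Pa."""
    return p3 * math.exp(-(L_s / R) * (1.0 / T - 1.0 / T3))

print(f"{ice_vapor_pressure(253.15):.0f} Pa")  # ~104 Pa at -20 C
                                               # (measured value is ~103 Pa)
```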
There are also triple points for the coexistence of three solid phases, for example ice II, ice V and ice VI at 218 K (−55 °C) and 620 MPa (6120 atm). For those high-pressure forms of ice which can exist in equilibrium with liquid, the diagram shows that melting points increase with pressure. At temperatures above 273 K (0 °C), increasing the pressure on water vapor results first in liquid water and then a high-pressure form of ice. In the range 251–273 K, ice I is formed first, followed by liquid water and then ice III or ice V, followed by other still denser high-pressure forms. Triple-point cells Triple-point cells are used in the calibration of thermometers. For exacting work, triple-point cells are typically filled with a highly pure chemical substance such as hydrogen, argon, mercury, or water (depending on the desired temperature). The purity of these substances can be such that only one part in a million is a contaminant, called "six nines" because it is 99.9999% pure. A specific isotopic composition (for water, VSMOW) is used because variations in isotopic composition cause small changes in the triple point. Triple-point cells are so effective at achieving highly precise, reproducible temperatures, that an international calibration standard for thermometers called ITS–90 relies upon triple-point cells of hydrogen, neon, oxygen, argon, mercury, and water for delineating six of its defined temperature points. Table of triple points This table lists the gas–liquid–solid triple points of several substances. Unless otherwise noted, the data come from the U.S. National Bureau of Standards (now NIST, National Institute of Standards and Technology). Notes: For comparison, typical atmospheric pressure is 101.325 kPa (1 atm). Before the new definition of SI units, water's triple point, 273.16 K, was an exact number. See also Critical point (thermodynamics) Gibbs' phase rule References External links Chemical properties Phase transitions Thermodynamics Threshold temperatures Gases 1873 introductions
Triple point
[ "Physics", "Chemistry", "Mathematics" ]
1,202
[ "Physical phenomena", "Phase transitions", "Matter", "Critical phenomena", "Threshold temperatures", "Phases of matter", "Thermodynamics", "nan", "Statistical mechanics", "Gases", "Dynamical systems" ]
30,549
https://en.wikipedia.org/wiki/Trans-Neptunian%20object
A trans-Neptunian object (TNO), also written transneptunian object, is any minor planet in the Solar System that orbits the Sun at a greater average distance than Neptune, which has an orbital semi-major axis of 30.1 astronomical units (AU). Typically, TNOs are further divided into the classical and resonant objects of the Kuiper belt, the scattered disc and detached objects with the sednoids being the most distant ones. As of July 2024, the catalog of minor planets contains 901 numbered and more than 3,000 unnumbered TNOs. However, nearly 5000 objects with semimajor axis over 30 AU are present in the MPC catalog, with 1000 being numbered. The first trans-Neptunian object to be discovered was Pluto in 1930. It took until 1992 to discover a second trans-Neptunian object orbiting the Sun directly, 15760 Albion. The most massive TNO known is Eris, followed by Pluto, Haumea, Makemake, and Gonggong. More than 80 satellites have been discovered in orbit of trans-Neptunian objects. TNOs vary in color and are either grey-blue (BB) or very red (RR). They are thought to be composed of mixtures of rock, amorphous carbon and volatile ices such as water and methane, coated with tholins and other organic compounds. Twelve minor planets with a semi-major axis greater than 150 AU and perihelion greater than 30 AU are known, which are called extreme trans-Neptunian objects (ETNOs). History Discovery of Pluto The orbit of each of the planets is slightly affected by the gravitational influences of the other planets. Discrepancies in the early 1900s between the observed and expected orbits of Uranus and Neptune suggested that there were one or more additional planets beyond Neptune. The search for these led to the discovery of Pluto in February 1930, which was too small to explain the discrepancies. Revised estimates of Neptune's mass from the Voyager 2 flyby in 1989 showed that the problem was spurious. Pluto was easiest to find because it has the highest apparent magnitude of all known trans-Neptunian objects. It also has a lower inclination to the ecliptic than most other large TNOs. Subsequent discoveries After Pluto's discovery, American astronomer Clyde Tombaugh continued searching for some years for similar objects but found none. For a long time, no one searched for other TNOs as it was generally believed that Pluto, which up to August 2006 was classified as a planet, was the only major object beyond Neptune. Only after the 1992 discovery of a second TNO, 15760 Albion, did systematic searches for further such objects begin. A broad strip of the sky around the ecliptic was photographed and digitally evaluated for slowly moving objects. Hundreds of TNOs were found, with diameters in the range of 50 to 2,500 kilometers. Eris, the most massive TNO, was discovered in 2005, reviving a long-running dispute within the scientific community over the classification of large TNOs, and whether objects like Pluto can be considered planets. Pluto and Eris were eventually classified as dwarf planets by the International Astronomical Union. Classification According to their distance from the Sun and their orbital parameters, TNOs are classified in two large groups: the Kuiper belt objects (KBOs) and the scattered disc objects (SDOs). The diagram to the right illustrates the distribution of known trans-Neptunian objects (up to 70 au) in relation to the orbits of the planets and the centaurs for reference. Different classes are represented in different colours.
Resonant objects (including Neptune trojans) are plotted in red, classical Kuiper belt objects in blue. The scattered disc extends to the right, far beyond the diagram, with known objects at mean distances beyond 500 au (Sedna) and aphelia beyond 1,000 au. KBOs The Edgeworth–Kuiper belt contains objects with an average distance to the Sun of 30 to about 55 au, usually having close-to-circular orbits with a small inclination from the ecliptic. Edgeworth–Kuiper belt objects are further classified into the resonant trans-Neptunian objects, which are locked in an orbital resonance with Neptune, and the classical Kuiper belt objects, also called "cubewanos", that have no such resonance, moving on almost circular orbits, unperturbed by Neptune. There are a large number of resonant subgroups, the largest being the twotinos (1:2 resonance) and the plutinos (2:3 resonance), named after their most prominent member, Pluto; each resonance fixes a semi-major axis through Kepler's third law, as in the sketch below. Members of the classical Edgeworth–Kuiper belt include 15760 Albion, Quaoar and Makemake. Another subclass of Kuiper belt objects is the so-called scattering objects (SO). These are non-resonant objects that come near enough to Neptune to have their orbits changed from time to time (such as causing changes in semi-major axis of at least 1.5 AU in 10 million years) and are thus undergoing gravitational scattering. Scattering objects are easier to detect than other trans-Neptunian objects of the same size because they come nearer to Earth, some having perihelia around 20 AU. Several are known with g-band absolute magnitude below 9, meaning that the estimated diameter is more than 100 km. It is estimated that there are between 240,000 and 830,000 scattering objects brighter than r-band absolute magnitude 12, corresponding to diameters greater than about 18 km. Scattering objects are hypothesized to be the source of the so-called Jupiter-family comets (JFCs), which have periods of less than 20 years. SDOs The scattered disc contains objects farther from the Sun, with very eccentric and inclined orbits. These orbits are non-resonant and non-planetary-orbit-crossing. A typical example is the most-massive-known TNO, Eris. Based on the Tisserand parameter relative to Neptune (TN), the objects in the scattered disc can be further divided into the "typical" scattered disc objects (SDOs, Scattered-near) with a TN of less than 3, and into the detached objects (ESDOs, Scattered-extended) with a TN greater than 3. In addition, detached objects have a time-averaged eccentricity greater than 0.2. The Sednoids are a further extreme sub-grouping of the detached objects with perihelia so distant that it is confirmed that their orbits cannot be explained by perturbations from the giant planets, nor by interaction with the galactic tides. However, a passing star could have moved them on their orbit. Physical characteristics Given the apparent magnitude (>20) of all but the biggest trans-Neptunian objects, the physical studies are limited to the following: thermal emissions for the largest objects (see size determination), colour indices (i.e. comparisons of the apparent magnitudes using different filters), and analysis of spectra, visual and infrared. Studying colours and spectra provides insight into the objects' origin and a potential correlation with other classes of objects, namely centaurs and some satellites of giant planets (Triton, Phoebe), suspected to originate in the Kuiper belt.
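As promised above, the resonant families map directly onto semi-major axes through Kepler's third law, a ∝ P^(2/3). A sketch in Python computing where the plutinos and twotinos must sit, taking Neptune's semi-major axis as 30.1 AU:

```python
a_neptune = 30.1   # AU

def resonant_semi_major_axis(p, q):
    """Semi-major axis of a p:q resonance, where the object completes
    p orbits for every q orbits of Neptune (Kepler: a ~ P**(2/3))."""
    return a_neptune * (q / p) ** (2.0 / 3.0)

print(f"plutinos (2:3): {resonant_semi_major_axis(2, 3):.1f} AU")  # ~39.4
print(f"twotinos (1:2): {resonant_semi_major_axis(1, 2):.1f} AU")  # ~47.8
```

The 39.4 AU result matches Pluto's own semi-major axis of about 39.5 AU, as expected for the family named after it.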
Physical characteristics Given the apparent magnitude (>20) of all but the biggest trans-Neptunian objects, physical studies are limited to the following: thermal emissions for the largest objects (see size determination) colour indices, i.e. comparisons of the apparent magnitudes using different filters analysis of spectra, visual and infrared Studying colours and spectra provides insight into the objects' origin and a potential correlation with other classes of objects, namely centaurs and some satellites of giant planets (Triton, Phoebe), suspected to originate in the Kuiper belt. However, the interpretations are typically ambiguous, as the spectra can fit more than one model of the surface composition and depend on the unknown particle size. More significantly, the optical surfaces of small bodies are subject to modification by intense radiation, solar wind and micrometeorites. Consequently, the thin optical surface layer could be quite different from the regolith underneath, and not representative of the bulk composition of the body. Small TNOs are thought to be low-density mixtures of rock and ice with some organic (carbon-containing) surface material such as tholins, detected in their spectra. On the other hand, the high density of Haumea, 2.6–3.3 g/cm3, suggests a very high non-ice content (compare with Pluto's density: 1.86 g/cm3). The composition of some small TNOs could be similar to that of comets. Indeed, some centaurs undergo seasonal changes when they approach the Sun, making the boundary blurred (see 2060 Chiron and 7968 Elst–Pizarro). However, population comparisons between centaurs and TNOs are still controversial. Color indices Colour indices are simple measures of the differences in the apparent magnitude of an object seen through blue (B), visible (V), i.e. green-yellow, and red (R) filters. Diagrams of known colour indices for all but the biggest objects typically also plot, for reference, two moons, Triton and Phoebe, the centaur Pholus and the planet Mars. Correlations between the colours and the orbital characteristics have been studied, to confirm theories of different origin of the different dynamic classes: Classical Kuiper belt objects (cubewanos) seem to be composed of two different colour populations: the so-called cold (inclination <5°) population, displaying only red colours, and the so-called hot (higher inclination) population, displaying the whole range of colours from blue to very red. A recent analysis based on data from the Deep Ecliptic Survey confirms this difference in colour between low-inclination (named Core) and high-inclination (named Halo) objects. Red colours of the Core objects, together with their unperturbed orbits, suggest that these objects could be a relic of the original population of the belt. Scattered disc objects show colour resemblances with hot classical objects, pointing to a common origin. While the relatively dimmer bodies, as well as the population as a whole, are reddish (V−I = 0.3–0.6), the bigger objects are often more neutral in colour (infrared index V−I < 0.2). This distinction leads to the suggestion that the surface of the largest bodies is covered with ices, hiding the redder, darker areas underneath. Spectral type Among TNOs, as among centaurs, there is a wide range of colors from blue-grey (neutral) to very red, but unlike the centaurs, which are bimodally grouped into grey and red, the colour distribution for TNOs appears to be uniform. The spectra differ in reflectivity in visible red and near infrared. Neutral objects present a flat spectrum, reflecting as much red and infrared as visible spectrum. Very red objects present a steep slope, reflecting much more in red and infrared. A recent attempt at classification (common with centaurs) uses a total of four classes, from BB (blue, or neutral color, average B−V 0.70, V−R 0.39, e.g. Orcus) to RR (very red, B−V 1.08, V−R 0.71, e.g. Sedna), with BR and IR as intermediate classes. BR (intermediate blue-red) and IR (moderately red) differ mostly in the infrared bands I, J and H.
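As a worked example of the colour indices just described: each index is nothing more than a magnitude difference, and an object can be compared against the BB and RR class averages quoted above. The nearest-mean assignment below is our own illustrative scheme (real taxonomies are fitted statistically over all four classes), and the magnitudes are invented.

```python
def color_indices(b: float, v: float, r: float):
    """Colour indices are magnitude differences between filters: B-V and V-R."""
    return b - v, v - r

# Average colours of the two extreme classes, as quoted above.
CLASS_MEANS = {"BB": (0.70, 0.39), "RR": (1.08, 0.71)}

def nearest_extreme_class(b_v: float, v_r: float) -> str:
    """Pick whichever class mean is closest in (B-V, V-R) space."""
    return min(CLASS_MEANS, key=lambda c: (CLASS_MEANS[c][0] - b_v) ** 2
                                          + (CLASS_MEANS[c][1] - v_r) ** 2)

b_v, v_r = color_indices(b=21.55, v=20.65, r=20.00)  # invented magnitudes
print(round(b_v, 2), round(v_r, 2), nearest_extreme_class(b_v, v_r))  # 0.9 0.65 RR
```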
Typical models of the surface include water ice, amorphous carbon, silicates and organic macromolecules, named tholins, created by intense radiation. Four major tholins are used to fit the reddening slope: Titan tholin, believed to be produced from a mixture of 90% N2 (nitrogen) and 10% CH4 (methane) Triton tholin, as above but with a very low (0.1%) methane content Ice tholin I, believed to be produced from a mixture of 86% H2O (water) and 14% C2H6 (ethane) Ice tholin II, produced from 80% H2O, 16% CH3OH (methanol) and 3% CO2 (carbon dioxide) As an illustration of the two extreme classes BB and RR, the following compositions have been suggested: for Sedna (RR, very red): 24% Triton tholin, 7% carbon, 10% N2, 26% methanol, and 33% methane; for Orcus (BB, grey/blue): 85% amorphous carbon, 4% Titan tholin, and 11% H2O ice. Size determination and distribution Characteristically, big (bright) objects are typically on inclined orbits, whereas the invariable plane regroups mostly small and dim objects. It is difficult to estimate the diameter of TNOs. For very large objects, with very well known orbital elements (like Pluto), diameters can be precisely measured by occultation of stars. For other large TNOs, diameters can be estimated by thermal measurements. The intensity of light illuminating the object is known (from its distance to the Sun), and one assumes that most of its surface is in thermal equilibrium (usually not a bad assumption for an airless body). For a known albedo, it is possible to estimate the surface temperature, and correspondingly the intensity of heat radiation. Further, if the size of the object is known, it is possible to predict both the amount of visible light and the emitted heat radiation reaching Earth. A simplifying factor is that the Sun emits almost all of its energy in visible light and at nearby frequencies, while at the cold temperatures of TNOs, the heat radiation is emitted at completely different wavelengths (the far infrared). Thus there are two unknowns (albedo and size), which can be determined by two independent measurements (of the amount of reflected light and emitted infrared heat radiation). TNOs are so far from the Sun that they are very cold, hence producing black-body radiation peaking around 60 micrometres in wavelength. This wavelength of light is impossible to observe from the Earth's surface; it can be observed only from space using, e.g., the Spitzer Space Telescope. For ground-based observations, astronomers observe the tail of the black-body radiation in the far infrared. This far infrared radiation is so dim that the thermal method is only applicable to the largest KBOs. For the majority of (small) objects, the diameter is estimated by assuming an albedo. However, the albedos found range from 0.50 down to 0.05, resulting in a size range of 1,200–3,700 km for an object of absolute magnitude 1.0 (a numerical sketch of this spread follows at the end of this section). Notable objects Exploration The only mission to date that primarily targeted a trans-Neptunian object was NASA's New Horizons, which was launched in January 2006 and flew by the Pluto system in July 2015 and 486958 Arrokoth in January 2019. In 2011, a design study explored a spacecraft survey of Quaoar, Sedna, Makemake, Haumea, and Eris. In 2019, one proposed mission to TNOs included designs for orbital capture and multi-target scenarios; among the TNOs studied in the design paper was Lempo.
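Returning to the size estimates above: the dependence of the inferred diameter on the assumed albedo follows from the standard relation between absolute magnitude H, geometric albedo p, and diameter, D = (1329 km / sqrt(p)) * 10^(-H/5). A minimal sketch (the constant 1329 km is the conventional value used for minor planets), which reproduces the 1,200–3,700 km spread quoted for H = 1.0:

```python
import math

def diameter_km(h_mag: float, albedo: float) -> float:
    """D = 1329 km / sqrt(p) * 10**(-H/5): a brighter surface implies a smaller body."""
    return 1329.0 / math.sqrt(albedo) * 10.0 ** (-h_mag / 5.0)

print(round(diameter_km(1.0, 0.50)))  # ~1186 km at the high-albedo end
print(round(diameter_km(1.0, 0.05)))  # ~3751 km at the low-albedo end
```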
The existence of planets beyond Neptune, ranging from less than an Earth mass (sub-Earth) up to a brown dwarf, has often been postulated for different theoretical reasons to explain several observed or speculated features of the Kuiper belt and the Oort cloud. It was recently proposed to use ranging data from the New Horizons spacecraft to constrain the position of such a hypothesized body. NASA has been working towards a dedicated Interstellar Precursor mission in the 21st century, one intentionally designed to reach the interstellar medium, and as part of this a flyby of objects like Sedna is also considered. Overall, these spacecraft studies have proposed a launch in the 2020s and would try to go a little faster than the Voyagers using existing technology. One 2018 design study for an Interstellar Precursor included a visit to the minor planet 50000 Quaoar in the 2030s. Extreme trans-Neptunian objects Among the extreme trans-Neptunian objects are three high-perihelion objects classified as sednoids: 90377 Sedna, 2012 VP113, and 541132 Leleākūhonua. They are distant detached objects with perihelia greater than 70 AU. Their high perihelia keep them at a sufficient distance to avoid significant gravitational perturbations from Neptune. Previous explanations for the high perihelion of Sedna include a close encounter with an unknown planet on a distant orbit and a distant encounter with a random star or a member of the Sun's birth cluster that passed near the Solar System. See also Dwarf planet Mesoplanet Nemesis (hypothetical star) Planet Nine Sednoid Small Solar System body Trans-Neptunian planets in fiction Triton Tyche (hypothetical planet) Notes References External links Nine planets, University of Arizona David Jewitt's Kuiper Belt site Large KBO page A list of the estimates of the diameters from johnstonarchive with references to the original papers Distant minor planets Neptune Solar System
Trans-Neptunian object
[ "Astronomy" ]
3,496
[ "Outer space", "Solar System" ]
30,551
https://en.wikipedia.org/wiki/Theogony
The Theogony (i.e. "the genealogy or birth of the gods") is a poem by Hesiod (8th–7th century BC) describing the origins and genealogies of the Greek gods, composed around 700 BC. It is written in the Epic dialect of Ancient Greek and contains 1022 lines. It is one of the most important sources for the understanding of early Greek cosmology. Descriptions Hesiod's Theogony is a large-scale synthesis of a vast variety of local Greek traditions concerning the gods, organized as a narrative that tells how they came to be and how they established permanent control over the cosmos. It is the first known Greek mythical cosmogony. The initial state of the universe is chaos, a dark indefinite void considered a divine primordial condition from which everything else appeared. Theogonies are a part of Greek mythology which embodies the desire to articulate reality as a whole; this universalizing impulse was fundamental for the first projects of speculative theorizing. Further, in the "Kings and Singers" passage (80–103) Hesiod appropriates to himself the authority usually reserved to sacred kingship. The poet declares that it is he, where we might have expected some king instead, upon whom the Muses have bestowed the two gifts of a scepter and an authoritative voice (Hesiod, Theogony 30–3), which are the visible signs of kingship. It is not that this gesture is meant to make Hesiod a king. Rather, the point is that the authority of kingship now belongs to the poetic voice, the voice that is declaiming the Theogony. Although it is often used as a sourcebook for Greek mythology, the Theogony is both more and less than that. In formal terms it is a hymn invoking Zeus and the Muses: parallel passages between it and the much shorter Homeric Hymn to the Muses make it clear that the Theogony developed out of a tradition of hymnic preludes with which an ancient Greek rhapsode would begin his performance at poetic competitions. It is necessary to see the Theogony not as the definitive source of Greek mythology, but rather as a snapshot of a dynamic tradition that happened to crystallize when Hesiod formulated the myths he knew—and to remember that the traditions have continued evolving since that time. The written form of the Theogony was established in the 6th century BC. Even some conservative editors have concluded that the Typhon episode (820–68) is an interpolation. Hesiod was probably influenced by some Near-Eastern traditions, such as the Babylonian Dynasty of Dunnum, which were mixed with local traditions, but they are more likely to be lingering traces from the Mycenaean tradition than the result of oriental contacts in Hesiod's own time. The decipherment of Hittite mythical texts, notably the Kingship in Heaven text first presented in 1946, with its castration mytheme, offers in the figure of Kumarbi an Anatolian parallel to Hesiod's Uranus–Cronus conflict. The succession myth One of the principal components of the Theogony is the presentation of what is called the "succession myth", which tells how Cronus overthrew Uranus, and how in turn Zeus overthrew Cronus and his fellow Titans, and how Zeus was eventually established as the final and permanent ruler of the cosmos. Uranus (Sky) initially produced eighteen children with his mother Gaia (Earth): the twelve Titans, the three Cyclopes, and the three Hecatoncheires (Hundred-Handers), but hating them, he hid them away somewhere inside Gaia. Angry and in distress, Gaia fashioned a sickle made of adamant and urged her children to punish their father.
Only her son Cronus, the youngest Titan, was willing to do so. So Gaia hid Cronus in "ambush" and gave him the adamantine sickle, and when Uranus came to lie with Gaia, Cronus reached out and castrated his father. This enabled the Titans to be born and Cronus to assume supreme command of the cosmos. Cronus, having now taken over control of the cosmos from Uranus, wanted to ensure that he maintained control. Uranus and Gaia had prophesied to Cronus that one of Cronus' own children would overthrow him, so when Cronus married Rhea, he made sure to swallow each of the children she birthed: Hestia, Demeter, Hera, Hades, Poseidon, and Zeus (in that order), to Rhea's great sorrow. However, when Rhea was pregnant with Zeus, Rhea begged her parents Gaia and Uranus to help her save Zeus. So they sent Rhea to Lyctus on Crete to bear Zeus, and Gaia took the newborn Zeus to raise, hiding him deep in a cave beneath Mount Aigaion. Meanwhile, Rhea gave Cronus a huge stone wrapped in baby's clothes, which he swallowed, thinking that it was another of Rhea's children. Zeus, now grown, forced Cronus (using some unspecified trickery of Gaia) to disgorge his other five children. Zeus then released his uncles the Cyclopes (apparently still imprisoned beneath the earth, along with the Hundred-Handers, where Uranus had originally confined them), who then provided Zeus with his great weapon, the thunderbolt, which had been hidden by Gaia. A great war, the Titanomachy, began between the new gods, Zeus and his siblings, and the old gods, Cronus and the Titans, for control of the cosmos. In the tenth year of that war, following Gaia's counsel, Zeus released the Hundred-Handers, who joined the war against the Titans, helping Zeus to gain the upper hand. Zeus then cast the fury of his thunderbolt at the Titans, defeating them and throwing them into Tartarus, thus ending the Titanomachy. A final threat to Zeus' power was to come in the form of the monster Typhon, son of Gaia and Tartarus. Zeus with his thunderbolt was quickly victorious, and Typhon was also imprisoned in Tartarus. Zeus, by Gaia's advice, was elected king of the gods, and he distributed various honors among the gods. Zeus then married his first wife Metis, but when he learned that Metis was fated to produce a son who might overthrow his rule, by the advice of Gaia and Uranus, Zeus swallowed Metis (while she was still pregnant with Athena). And so Zeus managed to end the cycle of succession and secure his eternal rule over the cosmos. The genealogies The first gods The world began with the spontaneous generation of four beings: first arose Chaos (Chasm); then came Gaia (Earth), "the ever-sure foundation of all"; "dim" Tartarus, in the depths of the Earth; and Eros (Desire), "fairest among the deathless gods". From Chaos came Erebus (Darkness) and Nyx (Night). And Nyx "from union in love" with Erebus produced Aether (Brightness) and Hemera (Day). From Gaia came Uranus (Sky), the Ourea (Mountains), and Pontus (Sea). Children of Gaia and Uranus Uranus mated with Gaia, and she gave birth to the twelve Titans: Oceanus, Coeus, Crius, Hyperion, Iapetus, Theia, Rhea, Themis, Mnemosyne, Phoebe, Tethys and Cronus; the Cyclopes: Brontes, Steropes and Arges; and the Hecatoncheires ("Hundred-Handers"): Cottus, Briareos, and Gyges. Children of Gaia and Uranus' blood, and Uranus' genitals When Cronus castrated Uranus, from Uranus' blood which splattered onto the earth came the Erinyes (Furies), the Giants, and the Meliai.
Cronus threw the severed genitals into the sea, around which foam developed and transformed into the goddess Aphrodite. Descendants of Nyx Meanwhile, Nyx (Night) alone produced children: Moros (Doom), Ker (Destiny), Thanatos (Death), Hypnos (Sleep), Oneiroi (Dreams), Momus (Blame), Oizys (Pain), Hesperides (Daughters of Night), Moirai (Fates), Keres (Destinies), Nemesis (Retribution), Apate (Deceit), Philotes (Love), Geras (Old Age), and Eris (Discord). And from Eris alone came Ponos (Hardship), Lethe (Forgetfulness), Limos (Starvation), Algea (Pains), Hysminai (Battles), Makhai (Wars), Phonoi (Murders), Androktasiai (Manslaughters), Neikea (Quarrels), Pseudea (Lies), Logoi (Stories), Amphillogiai (Disputes), Dysnomia (Anarchy), Ate (Ruin), and Horkos (Oath). Descendants of Gaia and Pontus After Uranus' castration, Gaia mated with her son Pontus (Sea), producing a descendant line consisting primarily of sea deities, sea nymphs, and hybrid monsters. Their first child Nereus (Old Man of the Sea) married Doris, one of the Oceanid daughters of the Titans Oceanus and Tethys, and they produced the Nereids, fifty sea nymphs, which included Amphitrite, Thetis, and Psamathe. Their second child Thaumas married Electra, another Oceanid, and their offspring were Iris (Rainbow) and the two Harpies: Aello and Ocypete. Gaia and Pontus' third and fourth children, Phorcys and Ceto, married each other and produced the two Graiae: Pemphredo and Enyo, and the three Gorgons: Stheno, Euryale, and Medusa. Poseidon mated with Medusa, and two offspring, the winged horse Pegasus and the warrior Chrysaor, were born when the hero Perseus cut off Medusa's head. Chrysaor married Callirhoe, another Oceanid, and they produced the three-headed Geryon. Next comes the half-nymph half-snake Echidna (her mother is unclear, probably Ceto, or possibly Callirhoe). The last offspring of Ceto and Phorcys was a serpent (unnamed in the Theogony, later called Ladon by Apollonius of Rhodes) who guards the golden apples. Descendants of Echidna and Typhon Gaia also mated with Tartarus to produce Typhon, whom Echidna married, producing several monstrous descendants. Their first three offspring were Orthus, Cerberus, and the Hydra. Next comes the Chimera (whose mother is unclear, either Echidna or the Hydra). Finally Orthus (his mate is unclear, either the Chimera or Echidna) produced two offspring: the Sphinx and the Nemean Lion. Descendants of the Titans The Titans Oceanus, Hyperion, Coeus, and Cronus married their sisters Tethys, Theia, Phoebe and Rhea, and Crius married his half-sister Eurybia, the daughter of Gaia and her son, Pontus. From Oceanus and Tethys came the three thousand river gods (including Nilus [Nile], Alpheus, and Scamander) and three thousand Oceanid nymphs (including Doris, Electra, Callirhoe, Styx, Clymene, Metis, Eurynome, Perseis, and Idyia). From Hyperion and Theia came Helios (Sun), Selene (Moon), and Eos (Dawn), and from Crius and Eurybia came Astraios, Pallas, and Perses. From Eos and Astraios came the winds: Zephyrus, Boreas and Notos, Eosphoros (Dawn-bringer, i.e. Venus, the Morning Star), and the Stars. From Pallas and the Oceanid Styx came Zelus (Envy), Nike (Victory), Kratos (Power), and Bia (Force). From Coeus and Phoebe came Leto and Asteria, who married Perses, producing Hekate, and from Cronus and his older sister, Rhea, came Hestia, Demeter, Hera, Hades, Poseidon, and Zeus. The Titan Iapetos married the Oceanid Clymene and produced Atlas, Menoetius, Prometheus, and Epimetheus.
Children of Zeus and his seven wives Zeus married seven wives. His first wife was the Oceanid Metis, whom he impregnated with Athena and then, on the advice of Gaia and Uranus, swallowed, so that no son of his by Metis would overthrow him, as had been foretold. Zeus' second wife was his aunt, the Titan Themis, who bore the three Horae (Seasons): Eunomia (Order), Dikē (Justice), Eirene (Peace); and the three Moirai (Fates): Clotho (Spinner), Lachesis (Allotter), and Atropos (Unbending). Zeus then married his third wife, another Oceanid, Eurynome, who bore the three Charites (Graces): Aglaea (Splendor), whom Hephaestus married, Euphrosyne (Joy), and Thalia (Good Cheer). Zeus' fourth wife was his sister, Demeter, who bore Persephone. The fifth wife of Zeus was another aunt, the Titan Mnemosyne, from whom came the nine Muses: Clio, Euterpe, Thalia, Melpomene, Terpsichore, Erato, Polymnia, Urania, and Calliope. His sixth wife was the Titan Leto, who gave birth to Apollo and Artemis. Zeus' seventh and final wife was his sister Hera, the mother by Zeus of Hebe, Ares, and Eileithyia. Zeus finally "gave birth" himself to Athena, from his head, which angered Hera so much that she produced, by herself, her own son Hephaestus, god of fire and blacksmiths. Other descendants of divine fathers From Poseidon and the Nereid Amphitrite was born Triton, and from Ares and Aphrodite came Phobos (Fear), Deimos (Terror), and Harmonia (Harmony). Zeus, with Atlas's daughter Maia, produced Hermes, and with the mortal Alcmene, produced the hero Heracles, who married Hebe. Zeus and the mortal Semele, daughter of Harmonia and Cadmus, the founder and first king of Thebes, produced Dionysus, who married Ariadne, daughter of Minos, king of Crete. Helios and the Oceanid Perseis produced Circe and Aeetes; Aeetes became king of Colchis and married the Oceanid Idyia, producing Medea. Children of divine mothers with mortal fathers The goddess Demeter joined with the mortal Iasion to produce Plutus. In addition to Semele, the goddess Harmonia and the mortal Cadmus also produced Ino, Agave, Autonoe and Polydorus. Eos (Dawn), with the mortal Tithonus, produced the hero Memnon and Emathion, and with Cephalus produced Phaethon. Medea, with the mortal Jason, produced Medius; the Nereid Psamathe, with the mortal Aeacus, produced the hero Phocus; the Nereid Thetis, with Peleus, produced the great warrior Achilles; and the goddess Aphrodite, with the mortal Anchises, produced the Trojan hero Aeneas. With the hero Odysseus, Circe would give birth to Agrius, Latinus, and Telegonus, and Atlas' daughter Calypso would also bear Odysseus two sons, Nausithoos and Nausinous. Prometheus The Theogony, after listing the offspring of the Titan Iapetus and the Oceanid Clymene as Atlas, Menoitios, Prometheus, and Epimetheus, and telling briefly what happened to each, tells the story of Prometheus. When the gods and men met at Mekone to decide how sacrifices should be distributed, Prometheus sought to trick Zeus. Slaughtering an ox, he took the valuable fat and meat, and covered it with the ox's stomach. Prometheus then took the bones and hid them beneath a thin glistening layer of fat. Prometheus asked Zeus' opinion on which offering pile he found more desirable, hoping to trick the god into selecting the less desirable portion. Though Zeus saw through the trick, he chose the fat-covered bones, and so it was established that ever after men would burn the bones as sacrifice to the gods, keeping the choice meat and fat for themselves.
But in punishment for this trick, an angry Zeus decided to deny mankind the use of fire. Prometheus, however, stole fire inside a fennel stalk and gave it to humanity. Zeus then ordered the creation of the first woman, Pandora, as a new punishment for mankind. And Prometheus was chained to a cliff, where an eagle fed on his ever-regenerating liver every day, until eventually Zeus' son Heracles came to free him. Manuscripts The earliest existing manuscripts of the Theogony date from the end of the 13th century. An early example is found in Vaticanus gr. 1825. This manuscript dates to about 1310 based on watermarks. There are about 64 known manuscripts that date from 1600 AD or earlier. Influence on earliest Greek philosophy The heritage of Greek mythology already embodied the desire to articulate reality as a whole, and this universalizing impulse was fundamental for the first projects of speculative theorizing. It appears that the order of being was first imaginatively visualized before it was abstractly thought. Hesiod, impressed by necessity governing the ordering of things, discloses a definite pattern in the genesis and appearance of the gods. These ideas made something like cosmological speculation possible. The earliest rhetoric of reflection centers on two interrelated things: the experience of wonder as a living involvement with the divine order of things; and the absolute conviction that, beyond the totality of things, reality forms a beautiful and harmonious whole. In the Theogony, the origin (arche) is Chaos, a divine primordial condition, in which lie the roots and the ends of the earth, sky, sea, and Tartarus. Pherecydes of Syros (6th century BC) believed that there were three pre-existent divine principles and also called the water Chaos. In the language of the archaic period (8th–6th century BC), arche (or archai) designates the source, origin, or root of things that exist. If a thing is to be well established or founded, its arche or static point must be secure, and the most secure foundations are those provided by the gods: the indestructible, immutable, and eternal ordering of things. In ancient Greek philosophy, arche is the element or first principle of all things, a permanent nature or substance which is conserved through the generation of everything else. From this, all things come to be, and into it they are resolved in a final state. It is the divine horizon of substance that encompasses and rules all things. Thales (7th–6th century BC), the first Greek philosopher, claimed that the first principle of all things is water. Anaximander (6th century BC) was the first philosopher who used the term arche for that which writers from Aristotle on call the "substratum". Anaximander claimed that the beginning or first principle is an endless mass (Apeiron) subject to neither age nor decay, from which all things are born and into which they are then destroyed. A fragment from Xenophanes (6th century BC) shows the transition from Chaos to Apeiron: "The upper limit of earth borders on air. The lower limit of earth reaches down to the unlimited (i.e. the Apeiron)." Christian views of the Theogony John Milton, a Calvinist, viewed the Theogony as inspired by Satan. Milton's view, as articulated in Paradise Lost, was that once Satan was cast out from heaven, he became the muse that inspired Hesiod. What Hesiod wrote, therefore, was a corruption of the "actual" events that happened in the cosmological struggle of Satan against God.
In particular, Milton asserted that the triumph of Zeus (i.e., the supreme deity) through guile, negotiation and alliances was a corruption of God's omnipotence, which did not require any ally. Milton's view echoes the views of early Christian patristic writers. Justin Martyr and Athenagoras of Athens, for example, asserted that heathen mythologies in general are demonic distortions of the "true" cosmological history. Other cosmogonies in ancient literature In the Theogony the initial state of the universe, or the origin (arche), is Chaos, a gaping void (abyss) considered as a divine primordial condition, from which appeared everything that exists. Then came Gaia (Earth), Tartarus (the cave-like space under the earth; the later-born Erebus is the darkness in this space), and Eros (representing sexual desire—the urge to reproduce—instead of the emotion of love as is the common misconception). Hesiod made an abstraction because his original chaos is something completely indefinite. By contrast, in the Orphic cosmogony the unaging Chronos produced Aether and Chaos and made a silvery egg in divine Aether. From it appeared the androgynous god Phanes, identified by the Orphics as Eros, who becomes the creator of the world. Some similar ideas appear in the Vedic and Hindu cosmologies. In the Vedic cosmology the universe is created from nothing by the great heat. Kāma (Desire), the primal seed of spirit, is the link which connected the existent with the non-existent. In the Hindu cosmology, in the beginning there was nothing in the universe but only darkness and the divine essence, who removed the darkness and created the primordial waters. His seed produced the universal germ (Hiranyagarbha), from which everything else appeared. In the Babylonian creation story Enûma Eliš the universe was in a formless state and is described as a watery chaos. From it emerged two primary gods, the male Apsu and female Tiamat, and a third deity, the maker Mummu, whose power allows the progression of cosmogonic births to begin. Norse mythology also describes Ginnungagap as the primordial abyss from which sprang the first living creatures, including the giant Ymir whose body eventually became the world, whose blood became the seas, and so on; another version describes the origin of the world as the result of the collision of the fiery and cold realms, Muspelheim and Niflheim. Editions Selected translations Athanassakis, Apostolos N., Theogony; Works and days; Shield / Hesiod; introduction, translation, and notes, Baltimore: Johns Hopkins University Press, 1983. Cook, Thomas, "The Works of Hesiod," 1728. Frazer, R.M. (Richard McIlwaine), The Poems of Hesiod, Norman: University of Oklahoma Press, 1983. Most, Glenn, translator, Hesiod, 2 vols., Loeb Classical Library, Cambridge, Massachusetts, 2006–07. Schlegel, Catherine M., and Henry Weinfield, translators, Theogony and Works and Days, Ann Arbor, Michigan, 2006. Johnson, Kimberly, Theogony and Works and Days: A New Critical Edition, Northwestern University Press, 2017. See also Ancient literature Gigantomachy Theomachy Pherecydes of Syros Notes References Apollodorus, Apollodorus, The Library, with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes. Cambridge, Massachusetts, Harvard University Press; London, William Heinemann Ltd. 1921. Online version at the Perseus Digital Library. Apollonius of Rhodes, Argonautica, edited and translated by William H. Race, Loeb Classical Library No. 1, Cambridge, Massachusetts, Harvard University Press, 2009.
Online version at Harvard University Press. Brown, Norman O., Introduction to Hesiod: Theogony, New York: Liberal Arts Press, 1953. Caldwell, Richard, Hesiod's Theogony, Focus Publishing/R. Pullins Company, 1987. Clay, Jenny Strauss, Hesiod's Cosmos, Cambridge University Press, 2003. Gantz, Timothy, Early Greek Myth: A Guide to Literary and Artistic Sources, Johns Hopkins University Press, 1996, two volumes. Hard, Robin, The Routledge Handbook of Greek Mythology: Based on H.J. Rose's "Handbook of Greek Mythology", Psychology Press, 2004. Lamberton, Robert, Hesiod, New Haven: Yale University Press, 1988. Cf. Chapter II, "The Theogony", pp. 38–104. Most, G.W., Hesiod, Theogony, Works and Days, Testimonia, edited and translated by Glenn W. Most, Loeb Classical Library No. 57, Cambridge, Massachusetts, Harvard University Press, 2018. Online version at Harvard University Press. Tandy, David W., and Neale, Walter C. [translators], Works and Days: A Translation and Commentary for the Social Sciences, Berkeley: University of California Press, 1996. West, M. L. (1966), Hesiod: Theogony, Oxford University Press. West, M. L. (1988), Hesiod: Theogony and Works and Days, Oxford University Press. Verdenius, Willem Jacob, A Commentary on Hesiod: Works and Days, vv. 1–382, Leiden: E.J. Brill, 1985. External links Hesiod, Theogony: text in English translation. Hesiod, Theogony e-text in Ancient Greek (from Perseus) Hesiod, Theogony e-text in English (from Perseus) 8th-century BC books 8th-century BC poems Hesiod Ancient Greek poems 700s BC Greek religion texts Iron Age Greece References on Greek mythology Greek and Roman deities in fiction
Theogony
[ "Astronomy" ]
5,701
[ "Cosmogony", "Creation myths" ]
30,651
https://en.wikipedia.org/wiki/Transposable%20element
A transposable element (TE), also transposon or jumping gene, is a type of mobile genetic element, a nucleic acid sequence in DNA that can change its position within a genome, sometimes creating or reversing mutations and altering the cell's genetic identity and genome size. Transposition often results in duplication of the same genetic material. The discovery of mobile genetic elements earned Barbara McClintock a Nobel Prize in 1983. Further research into transposons has potential for use in gene therapy, and the finding of new drug targets in personalized medicine. The vast number of variables in the transposon makes data analytics difficult, but combined with other sequencing technologies, significant advances may be made in the understanding and treatment of disease. Transposable elements make up about half of the genome in a eukaryotic cell, accounting for much of human genetic diversity. Although TEs are selfish genetic elements, many are important in genome function and evolution. Transposons are also very useful to researchers as a means to alter DNA inside a living organism. There are at least two classes of TEs: Class I TEs or retrotransposons generally function via reverse transcription, while Class II TEs or DNA transposons encode the protein transposase, which they require for insertion and excision, and some of these TEs also encode other proteins. Discovery by Barbara McClintock Barbara McClintock discovered the first TEs in maize (Zea mays) at the Cold Spring Harbor Laboratory in New York. McClintock was experimenting with maize plants that had broken chromosomes. In the winter of 1944–1945, McClintock planted corn kernels that were self-pollinated, meaning that the silk (style) of the flower received pollen from its own anther. These kernels came from a long line of plants that had been self-pollinated, causing broken arms on the end of their ninth chromosomes. As the maize plants began to grow, McClintock noted unusual color patterns on the leaves. For example, one leaf had two albino patches of almost identical size, located side by side on the leaf. McClintock hypothesized that during cell division certain cells lost genetic material, while others gained what they had lost. However, when comparing the chromosomes of the current generation of plants with the parent generation, she found certain parts of the chromosome had switched position. This refuted the popular genetic theory of the time that genes were fixed in their position on a chromosome. McClintock found that genes could not only move but they could also be turned on or off due to certain environmental conditions or during different stages of cell development. McClintock also showed that gene mutations could be reversed. She presented her report on her findings in 1951, and published an article on her discoveries in Genetics in November 1953 entitled "Induction of Instability at Selected Loci in Maize". At the 1951 Cold Spring Harbor Symposium where she first publicized her findings, her talk was met with silence. Her work was largely dismissed and ignored until the late 1960s–1970s when, after TEs were found in bacteria, it was rediscovered. She was awarded a Nobel Prize in Physiology or Medicine in 1983 for her discovery of TEs, more than thirty years after her initial research. Classification Transposable elements represent one of several types of mobile genetic elements.
TEs are assigned to one of two classes according to their mechanism of transposition, which can be described as either copy and paste (Class I TEs) or cut and paste (Class II TEs). Retrotransposon Class I TEs are copied in two stages: first, they are transcribed from DNA to RNA, and the RNA produced is then reverse transcribed to DNA. This copied DNA is then inserted back into the genome at a new position. The reverse transcription step is catalyzed by a reverse transcriptase, which is often encoded by the TE itself. The characteristics of retrotransposons are similar to those of retroviruses, such as HIV. Despite the potential negative effects of retrotransposons, such as insertion into the middle of a necessary DNA sequence, which can render important genes unusable, they are still essential to keep a species' ribosomal DNA intact over the generations, preventing infertility. Retrotransposons are commonly grouped into three main orders: Retrotransposons with long terminal repeats (LTRs), which encode reverse transcriptase, similar to retroviruses Retroposons, long interspersed nuclear elements (LINEs, LINE-1s, or L1s), which encode reverse transcriptase but lack LTRs, and are transcribed by RNA polymerase II Short interspersed nuclear elements (SINEs), which do not encode reverse transcriptase and are transcribed by RNA polymerase III Retroviruses can also be considered TEs. For example, after the conversion of retroviral RNA into DNA inside a host cell, the newly produced retroviral DNA is integrated into the genome of the host cell. These integrated DNAs are termed proviruses. The provirus is a specialized form of eukaryotic retrotransposon, which can produce RNA intermediates that may leave the host cell and infect other cells. The transposition cycle of retroviruses has similarities to that of prokaryotic TEs, suggesting a distant relationship between the two. DNA transposons The cut-and-paste transposition mechanism of Class II TEs does not involve an RNA intermediate. The transpositions are catalyzed by several transposase enzymes. Some transposases bind non-specifically to any target site in DNA, whereas others bind to specific target sequences. The transposase makes a staggered cut at the target site, producing sticky ends, cuts out the DNA transposon and ligates it into the target site. A DNA polymerase fills in the resulting gaps from the sticky ends, and DNA ligase closes the sugar-phosphate backbone. This results in target site duplication, and the insertion sites of DNA transposons may be identified by short direct repeats (a staggered cut in the target DNA filled by DNA polymerase) followed by inverted repeats (which are important for TE excision by transposase). Cut-and-paste TEs may be duplicated if their transposition takes place during the S phase of the cell cycle, when a donor site has already been replicated but a target site has not yet been replicated. Such duplications at the target site can result in gene duplication, which plays an important role in genomic evolution. Not all DNA transposons transpose through the cut-and-paste mechanism. In some cases, a replicative transposition is observed, in which a transposon replicates itself to a new target site (e.g. helitron). Class II TEs comprise less than 2% of the human genome; most of the rest of the genome's TEs belong to Class I.
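The staggered-cut bookkeeping described above can be made concrete with a toy simulation. This is a deliberately simplified sketch: the sequence is invented, the transposase chemistry and repair of the excision gap are not modelled, and only the arithmetic that produces the flanking direct repeats (the target site duplication) is shown.

```python
def cut_and_paste(genome: str, donor_start: int, donor_end: int,
                  target: int, overhang: int) -> str:
    """Excise genome[donor_start:donor_end] and ligate it into a staggered cut
    at `target`; filling in the single-stranded overhangs duplicates `overhang`
    bases of host sequence on both sides of the insertion."""
    te = genome[donor_start:donor_end]
    host = genome[:donor_start] + genome[donor_end:]  # donor-gap repair not modelled
    tsd = host[target:target + overhang]              # bases copied by gap filling
    return host[:target + overhang] + te + tsd + host[target + overhang:]

genome = "AAAACCCC" + "TGTACGTACA" + "GGGGTTTTAGCTAGCT"   # host + TE + host
print(cut_and_paste(genome, donor_start=8, donor_end=18, target=12, overhang=4))
# The excised TE (TGTACGTACA) is now flanked by the TTTT direct repeat.
```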
Autonomous and non-autonomous Transposition can be classified as either "autonomous" or "non-autonomous" in both Class I and Class II TEs. Autonomous TEs can move by themselves, whereas non-autonomous TEs require the presence of another TE to move. This is often because dependent TEs lack transposase (for Class II) or reverse transcriptase (for Class I). The Activator element (Ac) is an example of an autonomous TE, and the Dissociation element (Ds) is an example of a non-autonomous TE. Without Ac, Ds is not able to transpose. Class III Some researchers also identify a third class of transposable elements, which has been described as "a grab-bag consisting of transposons that don't clearly fit into the other two categories". Examples of such TEs are the Foldback (FB) elements of Drosophila melanogaster, the TU elements of Strongylocentrotus purpuratus, and Miniature Inverted-repeat Transposable Elements. Distribution Approximately 64% of the maize genome is made up of TEs, as is 44% of the human genome and almost half of the murine genome. New discoveries of transposable elements have shown the exact distribution of TEs with respect to their transcription start sites (TSSs) and enhancers. A recent study found that about 25% of promoter regions harbor TEs. Older TEs are rarely found at TSS locations, and TE frequency increases with distance from the TSS. A possible theory for this is that TEs might interfere with transcription pausing or with first-intron splicing. Also, as mentioned before, the presence of TEs close to TSS locations is correlated with their evolutionary age (the number of different mutations that TEs can accumulate over time). Examples The first TEs were discovered in maize (Zea mays) by Barbara McClintock in 1948, for which she was later awarded a Nobel Prize. She noticed chromosomal insertions, deletions, and translocations caused by these elements. These changes in the genome could, for example, lead to a change in the color of corn kernels. About 64% of the maize genome consists of TEs. The Ac/Ds system described by McClintock comprises Class II TEs. Transposition of Ac in tobacco has been demonstrated by B. Baker. In the pond microorganism Oxytricha, TEs play such a critical role that, when removed, the organism fails to develop. One family of TEs in the fruit fly Drosophila melanogaster is called P elements. They seem to have first appeared in the species only in the middle of the twentieth century; within the last 50 years, they spread through every population of the species. Gerald M. Rubin and Allan C. Spradling pioneered technology to use artificial P elements to insert genes into Drosophila by injecting the embryo. In bacteria, TEs usually carry an additional gene for functions other than transposition, often for antibiotic resistance. In bacteria, transposons can jump from chromosomal DNA to plasmid DNA and back, allowing for the transfer and permanent addition of genes such as those encoding antibiotic resistance (multi-antibiotic-resistant bacterial strains can be generated in this way). Bacterial transposons of this type belong to the Tn family. When transposable elements lack additional genes, they are known as insertion sequences. In humans, the most common TE is the Alu sequence. It is approximately 300 bases long and can be found between 300,000 and one million times in the human genome. Alu alone is estimated to make up 15–17% of the human genome. Mariner-like elements are another prominent class of transposons found in multiple species, including humans. The Mariner transposon was first discovered by Jacobson and Hartl in Drosophila.
This Class II transposable element is known for its uncanny ability to be transmitted horizontally in many species. There are an estimated 14,000 copies of Mariner in the human genome comprising 2.6 million base pairs. The first mariner-element transposons outside of animals were found in Trichomonas vaginalis. Mu phage transposition is the best-known example of replicative transposition. In yeast (Saccharomyces cerevisiae) genomes there are five distinct retrotransposon families: Ty1, Ty2, Ty3, Ty4 and Ty5. A helitron is a TE found in eukaryotes that is thought to replicate by a rolling-circle mechanism. In human embryos, two types of transposons combined to form noncoding RNA that catalyzes the development of stem cells. During the early stages of a fetus's growth, the embryo's inner cell mass expands as these stem cells proliferate. The increase in this type of cell is crucial, since stem cells later change form and give rise to all the cells in the body. In peppered moths, a transposon in a gene called cortex caused the moths' wings to turn completely black. This change in coloration helped moths to blend in with ash and soot-covered areas during the Industrial Revolution. Aedes aegypti carries a large and diverse number of TEs, and an analysis by Matthews et al. (2018) suggests this is common to all mosquitoes. Negative effects Transposons have coexisted with eukaryotes for millions of years, and through this coexistence have become integrated into many organisms' genomes. Colloquially known as 'jumping genes', transposons can move within and between genomes, allowing for this integration. While there are many positive effects of transposons in their host eukaryotic genomes, there are some instances of mutagenic effects that TEs have on genomes, leading to disease and malignant genetic alterations. Mechanisms of mutagenesis TEs are mutagens; by contributing to the formation of new cis-regulatory DNA elements that are bound by the transcription factors found in living cells, TEs can undergo many evolutionary mutations and alterations. These are often causes of genetic disease, given the potentially lethal effects of ectopic expression. TEs can damage the genome of their host cell in different ways: A transposon or a retrotransposon that inserts itself into a functional gene can disable that gene. After a DNA transposon leaves a gene, the resulting gap may not be repaired correctly. Multiple copies of the same sequence, such as Alu sequences, can hinder precise chromosomal pairing during mitosis and meiosis, resulting in unequal crossovers, one of the main reasons for chromosome duplication. TEs use a number of different mechanisms to cause genetic instability and disease in their host genomes, including the expression of disease-causing, damaging proteins that inhibit normal cellular function. Many TEs contain promoters which drive transcription of their own transposase. These promoters can cause aberrant expression of linked genes, causing disease or mutant phenotypes. Diseases Diseases often caused by TEs include: Hemophilia A and B: LINE1 (L1) TEs that land in the human Factor VIII gene have been shown to cause haemophilia. Severe combined immunodeficiency. Insertion of L1 into the APC gene causes colon cancer, confirming that TEs play an important role in disease development. Porphyria: insertion of an Alu element into the PBGD gene leads to interference with the coding region and to acute intermittent porphyria (AIP).
Predisposition to cancer: LINE1 (L1) TEs and other retrotransposons have been linked to cancer because they cause genomic instability. Duchenne muscular dystrophy: caused by SVA transposable element insertion in the fukutin (FKTN) gene, which renders the gene inactive. Alzheimer's disease and other tauopathies: transposable element dysregulation can cause neuronal death, leading to neurodegenerative disorders. Rate of transposition, induction and defense One study estimated the rate of transposition of a particular retrotransposon, the Ty1 element in Saccharomyces cerevisiae. Using several assumptions, the rate of successful transposition per single Ty1 element came out to be about once every few months to once every few years. Some TEs contain heat-shock-like promoters, and their rate of transposition increases if the cell is subjected to stress, thus increasing the mutation rate under these conditions, which might be beneficial to the cell. Cells defend against the proliferation of TEs in a number of ways. These include piRNAs and siRNAs, which silence TEs after they have been transcribed. If organisms are mostly composed of TEs, one might assume that disease caused by misplaced TEs is very common, but in most cases TEs are silenced through epigenetic mechanisms like DNA methylation, chromatin remodeling and piRNA, such that little to no phenotypic effect or movement of TEs occurs, as with some wild-type plant TEs. Certain mutated plants have been found to have defects in methylation-related enzymes (methyl transferases) which cause the transcription of TEs, thus affecting the phenotype. One hypothesis suggests that only approximately 100 LINE1-related sequences are active, despite their sequences making up 17% of the human genome. In human cells, silencing of LINE1 sequences is triggered by an RNA interference (RNAi) mechanism. Surprisingly, the RNAi sequences are derived from the 5′ untranslated region (UTR) of LINE1. Supposedly, the 5′ LINE1 UTR that codes for the sense promoter for LINE1 transcription also encodes the antisense promoter for the miRNA that becomes the substrate for siRNA production. Inhibition of the RNAi silencing mechanism in this region showed an increase in LINE1 transcription. Evolution TEs are found in almost all life forms, and the scientific community is still exploring their evolution and their effect on genome evolution. It is unclear whether TEs originated in the last universal common ancestor, arose independently multiple times, or arose once and then spread to other kingdoms by horizontal gene transfer. While some TEs confer benefits on their hosts, most are regarded as selfish DNA parasites. In this way, they are similar to viruses. Various viruses and TEs also share features in their genome structures and biochemical abilities, leading to speculation that they share a common ancestor. Because excessive TE activity can damage exons, many organisms have acquired mechanisms to inhibit their activity. Bacteria may undergo high rates of gene deletion as part of a mechanism to remove TEs and viruses from their genomes, while eukaryotic organisms typically use RNA interference to inhibit TE activity. Nevertheless, some TEs generate large families often associated with speciation events. Evolution often deactivates DNA transposons, leaving them as introns (inactive gene sequences). In vertebrate animal cells, nearly all 100,000+ DNA transposons per genome have genes that encode inactive transposase polypeptides.
The first synthetic transposon designed for use in vertebrate (including human) cells, the Sleeping Beauty transposon system, is a Tc1/mariner-like transposon. Its dead ("fossil") versions are spread widely in the salmonid genome, and a functional version was engineered by comparing those versions. Human Tc1-like transposons are divided into Hsmar1 and Hsmar2 subfamilies. Although both types are inactive, one copy of Hsmar1 found in the SETMAR gene is under selection, as it provides a DNA-binding domain for the histone-modifying protein. Many other human genes are similarly derived from transposons. Hsmar2 has been reconstructed multiple times from the fossil sequences. The frequency and location of TE integrations influence genomic structure and evolution and affect gene and protein regulatory networks during development and in differentiated cell types. Large quantities of TEs within genomes may still present evolutionary advantages, however. Interspersed repeats within genomes are created by transposition events accumulating over evolutionary time. Because interspersed repeats block gene conversion, they protect novel gene sequences from being overwritten by similar gene sequences and thereby facilitate the development of new genes. TEs may also have been co-opted by the vertebrate immune system as a means of producing antibody diversity. The V(D)J recombination system operates by a mechanism similar to that of some TEs. TEs also serve to generate repeating sequences that can form dsRNA to act as a substrate for the action of ADAR in RNA editing. TEs can contain many types of genes, including those conferring antibiotic resistance and the ability to transpose to conjugative plasmids. Some TEs also contain integrons, genetic elements that can capture and express genes from other sources. These contain integrase, which can integrate gene cassettes. There are over 40 antibiotic resistance genes identified on cassettes, as well as virulence genes. Transposons do not always excise their elements precisely, sometimes removing the adjacent base pairs; this phenomenon is called exon shuffling. Shuffling two unrelated exons can create a novel gene product or, more likely, an intron. Some non-autonomous DNA TEs found in plants can capture coding DNA from genes and shuffle it across the genome. This process can duplicate genes in the genome (a phenomenon called transduplication), and can contribute to generating novel genes by exon shuffling. Evolutionary drive for TEs in the genomic context There is a hypothesis that TEs might provide a ready source of DNA that could be co-opted by the cell to help regulate gene expression. Research has shown many diverse modes of co-evolution between TEs and their hosts, with some transcription factors targeting TE-associated genomic elements and chromatin, and with some regulatory sequences evolving from TE sequences. Most of the time, these particular modes do not follow the simple model of TEs regulating host gene expression. Applications Transposable elements can be harnessed in laboratory and research settings to study genomes of organisms and even engineer genetic sequences. The use of transposable elements can be split into two categories: for genetic engineering and as a genetic tool. Genetic engineering Insertional mutagenesis uses the features of a TE to insert a sequence. In most cases, this is used to remove a DNA sequence or cause a frameshift mutation.
In some cases the insertion of a TE into a gene can disrupt that gene's function in a reversible manner, where transposase-mediated excision of the DNA transposon restores gene function. This produces plants in which neighboring cells have different genotypes. This feature allows researchers to distinguish between genes that must be present inside of a cell in order to function (cell-autonomous) and genes that produce observable effects in cells other than those where the gene is expressed. Genetic tool In addition to the qualities mentioned for genetic engineering, a genetic tool is also used for analysis of gene expression and protein functioning in signature-tagged mutagenesis. This analytical tool allows researchers to determine the phenotypic expression of gene sequences. Also, this analytic technique mutates the desired locus of interest so that the phenotypes of the original and the mutated gene can be compared. Specific applications TEs are also a widely used tool for mutagenesis of most experimentally tractable organisms. The Sleeping Beauty transposon system has been used extensively as an insertional tag for identifying cancer genes. The Tc1/mariner-class Sleeping Beauty transposon system, awarded Molecule of the Year in 2009, is active in mammalian cells and is being investigated for use in human gene therapy. TEs are used for the reconstruction of phylogenies by means of presence/absence analyses. Transposons can act as biological mutagens in bacteria. Common organisms in which the use of transposons has been well developed are: Drosophila Arabidopsis thaliana Escherichia coli De novo repeat identification De novo repeat identification is an initial scan of sequence data that seeks to find the repetitive regions of the genome, and to classify these repeats. Many computer programs exist to perform de novo repeat identification, all operating under the same general principles. As short tandem repeats are generally 1–6 base pairs in length and are often consecutive, their identification is relatively simple. Dispersed repetitive elements, on the other hand, are more challenging to identify, due to the fact that they are longer and have often acquired mutations. However, it is important to identify these repeats as they are often found to be transposable elements (TEs). De novo identification of transposons involves three steps: 1) find all repeats within the genome, 2) build a consensus of each family of sequences, and 3) classify these repeats. There are three groups of algorithms for the first step. One group is referred to as the k-mer approach, where a k-mer is a sequence of length k. In this approach, the genome is scanned for overrepresented k-mers; that is, k-mers that occur more often than is likely based on probability alone. The length k is determined by the type of transposon being searched for. The k-mer approach also allows mismatches, the number of which is determined by the analyst. Some k-mer approach programs use the k-mer as a base, and extend both ends of each repeated k-mer until there is no more similarity between them, indicating the ends of the repeats.
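A minimal sketch of the exact-match core of the k-mer counting step just described. Real tools additionally allow a user-chosen number of mismatches, test counts against a background probability model rather than a fixed threshold, and extend each overrepresented seed outward to delimit the repeat; none of that is modelled here, and the toy sequence is invented.

```python
from collections import Counter

def overrepresented_kmers(genome: str, k: int, min_count: int) -> dict:
    """Count every k-mer with a sliding window and keep those occurring
    at least `min_count` times (a stand-in for 'more often than chance')."""
    counts = Counter(genome[i:i + k] for i in range(len(genome) - k + 1))
    return {kmer: n for kmer, n in counts.items() if n >= min_count}

toy = "ATTTGCGC" + "CACGTG" + "AATT" + "CACGTG" + "GGATCC" + "CACGTG"
print(overrepresented_kmers(toy, k=6, min_count=3))  # {'CACGTG': 3}
```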
Another group of algorithms employs a method called sequence self-comparison. Sequence self-comparison programs use alignment tools such as AB-BLAST to conduct an initial sequence alignment. As these programs find groups of elements that partially overlap, they are useful for finding highly diverged transposons, or transposons with only a small region copied into other parts of the genome. A third group of algorithms follows the periodicity approach. These algorithms perform a Fourier transformation on the sequence data, identifying periodicities, regions that are repeated periodically, and are able to use peaks in the resultant spectrum to find candidate repetitive elements. This method works best for tandem repeats, but can be used for dispersed repeats as well. However, it is a slow process, making it an unlikely choice for genome-scale analysis. The second step of de novo repeat identification involves building a consensus of each family of sequences. A consensus sequence is a sequence that is created based on the repeats that comprise a TE family. A base pair in a consensus is the one that occurred most often in the sequences being compared to make the consensus. For example, in a family of 50 repeats where 42 have a T base pair in the same position, the consensus sequence would have a T at this position as well, as the base pair is representative of the family as a whole at that particular position, and is most likely the base pair found in the family's ancestor at that position. Once a consensus sequence has been made for each family, it is then possible to move on to further analysis, such as TE classification and genome masking in order to quantify the overall TE content of the genome.
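The majority-vote rule in the 42-of-50 example above is straightforward to implement once the family members are aligned. A minimal sketch, assuming a gapless alignment of equal-length copies (real pipelines build the consensus from gapped multiple alignments); the family below is invented:

```python
from collections import Counter

def consensus(aligned: list) -> str:
    """Column-by-column majority vote over aligned family members."""
    assert len({len(s) for s in aligned}) == 1, "sequences must be aligned"
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*aligned))

family = ["ACGTT", "ACGTA", "ACCTT", "TCGTT"]  # invented repeat copies
print(consensus(family))  # "ACGTT"
```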
In research done with silkworms, "An Adaptive Transposable Element Insertion in the Regulatory Region of the EO Gene in the Domesticated Silkworm", a TE insertion was observed in the cis-regulatory region of the EO gene, which regulates the molting hormone 20E, and enhanced expression was recorded. While populations without the TE insert are often unable to effectively regulate hormone 20E under starvation conditions, those with the insert had a more stable development, which resulted in higher developmental uniformity. These three experiments all demonstrated different ways in which TE insertions can be advantageous or disadvantageous, by regulating the expression level of adjacent genes. The field of adaptive TE research is still under development and more findings can be expected in the future. TEs participate in gene control networks Recent studies have confirmed that TEs can contribute to the generation of transcription factors; however, how this contribution shapes the architecture of gene control networks is not yet fully understood. TEs are common in many regions of the genome and make up 45% of total human DNA. TEs have also contributed 16% of transcription factor binding sites. Still, more motifs are found in non-TE-derived DNA than in TE-derived DNA. All of these observations point to the direct participation of TEs in many forms of gene control networks. See also Notes References External links A possible connection between aberrant reinsertions and lymphoma. Repbase – a database of transposable element sequences Dfam – a database of transposable element families, multiple sequence alignments, and sequence models RepeatMasker – a computer program used by computational biologists to annotate transposons in DNA sequences Use of the Sleeping Beauty Transposon System for Stable Gene Expression in Mouse Embryonic Stem Cells Introduction to Transposons, 2018 YouTube video Modification of genetic information Mobile genetic elements Molecular biology Non-coding DNA
Transposable element
[ "Chemistry", "Biology" ]
6,243
[ "Mobile genetic elements", "Modification of genetic information", "Molecular genetics", "Molecular biology", "Biochemistry" ]
30,652
https://en.wikipedia.org/wiki/Trypsin
Trypsin is an enzyme in the first section of the small intestine that starts the digestion of protein molecules by cutting long chains of amino acids into smaller pieces. It is a serine protease from the PA clan superfamily, found in the digestive system of many vertebrates, where it hydrolyzes proteins. Trypsin is formed in the small intestine when its proenzyme form, the trypsinogen produced by the pancreas, is activated. Trypsin cuts peptide chains mainly at the carboxyl side of the amino acids lysine or arginine. It is used for numerous biotechnological processes. The process is commonly referred to as trypsin proteolysis or trypsinization, and proteins that have been digested/treated with trypsin are said to have been trypsinized. Trypsin was discovered in 1876 by Wilhelm Kühne. Although many sources say that Kühne named trypsin from the Ancient Greek word for rubbing, 'tripsis', because the enzyme was first isolated by rubbing the pancreas with glass powder and alcohol, in fact Kühne named trypsin from the Ancient Greek word 'thrýpto', which means 'I break' or 'I break apart'. Function In the duodenum, trypsin catalyzes the hydrolysis of peptide bonds, breaking down proteins into smaller peptides. The peptide products are then further hydrolyzed into amino acids via other proteases, rendering them available for absorption into the bloodstream. Tryptic digestion is a necessary step in protein absorption, as proteins are generally too large to be absorbed through the lining of the small intestine. Trypsin is produced as the inactive zymogen trypsinogen in the pancreas. When the pancreas is stimulated by cholecystokinin, it is then secreted into the first part of the small intestine (the duodenum) via the pancreatic duct. Once in the small intestine, the enzyme enterokinase (also called enteropeptidase) activates trypsinogen into trypsin by proteolytic cleavage. The trypsin then activates additional trypsin, chymotrypsin and carboxypeptidase. Mechanism The enzymatic mechanism is similar to that of other serine proteases. These enzymes contain a catalytic triad consisting of histidine-57, aspartate-102, and serine-195. This catalytic triad was formerly called a charge relay system, implying the abstraction of protons from serine to histidine and from histidine to aspartate, but owing to evidence provided by NMR that the resultant alkoxide form of serine would have a much stronger pull on the proton than does the imidazole ring of histidine, current thinking holds instead that serine and histidine each have an effectively equal share of the proton, forming short low-barrier hydrogen bonds therewith. By these means, the nucleophilicity of the active site serine is increased, facilitating its attack on the amide carbon during proteolysis. The enzymatic reaction that trypsin catalyzes is thermodynamically favorable, but requires significant activation energy (it is "kinetically unfavorable"). In addition, trypsin contains an "oxyanion hole" formed by the backbone amide hydrogen atoms of Gly-193 and Ser-195, which through hydrogen bonding stabilize the negative charge which accumulates on the amide oxygen after nucleophilic attack on the planar amide carbon by the serine oxygen causes that carbon to assume a tetrahedral geometry. Such stabilization of this tetrahedral intermediate helps to reduce the energy barrier of its formation and is concomitant with a lowering of the free energy of the transition state. Preferential binding of the transition state is a key feature of enzyme chemistry.
The negative aspartate residue (Asp 189) located in the catalytic pocket (S1) of trypsin is responsible for attracting and stabilizing positively charged lysine and/or arginine, and is thus responsible for the specificity of the enzyme. This means that trypsin predominantly cleaves proteins at the carboxyl side (or "C-terminal side") of the amino acids lysine and arginine, except when either is bound to a C-terminal proline, although large-scale mass spectrometry data suggest cleavage occurs even with proline. Trypsin is considered an endopeptidase, i.e., the cleavage occurs within the polypeptide chain rather than at the terminal amino acids located at the ends of polypeptides. Properties Human trypsin has an optimal operating temperature of about 37 °C. In contrast, the Atlantic cod has several types of trypsins that allow this poikilotherm fish to survive at different body temperatures. Cod trypsins include trypsin I and trypsin Y, each with its own activity range and temperature of maximal activity. As a protein, trypsin has various molecular weights depending on the source. For example, a molecular weight of 23.3 kDa is reported for trypsin from bovine and porcine sources. The activity of trypsin is not affected by the enzyme inhibitor tosyl phenylalanyl chloromethyl ketone, TPCK, which deactivates chymotrypsin. Trypsin should be stored at very cold temperatures (between −20 and −80 °C) to prevent autolysis, which may also be impeded by storage of trypsin at pH 3 or by using trypsin modified by reductive methylation. When the pH is adjusted back to pH 8, activity returns. Isozymes Several human genes encode proteins with trypsin enzymatic activity. Other isoforms of trypsin may also be found in other organisms. Clinical significance Activation of trypsin from proteolytic cleavage of trypsinogen in the pancreas can lead to a series of events that cause pancreatic self-digestion, resulting in pancreatitis. One consequence of the autosomal recessive disease cystic fibrosis is a deficiency in transport of trypsin and other digestive enzymes from the pancreas. This leads to the disorder termed meconium ileus, which involves intestinal obstruction (ileus) due to overly thick meconium, which is normally broken down by trypsin and other proteases, then passed in feces. Applications Trypsin is available in high quantity in pancreases, and can be purified rather easily. Hence, it has been used widely in various biotechnological processes. In a tissue culture lab, trypsin is used to resuspend cells adherent to the cell culture dish wall during the process of harvesting cells. Some cell types adhere to the sides and bottom of a dish when cultivated in vitro. Trypsin is used to cleave proteins holding the cultured cells to the dish, so that the cells can be removed from the plates. Trypsin can also be used to dissociate dissected cells (for example, prior to cell fixing and sorting). Trypsin can be used to break down casein in breast milk. If trypsin is added to a solution of milk powder, the breakdown of casein causes the milk to become translucent. The rate of reaction can be measured by using the amount of time needed for the milk to turn translucent. Trypsin is commonly used in biological research during proteomics experiments to digest proteins into peptides for mass spectrometry analysis, e.g. in-gel digestion.
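A minimal in-silico digest applying the cleavage rule described above (cut after lysine or arginine unless the next residue is proline) can be sketched in a few lines of Python; the input below is an arbitrary example sequence, not a specific protein:

```python
import re

def trypsin_digest(protein: str) -> list:
    """Cut after every K or R that is not followed by P -- the textbook
    trypsin specificity rule (mass-spectrometry data suggest the proline
    exception is leaky in practice)."""
    # Zero-width split: lookbehind for K/R, negative lookahead for P.
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

print(trypsin_digest("MKWVTFISLLLLFSSAYSRGVFRRDTHKPSEK"))
# ['MK', 'WVTFISLLLLFSSAYSR', 'GVFR', 'R', 'DTHKPSEK']
```

Note the uncut "DTHKPSEK" fragment, where the proline exception suppresses cleavage after the internal lysine.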
Trypsin is particularly suited for proteomics digestion, since it has a very well-defined specificity: it hydrolyzes only the peptide bonds in which the carbonyl group is contributed either by an arginine or a lysine residue. Trypsin can also be used to dissolve blood clots in its microbial form and treat inflammation in its pancreatic form. In veterinary medicine, trypsin is an ingredient in wound spray products, such as Debrisol, to dissolve dead tissue and pus in wounds in horses, cattle, dogs, and cats. In India, trypsin–chymotrypsin preparations are widely prescribed for inflammation reduction during laryngitis and surgery recovery. Use outside India is not well documented, and most papers written on its effectiveness in these situations have been funded by Torrent Pharmaceuticals, one of the major brands that makes these tablets in India. In food Commercial protease preparations usually consist of a mixture of various protease enzymes that often includes trypsin. These preparations are widely used in food processing: as a baking enzyme to improve the workability of dough; in the extraction of seasonings and flavorings from vegetable or animal proteins and in the manufacture of sauces; to control aroma formation in cheese and milk products; to improve the texture of fish products; to tenderize meat; during cold stabilization of beer; and in the production of hypoallergenic food, where proteases break down specific allergenic proteins into nonallergenic peptides. For example, proteases are used to produce hypoallergenic baby food from cow's milk, thereby diminishing the risk of babies developing milk allergies. Trypsin inhibitor To prevent the action of active trypsin in the pancreas, which can be highly damaging, inhibitors such as BPTI and SPINK1 in the pancreas and α1-antitrypsin in the serum are present as part of the defense against its inappropriate activation. Any trypsin prematurely formed from the inactive trypsinogen is then bound by the inhibitor. The protein-protein interaction between trypsin and its inhibitors is one of the tightest known, and trypsin is bound by some of its pancreatic inhibitors nearly irreversibly. In contrast with nearly all known protein assemblies, some complexes of trypsin bound by its inhibitors do not readily dissociate after treatment with 8 M urea. Trypsin inhibitors can serve as tools when addressing metabolic and obesity disorders. Metabolic disorders, obesity, and being overweight are known to increase the prevalence of non-communicable chronic disease. It is of public health policy interest to explore various ways to mitigate this occurrence, including the use of trypsin inhibitors. These inhibitors have the capability of reducing colon, breast, skin, and prostate cancer by way of radioprotective and anticarcinogenic activity. Trypsin inhibitors can act as regulatory mechanisms to control the release of neutrophil proteases and avoid significant tissue damage. With regard to cardiovascular conditions associated with unproductive serine protease activity, trypsin inhibitors can block that activity in platelet aggregation, fibrinolysis, and blood coagulation. The multifunctionality of trypsin inhibitors also includes their potential as protease inhibitors relevant to AMP (antimicrobial peptide) activity. While the antibacterial action mechanisms of trypsin inhibitors are unclear, studies have examined these mechanisms with a view to potential applications in bacterial infection treatments. Research and scanning microscopy showed antibacterial effects on the bacterial membranes of Staphylococcus aureus.
Trypsin inhibitors from amphibian skin promoted bacterial death by affecting the cell wall and membrane of Staphylococcus aureus. Studies have also analyzed the antibacterial action of trypsin inhibitor peptides and proteins against E. coli, with results showing substantial prevention of bacterial growth. However, trypsin inhibitors have to meet certain criteria to be utilized in foods and medical treatments. Trypsin alternatives Trypsin digestion of extracellular matrix is a common practice in cell culture. However, this enzymatic degradation of the cells can negatively affect cell viability and surface markers, especially in stem cells. There are gentler alternatives to trypsin, such as Accutase, which does not affect surface markers such as CD14, CD117, CD49f, and CD292. However, Accutase decreases the surface levels of FasL and the Fas receptor on macrophages; these receptors are associated with cell cytotoxicity in the immune system and can also facilitate apoptosis-related cell death. ProAlanase could also serve as an alternative to trypsin in proteomic applications. ProAlanase is a protease from the fungus Aspergillus niger that can achieve high proteolytic activity and specificity for digestion under the correct conditions. ProAlanase, an acidic prolyl endopeptidase previously studied as An-PEP, has been characterized in various experiments aimed at defining its specificity. ProAlanase performs optimally in LC-MS applications with short digestion times and a highly acidic pH. See also References Further reading External links The MEROPS online database for peptidases and their inhibitors: Trypsin 1 S01.151, Trypsin 2 S01.258, Trypsin 3 S01.174 Trypsin Inhibitors and Trypsin Assay Method at Sigma-Aldrich Cell culture reagents EC 3.4.21 Proteases
Trypsin
[ "Chemistry", "Biology" ]
2,781
[ "Reagents for biochemistry", "Cell culture reagents" ]
30,990
https://en.wikipedia.org/wiki/Thermocouple
A thermocouple, also known as a "thermoelectrical thermometer", is an electrical device consisting of two dissimilar electrical conductors forming an electrical junction. A thermocouple produces a temperature-dependent voltage as a result of the Seebeck effect, and this voltage can be interpreted to measure temperature. Thermocouples are widely used as temperature sensors. Commercial thermocouples are inexpensive and interchangeable, are supplied with standard connectors, and can measure a wide range of temperatures. In contrast to most other methods of temperature measurement, thermocouples are self-powered and require no external form of excitation. The main limitation with thermocouples is accuracy; system errors of less than one degree Celsius (°C) can be difficult to achieve. Thermocouples are widely used in science and industry. Applications include temperature measurement for kilns, gas turbine exhaust, diesel engines, and other industrial processes. Thermocouples are also used in homes, offices and businesses as the temperature sensors in thermostats, and also as flame sensors in safety devices for gas-powered appliances. Principle of operation In 1821, the German physicist Thomas Johann Seebeck discovered that a magnetic needle held near a circuit made up of two dissimilar metals was deflected when one of the dissimilar metal junctions was heated. At the time, Seebeck referred to this consequence as thermo-magnetism. The magnetic field he observed was later shown to be due to thermo-electric current. In practical use, the voltage generated at a single junction of two different types of wire is what is of interest, as this can be used to measure temperature at very high and low temperatures. The magnitude of the voltage depends on the types of wire being used. Generally, the voltage is in the microvolt range and care must be taken to obtain a usable measurement. Although very little current flows, power can be generated by a single thermocouple junction. Power generation using multiple thermocouples, as in a thermopile, is common. The standard configuration of a thermocouple is shown in the figure. The dissimilar conductors contact at the measuring (aka hot) junction and at the reference (aka cold) junction. The thermocouple is connected to the electrical system at its reference junction. The figure shows the measuring junction on the left, the reference junction in the middle, and represents the rest of the electrical system as a voltage meter on the right. The temperature Tsense is obtained via a characteristic function E(T) for the type of thermocouple, which requires two inputs: the measured voltage V and the reference junction temperature Tref. The solution to the equation E(Tsense) = V + E(Tref) yields Tsense. Sometimes these details are hidden inside a device that packages the reference junction block (with Tref thermometer), voltmeter, and equation solver. Seebeck effect The Seebeck effect refers to the development of an electromotive force across two points of an electrically conducting material when there is a temperature difference between those two points. Under open-circuit conditions, where there is no internal current flow, the gradient of voltage (∇V) is directly proportional to the gradient in temperature (∇T): ∇V = −S(T) ∇T, where S(T) is a temperature-dependent material property known as the Seebeck coefficient. The standard measurement configuration shown in the figure has four temperature regions and thus four voltage contributions: Change from Tmeter to Tref, in the lower copper wire.
Change from Tref to Tsense, in the alumel wire. Change from Tsense to Tref, in the chromel wire. Change from Tref to Tmeter, in the upper copper wire. The first and fourth contributions cancel out exactly, because these regions involve the same temperature change and an identical material. As a result, Tmeter does not influence the measured voltage. The second and third contributions do not cancel, as they involve different materials. The measured voltage turns out to be V = ∫ from Tref to Tsense of (S+(T) − S−(T)) dT, where S+ and S− are the Seebeck coefficients of the conductors attached to the positive and negative terminals of the voltmeter, respectively (chromel and alumel in the figure). Characteristic function The thermocouple's behaviour is captured by a characteristic function E(T), which needs only to be consulted at two arguments: V = E(Tsense) − E(Tref). In terms of the Seebeck coefficients, the characteristic function is defined by E(T) = ∫ (S+(T) − S−(T)) dT. The constant of integration in this indefinite integral has no significance, but is conventionally chosen such that E(0 °C) = 0. Thermocouple manufacturers and metrology standards organizations such as NIST provide tables of the function E(T) that have been measured and interpolated over a range of temperatures, for particular thermocouple types (see External links section for access to these tables). Reference junction To obtain the desired measurement of Tsense, it is not sufficient to just measure V. The temperature at the reference junctions must also be known. Two strategies are often used here: "Ice bath": The reference junction block is maintained at a known temperature as it is immersed in a semi-frozen bath of distilled water at atmospheric pressure. The precise temperature of the melting point phase transition acts as a natural thermostat, fixing Tref to 0 °C. Reference junction sensor (known as "cold junction compensation"): The reference junction block is allowed to vary in temperature, but the temperature is measured at this block using a separate temperature sensor. This secondary measurement is used to compensate for temperature variation at the junction block. The thermocouple junction is often exposed to extreme environments, while the reference junction is often mounted near the instrument's location. Semiconductor thermometer devices are often used in modern thermocouple instruments. In both cases the value V + E(Tref) is calculated, then the function E(T) is searched for a matching value. The argument where this match occurs is the value of Tsense: Tsense = E⁻¹(V + E(Tref)). Practical concerns Thermocouples ideally should be very simple measurement devices, with each type being characterized by a precise E(T) curve, independent of any other details. In reality, thermocouples are affected by issues such as alloy manufacturing uncertainties, aging effects, and circuit design mistakes/misunderstandings. Circuit construction A common error in thermocouple construction is related to cold junction compensation. If an error is made in the estimation of Tref, an error will appear in the temperature measurement. For the simplest measurements, thermocouple wires are connected to copper far away from the hot or cold point whose temperature is measured; this reference junction is then assumed to be at room temperature, but that temperature can vary. Because of the nonlinearity in the thermocouple voltage curve, the errors in Tref and Tsense are generally unequal values. Some thermocouples, such as Type B, have a relatively flat voltage curve near room temperature, meaning that a large uncertainty in a room-temperature Tref translates to only a small error in Tsense. Junctions should be made in a reliable manner, but there are many possible approaches to accomplish this.
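Returning briefly to the reference-junction computation described above, here is a minimal numerical sketch in Python. The Seebeck coefficients are made-up values of roughly chromel/alumel-like magnitude, not real table data; a real instrument would use published NIST ITS-90 tables for E(T) instead:

```python
import numpy as np

T = np.linspace(-50.0, 500.0, 5501)             # temperature grid, deg C
dS = (25e-3 + 5e-6 * T) - (-16e-3 + 2e-6 * T)   # S+(T) - S-(T), mV per deg C (assumed)

# Characteristic function E(T) = integral of (S+ - S-) dT, fixed so E(0 C) = 0.
E = np.concatenate(([0.0], np.cumsum(0.5 * (dS[1:] + dS[:-1]) * np.diff(T))))
E -= np.interp(0.0, T, E)

def t_sense(v_mv: float, t_ref_c: float) -> float:
    """Solve E(T_sense) = V + E(T_ref) by inverse lookup on a monotonic table."""
    e_target = v_mv + np.interp(t_ref_c, T, E)  # add back E(T_ref)
    return float(np.interp(e_target, E, T))     # invert E(T)

print(t_sense(v_mv=12.0, t_ref_c=25.0))         # about 314 C with these toy coefficients
```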
For low temperatures, junctions can be brazed or soldered; however, it may be difficult to find a suitable flux, and a soldered joint may not be appropriate at the sensing junction due to the solder's low melting point. Reference and extension junctions are therefore usually made with screw terminal blocks. For high temperatures, the most common approach is the spot weld or crimp using a durable material. One common myth regarding thermocouples is that junctions must be made cleanly without involving a third metal, to avoid unwanted added EMFs. This may result from another common misunderstanding that the voltage is generated at the junction. In fact, the junctions should in principle have uniform internal temperature; therefore, no voltage is generated at the junction. The voltage is generated in the thermal gradient, along the wire. A thermocouple produces small signals, often microvolts in magnitude. Precise measurements of this signal require an amplifier with low input offset voltage and with care taken to avoid thermal EMFs from self-heating within the voltmeter itself. If the thermocouple wire has a high resistance for some reason (poor contact at junctions, or very thin wires used for fast thermal response), the measuring instrument should have high input impedance to prevent an offset in the measured voltage. A useful feature in thermocouple instrumentation is the ability to simultaneously measure resistance and detect faulty connections in the wiring or at thermocouple junctions. Metallurgical grades While a thermocouple wire type is often described by its chemical composition, the actual aim is to produce a pair of wires that follow a standardized E(T) curve. Impurities affect each batch of metal differently, producing variable Seebeck coefficients. To match the standard behaviour, thermocouple wire manufacturers will deliberately mix in additional impurities to "dope" the alloy, compensating for uncontrolled variations in source material. As a result, there are standard and specialized grades of thermocouple wire, depending on the level of precision demanded in the thermocouple behaviour. Precision grades may only be available in matched pairs, where one wire is modified to compensate for deficiencies in the other wire. A special case of thermocouple wire is known as "extension grade", designed to carry the thermoelectric circuit over a longer distance. Extension wires follow the stated E(T) curve, but for various reasons they are not designed to be used in extreme environments and so they cannot be used at the sensing junction in some applications. For example, an extension wire may be in a different form, such as highly flexible with stranded construction and plastic insulation, or be part of a multi-wire cable for carrying many thermocouple circuits. With expensive noble metal thermocouples, the extension wires may even be made of a completely different, cheaper material that mimics the standard type over a reduced temperature range. Aging Thermocouples are often used at high temperatures and in reactive furnace atmospheres. In this case, the practical lifetime is limited by thermocouple aging. The thermoelectric coefficients of the wires in a thermocouple that is used to measure very high temperatures may change with time, and the measurement voltage accordingly drops. The simple relationship between the temperature difference of the junctions and the measurement voltage is only correct if each wire is homogeneous (uniform in composition).
As thermocouples age in a process, their conductors can lose homogeneity due to chemical and metallurgical changes caused by extreme or prolonged exposure to high temperatures. If the aged section of the thermocouple circuit is exposed to a temperature gradient, the measured voltage will differ, resulting in error. Aged thermocouples are only partly modified; for example, being unaffected in the parts outside the furnace. For this reason, aged thermocouples cannot be taken out of their installed location and recalibrated in a bath or test furnace to determine error. This also explains why error can sometimes be observed when an aged thermocouple is pulled partly out of a furnace—as the sensor is pulled back, aged sections may see exposure to increased temperature gradients from hot to cold as the aged section now passes through the cooler refractory area, contributing significant error to the measurement. Likewise, an aged thermocouple that is pushed deeper into the furnace might sometimes provide a more accurate reading if being pushed further into the furnace causes the temperature gradient to occur only in a fresh section. Types Certain combinations of alloys have become popular as industry standards. Selection of the combination is driven by cost, availability, convenience, melting point, chemical properties, stability, and output. Different types are best suited for different applications. They are usually selected on the basis of the temperature range and sensitivity needed. Thermocouples with low sensitivities (B, R, and S types) have correspondingly lower resolutions. Other selection criteria include the chemical inertness of the thermocouple material and whether it is magnetic or not. Standard thermocouple types are listed below with the positive electrode (assuming Tsense > Tref) first, followed by the negative electrode. Nickel-alloy thermocouples Type E Type E (chromel–constantan) has a high output (68 μV/°C), which makes it well suited to cryogenic use. Additionally, it is non-magnetic. Its wide range is −270 °C to +740 °C and its narrow range is −110 °C to +140 °C. Type J Type J (iron–constantan) has a more restricted range (−40 °C to +750 °C) than type K but a higher sensitivity of about 50 μV/°C. The Curie point of the iron (770 °C) causes a smooth change in the characteristic, which determines the upper-temperature limit. Note that the European/German Type L is a variant of the type J, with a different specification for the EMF output (reference DIN 43712:1985-01). The positive wire is made of hard iron, while the negative wire consists of softer copper-nickel. Due to its iron content, the J-type is slightly heavier and the positive wire is magnetic. It is highly vulnerable to corrosion in reducing atmospheres, which can lead to significant degradation of the thermocouple's performance. Type K Type K (chromel–alumel) is the most common general-purpose thermocouple, with a sensitivity of approximately 41 μV/°C. It is inexpensive, and a wide variety of probes are available in its −200 °C to +1350 °C (−330 °F to +2460 °F) range. Type K was specified at a time when metallurgy was less advanced than it is today, and consequently characteristics may vary considerably between samples. One of the constituent metals, nickel, is magnetic; a characteristic of thermocouples made with magnetic material is that they undergo a deviation in output when the material reaches its Curie point, which occurs for type K thermocouples at around 185 °C. They operate very well in oxidizing atmospheres.
If, however, a mostly reducing atmosphere (such as hydrogen with a small amount of oxygen) comes into contact with the wires, the chromium in the chromel alloy oxidizes. This reduces the emf output, and the thermocouple reads low. This phenomenon is known as green rot, due to the color of the affected alloy. Although not always distinctively green, the chromel wire will develop a mottled silvery skin and become magnetic. An easy way to check for this problem is to see whether the two wires are magnetic (normally, chromel is non-magnetic). Hydrogen in the atmosphere is the usual cause of green rot. At high temperatures, it can diffuse through solid metals or an intact metal thermowell. Even a sheath of magnesium oxide insulating the thermocouple will not keep the hydrogen out. Green rot does not occur in atmospheres sufficiently rich in oxygen, or oxygen-free. A sealed thermowell can be filled with inert gas, or an oxygen scavenger (e.g. a sacrificial titanium wire) can be added. Alternatively, additional oxygen can be introduced into the thermowell. Another option is using a different thermocouple type for the low-oxygen atmospheres where green rot can occur; a type N thermocouple is a suitable alternative. Type M Type M (82%Ni/18%Mo–99.2%Ni/0.8%Co, by weight) thermocouples are used in vacuum furnaces for the same reasons as type C (described below). The upper temperature is limited to 1400 °C. It is less commonly used than other types. Type N Type N (Nicrosil–Nisil) thermocouples are suitable for use between −270 °C and +1300 °C, owing to their stability and oxidation resistance. Sensitivity is about 39 μV/°C at 900 °C, slightly lower than that of type K. Designed at the Defence Science and Technology Organisation (DSTO) of Australia by Noel A. Burley, type-N thermocouples overcome the three principal characteristic types and causes of thermoelectric instability in the standard base-metal thermoelement materials: A gradual and generally cumulative drift in thermal EMF on long exposure at elevated temperatures. This is observed in all base-metal thermoelement materials and is mainly due to compositional changes caused by oxidation, carburization, or neutron irradiation that can produce transmutation in nuclear reactor environments. In the case of type-K thermocouples, manganese and aluminium atoms from the KN (negative) wire migrate to the KP (positive) wire, resulting in a down-scale drift due to chemical contamination. This effect is cumulative and irreversible. A short-term cyclic change in thermal EMF on heating in the temperature range about 250–650 °C, which occurs in thermocouples of types K, J, T, and E. This kind of EMF instability is associated with structural changes such as magnetic short-range order in the metallurgical composition. A time-independent perturbation in thermal EMF in specific temperature ranges. This is due to composition-dependent magnetic transformations that perturb the thermal EMFs in type-K thermocouples in the range about 25–225 °C, and in type J above 730 °C. The Nicrosil and Nisil thermocouple alloys show greatly enhanced thermoelectric stability relative to the other standard base-metal thermocouple alloys because their compositions substantially reduce the thermoelectric instabilities described above.
This is achieved primarily by increasing component solute concentrations (chromium and silicon) in a base of nickel above those required to cause a transition from internal to external modes of oxidation, and by selecting solutes (silicon and magnesium) that preferentially oxidize to form a diffusion barrier, and hence oxidation-inhibiting films. Type N thermocouples are a suitable alternative to type K for low-oxygen conditions where type K is prone to green rot. They are suitable for use in vacuum, inert atmospheres, oxidizing atmospheres, or dry reducing atmospheres. They do not tolerate the presence of sulfur. Type T Type T (copper–constantan) thermocouples are suited for measurements in the −200 to 350 °C range. They are often used for differential measurements, since only copper wire touches the probes. Since both conductors are non-magnetic, there is no Curie point and thus no abrupt change in characteristics. Type-T thermocouples have a sensitivity of about 43 μV/°C. Note that copper has a much higher thermal conductivity than the alloys generally used in thermocouple constructions, and so it is necessary to exercise extra care when thermally anchoring type-T thermocouples. A similar composition is found in the obsolete Type U in the German specification DIN 43712:1985-01. Platinum/rhodium-alloy thermocouples Types B, R, and S thermocouples use platinum or a platinum/rhodium alloy for each conductor. These are among the most stable thermocouples, but have lower sensitivity than other types, approximately 10 μV/°C. Type B, R, and S thermocouples are usually used only for high-temperature measurements due to their high cost and low sensitivity. For type R and S thermocouples, HTX platinum wire can be used in place of the pure platinum leg to strengthen the thermocouple and prevent failures from grain growth that can occur in high-temperature and harsh conditions. Type B Type B (70%Pt/30%Rh–94%Pt/6%Rh, by weight) thermocouples are suited for use at up to 1800 °C. Type-B thermocouples produce the same output at 0 °C and 42 °C, limiting their use below about 50 °C. The emf function has a minimum around 21 °C (emf = −2.584972 μV at 21.020262 °C), meaning that cold-junction compensation is easily performed, since the compensation voltage is essentially a constant for a reference at typical room temperatures. Type R Type R (87%Pt/13%Rh–Pt, by weight) thermocouples are used from 0 °C to 1600 °C. Type R thermocouples are quite stable and capable of long operating life when used in clean, favorable conditions. When used above 1100 °C (2000 °F), these thermocouples must be protected from exposure to metallic and non-metallic vapors. Type R is not suitable for direct insertion into metallic protecting tubes. Long-term high-temperature exposure causes grain growth, which can lead to mechanical failure and a negative calibration drift caused by rhodium diffusion into the pure platinum leg, as well as by rhodium volatilization. This type has the same uses as type S, but is not interchangeable with it. Type S Type S (90%Pt/10%Rh–Pt, by weight) thermocouples, similar to type R, are used up to 1600 °C. Before the introduction of the International Temperature Scale of 1990 (ITS-90), precision type-S thermocouples were used as the practical standard thermometers for the range of 630 °C to 1064 °C, based on an interpolation between the freezing points of antimony, silver, and gold. Starting with ITS-90, platinum resistance thermometers have taken over this range as standard thermometers.
Tungsten/rhenium-alloy thermocouples These thermocouples are well-suited for measuring extremely high temperatures. Typical uses are hydrogen and inert atmospheres, as well as vacuum furnaces. They are not used in oxidizing environments at high temperatures because of embrittlement. A typical range is 0 to 2315 °C, which can be extended to 2760 °C in inert atmosphere and to 3000 °C for brief measurements. Pure tungsten at high temperatures undergoes recrystallization and becomes brittle. Therefore, types C and D are preferred over type G in some applications. In the presence of water vapor at high temperature, tungsten reacts to form tungsten(VI) oxide, which volatilizes away, and hydrogen. Hydrogen then reacts with tungsten oxide, after which water is formed again. Such a "water cycle" can lead to erosion of the thermocouple and eventual failure. In high-temperature vacuum applications, it is therefore desirable to avoid the presence of traces of water. An alternative to tungsten/rhenium is tungsten/molybdenum, but the voltage–temperature response is weaker and has a minimum at around 1000 K. The thermocouple temperature is limited also by the other materials used. For example, beryllium oxide, a popular material for high-temperature applications, tends to gain conductivity with temperature; a particular configuration of sensor had its insulation resistance drop from a megaohm at 1000 K to 200 ohms at 2200 K. At high temperatures, the materials undergo chemical reactions. At 2700 K beryllium oxide reacts slightly with tungsten, tungsten-rhenium alloy, and tantalum; at 2600 K molybdenum reacts with BeO, while tungsten does not. BeO begins melting at about 2820 K, magnesium oxide at about 3020 K. Type C (95%W/5%Re–74%W/26%Re, by weight): the maximum temperature measured by a type C thermocouple is 2329 °C. Type D (97%W/3%Re–75%W/25%Re, by weight) Type G (W–74%W/26%Re, by weight) Others Chromel–gold/iron-alloy thermocouples In these thermocouples (chromel–gold/iron alloy), the negative wire is gold with a small fraction (0.03–0.15 atom percent) of iron. The impure gold wire gives the thermocouple a high sensitivity at low temperatures (compared to other thermocouples at that temperature), whereas the chromel wire maintains the sensitivity near room temperature. It can be used for cryogenic applications (1.2–300 K and even up to 600 K). Both the sensitivity and the temperature range depend on the iron concentration. The sensitivity is typically around 15 μV/K at low temperatures, and the lowest usable temperature varies between 1.2 and 4.2 K. Type P (noble-metal alloy) or "Platinel II" Type P (55%Pd/31%Pt/14%Au–65%Au/35%Pd, by weight) thermocouples give a thermoelectric voltage that mimics the type K over the range 500 °C to 1400 °C; however, they are constructed purely of noble metals and so show enhanced corrosion resistance. This combination is also known as Platinel II. Platinum/molybdenum-alloy thermocouples Thermocouples of platinum/molybdenum alloy (95%Pt/5%Mo–99.9%Pt/0.1%Mo, by weight) are sometimes used in nuclear reactors, since they show a low drift from nuclear transmutation induced by neutron irradiation, compared to the platinum/rhodium-alloy types. Iridium/rhodium alloy thermocouples The use of two wires of iridium/rhodium alloys can provide a thermocouple that can be used up to about 2000 °C in inert atmospheres.
Pure noble-metal thermocouples Au–Pt, Pt–Pd Thermocouples made from two different, high-purity noble metals can show high accuracy even when uncalibrated, as well as low levels of drift. Two combinations in use are gold–platinum and platinum–palladium. Their main limitations are the low melting points of the metals involved (1064 °C for gold and 1555 °C for palladium). These thermocouples tend to be more accurate than type S, and due to their economy and simplicity are even regarded as competitive alternatives to the platinum resistance thermometers that are normally used as standard thermometers. HTIR-TC (High Temperature Irradiation Resistant) thermocouples HTIR-TC offers a breakthrough in measuring high-temperature processes. Its characteristics are: durable and reliable at high temperatures, up to at least 1700 °C; resistant to irradiation; moderately priced; available in a variety of configurations - adaptable to each application; easily installed. Originally developed for use in nuclear test reactors, HTIR-TC may enhance the safety of operations in future reactors. This thermocouple was developed by researchers at the Idaho National Laboratory (INL). Comparison of types The table below describes properties of several different thermocouple types. Within the tolerance columns, T represents the temperature of the hot junction, in degrees Celsius. For example, a thermocouple with a tolerance of ±0.0025×T would have a tolerance of ±2.5 °C at 1000 °C. Each cell in the Color Code columns depicts the end of a thermocouple cable, showing the jacket color and the color of the individual leads. The background color represents the color of the connector body. Thermocouple insulation Wire insulation The wires that make up the thermocouple must be insulated from each other everywhere, except at the sensing junction. Any additional electrical contact between the wires, or contact of a wire to other conductive objects, can modify the voltage and give a false reading of temperature. Plastics are suitable insulators for the low-temperature parts of a thermocouple, whereas ceramic insulation can be used up to around 1000 °C. Other concerns (abrasion and chemical resistance) also affect the suitability of materials. When wire insulation disintegrates, it can result in an unintended electrical contact at a different location from the desired sensing point. If such a damaged thermocouple is used in the closed-loop control of a thermostat or other temperature controller, this can lead to a runaway overheating event and possibly severe damage, as the false temperature reading will typically be lower than the sensing junction temperature. Failed insulation will also typically outgas, which can lead to process contamination. For parts of thermocouples used at very high temperatures or in contamination-sensitive applications, the only suitable insulation may be vacuum or inert gas; the mechanical rigidity of the thermocouple wires is used to keep them separated. Table of insulation materials Temperature ratings for insulations may vary based on the construction of the overall thermocouple cable. Note: T300 is a new high-temperature material that was recently approved by UL for 300 °C operating temperatures. Applications Thermocouples are suitable for measuring over a large temperature range, from −270 °C up to 3000 °C (for a short time, in inert atmosphere). Applications include temperature measurement for kilns, gas turbine exhaust, diesel engines, other industrial processes and fog machines.
They are less suitable for applications where smaller temperature differences need to be measured with high accuracy, for example the range 0–100 °C with 0.1 °C accuracy. For such applications, thermistors, silicon bandgap temperature sensors and resistance thermometers are more suitable. Steel industry Type B, S, R and K thermocouples are used extensively in the steel and iron industries to monitor temperatures and chemistry throughout the steel making process. Disposable, immersible, type S thermocouples are regularly used in the electric arc furnace process to accurately measure the temperature of steel before tapping. The cooling curve of a small steel sample can be analyzed and used to estimate the carbon content of molten steel. Gas appliance safety Many gas-fed heating appliances such as ovens and water heaters make use of a pilot flame to ignite the main gas burner when required. If the pilot flame goes out, unburned gas may be released, which is an explosion risk and a health hazard. To prevent this, some appliances use a thermocouple in a fail-safe circuit to sense when the pilot light is burning. The tip of the thermocouple is placed in the pilot flame, generating a voltage which operates the supply valve that feeds gas to the pilot. So long as the pilot flame remains lit, the thermocouple remains hot, and the pilot gas valve is held open. If the pilot light goes out, the thermocouple temperature falls, causing the voltage across the thermocouple to drop and the valve to close. Where the probe may be easily placed above the flame, a rectifying sensor may often be used instead. With part-ceramic construction, they may also be known as flame rods, flame sensors or flame detection electrodes. Some combined main burner and pilot gas valves (mainly by Honeywell) reduce the power demand to within the range of a single universal thermocouple heated by a pilot (25 mV open circuit, falling by half with the coil connected to a 10–12 mV, 0.2–0.25 A source, typically) by sizing the coil to be able to hold the valve open against a light spring, but only after the initial turning-on force is provided by the user pressing and holding a knob to compress the spring during lighting of the pilot. These systems are identifiable by the "press and hold for x minutes" in the pilot lighting instructions. (The holding current requirement of such a valve is much less than a bigger solenoid designed for pulling the valve in from a closed position would require.) Special test sets are made to confirm the valve let-go and holding currents, because an ordinary milliammeter cannot be used, as it introduces more resistance than the gas valve coil. Apart from testing the open-circuit voltage of the thermocouple, and the near short-circuit DC continuity through the thermocouple gas valve coil, the easiest non-specialist test is substitution of a known good gas valve. Some systems, known as millivolt control systems, extend the thermocouple concept to both open and close the main gas valve as well. Not only does the voltage created by the pilot thermocouple activate the pilot gas valve, it is also routed through a thermostat to power the main gas valve as well. Here, a larger voltage is needed than in a pilot flame safety system described above, and a thermopile is used rather than a single thermocouple. Such a system requires no external source of electricity for its operation and thus can operate during a power failure, provided that all the other related system components allow for this.
This excludes common forced-air furnaces because external electrical power is required to operate the blower motor, but this feature is especially useful for un-powered convection heaters. A similar gas shut-off safety mechanism using a thermocouple is sometimes employed to ensure that the main burner ignites within a certain time period, shutting off the main burner gas supply valve should that not happen. Out of concern about energy wasted by the standing pilot flame, designers of many newer appliances have switched to an electronically controlled pilot-less ignition, also called intermittent ignition. With no standing pilot flame, there is no risk of gas buildup should the flame go out, so these appliances do not need thermocouple-based pilot safety switches. As these designs lose the benefit of operation without a continuous source of electricity, standing pilots are still used in some appliances. The exception is later-model instantaneous (aka "tankless") water heaters that use the flow of water to generate the current required to ignite the gas burner; these designs also use a thermocouple as a safety cut-off device in the event the gas fails to ignite, or if the flame is extinguished. Thermopile radiation sensors Thermopiles are used for measuring the intensity of incident radiation, typically visible or infrared light, which heats the hot junctions, while the cold junctions are on a heat sink. It is possible to measure radiative intensities of only a few μW/cm² with commercially available thermopile sensors. For example, some laser power meters are based on such sensors; these are specifically known as thermopile laser sensors. The principle of operation of a thermopile sensor is distinct from that of a bolometer, as the latter relies on a change in resistance. Manufacturing Thermocouples can generally be used in the testing of prototype electrical and mechanical apparatus. For example, switchgear under test for its current carrying capacity may have thermocouples installed and monitored during a heat run test, to confirm that the temperature rise at rated current does not exceed designed limits. Power production A thermocouple can produce current to drive some processes directly, without the need for extra circuitry and power sources. For example, the power from a thermocouple can activate a valve when a temperature difference arises. The electrical energy generated by a thermocouple is converted from the heat which must be supplied to the hot side to maintain the electric potential. A continuous transfer of heat is necessary because the current flowing through the thermocouple tends to cause the hot side to cool down and the cold side to heat up (the Peltier effect). Thermocouples can be connected in series to form a thermopile, where all the hot junctions are exposed to a higher temperature and all the cold junctions to a lower temperature. The output is the sum of the voltages across the individual junctions, giving larger voltage and power output. In a radioisotope thermoelectric generator, the radioactive decay of transuranic elements is used as a heat source to power spacecraft on missions too far from the Sun to use solar power. Thermopiles heated by kerosene lamps were used to run batteryless radio receivers in isolated areas. There are commercially produced lanterns that use the heat from a candle to run several light-emitting diodes, and thermoelectrically powered fans to improve air circulation and heat distribution in wood stoves.
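As a back-of-the-envelope illustration of the series connection just described, an idealized thermopile output can be estimated as follows; the Seebeck coefficient is assumed constant (roughly type-K magnitude), whereas real junctions are nonlinear:

```python
# Idealized thermopile: N couples in series, each contributing S * (T_hot - T_cold).
N = 50                       # number of thermocouple pairs in series
S = 41e-6                    # V/K, assumed constant Seebeck coefficient (type-K-like)
T_hot, T_cold = 120.0, 20.0  # junction temperatures, deg C
V_out = N * S * (T_hot - T_cold)
print(f"{V_out * 1e3:.1f} mV")   # 205.0 mV -- vs ~4.1 mV for a single couple
```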
Process plants Chemical production and petroleum refineries will usually employ computers for logging and for limit testing the many temperatures associated with a process, typically numbering in the hundreds. For such cases, a number of thermocouple leads will be brought to a common reference block (a large block of copper) containing the second thermocouple of each circuit. The temperature of the block is in turn measured by a thermistor. Simple computations are used to determine the temperature at each measured location. Thermocouple as vacuum gauge A thermocouple can be used as a vacuum gauge over the range of approximately 0.001 to 1 torr absolute pressure. In this pressure range, the mean free path of the gas is comparable to the dimensions of the vacuum chamber, and the flow regime is neither purely viscous nor purely molecular. In this configuration, the thermocouple junction is attached to the centre of a short heating wire, which is usually energised by a constant current of about 5 mA, and the heat is removed at a rate related to the thermal conductivity of the gas. The temperature detected at the thermocouple junction depends on the thermal conductivity of the surrounding gas, which depends on the pressure of the gas. The potential difference measured by a thermocouple is proportional to the square of pressure over the low- to medium-vacuum range. At higher (viscous flow) and lower (molecular flow) pressures, the thermal conductivity of air or any other gas is essentially independent of pressure. The thermocouple was first used as a vacuum gauge by Voege in 1906. The mathematical model for the thermocouple as a vacuum gauge is quite complicated, as explained in detail by Van Atta, but can be simplified to a relation between the gas pressure P, a constant B that depends on the thermocouple temperature, the gas composition and the vacuum-chamber geometry, the thermocouple voltage at zero pressure (absolute) V0, and the voltage V indicated by the thermocouple. The alternative is the Pirani gauge, which operates in a similar way, over approximately the same pressure range, but is only a 2-terminal device, sensing the change in resistance with temperature of a thin electrically heated wire, rather than using a thermocouple. See also Heat flux sensor Bolometer Giuseppe Domenico Botto Thermistor Thermoelectric power List of sensors International Temperature Scale of 1990 Bimetal (mechanical) References External links Thermocouple Operating Principle – University Of Cambridge Thermocouple Drift – University Of Cambridge Two Ways to Measure Temperature Using Thermocouples Thermocouple data tables: Text tables: NIST ITS-90 Thermocouple Database (B, E, J, K, N, R, S, T) PDF tables: J K T E N R S B Python package thermocouples_reference containing characteristic curves of many thermocouple types. R package Temperature Measurement with Thermocouples, RTD and IC Sensors. Data table: Thermocouple wire sizes Temperature control Thermometers Sensors Thermoelectricity Bimetal
Thermocouple
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
8,267
[ "Home automation", "Metallurgy", "Temperature control", "Measuring instruments", "Bimetal", "Thermometers", "Sensors" ]
31,032
https://en.wikipedia.org/wiki/Protein%20tertiary%20structure
Protein tertiary structure is the three-dimensional shape of a protein. The tertiary structure will have a single polypeptide chain "backbone" with one or more protein secondary structures, the protein domains. Amino acid side chains and the backbone may interact and bond in a number of ways. The interactions and bonds of side chains within a particular protein determine its tertiary structure. The protein tertiary structure is defined by its atomic coordinates. These coordinates may refer either to a protein domain or to the entire tertiary structure. A number of these structures may bind to each other, forming a quaternary structure. History The science of the tertiary structure of proteins has progressed from one of hypothesis to one of detailed definition. Although Emil Fischer had suggested proteins were made of polypeptide chains and amino acid side chains, it was Dorothy Maud Wrinch who incorporated geometry into the prediction of protein structures. Wrinch demonstrated this with the Cyclol model, the first prediction of the structure of a globular protein. Contemporary methods are able to determine, without prediction, tertiary structures to within 5 Å (0.5 nm) for small proteins (<120 residues) and, under favorable conditions, to make confident secondary structure predictions. Determinants Stability of native states Thermostability A protein folded into its native state or native conformation typically has a lower Gibbs free energy (a combination of enthalpy and entropy) than the unfolded conformation. A protein will tend towards low-energy conformations, which will determine the protein's fold in the cellular environment. Because many similar conformations will have similar energies, protein structures are dynamic, fluctuating between these similar structures. Globular proteins have a core of hydrophobic amino acid residues and a surface region of water-exposed, charged, hydrophilic residues. This arrangement may stabilize interactions within the tertiary structure. For example, in secreted proteins, which are not bathed in cytoplasm, disulfide bonds between cysteine residues help to maintain the tertiary structure. There is a commonality of stable tertiary structures seen in proteins of diverse function and diverse evolution. For example, the TIM barrel, named for the enzyme triosephosphate isomerase, is a common tertiary structure, as is the highly stable, dimeric, coiled-coil structure. Hence, proteins may be classified by the structures they hold. Databases of proteins which use such a classification include SCOP and CATH. Kinetic traps Folding kinetics may trap a protein in a high-energy conformation, i.e. a high-energy intermediate conformation blocks access to the lowest-energy conformation. The high-energy conformation may contribute to the function of the protein. For example, the influenza hemagglutinin protein is a single polypeptide chain which, when activated, is proteolytically cleaved to form two polypeptide chains. The two chains are held in a high-energy conformation. When the local pH drops, the protein undergoes an energetically favorable conformational rearrangement that enables it to penetrate the host cell membrane. Metastability Some tertiary protein structures may exist in long-lived states that are not the expected most stable state. For example, many serpins (serine protease inhibitors) show this metastability. They undergo a conformational change when a loop of the protein is cut by a protease.
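To make the thermostability point above concrete, the folding free energy can be evaluated as ΔG = ΔH − TΔS; the values below are assumed, illustrative magnitudes for a small globular protein, not measured data:

```python
# Folding free energy, Delta_G = Delta_H - T * Delta_S (assumed values).
dH = -200e3   # J/mol, enthalpy change on folding (assumed)
dS = -600.0   # J/(mol*K), entropy change on folding (assumed)
for T in (280.0, 310.0, 340.0):                       # temperatures in kelvin
    dG = dH - T * dS
    print(f"{T:.0f} K: dG = {dG / 1e3:+.1f} kJ/mol")  # negative -> folded state favored
```

With these numbers ΔG is negative at 280 K and 310 K but turns positive by 340 K, illustrating why the folded state can lose its free-energy advantage at high temperature (heat denaturation).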
Chaperone proteins It is commonly assumed that the native state of a protein is also the most thermodynamically stable and that a protein will reach its native state, given its chemical kinetics, before it is translated. Protein chaperones within the cytoplasm of a cell assist a newly synthesised polypeptide to attain its native state. Some chaperone proteins are highly specific in their function, for example, protein disulfide isomerase; others are general in their function and may assist most globular proteins, for example, the prokaryotic GroEL/GroES system of proteins and the homologous eukaryotic heat shock proteins (the Hsp60/Hsp10 system). Cytoplasmic environment Prediction of protein tertiary structure relies on knowing the protein's primary structure and comparing the possible predicted tertiary structure with known tertiary structures in protein data banks. This only takes into account the cytoplasmic environment present at the time of protein synthesis to the extent that a similar cytoplasmic environment may also have influenced the structure of the proteins recorded in the protein data bank. Ligand binding The structure of a protein, such as an enzyme, may change upon binding of its natural ligands, for example a cofactor. In this case, the structure of the protein bound to the ligand is known as the holo structure, while the unbound protein has an apo structure. The tertiary structure is stabilized by the formation of weak bonds between amino acid side chains and is determined by the folding of the polypeptide chain on itself (nonpolar residues are located inside the protein, while polar residues are mainly located outside). Folding of the protein brings closer together, and relates, amino acids located in distant regions of the sequence. Acquisition of the tertiary structure leads to the formation of pockets and sites suitable for the recognition and binding of specific molecules (biospecificity). Determination The knowledge of the tertiary structure of soluble globular proteins is more advanced than that of membrane proteins because the former are easier to study with available technology. X-ray crystallography X-ray crystallography is the most common tool used to determine protein structure. It provides high resolution of the structure but it does not give information about the protein's conformational flexibility. NMR Protein NMR gives comparatively lower resolution of protein structure. It is limited to smaller proteins. However, it can provide information about conformational changes of a protein in solution. Cryogenic electron microscopy Cryogenic electron microscopy (cryo-EM) can give information about both a protein's tertiary and quaternary structure. It is particularly well-suited to large proteins and symmetrical complexes of protein subunits. Dual polarisation interferometry Dual polarisation interferometry provides complementary information about surface-captured proteins. It assists in determining structure and conformation changes over time. Projects Prediction algorithm The Folding@home project at the University of Pennsylvania is a distributed computing research effort which uses approximately 5 petaFLOPS (≈10 x86 petaFLOPS) of available computing. It aims to find an algorithm which will consistently predict protein tertiary and quaternary structures given the protein's amino acid sequence and its cellular conditions. A list of software for protein tertiary structure prediction can be found at List of protein structure prediction software.
Protein aggregation diseases Protein aggregation diseases such as Alzheimer's disease and Huntington's disease and prion diseases such as bovine spongiform encephalopathy can be better understood by constructing (and reconstructing) disease models. This is done by causing the disease in laboratory animals, for example, by administering a toxin, such as MPTP to cause Parkinson's disease, or through genetic manipulation. Protein structure prediction is a new way to create disease models, which may avoid the use of animals. Protein Tertiary Structure Retrieval Project (CoMOGrad) Matching patterns in the tertiary structure of a given protein against a huge number of known protein tertiary structures, and retrieving the most similar ones in ranked order, is at the heart of many research areas such as function prediction for novel proteins, the study of evolution, disease diagnosis, drug discovery, and antibody design. The CoMOGrad project at BUET is a research effort to devise an extremely fast and precise method for protein tertiary structure retrieval and to develop an online tool based on the research outcome. See also Folding (chemistry) I-TASSER Nucleic acid tertiary structure Protein contact map Proteopedia Structural biology Structural motif Protein tandem repeats References External links Protein Data Bank Display, analyse and superimpose protein 3D structures Alphabet of protein structures. WWW-based course teaching elementary protein bioinformatics Critical Assessment of Structure Prediction (CASP) Structural Classification of Proteins (SCOP) CATH Protein Structure Classification DALI/FSSP software and database of superposed protein structures TOPOFIT-DB Invariant Structural Cores between proteins PDBWiki – a website for community annotation of PDB structures. Protein structure
Protein tertiary structure
[ "Chemistry" ]
1,721
[ "Protein structure", "Structural biology" ]
31,146
https://en.wikipedia.org/wiki/Transfer%20function
In engineering, a transfer function (also known as system function or network function) of a system, sub-system, or component is a mathematical function that models the system's output for each possible input. It is widely used in electronic engineering tools like circuit simulators and control systems. In simple cases, this function can be represented as a two-dimensional graph of an independent scalar input versus the dependent scalar output (known as a transfer curve or characteristic curve). Transfer functions for components are used to design and analyze systems assembled from components, particularly using the block diagram technique, in electronics and control theory. Dimensions and units of the transfer function model the output response of the device for a range of possible inputs. The transfer function of a two-port electronic circuit, such as an amplifier, might be a two-dimensional graph of the scalar voltage at the output as a function of the scalar voltage applied to the input; the transfer function of an electromechanical actuator might be the mechanical displacement of the movable arm as a function of electric current applied to the device; the transfer function of a photodetector might be the output voltage as a function of the luminous intensity of incident light of a given wavelength. The term "transfer function" is also used in the frequency domain analysis of systems using transform methods, such as the Laplace transform; it is the amplitude of the output as a function of the frequency of the input signal. The transfer function of an electronic filter is the amplitude at the output as a function of the frequency of a constant amplitude sine wave applied to the input. For optical imaging devices, the optical transfer function is the Fourier transform of the point spread function (a function of spatial frequency). Linear time-invariant systems Transfer functions are commonly used in the analysis of systems such as single-input single-output filters in signal processing, communication theory, and control theory. The term is often used exclusively to refer to linear time-invariant (LTI) systems. Most real systems have non-linear input-output characteristics, but many systems operated within nominal parameters (not over-driven) have behavior close enough to linear that LTI system theory is an acceptable representation of their input-output behavior. Continuous-time Descriptions are given in terms of a complex variable, s = σ + jω. In many applications it is sufficient to set σ = 0 (thus s = jω), which reduces the Laplace transforms with complex arguments to Fourier transforms with the real argument ω. This is common in applications primarily interested in the LTI system's steady-state response (often the case in signal processing and communication theory), not the fleeting turn-on and turn-off transient response or stability issues. For a continuous-time input signal x(t) and output y(t), dividing the Laplace transform of the output, Y(s) = L{y(t)}, by the Laplace transform of the input, X(s) = L{x(t)}, yields the system's transfer function H(s): H(s) = Y(s)/X(s), which can be rearranged as: Y(s) = H(s) X(s). Discrete-time Discrete-time signals may be notated as arrays indexed by an integer n (e.g. x[n] for input and y[n] for output). 
Instead of using the Laplace transform (which is better for continuous-time signals), discrete-time signals are dealt with using the z-transform (notated with a corresponding capital letter, like X(z) and Y(z)), so a discrete-time system's transfer function can be written as: H(z) = Y(z)/X(z). Direct derivation from differential equations Consider a linear differential equation with constant coefficients, L[u] = dⁿu/dtⁿ + a₁ dⁿ⁻¹u/dtⁿ⁻¹ + ⋯ + a_{n−1} du/dt + a_n u = r(t), where u and r are suitably smooth functions of t, and L is the operator, defined on the relevant function space, that transforms u into r. That kind of equation can be used to constrain the output function u in terms of the forcing function r. The transfer function can be used to define an operator F[r] = u that serves as a right inverse of L, meaning that L[F[r]] = r. Solutions of the homogeneous constant-coefficient differential equation L[u] = 0 can be found by trying u = e^(λt). That substitution yields the characteristic polynomial p_L(λ) = λⁿ + a₁λⁿ⁻¹ + ⋯ + a_{n−1}λ + a_n. The inhomogeneous case can be easily solved if the input function r is also of the form r(t) = e^(st). By substituting u = H(s)e^(st), one finds that L[H(s)e^(st)] = e^(st) if we define H(s) = 1/p_L(s), provided p_L(s) ≠ 0. Other definitions of the transfer function are also in use. Gain, transient behavior and stability A general sinusoidal input to a system of frequency ω₀ may be written exp(jω₀t). The response of a system to a sinusoidal input beginning at time t = 0 will consist of the sum of the steady-state response and a transient response. The steady-state response is the output of the system in the limit of infinite time, and the transient response is the difference between the response and the steady-state response; it corresponds to the homogeneous solution of the differential equation. The transfer function for an LTI system may be written as the product: H(s) = ∏_{i=1..N} 1/(s − s_{Pi}), where the s_{Pi} are the N roots of the characteristic polynomial and hence the poles of the transfer function. In a transfer function with a single pole, H(s) = 1/(s − s_P) where s_P = σ_P + jω_P, the Laplace transform of a general sinusoid of unit amplitude will be 1/(s − jω_i). The Laplace transform of the output will be H(s)/(s − jω_i), and the temporal output will be the inverse Laplace transform of that function: y(t) = (e^(jω_i t) − e^((σ_P + jω_P)t)) / (−σ_P + j(ω_i − ω_P)). The second term in the numerator is the transient response, and in the limit of infinite time it will diverge to infinity if σ_P is positive. For a system to be stable, its transfer function must have no poles whose real parts are positive. If the transfer function is strictly stable, the real parts of all poles will be negative and the transient behavior will tend to zero in the limit of infinite time. The steady-state output will be: y_SS(t) = e^(jω_i t) / (−σ_P + j(ω_i − ω_P)). The frequency response (or "gain") G of the system is defined as the absolute value of the ratio of the output amplitude to the steady-state input amplitude: G(ω_i) = 1/|−σ_P + j(ω_i − ω_P)| = 1/√(σ_P² + (ω_P − ω_i)²), which is the absolute value of the transfer function H(s) evaluated at s = jω_i. This result is valid for any number of transfer-function poles. Signal processing If x(t) is the input to a general linear time-invariant system, and y(t) is the output, and the bilateral Laplace transforms of x(t) and y(t) are X(s) = L{x(t)} = ∫ x(t) e^(−st) dt and Y(s) = L{y(t)} = ∫ y(t) e^(−st) dt (integrals over all t), the output is related to the input by the transfer function H(s) as Y(s) = H(s) X(s), and the transfer function itself is H(s) = Y(s)/X(s). If a complex harmonic signal with a sinusoidal component of amplitude |X|, angular frequency ω and phase arg(X), where arg is the argument, x(t) = X e^(jωt) = |X| e^(j(ωt + arg X)), with X = |X| e^(j arg X), is input to a linear time-invariant system, the corresponding component in the output is: y(t) = Y e^(jωt) = |Y| e^(j(ωt + arg Y)), with Y = |Y| e^(j arg Y). In a linear time-invariant system, the input frequency has not changed; only the amplitude and phase angle of the sinusoid have been changed by the system. 
The frequency response describes this change for every frequency ω in terms of gain, G(ω) = |Y|/|X| = |H(jω)|, and phase shift, φ(ω) = arg(Y) − arg(X) = arg(H(jω)). The phase delay (the frequency-dependent amount of delay introduced to the sinusoid by the transfer function) is τ_φ(ω) = −φ(ω)/ω. The group delay (the frequency-dependent amount of delay introduced to the envelope of the sinusoid by the transfer function) is found by computing the derivative of the phase shift with respect to angular frequency ω: τ_g(ω) = −dφ(ω)/dω. The transfer function can also be shown using the Fourier transform, a special case of the bilateral Laplace transform where s = jω. Common transfer-function families Although any LTI system can be described by some transfer function, "families" of special transfer functions are commonly used: Butterworth filter – maximally flat in passband and stopband for the given order Chebyshev filter (Type I) – maximally flat in stopband, sharper cutoff than a Butterworth filter of the same order Chebyshev filter (Type II) – maximally flat in passband, sharper cutoff than a Butterworth filter of the same order Bessel filter – maximally constant group delay for a given order Elliptic filter – sharpest cutoff (narrowest transition between passband and stopband) for the given order Optimum "L" filter Gaussian filter – minimum group delay; gives no overshoot to a step function Raised-cosine filter Control engineering In control engineering and control theory, the transfer function is derived with the Laplace transform. The transfer function was the primary tool used in classical control engineering. A transfer matrix can be obtained for any linear system to analyze its dynamics and other properties; each element of a transfer matrix is a transfer function relating a particular input variable to an output variable. A representation bridging state space and transfer function methods was proposed by Howard H. Rosenbrock, and is known as the Rosenbrock system matrix. Imaging In imaging, transfer functions are used to describe the relationship between the scene light, the image signal and the displayed light. Non-linear systems Transfer functions do not exist for many non-linear systems, such as relaxation oscillators; however, describing functions can sometimes be used to approximate such nonlinear time-invariant systems. See also References External links ECE 209: Review of Circuits as LTI Systems — Short primer on the mathematical analysis of (electrical) LTI systems. Electrical circuits Frequency-domain analysis Types of functions
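As a hedged illustration of the frequency-domain quantities defined above (the RC value and frequency grid are invented for the example, not taken from the text), the Python sketch below evaluates a first-order low-pass transfer function H(s) = 1/(1 + sRC) on the imaginary axis s = jω, extracts gain and unwrapped phase, estimates the group delay numerically, and checks stability from the sign of the pole's real part.

```python
import numpy as np

RC = 1e-3                             # illustrative time constant (seconds)
omega = np.linspace(1.0, 1e4, 20000)  # angular frequencies (rad/s)

# First-order low-pass: H(s) = 1 / (1 + s*RC), evaluated at s = j*omega.
H = 1.0 / (1.0 + 1j * omega * RC)

gain = np.abs(H)                          # G(omega) = |H(j omega)|
phase = np.unwrap(np.angle(H))            # phi(omega) = arg H(j omega)
group_delay = -np.gradient(phase, omega)  # tau_g = -d phi / d omega

# Stability: the single pole sits at s = -1/RC; negative real part => stable.
pole = -1.0 / RC
print("pole:", pole, "stable:", pole.real < 0)
print("gain at cutoff (omega = 1/RC):",
      gain[np.argmin(np.abs(omega - 1.0 / RC))])
```

At the cutoff ω = 1/RC the printed gain is close to 1/√2 ≈ 0.707, the familiar −3 dB point.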
Transfer function
[ "Physics", "Mathematics", "Engineering" ]
1,764
[ "Functions and mappings", "Spectrum (physical sciences)", "Frequency-domain analysis", "Mathematical objects", "Electronic engineering", "Mathematical relations", "Electrical engineering", "Types of functions", "Electrical circuits" ]
2,456,044
https://en.wikipedia.org/wiki/Table%20of%20explosive%20detonation%20velocities
This is a compilation of published detonation velocities for various high explosive compounds. Detonation velocity is the speed with which the detonation shock wave travels through the explosive. It is a key, directly measurable indicator of explosive performance, but it depends on density, which must always be specified, and the measured value may be too low if the test charge diameter is not large enough. Especially for little-studied explosives, there may be divergent published values due to charge diameter issues. In liquid explosives, like nitroglycerin, there may be two detonation velocities, one much higher than the other. The detonation velocity values presented here are typically for the highest practical density, which maximizes achievable detonation velocity. The velocity of detonation is an important indicator of the overall energy and power of detonation, and in particular of the brisance, or shattering effect, of an explosive, which is due to the detonation pressure. The pressure can be calculated using Chapman–Jouguet theory from the velocity and density. See also TNT equivalent RE factor References Chemistry-related lists Explosive chemicals
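As a rough, hedged illustration of the velocity–pressure relation mentioned above: a common textbook approximation to the Chapman–Jouguet detonation pressure is P_CJ ≈ ρ₀D²/(γ + 1), often evaluated with γ ≈ 3 for condensed explosives so that P_CJ ≈ ρ₀D²/4. The Python sketch below applies this estimate; the TNT numbers are typical literature values used only for illustration, and real tabulations should be preferred for engineering use.

```python
def cj_pressure(density_kg_m3: float, det_velocity_m_s: float,
                gamma: float = 3.0) -> float:
    """Approximate Chapman-Jouguet pressure (Pa): P ~ rho0 * D^2 / (gamma + 1)."""
    return density_kg_m3 * det_velocity_m_s ** 2 / (gamma + 1.0)

# Typical literature values for TNT: rho0 ~ 1630 kg/m^3, D ~ 6900 m/s.
p = cj_pressure(1630.0, 6900.0)
print(f"P_CJ ≈ {p / 1e9:.1f} GPa")  # ~19 GPa, in line with tabulated TNT values
```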
Table of explosive detonation velocities
[ "Chemistry", "Engineering" ]
228
[ "Explosive chemicals", "Explosives engineering", "nan" ]
2,456,297
https://en.wikipedia.org/wiki/Seiberg%20duality
In quantum field theory, Seiberg duality, conjectured by Nathan Seiberg in 1994, is an S-duality relating two different supersymmetric QCDs. The two theories are not identical, but they agree at low energies. More precisely, under a renormalization group flow they flow to the same IR fixed point, and so are in the same universality class. It is an extension to nonabelian gauge theories with N=1 supersymmetry of Montonen–Olive duality in N=4 theories and electromagnetic duality in abelian theories. The statement of Seiberg duality Seiberg duality is an equivalence of the IR fixed points in an N=1 theory with SU(Nc) as the gauge group and Nf flavors of fundamental chiral multiplets and Nf flavors of antifundamental chiral multiplets in the chiral limit (no bare masses) and an N=1 chiral QCD with Nf−Nc colors and Nf flavors, where Nc and Nf are positive integers satisfying Nf > Nc + 1. A stronger version of the duality relates not only the chiral limit but also the full deformation space of the theory. In the special case 3Nc/2 < Nf < 3Nc, the IR fixed point is a nontrivial interacting superconformal field theory. For a superconformal field theory, the anomalous scaling dimension of a chiral superfield is Δ = 3R/2, where R is the R-charge. This is an exact result. The dual theory contains a fundamental "meson" chiral superfield M which is color neutral but transforms as a bifundamental under the flavor symmetries. The dual theory contains the superpotential W = M q q̃. Relations between the original and dual theories Being an S-duality, Seiberg duality relates the strong coupling regime with the weak coupling regime, and interchanges chromoelectric fields (gluons) with chromomagnetic fields (gluons of the dual gauge group), and chromoelectric charges (quarks) with nonabelian 't Hooft–Polyakov monopoles. In particular, the Higgs phase is dual to the confinement phase, as in the dual superconducting model. The mesons and baryons are preserved by the duality. However, in the electric theory the meson is a quark bilinear (M = Q Q̃), while in the magnetic theory it is a fundamental field. In both theories the baryons are constructed from quarks, but the number of quarks in one baryon is the rank of the gauge group, which differs in the two dual theories. The gauge symmetries of the theories do not agree, which is not problematic as the gauge symmetry is a feature of the formulation and not of the fundamental physics. The global symmetries relate distinct physical configurations, and so they need to agree in any dual description. Evidence for Seiberg duality The moduli spaces of the dual theories are identical. The global symmetries agree, as do the charges of the mesons and baryons. In certain cases it reduces to ordinary electromagnetic duality. It may be embedded in string theory via Hanany–Witten brane cartoons consisting of intersecting D-branes. There it is realized as the motion of an NS5-brane, which is conjectured to preserve the universality class. Six nontrivial anomalies may be computed on both sides of the duality, and they agree, as they must in accordance with Gerard 't Hooft's anomaly matching conditions. The role of the additional fundamental meson superfield M in the dual theory is crucial in matching the anomalies. The global gravitational anomalies also match up, as the parity of the number of chiral fields is the same in both theories. The R-charge of the Weyl fermion in a chiral superfield is one less than the R-charge of the superfield. The R-charge of a gaugino is +1. 
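As a worked check of the Δ = 3R/2 relation quoted above (a standard computation, added here for illustration rather than taken from the text): anomaly freedom fixes the R-charge of the quark superfields at the fixed point to R(Q) = R(Q̃) = (N_f − N_c)/N_f, from which the infrared dimension of the meson follows.

```latex
% R-charge of a quark superfield fixed by anomaly cancellation:
%   R(Q) = R(\tilde{Q}) = (N_f - N_c)/N_f
% Dimension of the meson M = Q\tilde{Q} at the IR fixed point:
\Delta(M) = \tfrac{3}{2}\,R(M)
          = \tfrac{3}{2}\left[R(Q) + R(\tilde{Q})\right]
          = \frac{3\,(N_f - N_c)}{N_f}
```

The result stays above the unitarity bound Δ ≥ 1 throughout the conformal window, reaching the free-field value 1 at N_f = 3N_c/2 and the classical value 2 of the bilinear QQ̃ at N_f = 3N_c.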
Further evidence for Seiberg duality comes from identifying the superconformal index, which is a generalization of the Witten index, for the electric and the magnetic phase. The identification gives rise to complicated integral identities which have been studied in the mathematical literature. Generalizations Seiberg duality has been generalized in many directions. One generalization applies to quiver gauge theories, in which the flavor symmetries are also gauged. The simplest of these is a super QCD with the flavor group gauged and an additional term in the superpotential. It leads to a series of Seiberg dualities known as a duality cascade, introduced by Igor Klebanov and Matthew Strassler. Whether Seiberg duality exists in 3-dimensional nonabelian gauge theories with only 4 supercharges is not known, although it is conjectured in some special cases with Chern–Simons terms. References Further reading Nathan Seiberg, Electric-Magnetic Duality in Supersymmetric Non-Abelian Gauge Theories. David Tong, Supersymmetric Field Theory. Gauge theories Supersymmetric quantum field theory Quantum chromodynamics Duality theories Renormalization group
Seiberg duality
[ "Physics", "Mathematics" ]
1,074
[ "Physical phenomena", "Supersymmetric quantum field theory", "Mathematical structures", "Critical phenomena", "Renormalization group", "Category theory", "Duality theories", "Geometry", "Statistical mechanics", "Supersymmetry", "Symmetry" ]
2,457,822
https://en.wikipedia.org/wiki/Pressure%20swing%20adsorption
Pressure swing adsorption (PSA) is a technique used to separate some gas species from a mixture of gases (typically air) under pressure according to the species' molecular characteristics and affinity for an adsorbent material. It operates at near-ambient temperature and significantly differs from the cryogenic distillation commonly used to separate gases. Selective adsorbent materials (e.g., zeolites (also known as molecular sieves), activated carbon, etc.) are used as trapping materials, preferentially adsorbing the target gas species at high pressure. The process then swings to low pressure to desorb the adsorbed gas. Process The pressure swing adsorption (PSA) process is based on the phenomenon that under high pressure, gases tend to be trapped onto solid surfaces, i.e. to be "adsorbed". The higher the pressure, the more gas is adsorbed. When the pressure is dropped, the gas is released, or desorbed. PSA can be used to separate gases in a mixture because different gases are adsorbed onto a given solid surface more or less strongly. For example, if a gas mixture such as air is passed under pressure through a vessel containing an adsorbent bed of zeolite that attracts nitrogen more strongly than oxygen, a fraction of the nitrogen will stay in the bed, and the gas exiting the vessel will be richer in oxygen than the mixture entering. When the bed reaches the limit of its capacity to adsorb nitrogen, it can be regenerated by decreasing the pressure, thus releasing the adsorbed nitrogen. It is then ready for another cycle of producing oxygen-enriched air. Using two adsorbent vessels allows for near-continuous production of the target gas. It also allows a pressure equalisation, where the gas leaving the vessel being depressurised is used to partially pressurise the second vessel. This results in significant energy savings, and is a common industrial practice. Adsorbents Aside from their ability to discriminate between different gases, adsorbents for PSA systems are usually very porous materials chosen because of their large specific surface areas. Typical adsorbents are zeolite, activated carbon, silica gel, alumina, or synthetic resins. Though the gas adsorbed on these surfaces may consist of a layer only one or at most a few molecules thick, surface areas of several hundred square meters per gram enable the adsorption of a large portion of the adsorbent's weight in gas. In addition to their affinity for different gases, zeolites and some types of activated carbon may utilize their molecular sieve characteristics to exclude some gas molecules from their structure based on the size and shape of the molecules, thereby restricting the ability of the larger molecules to be adsorbed. Applications Distribution process for oxygen produced by PSA plants Aside from its use to supply medical oxygen, or as a substitute for bulk cryogenic or compressed-cylinder storage, which is the primary oxygen source for any hospital, PSA has numerous other uses. One of the primary applications of PSA is in the removal of carbon dioxide (CO2) as the final step in the large-scale commercial synthesis of hydrogen (H2) for use in oil refineries and in the production of ammonia (NH3). Refineries often use PSA technology in the removal of hydrogen sulfide (H2S) from hydrogen feed and recycle streams of hydrotreating and hydrocracking units. Another application of PSA is the separation of carbon dioxide from biogas to increase the methane (CH4) ratio. Through PSA the biogas can be upgraded to a quality similar to natural gas. 
This includes a process in landfill gas utilization to upgrade landfill gas to utility-grade, high-purity methane gas to be sold as natural gas. PSA is also used in: Hypoxic air fire prevention systems to produce air with a low oxygen content. On-purpose propylene plants via propane dehydrogenation. They consist of a selective medium for the preferred adsorption of methane and ethane over hydrogen. Industrial nitrogen generator units based on the PSA process can produce high-purity nitrogen gas (up to 99.9995%) from compressed air. However, such generators are more suited to supply intermediate ranges of purity and flows. Capacities of such units are given in Nm3/h, normal cubic meters per hour, one Nm3/h being equivalent to 1000 liters per hour under any of several standard conditions of temperature, pressure, and humidity. For nitrogen: from 100 Nm3/h at 99.9% purity, to 9000 Nm3/h at 97% purity; for oxygen: up to 1500 Nm3/h with a purity between 88% and 93%. In the frame of carbon capture and storage (CCS), research is also currently underway to capture CO2 in large quantities from coal-fired power plants prior to geosequestration, in order to reduce greenhouse gas production from these plants. PSA has also been discussed as a future alternative to the non-regenerable sorbent technology used in space suit primary life support systems, in order to save weight and to extend the operating time of the suit. This is the process used in medical oxygen concentrators used by emphysema and COVID-19 patients and others requiring oxygen-enriched air for breathing. Variations of PSA technology Double Stage PSA (DS-PSA, sometimes also referred to as Dual Step PSA) With this variant of PSA, developed for use in laboratory nitrogen generators, nitrogen gas is produced in two steps: in the first step, the compressed air is forced to pass through a carbon molecular sieve to produce nitrogen at a purity of approximately 98%; in the second step this nitrogen is forced to pass into a second carbon molecular sieve and the nitrogen gas reaches a final purity of up to 99.999%. The purge gas from the second step is recycled and partially used as feed gas in the first step. In addition, the purge process is supported by active evacuation for better performance in the next cycle. The goal of both of these changes is to improve efficiency over a conventional PSA process. DS-PSA can also be applied to increase the oxygen concentration. In this case, an aluminosilicate zeolite adsorbs nitrogen in the first stage, reaching 95% oxygen at the outlet, and in the second stage a carbon-based molecular sieve adsorbs the residual nitrogen in a reverse cycle, concentrating oxygen up to 99%. Rapid PSA Rapid pressure swing adsorption, or RPSA, is frequently used in portable oxygen concentrators. It allows a large reduction in the size of the adsorbent bed when high purity is not essential and when the feed gas (air) can be discarded. It works by quickly cycling the pressure while alternately venting opposite ends of the column at the same rate. This means that non-adsorbed gases progress along the column much faster and are vented at the distal end, while adsorbed gases do not get the chance to progress and are vented at the proximal extremity. Vacuum swing adsorption Vacuum swing adsorption (VSA) segregates certain gases from a gaseous mixture at near ambient pressure; the process then swings to a vacuum to regenerate the adsorbent material. 
VSA differs from other PSA techniques because it operates at near-ambient temperatures and pressures. VSA typically draws the gas through the separation process with a vacuum. For oxygen and nitrogen VSA systems, the vacuum is typically generated by a blower. Hybrid vacuum pressure swing adsorption (VPSA) systems also exist. VPSA systems apply pressurized gas to the separation process and also apply a vacuum to the purge gas. VPSA systems, like one of the portable oxygen concentrators, are among the most efficient systems measured on customary industry indices, such as recovery (product gas out/product gas in) and productivity (product gas out/mass of sieve material). Generally, higher recovery leads to a smaller compressor, blower, or other compressed gas or vacuum source and lower power consumption. Higher productivity leads to smaller sieve beds. The consumer will most likely consider indices which have a more directly measurable difference in the overall system, like the amount of product gas divided by the system weight and size, the system initial and maintenance costs, the system power consumption or other operational costs, and reliability. See also Hypoxicator – Device for providing breathing air with reduced oxygen content References Further reading Hutson, Nick D.; Rege, Salil U.; and Yang, Ralph T. (2001). "Air Separation by Pressure Swing Adsorption Using Superior Adsorbent," National Energy Technology Laboratory, Department of Energy, March 2001. Ruthven, Douglas M. (2004). Principles of Adsorption and Adsorption Processes, Wiley-Interscience, Hoboken, NJ, p. 1 Yang, Ralph T. (1997). "Gas Separation by Adsorption Processes", Series on Chemical Engineering, Vol. I, World Scientific Publishing Co., Singapore. Santos, João C.; Magalhães, Fernão D.; and Mendes, Adélio, "Pressure Swing Adsorption and Zeolites for Oxygen Production", in Processos de Separação, Universidade do Porto, Porto, Portugal Gas separation Gas technologies Industrial gases Separation processes
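To make the pressure-swing principle from the Process section concrete, the Python sketch below evaluates a single-component Langmuir isotherm at the adsorption and regeneration pressures and reports the working capacity (the gas released per swing). The isotherm parameters are invented round numbers for illustration, not data for any real adsorbent.

```python
def langmuir_loading(pressure_bar: float, q_max: float, b: float) -> float:
    """Equilibrium loading (mol/kg) from a single-site Langmuir isotherm."""
    return q_max * b * pressure_bar / (1.0 + b * pressure_bar)

# Invented illustrative parameters for a nitrogen-selective zeolite bed.
Q_MAX = 3.0   # saturation capacity, mol adsorbate per kg adsorbent
B = 0.5       # Langmuir affinity constant, 1/bar

p_high, p_low = 6.0, 1.0  # adsorption and regeneration pressures (bar)

q_ads = langmuir_loading(p_high, Q_MAX, B)
q_des = langmuir_loading(p_low, Q_MAX, B)
print(f"loading at {p_high} bar: {q_ads:.2f} mol/kg")
print(f"loading at {p_low} bar:  {q_des:.2f} mol/kg")
print(f"working capacity per swing: {q_ads - q_des:.2f} mol/kg")
```

Deeper regeneration (a lower p_low, as in the vacuum-swing variants described above) increases the working capacity, which is exactly the trade-off VSA and VPSA systems exploit.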
Pressure swing adsorption
[ "Chemistry" ]
1,959
[ "Separation processes by phases", "Separation processes", "Gas separation", "Industrial gases", "nan", "Chemical process engineering" ]
2,458,485
https://en.wikipedia.org/wiki/Conserved%20quantity
A conserved quantity is a property or value that remains constant over time in a system even when changes occur in the system. In mathematics, a conserved quantity of a dynamical system is formally defined as a function of the dependent variables, the value of which remains constant along each trajectory of the system. Not all systems have conserved quantities, and conserved quantities are not unique, since one can always produce another such quantity by applying a suitable function, such as adding a constant, to a conserved quantity. Since many laws of physics express some kind of conservation, conserved quantities commonly exist in mathematical models of physical systems. For example, any classical mechanics model will have mechanical energy as a conserved quantity as long as the forces involved are conservative. Differential equations For a first-order system of differential equations dr/dt = f(r, t), where bold indicates vector quantities, a scalar-valued function H(r) is a conserved quantity of the system if, for all time and initial conditions in some specific domain, dH/dt = 0. Note that by using the multivariate chain rule, dH/dt = ∇H · dr/dt = ∇H · f(r, t), so that the definition may be written as ∇H · f(r, t) = 0, which contains information specific to the system and can be helpful in finding conserved quantities, or establishing whether or not a conserved quantity exists. Hamiltonian mechanics For a system defined by the Hamiltonian H, a function f of the generalized coordinates q and generalized momenta p has time evolution df/dt = {f, H} + ∂f/∂t, and hence is conserved if and only if {f, H} + ∂f/∂t = 0. Here {·, ·} denotes the Poisson bracket. Lagrangian mechanics Suppose a system is defined by the Lagrangian L with generalized coordinates q. If L has no explicit time dependence (so ∂L/∂t = 0), then the energy E defined by E = Σᵢ q̇ᵢ (∂L/∂q̇ᵢ) − L is conserved. Furthermore, if ∂L/∂q = 0, then q is said to be a cyclic coordinate and the generalized momentum p defined by p = ∂L/∂q̇ is conserved. This may be derived by using the Euler–Lagrange equations. See also Conservative system Lyapunov function Hamiltonian system Conservation law Noether's theorem Charge (physics) Invariant (physics) References Differential equations Dynamical systems
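As a sketch of the ∇H · f = 0 criterion above (the system and names here are illustrative, not from the text): for the undamped harmonic oscillator written as a first-order system, the energy H = (q² + p²)/2 can be checked numerically as a conserved quantity by evaluating the dot product of its gradient with the vector field at sample points.

```python
import numpy as np

def f(r: np.ndarray) -> np.ndarray:
    """Vector field of the harmonic oscillator: dq/dt = p, dp/dt = -q."""
    q, p = r
    return np.array([p, -q])

def grad_H(r: np.ndarray) -> np.ndarray:
    """Gradient of the candidate conserved quantity H(q, p) = (q^2 + p^2)/2."""
    q, p = r
    return np.array([q, p])

# H is conserved iff grad(H) . f(r) = 0 for all states r; here the dot
# product is q*p + p*(-q) = 0 identically, which the samples confirm.
rng = np.random.default_rng(0)
for r in rng.normal(size=(5, 2)):
    print(f"grad H . f = {np.dot(grad_H(r), f(r)):+.2e}")
```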
Conserved quantity
[ "Physics", "Mathematics" ]
395
[ "Mathematical objects", "Differential equations", "Equations", "Mechanics", "Dynamical systems" ]
2,458,875
https://en.wikipedia.org/wiki/Data%20assimilation
Data assimilation refers to a large group of methods that update information from numerical computer models with information from observations. Data assimilation is used to update model states, model trajectories over time, model parameters, and combinations thereof. What distinguishes data assimilation from other estimation methods is that the computer model is a dynamical model, i.e. the model describes how model variables change over time, and that it has a firm mathematical foundation in Bayesian inference. As such, it generalizes inverse methods and has close connections with machine learning. Data assimilation initially developed in the field of numerical weather prediction. Numerical weather prediction models are equations describing the evolution of the atmosphere, typically coded into a computer program. When these models are used for forecasting, the model output quickly deviates from the real atmosphere. Hence, we use observations of the atmosphere to keep the model on track. Data assimilation provides a very large number of practical ways to bring these observations into the models. Simply inserting point-wise measurements into the numerical models did not provide a satisfactory solution. Real-world measurements contain errors, both due to the quality of the instrument and due to how accurately the position of the measurement is known. These errors can cause instabilities in the models that eliminate any level of skill in a forecast. Thus, more sophisticated methods were needed in order to initialize a model using all available data while making sure to maintain stability in the numerical model. Such data typically includes the measurements as well as a previous forecast valid at the same time the measurements are made. If applied iteratively, this process begins to accumulate information from past observations into all subsequent forecasts. Because data assimilation developed out of the field of numerical weather prediction, it initially gained popularity amongst the geosciences. In fact, one of the most cited publications in all of the geosciences is an application of data assimilation to reconstruct the observed history of the atmosphere. Details of the data assimilation process Classically, data assimilation has been applied to chaotic dynamical systems that are too difficult to predict using simple extrapolation methods. The cause of this difficulty is that small changes in initial conditions can lead to large changes in prediction accuracy. This is sometimes known as the butterfly effect – the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. At any update time, data assimilation usually takes a forecast (also known as the first guess, or background information) and applies a correction to the forecast based on a set of observed data and estimated errors that are present in both the observations and the forecast itself. The difference between the forecast and the observations at that time is called the departure or the innovation (as it provides new information to the data assimilation process). A weighting factor is applied to the innovation to determine how much of a correction should be made to the forecast based on the new information from the observations. The best estimate of the state of the system based on the correction to the forecast determined by a weighting factor times the innovation is called the analysis. 
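A minimal sketch, in Python, of the forecast-plus-weighted-innovation update just described, anticipating the one-dimensional discussion that follows; the state and error variances below are invented for illustration, and the weight is the scalar Kalman-type gain built from those variances.

```python
# Scalar analysis step: analysis = forecast + weight * innovation,
# with the weight built from assumed error variances.
forecast = 12.0         # background / first guess (e.g. temperature in °C)
observation = 10.5      # measured value
var_forecast = 1.0      # assumed forecast error variance
var_observation = 0.25  # assumed observation error variance

innovation = observation - forecast                       # the "departure"
weight = var_forecast / (var_forecast + var_observation)  # gain in [0, 1]
analysis = forecast + weight * innovation

print(f"innovation = {innovation:+.2f}")
print(f"weight     = {weight:.2f}")    # accurate obs -> weight near 1
print(f"analysis   = {analysis:.2f}")  # pulled toward the observation
```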
In one dimension, computing the analysis could be as simple as forming a weighted average of a forecasted and observed value. In multiple dimensions the problem becomes more difficult. Much of the work in data assimilation is focused on adequately estimating the appropriate weighting factor based on intricate knowledge of the errors in the system. The measurements are usually made of a real-world system, rather than of the model's incomplete representation of that system, and so a special function called the observation operator (usually depicted by h() for a nonlinear operator or H for its linearization) is needed to map the modeled variable to a form that can be directly compared with the observation. Data assimilation as statistical estimation One of the common mathematical philosophical perspectives is to view data assimilation as a Bayesian estimation problem. From this perspective, the analysis step is an application of Bayes' theorem and the overall assimilation procedure is an example of recursive Bayesian estimation. However, the probabilistic analysis is usually simplified to a computationally feasible form. Advancing the probability distribution in time would be done exactly in the general case by the Fokker–Planck equation, but that is not feasible for high-dimensional systems; so, various approximations operating on simplified representations of the probability distributions are used instead. Often the probability distributions are assumed Gaussian so that they can be represented by their mean and covariance, which gives rise to the Kalman filter. Many methods represent the probability distributions only by the mean and input some pre-calculated covariance. An example of a direct (or sequential) method to compute this is called optimal statistical interpolation, or simply optimal interpolation (OI). An alternative approach is to iteratively solve a cost function that solves an identical problem. These are called variational methods, such as 3D-Var and 4D-Var. Typical minimization algorithms are the conjugate gradient method or the generalized minimal residual method. The ensemble Kalman filter is a sequential method that uses a Monte Carlo approach to estimate both the mean and the covariance of a Gaussian probability distribution by an ensemble of simulations. More recently, hybrid combinations of ensemble approaches and variational methods have become more popular (e.g. they are used for operational forecasts both at the European Centre for Medium-Range Weather Forecasts (ECMWF) and at the NOAA National Centers for Environmental Prediction (NCEP)). Data assimilation as a model update Data assimilation can also be achieved within a model update loop, where we iterate an initial model (or initial guess) in an optimisation loop to constrain the model to the observed data. Many optimisation approaches exist, and all of them can be set up to update the model; for instance, evolutionary algorithms have proven to be efficient, as they are free of hypotheses, but computationally expensive. Weather forecasting applications In numerical weather prediction applications, data assimilation is most widely known as a method for combining observations of meteorological variables such as temperature and atmospheric pressure with prior forecasts in order to initialize numerical forecast models. Necessity The atmosphere is a fluid. 
The idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, high-resolution terrain maps, available globally, are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation. The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports, or every six hours in SYNOP reports. These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms. Some global models use finite differences, in which the world is represented as discrete points on a regularly spaced grid of latitude and longitude; other models use spectral methods that solve for a range of wavelengths. The data are then used in the model as the starting point for a forecast. A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere. Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes and ship reports along shipping routes. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones. Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent. Sea ice began to be initialized in forecast models in 1971. Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific. History In 1922, Lewis Fry Richardson published the first attempt at forecasting the weather numerically. Using a hydrostatic variation of Bjerknes's primitive equations, Richardson produced by hand a 6-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. His forecast calculated that the change in surface pressure would be 145 hectopascals (145 mbar), an unrealistic value incorrect by two orders of magnitude. The large error was caused by an imbalance in the pressure and wind velocity fields used as the initial conditions in his analysis, indicating the need for a data assimilation scheme. Originally "subjective analysis" had been used, in which numerical weather prediction (NWP) forecasts had been adjusted by meteorologists using their operational expertise. Then "objective analysis" (e.g. the Cressman algorithm) was introduced for automated data assimilation. 
These objective methods used simple interpolation approaches, and thus were 3DDA (three-dimensional data assimilation) methods. Later, 4DDA (four-dimensional data assimilation) methods, called "nudging", were developed, such as in the MM5 model. They are based on the simple idea of Newtonian relaxation (the 2nd axiom of Newton). They introduce into the right-hand side of the dynamical equations of the model a term that is proportional to the difference between the calculated meteorological variable and the observed value. This term, which has a negative sign, keeps the calculated state vector closer to the observations. Nudging can be interpreted as a variant of the Kalman–Bucy filter (a continuous-time version of the Kalman filter) with the gain matrix prescribed rather than obtained from covariances. A major development was achieved by L. Gandin (1963), who introduced the "statistical interpolation" (or "optimal interpolation") method, which developed earlier ideas of Kolmogorov. This is a 3DDA method and is a type of regression analysis which utilizes information about the spatial distributions of covariance functions of the errors of the "first guess" field (previous forecast) and "true field". These functions are never known; however, different approximations were assumed. The optimal interpolation algorithm is a reduced version of the Kalman filtering (KF) algorithm, in which the covariance matrices are not calculated from the dynamical equations but are pre-determined in advance. Attempts to introduce the KF algorithms as a 4DDA tool for NWP models came later. However, this was (and remains) a difficult task because the full version requires solution of an enormous number of additional equations (~N² ≈ 10¹², where N = Nx·Ny·Nz is the size of the state vector, with Nx ~ 100, Ny ~ 100, Nz ~ 100 the dimensions of the computational grid). To overcome this difficulty, approximate or suboptimal Kalman filters were developed. These include the ensemble Kalman filter and the reduced-rank Kalman filters (RRSQRT). Another significant advance in the development of the 4DDA methods was utilizing optimal control theory (the variational approach) in the works of Le Dimet and Talagrand (1986), based on the previous works of J.-L. Lions and G. Marchuk, the latter being the first to apply that theory in environmental modeling. The significant advantage of the variational approaches is that the meteorological fields satisfy the dynamical equations of the NWP model and at the same time they minimize the functional characterizing their difference from observations. Thus, the problem of constrained minimization is solved. The 3DDA variational methods were developed for the first time by Sasaki (1958). As was shown by Lorenc (1986), all the above-mentioned 4DDA methods are in some limit equivalent, i.e. under some assumptions they minimize the same cost function. However, in practical applications these assumptions are never fulfilled, the different methods perform differently, and generally it is not clear which approach (Kalman filtering or variational) is better. Fundamental questions also arise in the application of advanced DA techniques, such as convergence of the computational method to the global minimum of the functional to be minimised. For instance, the cost function or the set in which the solution is sought may not be convex. 
The 4DDA method which is currently most successful is hybrid incremental 4D-Var, where an ensemble is used to augment the climatological background error covariances at the start of the data assimilation time window, but the background error covariances are evolved during the time window by a simplified version of the NWP forecast model. This data assimilation method is used operationally at forecast centres such as the Met Office. Cost function The process of creating the analysis in data assimilation often involves minimization of a cost function. A typical cost function would be the sum of the squared deviations of the analysis values from the observations weighted by the accuracy of the observations, plus the sum of the squared deviations of the forecast fields and the analyzed fields weighted by the accuracy of the forecast. This has the effect of making sure that the analysis does not drift too far away from observations and forecasts that are known to usually be reliable. 3D-Var: J(x) = (x − x_b)ᵀ B⁻¹ (x − x_b) + (y − H[x])ᵀ R⁻¹ (y − H[x]), where B denotes the background error covariance and R the observational error covariance. 4D-Var: J(x) = (x − x_b)ᵀ B⁻¹ (x − x_b) + Σᵢ (yᵢ − Hᵢ[xᵢ])ᵀ Rᵢ⁻¹ (yᵢ − Hᵢ[xᵢ]), provided that H is a linear operator (matrix). Future development Factors driving the rapid development of data assimilation methods for NWP models include: Utilizing the observations currently offers promising improvement in forecast skill at a variety of spatial scales (from global to highly local) and time scales. The number of different kinds of available observations (sodars, radars, satellite) is rapidly growing. Other applications Monitoring water and energy transfers Data assimilation has been used, in the 1980s and 1990s, in several HAPEX (Hydrologic and Atmospheric Pilot Experiment) projects for monitoring energy transfers between the soil, vegetation and atmosphere. For instance: HAPEX-MobilHy, HAPEX-Sahel, and the "Alpilles-ReSeDA" (Remote Sensing Data Assimilation) experiment, a European project in the FP4-ENV program which took place in the Alpilles region, South-East of France (1996–97). The flow-chart diagram excerpted from the final report of that project shows how to infer variables of interest such as canopy state, radiative fluxes, environmental budget, and production in quantity and quality, from remote sensing data and ancillary information. In that diagram, the small blue-green arrows indicate the direct way the models actually run. Other forecasting applications Data assimilation methods are currently also used in other environmental forecasting problems, e.g. in hydrological and hydrogeological forecasting. Bayesian networks may also be used in a data assimilation approach to assess natural hazards such as landslides. Given the abundance of spacecraft data for other planets in the solar system, data assimilation is now also applied beyond the Earth to obtain re-analyses of the atmospheric state of extraterrestrial planets. Mars is the only extraterrestrial planet to which data assimilation has been applied so far. Available spacecraft data include, in particular, retrievals of temperature and dust/water/ice optical thicknesses from the Thermal Emission Spectrometer onboard NASA's Mars Global Surveyor and the Mars Climate Sounder onboard NASA's Mars Reconnaissance Orbiter. Two methods of data assimilation have been applied to these datasets: an Analysis Correction scheme and two Ensemble Kalman Filter schemes, both using a global circulation model of the martian atmosphere as the forward model. 
The Mars Analysis Correction Data Assimilation (MACDA) dataset is publicly available from the British Atmospheric Data Centre. Data assimilation is a part of the challenge for every forecasting problem. Dealing with biased data is a serious challenge in data assimilation. Further development of methods to deal with biases will be of particular use. If there are several instruments observing the same variable, then intercomparing them using probability distribution functions can be instructive. The numerical forecast models are becoming of higher resolution due to the increase of computational power, with operational atmospheric models now running with horizontal resolutions of the order of 1 km (e.g. at the German National Meteorological Service, the Deutscher Wetterdienst (DWD), and the Met Office in the UK). This increase in horizontal resolution is starting to allow more chaotic features of the non-linear models to be resolved, e.g. convection on the grid scale, or clouds, in the atmospheric models. This increasing non-linearity in the models and observation operators poses a new problem in data assimilation. The existing data assimilation methods, such as many variants of ensemble Kalman filters and variational methods, well established with linear or near-linear models, are being assessed on non-linear models. Many new methods are being developed, e.g. particle filters for high-dimensional problems and hybrid data assimilation methods. Other uses include trajectory estimation for the Apollo program, GPS, and atmospheric chemistry. See also Calibration References Further reading External links Examples of how variational assimilation is implemented in weather forecasting at: Other examples of assimilation: CDACentral (an example analysis from Chemical Data Assimilation) PDFCentral (using PDFs to examine biases and representativeness) OpenDA – Open Source Data Assimilation package PDAF – open-source Parallel Data Assimilation Framework SANGOMA New Data Assimilation techniques Weather forecasting Numerical climate and weather models Estimation theory Control theory Bayesian statistics Climate and weather statistics Statistical forecasting
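Returning to the cost functions written out in the Cost function section above, the following Python sketch minimizes a toy 3D-Var objective with SciPy; the two-component state, the diagonal covariances, and the linear observation operator are all illustrative assumptions rather than an operational configuration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3D-Var: J(x) = (x - xb)^T B^-1 (x - xb) + (y - H x)^T R^-1 (y - H x)
x_b = np.array([1.0, 2.0])                    # background state
B_inv = np.linalg.inv(np.diag([1.0, 0.5]))    # inverse background covariance
H = np.array([[1.0, 0.0]])                    # observe only the first component
y = np.array([1.8])                           # observation
R_inv = np.linalg.inv(np.diag([0.1]))         # inverse observation covariance

def J(x):
    db = x - x_b            # departure from the background
    do = y - H @ x          # departure from the observation
    return db @ B_inv @ db + do @ R_inv @ do

analysis = minimize(J, x_b).x
print("analysis:", analysis)
```

The observed component is pulled toward the measurement in proportion to the relative error variances, while the unobserved component stays at its background value.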
Data assimilation
[ "Physics", "Mathematics" ]
3,669
[ "Physical phenomena", "Weather", "Applied mathematics", "Control theory", "Climate and weather statistics", "Dynamical systems" ]
2,458,954
https://en.wikipedia.org/wiki/Muon%20spin%20spectroscopy
Muon spin spectroscopy, also known as μSR, is an experimental technique based on the implantation of spin-polarized muons in matter and on the detection of the influence of the atomic, molecular or crystalline surroundings on their spin motion. The motion of the muon spin is due to the magnetic field experienced by the particle and may provide information on its local environment in a very similar way to other magnetic resonance techniques, such as electron spin resonance (ESR or EPR) and, more closely, nuclear magnetic resonance (NMR). Introduction Muon spin spectroscopy is an atomic, molecular and condensed matter experimental technique that exploits nuclear detection methods. In analogy with the acronyms for the previously established spectroscopies NMR and ESR, muon spin spectroscopy is also known as μSR. The acronym stands for muon spin rotation, relaxation, or resonance, depending respectively on whether the muon spin motion is predominantly a rotation (more precisely a precession around a still magnetic field), a relaxation towards an equilibrium direction, or a more complex dynamic dictated by the addition of short radio-frequency pulses. μSR does not require any radio-frequency technique to align the probing spin. More generally speaking, muon spin spectroscopy includes any study of the interactions of the muon's magnetic moment with its surroundings when implanted into any kind of matter. Its two most notable features are its ability to study local environments, due to the short effective range of muon interactions with matter, and the characteristic time window (10⁻¹³–10⁻⁵ s) of the dynamical processes in atomic, molecular and condensed media. The closest parallel to μSR is "pulsed NMR", in which one observes time-dependent transverse nuclear polarization or the so-called "free induction decay" of the nuclear polarization. However, a key difference is that in μSR one uses a specifically implanted spin (the muon's) and does not rely on internal nuclear spins. Although particles are used as a probe, μSR is not a diffraction technique. A clear distinction between the μSR technique and those involving neutrons or X-rays is that scattering is not involved. Neutron diffraction techniques, for example, use the change in energy and/or momentum of a scattered neutron to deduce the sample properties. In contrast, the implanted muons are not diffracted but remain in a sample until they decay. Only a careful analysis of the decay product (i.e. a positron) provides information about the interaction between the implanted muon and its environment in the sample. As with many of the other nuclear methods, μSR relies on discoveries and developments made in the field of particle physics. Following the discovery of the muon by Seth Neddermeyer and Carl D. Anderson in 1936, pioneering experiments on its properties were performed with cosmic rays. Indeed, with one muon hitting each square centimeter of the earth's surface every minute, muons constitute the foremost constituent of cosmic rays arriving at ground level. However, μSR experiments require muon fluxes of the order of 10⁴ muons per second per square centimeter. Such fluxes can only be obtained in the high-energy particle accelerators which have been developed during the last 50 years. 
Muon production The collision of an accelerated proton beam (typical energy 600 MeV) with the nuclei of a production target produces positive pions (π⁺) via the possible reactions: p + p → p + n + π⁺ and p + n → n + n + π⁺. From the subsequent weak decay of the pions (mean lifetime τ_π = 26.03 ns), positive muons (μ⁺) are formed via the two-body decay: π⁺ → μ⁺ + ν_μ. Parity violation in the weak interactions implies that only left-handed neutrinos exist, with their spin antiparallel to their linear momentum (likewise, only right-handed anti-neutrinos are found in nature). Since the pion is spinless, both the neutrino and the μ⁺ are ejected with spin antiparallel to their momentum in the pion rest frame. This is the key to providing spin-polarised muon beams. According to the value of the pion momentum, different types of μ⁺ beams are available for μSR measurements. Energy classes of muon beams Muon beams are classified into three types based on the energy of the muons being produced: high-energy, surface or "Arizona", and ultra-slow muon beams. High-energy muon beams are formed by the pions escaping the production target at high energies. They are collected over a certain solid angle by quadrupole magnets and directed onto a decay section consisting of a long superconducting solenoid with a field of several tesla. If the pion momentum is not too high, a large fraction of the pions will have decayed before they reach the end of the solenoid. In the laboratory frame the polarization of a high-energy muon beam is limited to about 80% and its energy is of the order of ~40–50 MeV. Although such a high-energy beam requires the use of suitable moderators and samples with sufficient thickness, it guarantees a homogeneous implantation of the muons in the sample volume. Such beams are also used to study specimens inside of recipients, e.g. samples inside pressure cells. Such muon beams are available at PSI, TRIUMF, J-PARC and RIKEN-RAL. The second type of muon beam is often called the surface or Arizona beam (recalling the pioneering work of Pifer et al. from the University of Arizona). In these beams, muons arise from pions decaying at rest inside but near the surface of the production target. Such muons are 100% polarized, ideally monochromatic, and have a very low momentum of 29.8 MeV/c (corresponding to a kinetic energy of 4.1 MeV). They have a range width in matter of the order of 180 mg/cm². The paramount advantage of this type of beam is the ability to use relatively thin samples. Beams of this type are available at PSI (Swiss Muon Source SμS), TRIUMF, J-PARC, ISIS Neutron and Muon Source and RIKEN-RAL. Positive muon beams of even lower energy (ultra-slow muons with energy down to the eV–keV range) can be obtained by further reducing the energy of an Arizona beam by utilizing the energy-loss characteristics of large band gap solid moderators. This technique was pioneered by researchers at the TRIUMF cyclotron facility in Vancouver, B.C., Canada. It was christened with the acronym μSOL (muon separator on-line) and initially employed LiF as the moderating solid. The same 1986 paper also reported the observation of negative muonium ions (i.e., Mu− or μ+e−e−) in vacuum. In 1987, the slow μ+ production rate was increased 100-fold using thin-film rare-gas solid moderators, producing a usable flux of low-energy positive muons. This production technique was subsequently adopted by PSI for their low-energy positive muon beam facility. 
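Before moving on to the tunable ultra-slow beams, note that the 29.8 MeV/c surface-muon momentum quoted above follows directly from two-body decay kinematics for a pion at rest; the short derivation below is a standard textbook computation added here for illustration.

```latex
% Two-body decay pi+ -> mu+ + nu with the pion at rest: energy-momentum
% conservation fixes the muon momentum uniquely (neutrino mass neglected).
p_{\mu} = \frac{m_{\pi}^{2} - m_{\mu}^{2}}{2\,m_{\pi}}\,c
        = \frac{(139.57)^{2} - (105.66)^{2}}{2 \times 139.57}\;\mathrm{MeV}/c
        \approx 29.8\;\mathrm{MeV}/c
% using m_pi c^2 = 139.57 MeV and m_mu c^2 = 105.66 MeV.
```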
The tunable energy range of such muon beams corresponds to implantation depths in solids of less than a nanometer up to several hundred nanometers. Therefore, the study of magnetic properties as a function of the distance from the surface of the sample is possible. At the present time, PSI is the only facility where such a low-energy muon beam is available on a regular basis. Technical developments have also been conducted at RIKEN-RAL, but with a strongly reduced low-energy muon rate. J-PARC is planning the development of a high-intensity low-energy muon beam. Continuous vs. pulsed muon beams In addition to the above-mentioned classification based on energy, muon beams are also divided according to the time structure of the particle accelerator, i.e. continuous or pulsed. For continuous muon sources no dominating time structure is present. By selecting an appropriate incoming muon rate, muons are implanted into the sample one by one. The main advantage is that the time resolution is solely determined by the detector construction and the read-out electronics. There are two main limitations for this type of source, however: (i) unrejected charged particles accidentally hitting the detectors produce non-negligible random background counts; this compromises measurements after a few muon lifetimes, when the random background exceeds the true decay events; and (ii) the requirement to detect muons one at a time sets a maximum event rate. The background problem can be reduced by the use of electrostatic deflectors to ensure that no muons enter the sample before the decay of the previous muon. PSI and TRIUMF host the two continuous muon sources available for μSR experiments. At pulsed muon sources, protons hitting the production target are bunched into short, intense, and widely separated pulses, which provide a similar time structure in the secondary muon beam. An advantage of pulsed muon sources is that the event rate is only limited by detector construction. Furthermore, detectors are active only after the incoming muon pulse, strongly reducing the accidental background counts. The virtual absence of background allows the extension of the time window for measurements up to about ten times the muon mean lifetime. The principal downside is that the width of the muon pulse limits the time resolution. ISIS Neutron and Muon Source and J-PARC are the two pulsed muon sources available for μSR experiments. Spectroscopic technique Muon implantation The muons are implanted into the sample of interest, where they lose energy very quickly. Fortunately, this deceleration process occurs in such a way that it does not jeopardize a μSR measurement. On the one hand, it is very fast (much faster than 100 ps), i.e. much shorter than a typical μSR time window (up to 20 μs); on the other hand, all the processes involved during the deceleration are Coulombic (ionization of atoms, electron scattering, electron capture) in origin and do not interact with the muon spin, so that the muon is thermalized without any significant loss of polarization. The positive muons usually adopt interstitial sites of the crystallographic lattice, markedly distinguished by their electronic (charge) state. The spectroscopy of a muon chemically bound to an unpaired electron is remarkably different from that of all other muon states, which motivates the historical distinction into paramagnetic and diamagnetic states. 
Note that many diamagnetic muon states actually behave like paramagnetic centers, according to the standard definition of a paramagnet. For example, in most metallic samples, which are Pauli paramagnets, the muon's positive charge is collectively screened by a cloud of conduction electrons. Thus, in metals, the muon is not bound to a single electron, hence it is in the so-called diamagnetic state and behaves like a free muon. In insulators or semiconductors a collective screening cannot take place and the muon will usually pick up one electron and form so-called muonium (Mu = μ+ + e−), which has a similar size (Bohr radius), reduced mass, and ionization energy to the hydrogen atom. This is the prototype of the so-called paramagnetic state.

Detection of muon polarization
The decay of the positive muon into a positron and two neutrinos occurs via the weak interaction process after a mean lifetime of τμ = 2.197034(21) μs:

μ⁺ → e⁺ + νe + ν̄μ

Parity violation in the weak interaction leads in this more complicated case (a three-body decay) to an anisotropic distribution of the positron emission with respect to the spin direction of the μ+ at the decay time. The positron emission probability is given by

W(θ) ∝ 1 + a cos θ

where θ is the angle between the positron trajectory and the μ+ spin, and a is an intrinsic asymmetry parameter determined by the weak decay mechanism. This anisotropic emission in fact constitutes the basis of the μSR technique. The average asymmetry A is measured over a statistical ensemble of implanted muons, and it depends on further experimental parameters, such as the beam spin polarization P, which, as already mentioned, is close to one. Theoretically A = 1/3 is obtained if all emitted positrons are detected with the same efficiency, irrespective of their energy. In practice, values of A ≈ 0.25 are routinely obtained.

The muon spin motion may be measured over a time scale dictated by the muon decay, i.e. a few times τμ, roughly 10 μs. The asymmetry in the muon decay correlates the positron emission and the muon spin directions. The simplest example is when the spin direction of all muons remains constant in time after implantation (no motion). In this case the asymmetry shows up as an imbalance between the positron counts in two equivalent detectors placed in front of and behind the sample, along the beam axis. Each of them records an exponentially decaying rate as a function of the time t elapsed from implantation, according to

N±(t) = N0 e^(−t/τμ) (1 ± A)

with + and − for the detector looking towards and away from the spin arrow, respectively. Considering that the huge muon spin polarization is completely outside thermal equilibrium, a dynamical relaxation towards the equilibrium unpolarized state typically shows up in the count rate as an additional decay factor in front of the experimental asymmetry parameter A. A magnetic field parallel to the initial muon spin direction probes the dynamical relaxation rate as a function of the additional muon Zeeman energy, without introducing additional coherent spin dynamics. This experimental arrangement is called Longitudinal Field (LF) μSR. A special case of LF μSR is Zero Field (ZF) μSR, when the external magnetic field is zero. This experimental condition is particularly important since it allows one to probe any internal quasi-static (i.e. static on the muon time scale) magnetic field or field distribution at the muon site.
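As a numerical illustration of the count rates just described, the following sketch simulates the forward/backward detector signals and recovers the asymmetry; the values of N0, A and the exponential relaxation rate are illustrative assumptions, not experimental data:

```python
import numpy as np

TAU_MU = 2.197  # muon mean lifetime in microseconds (value from the text)

def count_rates(t, n0=1e6, a=0.25, lam=0.1):
    """N±(t) = N0 exp(-t/tau) [1 ± A exp(-lam*t)]: forward/backward rates,
    with an assumed exponential depolarization multiplying the asymmetry A."""
    envelope = n0 * np.exp(-t / TAU_MU)
    pol = a * np.exp(-lam * t)
    return envelope * (1.0 + pol), envelope * (1.0 - pol)

t = np.linspace(0.0, 10.0, 201)    # a few muon lifetimes, in microseconds
n_f, n_b = count_rates(t)
asym = (n_f - n_b) / (n_f + n_b)   # equals A exp(-lam*t)
```

Forming the ratio (N+ − N−)/(N+ + N−) cancels the trivial muon-decay exponential, which is why μSR data are normally analyzed in terms of this experimental asymmetry rather than the raw count rates.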
Internal quasi-static fields may appear spontaneously, i.e. not induced by the magnetic response of the sample to an external field. They are produced by disordered nuclear magnetic moments or, more importantly, by ordered electron magnetic moments and orbital currents.

Another simple type of μSR experiment is when all implanted muon spins precess coherently around an external magnetic field of modulus B, applied perpendicular to the beam axis, causing the count imbalance between the same two detectors to oscillate at the corresponding Larmor frequency, according to

N±(t) = N0 e^(−t/τμ) [1 ± A cos(γμBt)]

Since the Larmor frequency is ω = γμB, with a gyromagnetic ratio γμ = 851.6 Mrad(sT)−1, the frequency spectrum obtained by means of this experimental arrangement provides a direct measure of the internal magnetic field intensity distribution. A distribution of fields produces an additional decay factor of the experimental asymmetry A. This method is usually referred to as Transverse Field (TF) μSR. A more general case is when the initial muon spin direction (coinciding with the detector axis) forms an angle θ with the field direction. In this case the muon spin precession describes a cone, which results in both a longitudinal component, A cos²θ, and a transverse precessing component, A sin²θ, of the total asymmetry. ZF μSR experiments in the presence of a spontaneous internal field fall into this category as well.

Applications
Muon spin rotation and relaxation are mostly performed with positive muons. They are well suited to the study of magnetic fields at the atomic scale inside matter, such as those produced by the various kinds of magnetism and/or superconductivity encountered in compounds occurring in nature or artificially produced by modern material science.

The London penetration depth is one of the most important parameters characterizing a superconductor because its inverse square provides a measure of the density ns of Cooper pairs. The dependence of ns on temperature and magnetic field directly indicates the symmetry of the superconducting gap. Muon spin spectroscopy provides a way to measure the penetration depth, and so has been used to study high-temperature cuprate superconductors since their discovery in 1986.

Other important fields of application of μSR exploit the fact that positive muons capture electrons to form muonium atoms, which behave chemically as light isotopes of the hydrogen atom. This allows investigation of the largest known kinetic isotope effect in some of the simplest types of chemical reactions, as well as of the early stages of formation of radicals in organic chemicals. Muonium is also studied as an analogue of hydrogen in semiconductors, where hydrogen is one of the most ubiquitous impurities.

Facilities
μSR requires a particle accelerator for the production of a muon beam. This is presently achieved at a few large-scale facilities in the world: the CMMS continuous source at TRIUMF in Vancouver, Canada; the SμS continuous source at the Paul Scherrer Institut (PSI) in Villigen, Switzerland; the ISIS Neutron and Muon Source and RIKEN-RAL pulsed sources at the Rutherford Appleton Laboratory in Chilton, United Kingdom; and the J-PARC facility in Tokai, Japan, where a new pulsed source is being built to replace that at KEK in Tsukuba, Japan. Muon beams are also available at the Laboratory of Nuclear Problems, Joint Institute for Nuclear Research (JINR) in Dubna, Russia. The International Society for μSR Spectroscopy (ISMS) exists to promote the worldwide advancement of μSR.
Membership in the society is open free of charge to all individuals in academia, government laboratories and industry who have an interest in the society's goals.

See also
Muon
Muonium
Nuclear magnetic resonance
Perturbed angular correlation

Notes

References

External links
μSR basic literature
Integrated Infrastructure Initiative for Neutron Scattering and Muon Spectroscopy (NMI3)
The NMI3 Muon Joint Research Activity
Video - What are muons and how are they produced?

Spectroscopy Scientific techniques
Muon spin spectroscopy
[ "Physics", "Chemistry" ]
3,707
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
2,459,057
https://en.wikipedia.org/wiki/Spin%20polarization
In particle physics, spin polarization is the degree to which the spin, i.e., the intrinsic angular momentum of elementary particles, is aligned with a given direction. This property may pertain to the spin, hence to the magnetic moment, of conduction electrons in ferromagnetic metals, such as iron, giving rise to spin-polarized currents. It may refer to (static) spin waves, i.e., a preferential correlation of spin orientation with ordered lattices (semiconductors or insulators). It may also pertain to beams of particles produced for particular aims, such as polarized neutron scattering or muon spin spectroscopy. Spin polarization of electrons or of nuclei, often called simply magnetization, is also produced by the application of a magnetic field. The Curie law is used to produce an induction signal in electron spin resonance (ESR or EPR) and in nuclear magnetic resonance (NMR).

Spin polarization is also important for spintronics, a branch of electronics. Magnetic semiconductors are being researched as possible spintronic materials.

The spin of free electrons is measured either by a LEED image from a clean tungsten crystal (SPLEED) or by an electron microscope composed purely of electrostatic lenses with a gold foil as a sample. Backscattered electrons are decelerated by annular optics and focused onto a ring-shaped electron multiplier at about 15°. The position on the ring is recorded. This whole device is called a Mott detector. Depending on their spin, the electrons have different probabilities of hitting the ring at different positions. Only about 1% of the electrons are scattered in the foil; of these, about 1% are collected by the detector, and then about 30% of the collected electrons hit the detector at the wrong position. Both devices work due to spin-orbit coupling.

The circular polarization of electromagnetic fields is due to spin polarization of their constituent photons. In the most generic context, spin polarization is any alignment of the components of a non-scalar (vectorial, tensorial, spinor) field with its arguments, i.e., with the nonrelativistic three spatial or relativistic four spatiotemporal regions over which it is defined. In this sense, it also includes gravitational waves and any field theory that couples its constituents with the differential operators of vector analysis.

See also
Photon polarization
Spin angular momentum of light
Magnetization

References

Spectroscopy Spintronics Polarization (waves)
Spin polarization
[ "Physics", "Chemistry", "Materials_science" ]
504
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Spintronics", "Astrophysics", "Spectroscopy", "Condensed matter physics", "Polarization (waves)" ]
10,172,238
https://en.wikipedia.org/wiki/Nanofluidics
Nanofluidics is the study of the behavior, manipulation, and control of fluids that are confined to structures of nanometer (typically 1–100 nm) characteristic dimensions (1 nm = 10−9 m). Fluids confined in these structures exhibit physical behaviors not observed in larger structures, such as those of micrometer dimensions and above, because the characteristic physical scaling lengths of the fluid (e.g. Debye length, hydrodynamic radius) very closely coincide with the dimensions of the nanostructure itself. When structures approach the size regime corresponding to molecular scaling lengths, new physical constraints are placed on the behavior of the fluid. For example, these physical constraints induce regions of the fluid to exhibit new properties not observed in bulk, e.g. vastly increased viscosity near the pore wall; they may effect changes in thermodynamic properties and may also alter the chemical reactivity of species at the fluid-solid interface.

A particularly relevant and useful example is displayed by electrolyte solutions confined in nanopores that contain surface charges, i.e. at electrified interfaces, as shown in the nanocapillary array membrane (NCAM) in the accompanying figure. All electrified interfaces induce an organized charge distribution near the surface known as the electrical double layer. In pores of nanometer dimensions the electrical double layer may completely span the width of the nanopore, resulting in dramatic changes in the composition of the fluid and the related properties of fluid motion in the structure. For example, the drastically enhanced surface-to-volume ratio of the pore results in a preponderance of counter-ions (i.e. ions charged oppositely to the static wall charges) over co-ions (possessing the same sign as the wall charges), in many cases to the near-complete exclusion of co-ions, such that only one ionic species exists in the pore. This can be used for manipulation of species with selective polarity along the pore length to achieve unusual fluidic manipulation schemes not possible in micrometer and larger structures.

Theory
In 1965, Rice and Whitehead published the seminal contribution to the theory of the transport of electrolyte solutions in long (ideally infinite) nanometer-diameter capillaries. Briefly, the potential, ϕ, at a radial distance, r, is given by the (linearized) Poisson-Boltzmann equation,

(1/r) d/dr (r dϕ/dr) = κ²ϕ

where κ is the inverse Debye length, determined by the ion number density, n, the dielectric constant, ε, the Boltzmann constant, k, and the temperature, T. Knowing the potential, ϕ(r), the charge density can then be recovered from the Poisson equation; the solution may be expressed as a modified Bessel function of the first kind, I0, scaled to the capillary radius, a:

ϕ(r) = ϕa I0(κr)/I0(κa)

where ϕa is the potential at the pore wall. An equation of motion under combined pressure- and electrically-driven flow can then be written,

(η/r) d/dr (r dvz/dr) = dp/dz − Fz

where η is the viscosity, dp/dz is the pressure gradient, and Fz is the body force driven by the action of the applied electric field, Ez, on the net charge density in the double layer. When there is no applied pressure, the radial distribution of the velocity is given by

vz(r) = (εϕaEz/4πη) [1 − I0(κr)/I0(κa)]

From the equation above, it follows that fluid flow in nanocapillaries is governed by the κa product, that is, the relative sizes of the Debye length and the pore radius. By adjusting these two parameters and the surface charge density of the nanopores, fluid flow can be manipulated as desired.
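To make the role of the κa product concrete, the zero-pressure velocity profile reconstructed above can be evaluated numerically; in this sketch the prefactor is normalized to 1 (velocities in units of their plug-flow limit), and the chosen κa values are arbitrary:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def eo_velocity(r_over_a, kappa_a):
    """Normalized electroosmotic velocity 1 - I0(kappa*r)/I0(kappa*a)
    in a cylindrical capillary of radius a (Rice-Whitehead form)."""
    return 1.0 - i0(kappa_a * np.asarray(r_over_a)) / i0(kappa_a)

r = np.linspace(0.0, 1.0, 101)   # radial position r/a, axis to wall
for ka in (1.0, 10.0, 100.0):    # thick to thin electrical double layers
    print(f"kappa*a = {ka:5.0f}: centerline velocity = {eo_velocity(0.0, ka):.3f}")
# Small kappa*a (overlapping double layers) suppresses the flow everywhere;
# large kappa*a recovers a plug-like profile except very near the wall.
```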
Fabrication
Nanostructures can be fabricated as single cylindrical channels, nanoslits, or nanochannel arrays from materials such as silicon, glass, polymers (e.g. PMMA, PDMS, PCTE) and synthetic vesicles. Standard photolithography, bulk or surface micromachining, replication techniques (embossing, printing, casting and injection molding), and nuclear track or chemical etching are commonly used to fabricate structures which exhibit characteristic nanofluidic behavior.

Applications
Because of the small size of the fluidic conduits, nanofluidic structures are naturally applied in situations demanding that samples be handled in exceedingly small quantities, including Coulter counting, analytical separations and determinations of biomolecules, such as proteins and DNA, and facile handling of mass-limited samples. One of the more promising areas of nanofluidics is its potential for integration into microfluidic systems, i.e. micrototal analytical systems or lab-on-a-chip structures. For instance, NCAMs, when incorporated into microfluidic devices, can reproducibly perform digital switching, allowing transfer of fluid from one microfluidic channel to another, selectively separate and transfer analytes by size and mass, mix reactants efficiently, and separate fluids with disparate characteristics. In addition, there is a natural analogy between the fluid handling capabilities of nanofluidic structures and the ability of electronic components to control the flow of electrons and holes. This analogy has been used to realize active electronic functions such as rectification and field-effect and bipolar transistor action with ionic currents. Nanofluidics has also been applied to nano-optics to produce tunable microlens arrays.

Nanofluidics has had a significant impact in biotechnology, medicine and clinical diagnostics with the development of lab-on-a-chip devices for PCR and related techniques. Attempts have been made to understand the behaviour of flowfields around nanoparticles in terms of fluid forces as a function of Reynolds and Knudsen number using computational fluid dynamics. The relationship between lift, drag and Reynolds number has been shown to differ dramatically at the nanoscale compared with macroscale fluid dynamics.

Challenges
There are a variety of challenges associated with the flow of liquids through carbon nanotubes and nanopipes. A common occurrence is channel blocking due to large macromolecules in the liquid. Also, any insoluble debris in the liquid can easily clog the tube. One solution researchers hope to find is a low-friction coating or channel material that helps reduce the blocking of the tubes. Also, large polymers, including biologically relevant molecules such as DNA, often fold in vivo, causing blockages. Typical DNA molecules from a virus have lengths of approx. 100–200 kilobases and will form a random coil with a radius of some 700 nm in aqueous solution at 20 °C. This is also several times greater than the pore diameter of even large carbon pipes and two orders of magnitude larger than the diameter of a single-walled carbon nanotube.

See also
Nanomechanics
Nanotechnology
Microfluidics
Nanofluidic circuitry

References

Nanotechnology Fluid dynamics Analytical chemistry Surface science Materials science
Nanofluidics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,405
[ "Applied and interdisciplinary physics", "Chemical engineering", "Materials science", "Surface science", "Condensed matter physics", "nan", "Piping", "Nanotechnology", "Fluid dynamics" ]
10,173,651
https://en.wikipedia.org/wiki/Slosh%20dynamics
In fluid dynamics, slosh refers to the movement of liquid inside another object (which is, typically, also undergoing motion). Strictly speaking, the liquid must have a free surface to constitute a slosh dynamics problem, where the dynamics of the liquid can interact with the container to alter the system dynamics significantly. Important examples include propellant slosh in spacecraft tanks and rockets (especially upper stages), and the free surface effect (cargo slosh) in ships and trucks transporting liquids (for example oil and gasoline). However, it has become common to refer to liquid motion in a completely filled tank, i.e. without a free surface, as "fuel slosh". Such motion is characterized by "inertial waves" and can be an important effect in spinning spacecraft dynamics.

Extensive mathematical and empirical relationships have been derived to describe liquid slosh. These types of analyses are typically undertaken using computational fluid dynamics and finite element methods to solve the fluid-structure interaction problem, especially if the solid container is flexible. Relevant fluid dynamics non-dimensional parameters include the Bond number, the Weber number, and the Reynolds number.

Slosh is an important effect for spacecraft, ships, some land vehicles and some aircraft. Slosh was a factor in the Falcon 1 second test flight anomaly, and has been implicated in various other spacecraft anomalies, including a near-disaster with the Near Earth Asteroid Rendezvous (NEAR Shoemaker) satellite.

Spacecraft effects
Liquid slosh in microgravity is relevant to spacecraft, most commonly Earth-orbiting satellites, and must take account of liquid surface tension, which can alter the shape (and thus the eigenvalues) of the liquid slug. Typically, a large fraction of the mass of a satellite is liquid propellant at or near Beginning of Life (BOL), and slosh can adversely affect satellite performance in a number of ways. For example, propellant slosh can introduce uncertainty in spacecraft attitude (pointing), which is often called jitter. Similar phenomena can cause pogo oscillation and can result in structural failure of a space vehicle. Another example is problematic interaction with the spacecraft's Attitude Control System (ACS), especially for spinning satellites, which can suffer resonance between slosh and nutation, or adverse changes to the rotational inertia. Because of these types of risk, in the 1960s the National Aeronautics and Space Administration (NASA) extensively studied liquid slosh in spacecraft tanks, and in the 1990s NASA undertook the Middeck 0-Gravity Dynamics Experiment on the Space Shuttle. The European Space Agency has advanced these investigations with the launch of SLOSHSAT. Most spinning spacecraft since 1980 have been tested at the Applied Dynamics Laboratories drop tower using sub-scale models. Extensive contributions have also been made by the Southwest Research Institute, but research is widespread in academia and industry. Research is continuing into slosh effects on in-space propellant depots. In October 2009, the United States Air Force and United Launch Alliance (ULA) performed an experimental on-orbit demonstration on a modified Centaur upper stage on the DMSP-18 satellite launch in order to improve "understanding of propellant settling and slosh". The light weight of DMSP-18 left a large quantity of LO2 and LH2 propellant, 28% of Centaur's capacity, available for the on-orbit tests.
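As an aside on the non-dimensional parameters mentioned earlier, a short sketch makes clear why surface tension governs slosh in orbit but not on the ground; all fluid properties and scales below are assumed, illustrative values rather than data for any particular vehicle:

```python
def bond(rho, accel, length, sigma):
    """Bond number: body-force (gravity/acceleration) effects vs. surface tension."""
    return rho * accel * length**2 / sigma

def weber(rho, vel, length, sigma):
    """Weber number: inertial effects vs. surface tension."""
    return rho * vel**2 * length / sigma

def reynolds(rho, vel, length, mu):
    """Reynolds number: inertial effects vs. viscous effects."""
    return rho * vel * length / mu

# Assumed liquid-oxygen-like properties in a 1 m tank
rho, sigma, mu, L = 1140.0, 0.013, 2.0e-4, 1.0  # kg/m^3, N/m, Pa*s, m
print(bond(rho, 9.81, L, sigma))   # ~8.6e5 on the ground: gravity dominates
print(bond(rho, 1e-5, L, sigma))   # ~0.9 in microgravity: surface tension matters
```

With the Bond number near unity in microgravity, surface tension can reshape the liquid slug and shift its eigenvalues, as noted above for Earth-orbiting satellites.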
The post-spacecraft mission extension ran 2.4 hours before the planned deorbit burn was executed. NASA's Launch Services Program is working on two ongoing slosh fluid dynamics experiments with partners: CRYOTE and SPHERES-Slosh. Additional small-scale demonstrations of cryogenic fluid management are planned by ULA with project CRYOTE in 2012–2014, leading to a ULA large-scale cryo-sat propellant depot test under the NASA flagship technology demonstrations program in 2015. SPHERES-Slosh, with the Florida Institute of Technology and the Massachusetts Institute of Technology, will examine how liquids move around inside containers in microgravity using the SPHERES Testbed on the International Space Station.

Sloshing in road tank vehicles
Liquid sloshing strongly and adversely influences the directional dynamics and safety performance of highway tank vehicles. Hydrodynamic forces and moments arising from liquid cargo oscillations in the tank under steering and/or braking maneuvers reduce the stability limit and controllability of partially-filled tank vehicles. Anti-slosh devices such as baffles are widely used in order to limit the adverse effect of liquid slosh on the directional performance and stability of tank vehicles. Since tankers often carry dangerous liquid cargoes such as ammonia, gasoline and fuel oils, the stability of partially-filled liquid cargo vehicles is very important. Optimization and slosh-reduction studies of fuel tanks of various shapes, such as elliptical, rectangular, modified-oval and generic cross-sections, have been performed at different filling levels using numerical, analytical and analog analyses. Most of these studies concentrate on the effects of baffles on sloshing, while the influence of the tank cross-section is largely ignored. The Bloodhound LSR 1,000 mph project car utilizes a liquid-fuelled rocket that requires a specially baffled oxidizer tank to prevent directional instability, rocket thrust variations and even oxidizer tank damage.

Practical effects
Sloshing or shifting cargo, water ballast, or other liquid (e.g., from leaks or fire fighting) can cause disastrous capsizing in ships due to the free surface effect; this can also affect trucks and aircraft. The effect of slosh is used to limit the bounce of a roller hockey ball. Water slosh can significantly reduce the rebound height of a ball, though some amounts of liquid seem to lead to a resonance effect. Many of the commonly available balls for roller hockey contain water to reduce the bounce height.

See also
Seiche, a phenomenon affecting lakes and other constrained bodies of water
Splash (fluid mechanics), other free surface phenomena
Succussion splash, audible medical sign

References

Other references
NASA (1969), Slosh suppression, May 1969, PDF, 36 p
NASA (1966), Dynamic behavior of liquids in moving containers with applications to propellants in space vehicle fuel tanks, Jan 1, 1966, PDF, 464 p

Fluid mechanics Fluid dynamics
Slosh dynamics
[ "Chemistry", "Engineering" ]
1,260
[ "Chemical engineering", "Civil engineering", "Piping", "Fluid mechanics", "Fluid dynamics" ]
10,178,335
https://en.wikipedia.org/wiki/Partial%20oxidation
Partial oxidation (POX) is a type of chemical reaction. It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use, for example in a fuel cell. A distinction is made between thermal partial oxidation (TPOX) and catalytic partial oxidation (CPOX).

Principle
Partial oxidation is a technically mature process in which natural gas or a heavy hydrocarbon fuel (heating oil) is mixed with a limited amount of oxygen in an exothermic process.

General reaction:
CnHm + (n/2) O2 → n CO + (m/2) H2

Idealized reaction for heating oil:
C12H24 + 6 O2 → 12 CO + 12 H2

Idealized reaction for coal:
C24H12 + 12 O2 → 24 CO + 6 H2

The formulas given for coal and heating oil show only a typical representative of these complex fuels. Water may be added to lower the combustion temperature and reduce soot formation. Yields are below stoichiometric due to some fuel being fully combusted to carbon dioxide and water.

TPOX
TPOX (thermal partial oxidation) reaction temperatures are dependent on the air-fuel ratio or oxygen-fuel ratio. Typical reaction temperatures are 1200 °C and above.

CPOX
In CPOX (catalytic partial oxidation) the use of a catalyst reduces the required temperature to around 800–900 °C. The choice of reforming technique depends on the sulfur content of the fuel being used. CPOX can be employed if the sulfur content is below 50 ppm. A higher sulfur content can poison the catalyst, so the TPOX procedure is used for such fuels. However, recent research shows that CPOX is possible with sulfur contents up to 400 ppm.

History
1926 – Vandeveer and Parr at the University of Illinois used oxygen to replace air.

See also
Hydrogen production
Industrial gas
PROX
Small stationary reformer
Glossary of fuel cell terms
Timeline of hydrogen technologies

References

Chemical reactions Hydrogen production Industrial gases Redox
Partial oxidation
[ "Chemistry" ]
478
[ "Redox", "Electrochemistry", "Industrial gases", "nan", "Chemical process engineering" ]
10,179,436
https://en.wikipedia.org/wiki/Tie%20line%20%28telephony%29
A tie line, also known as a tie trunk, is a telecommunication circuit between two telephone exchanges or two extensions of a private telephone system.

See also
Private branch exchange
Circuit ID
Leased line
Private line

References

Communication circuits
Tie line (telephony)
[ "Engineering" ]
44
[ "Telecommunications engineering", "Communication circuits" ]
7,885,048
https://en.wikipedia.org/wiki/Couple%20%28mechanics%29
In physics, a couple is a system of forces with a resultant (a.k.a. net or sum) moment of force but no resultant force. A more descriptive term is force couple or pure moment. Its effect is to impart angular momentum but no linear momentum. In rigid body dynamics, force couples are free vectors, meaning their effects on a body are independent of the point of application. The resultant moment of a couple is a special case of moment. A couple has the property that it is independent of reference point.

Simple couple
Definition
A couple is a pair of forces, equal in magnitude, oppositely directed, and displaced by a perpendicular distance or moment. The simplest kind of couple consists of two equal and opposite forces whose lines of action do not coincide. This is called a "simple couple". The forces have a turning effect or moment called a torque about an axis which is normal (perpendicular) to the plane of the forces. The SI unit for the torque of the couple is the newton metre.

If the two forces are F and −F, then the magnitude of the torque is given by the following formula:

τ = F d

where
τ is the moment of couple,
F is the magnitude of the force, and
d is the perpendicular distance (moment arm) between the two parallel forces.

The magnitude of the torque is equal to F d, with the direction of the torque given by the unit vector ê, which is perpendicular to the plane containing the two forces, positive being a counter-clockwise couple. When d is taken as a vector between the points of action of the forces, then the torque is the cross product of d and F, i.e.

τ = d × F

Independence of reference point
The moment of a force is only defined with respect to a certain point P (it is said to be the "moment about P") and, in general, when P is changed, the moment changes. However, the moment (torque) of a couple is independent of the reference point P: any point will give the same moment. In other words, a couple, unlike any more general moments, is a "free vector". (This fact is called Varignon's Second Moment Theorem.) The proof of this claim is as follows: Suppose there are a set of force vectors F1, F2, etc. that form a couple, with position vectors (about some origin P) r1, r2, etc., respectively. The moment about P is

M = r1 × F1 + r2 × F2 + ...

Now we pick a new reference point P′ that differs from P by the vector r. The new moment is

M′ = (r1 + r) × F1 + (r2 + r) × F2 + ...

Now the distributive property of the cross product implies

M′ = (r1 × F1 + r2 × F2 + ...) + r × (F1 + F2 + ...)

However, the definition of a force couple means that the resultant force is zero:

F1 + F2 + ... = 0

Therefore,

M′ = r1 × F1 + r2 × F2 + ... = M

This proves that the moment is independent of reference point, which is proof that a couple is a free vector.

Forces and couples
A force F applied to a rigid body at a distance d from the center of mass has the same effect as the same force applied directly to the center of mass and a couple Cℓ = Fd. The couple produces an angular acceleration of the rigid body at right angles to the plane of the couple. The force at the center of mass accelerates the body in the direction of the force without change in orientation. The general theorems are:

A single force acting at any point O′ of a rigid body can be replaced by an equal and parallel force F acting at any given point O and a couple with forces parallel to F whose moment is M = Fd, d being the separation of O and O′. Conversely, a couple and a force in the plane of the couple can be replaced by a single force, appropriately located.

Any couple can be replaced by another in the same plane of the same direction and moment, having any desired force or any desired arm.

Applications
Couples are very important in engineering and the physical sciences.
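Before turning to examples, the reference-point independence proved above is easy to verify numerically; a minimal sketch with arbitrary example values:

```python
import numpy as np

def total_moment(positions, forces, ref):
    """Sum of (r_i - ref) x F_i about an arbitrary reference point."""
    return sum(np.cross(r - ref, f) for r, f in zip(positions, forces))

# A simple couple: equal and opposite 10 N forces, lines of action 2 m apart
f = np.array([0.0, 10.0, 0.0])
positions = [np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])]
forces = [f, -f]

m1 = total_moment(positions, forces, np.array([0.0, 0.0, 0.0]))
m2 = total_moment(positions, forces, np.array([5.0, -3.0, 2.0]))
print(m1, m2)  # both [0, 0, 20]: tau = F*d = 10 N * 2 m about any point
```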
A few examples are:
The forces exerted by one's hand on a screwdriver
The forces exerted by the tip of a screwdriver on the head of a screw
Drag forces acting on a spinning propeller
Forces on an electric dipole in a uniform electric field
The reaction control system on a spacecraft
The forces exerted by hands on a steering wheel
"Rocking couples", a regular imbalance giving rise to vibration

See also
Traction (engineering)
Torque
Moment (physics)
Force

References
H.F. Girvin (1938) Applied Mechanics, §28 Couples, pp 33–34, Scranton, Pennsylvania: International Textbook Company.

Physical quantities Mechanics
Couple (mechanics)
[ "Physics", "Mathematics", "Engineering" ]
869
[ "Physical phenomena", "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Physical properties" ]
7,886,048
https://en.wikipedia.org/wiki/Scanning%20probe%20lithography
Scanning probe lithography (SPL) describes a set of nanolithographic methods to pattern material on the nanoscale using scanning probes. It is a direct-write, mask-less approach which bypasses the diffraction limit and can reach resolutions below 10 nm. It is considered an alternative lithographic technology, often used in academic and research environments. The term scanning probe lithography was coined after the first patterning experiments with scanning probe microscopes (SPM) in the late 1980s.

Classification
The different approaches towards SPL can be classified by their goal to either add or remove material, by the general nature of the process, either chemical or physical, or according to the driving mechanism of the probe-surface interaction used in the patterning process: mechanical, thermal, diffusive or electrical.

Overview
Mechanical/thermo-mechanical
Mechanical scanning probe lithography (m-SPL) is a nanomachining or nano-scratching top-down approach without the application of heat. Thermo-mechanical SPL applies heat together with a mechanical force, e.g. the indenting of polymers in the Millipede memory.

Thermal
Thermal scanning probe lithography (t-SPL) uses a heatable scanning probe in order to efficiently remove material from a surface without the application of significant mechanical forces. The patterning depth can be controlled to create high-resolution 3D structures.

Thermo-chemical
Thermochemical scanning probe lithography (tc-SPL) or thermochemical nanolithography (TCNL) employs the scanning probe tips to induce thermally activated chemical reactions that change the chemical functionality or the phase of surfaces. Such thermally activated reactions have been shown in proteins, organic semiconductors, electroluminescent conjugated polymers, and nanoribbon resistors. Furthermore, the deprotection of functional groups (sometimes involving temperature gradients), the reduction of oxides, and the crystallization of piezoelectric/ferroelectric ceramics have been demonstrated.

Dip-pen/thermal dip-pen
Dip-pen scanning probe lithography (dp-SPL) or dip-pen nanolithography (DPN) is a scanning probe lithography technique based on diffusion, where the tip is employed to create patterns on a range of substances by deposition of a variety of liquid inks. Thermal dip-pen scanning probe lithography or thermal dip-pen nanolithography (TDPN) extends the usable inks to solids, which can be deposited in their liquid form when the probes are pre-heated.

Oxidation
Oxidation scanning probe lithography (o-SPL), also called local oxidation nanolithography (LON), scanning probe oxidation, nano-oxidation, local anodic oxidation, or AFM oxidation lithography, is based on the spatial confinement of an oxidation reaction.

Bias induced
Bias-induced scanning probe lithography (b-SPL) uses the high electrical fields created at the apex of a probe tip when voltages are applied between tip and sample to facilitate and confine a variety of chemical reactions that decompose gases or liquids, in order to locally deposit and grow materials on surfaces.

Current induced
In current-induced scanning probe lithography (c-SPL), in addition to the high electrical fields of b-SPL, a focused electron current emanating from the SPM tip is used to create nanopatterns, e.g. in polymers and molecular glasses.

Magnetic
Various scanning probe techniques have been developed to write magnetization patterns into ferromagnetic structures, which are often described as magnetic SPL techniques.
Thermally-assisted magnetic scanning probe lithography (tam-SPL) operates by employing a heatable scanning probe to locally heat and cool regions of an exchange-biased ferromagnetic layer in the presence of an external magnetic field. This causes a shift in the hysteresis loop of exposed regions, pinning the magnetization in a different orientation compared to unexposed regions. The pinned regions remain stable even in the presence of external fields after cooling, allowing arbitrary nanopatterns to be written into the magnetization of the ferromagnetic layer.

In arrays of interacting ferromagnetic nano-islands such as artificial spin ice, scanning probe techniques have been used to write arbitrary magnetic patterns by locally reversing the magnetization of individual islands. Topological defect-driven magnetic writing (TMW) uses the dipolar field of a magnetized scanning probe to induce topological defects in the magnetization field of individual ferromagnetic islands. These topological defects interact with the island edges and annihilate, leaving the magnetization reversed. Another way of writing such magnetic patterns is field-assisted magnetic force microscopy patterning, where an external magnetic field slightly below the switching field of the nano-islands is applied and a magnetized scanning probe is used to locally raise the field strength above that required to reverse the magnetization of selected islands.

In magnetic systems where interfacial Dzyaloshinskii–Moriya interactions stabilize magnetic textures known as magnetic skyrmions, scanning-probe magnetic nanolithography has been employed for the direct writing of skyrmions and skyrmion lattices.

Comparison to other lithographic techniques
Being a serial technology, SPL is inherently slower than e.g. photolithography or nanoimprint lithography, while parallelization, as required for mass fabrication, is considered a large systems-engineering effort (see also Millipede memory). In terms of resolution, SPL methods bypass the optical diffraction limit that constrains photolithographic methods because they use scanning probes. Some probes have integrated in-situ metrology capabilities, allowing for feedback control during the write process. SPL works under ambient atmospheric conditions, without the need for ultra-high vacuum (UHV), unlike e-beam or EUV lithography.

References

Lithography (microfabrication) Nanotechnology Scanning probe microscopy
Scanning probe lithography
[ "Chemistry", "Materials_science", "Engineering" ]
1,245
[ "Microtechnology", "Materials science", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Lithography (microfabrication)" ]
7,886,807
https://en.wikipedia.org/wiki/HurriQuake
The HurriQuake nail was a construction nail designed by Ed Sutt for Bostitch, a division of Stanley Works, and patented in 2004. The nail was designed primarily to provide greater structural integrity for a building, especially against the forces of hurricanes and earthquakes. The HurriQuake nail won the Popular Science Best of What's New 2006 award for Home Technology and Best Innovation of the Year. The HurriQuake nail was discontinued in 2011.

Features
Starting at the bottom of the nail, the ring shanks on its lower half are enhanced with angular barbs, which add resistance if the nail is being pulled out. The middle of the nail has no extra features, which leaves the section most likely to be damaged during an earthquake thicker and less prone to damage. The area directly below the head of the nail features a spiral-style shank used to enhance the strength of the nail in holding boards together. This enhancement keeps boards that are nailed together from shifting under the forces of nature and weakening the joints overall. The final feature of this nail is the nail head, which is 25% larger than average, making it more resistant to being pulled completely through attached pieces of wood.

Design
Pre-development
The nail's design began when its inventor, civil engineer Ed Sutt, traveled to the Caribbean in the wake of Hurricane Marilyn. Sutt's trip to the Caribbean was part of a team effort examining the wreckage of the 80% of the island's homes and businesses that had been destroyed by the hurricane's winds. Examination of the destroyed homes showed that wood failure was not the cause of destruction; instead, the nails holding the wood together had failed, leading to the buildings' ultimate collapse. Sutt's research began after he took a research assistantship at the Clemson Wind Load Test Facility, which had received funding through a grant from FEMA. The grant was used to research wooden-framed structures and the relationship of their failure to wind velocity. The outcome of the project's research showed that the best way to strengthen a structure was to improve the fasteners that held the roofing and wall sheathing to the internal frame, and with this information, Sutt signed on as a fastener engineer for the Stanley subsidiary, Bostitch.

Development
As development began on the nail, there were three major causes of failure to be overcome. These were having the nail, head and all, rip through the sheathing; having the entire nail pull out of the frame; and having the nail's midsection snap under stress. Early research showed that the larger the head, the less chance there was of the nail being ripped through the sheathing. However, the difficult task was increasing the size of the nail's head while still making it compatible with popular nail gun models. After finding the ideal nail head size, the next task was preventing the entire nail from pulling out of the frame. This was overcome by adding barbed ring shanks around the lower portion of the nail. During the testing of the shanks, it was noted that above a certain point the ridges no longer strengthened the nail but instead weakened it by making it more susceptible to shearing. The final touch to the original prototype was a special high-carbon alloy designed by a metallurgist, which had the right combination of stiffness and pliability, giving it the highest possible strength.
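A toy calculation of the head-size effect described above, under the simplifying assumption that pull-through resistance scales with the head's annular bearing area and that the "25% larger" figure refers to head diameter (both the model and the diameters below are hypothetical, for illustration only):

```python
import math

def bearing_area(head_d, shank_d):
    """Annular bearing area of a nail head over the sheathing, in mm^2."""
    return math.pi / 4.0 * (head_d**2 - shank_d**2)

shank = 3.3                  # mm, assumed shank diameter
base_head = 7.0              # mm, assumed conventional head diameter
big_head = 1.25 * base_head  # head 25% larger, as described in the text

gain = bearing_area(big_head, shank) / bearing_area(base_head, shank)
print(f"bearing-area gain: {gain:.2f}x")  # ~1.7x for these assumed diameters
```

Even under this crude model, a modest increase in head diameter yields a disproportionate gain in the area resisting pull-through, consistent with the design rationale described above.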
Sheather plus and beyond
After analyzing hundreds of designs and finally arriving at what was believed to be the best one, Bostitch released the nail in 2005, labeled the Sheather Plus. Even though the new nail was stronger than most nails, the barbs, which added the much-needed holding power, weakened the joint by opening the hole too far. This caused the joint to be sloppy and wobbly, so the team went back to the drawing board, where the final feature was added to the nail. To compensate for the extra width of the nail hole due to the ring shanks, Sutt decided to add a thicker screw-shank to the portion of the nail directly under the head. This addition thickened the top portion of the nail, giving it a tighter joint as well as enhancing its overall holding power.

Testing
Independent tests of the nail's strength were conducted by several organizations, including Florida International University and the International Code Council. Those tests confirmed the claims by the researchers at Bostitch that they had created a better nail. Across the different tests, it was found that the new nail had twice the "uplift capacity" of other power-driven nails, as well as increasing a home's resistance to wind and increasing earthquake resistance by up to 50%. For further testing, Sutt asked for the assistance of Scott Schiff, the coordinator of the graduate program in civil engineering and engineering mechanics at Clemson University. Tests at the Clemson Wind Load Test Facility confirmed what had already been stated. With equipment to simulate the force of winds, roofs attached with traditional nails were pulled apart at around 13,500 pounds of force (60.1 kN). At forces up to 16,000 pounds (71 kN), walls built with the HurriQuake nail showed minimal wall movement. As the pressure increased to 17,000 pounds (75 kN), then 18,000 (80 kN), then 19,000 (85 kN), the walls began to make creaking and groaning noises, but they still stayed attached. As the test rig pushed 20,000 pounds (89 kN), the maximum it was capable of testing, it gave out, showing that the HurriQuake nail sustained 20,000 pounds of force (89 kN) and still was not sheared or completely pulled out.

References

External links
The HurriQuake nail was the lead segment in a 2006 episode of the Voice of America's science and technology program, "Our World," hosted by Art Chimes.

Fasteners Hardware (mechanical) Tropical cyclone preparedness Woodworking Earthquake engineering
HurriQuake
[ "Physics", "Technology", "Engineering" ]
1,240
[ "Structural engineering", "Machines", "Fasteners", "Physical systems", "Construction", "Civil engineering", "Earthquake engineering", "Hardware (mechanical)" ]
15,324,368
https://en.wikipedia.org/wiki/Nephelauxetic%20effect
The nephelauxetic effect is a term used in the inorganic chemistry of transition metals. It refers to a decrease in the Racah interelectronic repulsion parameter, given the symbol B, that occurs when a transition-metal free ion forms a complex with ligands. The name "nephelauxetic" comes from the Greek for cloud-expanding and was proposed by the Danish inorganic chemist C. K. Jørgensen. The presence of this effect highlights a disadvantage of crystal field theory, which treats metal-ligand interactions as purely electrostatic: the nephelauxetic effect reveals the covalent character in the metal-ligand interaction.

Racah parameter
The decrease in the Racah parameter B indicates that in a complex there is less repulsion between the two electrons in a given doubly occupied metal d-orbital than there is in the respective Mn+ gaseous metal ion, which in turn implies that the size of the orbital is larger in the complex. This electron cloud expansion effect may occur for one (or both) of two reasons. One is that the effective positive charge on the metal has decreased. Because the positive charge of the metal is reduced by any negative charge on the ligands, the d-orbitals can expand slightly. The second is that the act of overlapping with ligand orbitals and forming covalent bonds increases orbital size, because the resulting molecular orbital is formed from two atomic orbitals.

The reduction of B from its free-ion value is normally reported in terms of the nephelauxetic parameter β:

β = B(complex) / B(free ion)

Experimentally, it is observed that the size of the nephelauxetic effect always follows a certain trend with respect to the nature of the ligands present.

Ligands
The list below shows some common ligands in order of increasing nephelauxetic effect:

F− < H2O < NH3 < en < [NCS]− (N-bonded) < Cl− < [CN]− < Br− < N3− < I−

Although parts of this series may seem quite similar to the spectrochemical series of ligands - for example, cyanide, ethylenediamine, and fluoride seem to occupy similar positions in the two - others such as chloride, iodide and bromide (amongst others) occupy very different positions. The ordering roughly reflects the ability of the ligands to form good covalent bonds with metals - those that have a small effect are at the start of the series, whereas those that have a large effect are at the end of the series.

Central metal ion
The nephelauxetic effect depends not only upon the ligand type, but also upon the central metal ion. These too can be arranged in order of increasing nephelauxetic effect as follows:

Mn(II) < Ni(II) ≈ Co(II) < Mo(II) < Re(IV) < Fe(III) < Ir(III) < Co(III) < Mn(IV)

See also
Spectrochemical series
Complex (chemistry)

References

Further reading
Housecroft C.E. and Sharpe A.G., Inorganic Chemistry, 2nd Edition, England, Pearson Education Limited, 2005. p. 578.
Shriver D.F and Atkins P.W, Inorganic Chemistry, 4th Edition, England, Oxford University Press, 2006. p. 483.

Coordination chemistry Spectroscopy
Nephelauxetic effect
[ "Physics", "Chemistry" ]
692
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Coordination chemistry", "Spectroscopy" ]
15,330,149
https://en.wikipedia.org/wiki/Protein%20Structure%20Initiative
The Protein Structure Initiative (PSI) was a USA-based project that aimed to accelerate discovery in structural genomics and contribute to the understanding of biological function. Funded by the U.S. National Institute of General Medical Sciences (NIGMS) between 2000 and 2015, its aim was to reduce the cost and time required to determine three-dimensional protein structures and to develop techniques for solving challenging problems in structural biology, including membrane proteins. Over a dozen research centers were supported by the PSI for work in building and maintaining high-throughput structural genomics pipelines, developing computational protein structure prediction methods, organizing and disseminating information generated by the PSI, and applying high-throughput structure determination to study a broad range of important biological and biomedical problems.

The project was organized into three separate phases. The first phase of the Protein Structure Initiative (PSI-1) spanned from 2000 to 2005, and was dedicated to demonstrating the feasibility of high-throughput structure determination, solving unique protein structures, and preparing for a subsequent production phase. The second phase, PSI-2, focused on implementing the high-throughput structure determination methods developed in PSI-1, as well as on homology modeling and on addressing bottlenecks like modeling membrane proteins. The third phase, PSI:Biology, began in 2010 and consisted of networks of investigators applying high-throughput structure determination to study a broad range of biological and biomedical problems. The PSI program ended on July 1, 2015, although some of the PSI centers continue structure determination supported by other funding mechanisms.

Phase 1
The first phase of the Protein Structure Initiative (PSI-1) lasted from June 2000 until September 2005, and had a budget of $270 million funded primarily by NIGMS with support from the National Institute of Allergy and Infectious Diseases. PSI-1 saw the establishment of nine pilot centers focusing on structural genomics studies of a range of organisms, including Arabidopsis thaliana, Caenorhabditis elegans and Mycobacterium tuberculosis. During this five-year period over 1,100 protein structures were determined, over 700 of which were classified as "unique" due to their < 30% sequence similarity with other known protein structures. The primary goal of PSI-1, to develop methods to streamline the structure determination process, resulted in an array of technical advances. Several methods developed during PSI-1 enhanced expression of recombinant proteins in systems like Escherichia coli, Pichia pastoris and insect cell lines. New streamlined approaches to cell cloning, expression and protein purification were also introduced, in which robotics and software platforms were integrated into the protein production pipeline to minimize required manpower, increase speed, and lower costs.

Phase 2
The second phase of the Protein Structure Initiative (PSI-2) lasted from July 2005 to June 2010. Its goal was to use methods introduced in PSI-1 to determine a large number of protein structures and to continue development in streamlining the structural genomics pipeline. PSI-2 had a five-year budget of $325 million provided by NIGMS with support from the National Center for Research Resources. By the end of this phase, the Protein Structure Initiative had solved over 4,800 protein structures; over 4,100 of these were unique. The number of sponsored research centers grew to 14 during PSI-2.
Four centers were selected as Large Scale centers, with a mandate to place 15% effort on targets nominated by the broader research community, 15% on targets of biomedical relevance, and 70% on broad structural coverage; these centers were the Joint Center for Structural Genomics (JCSG), the Midwest Center for Structural Genomics (MCSG), the Northeast Structural Genomics Consortium (NESG), and the New York SGX Research Center for Structural Genomics (NYSGXRC). The new centers participating in PSI-2 included four specialized centers: the Accelerated Technologies Center for Gene to 3D Structure (ATCG3D), the Center for Eukaryotic Structural Genomics (CESG), the Center for High-Throughput Structural Biology (CHTSB, a branch of the Structural Genomics of Pathogenic Protozoa Consortium taking that institution's place), the Center for Structures of Membrane Proteins (CSMP), and the New York Consortium on Membrane Protein Structure (NYCOMPS). Two homology modeling centers, the Joint Center for Molecular Modeling (JCMM) and New Methods for High-Resolution Comparative Modeling (NMHRCM), were also added, as well as two resource centers, the PSI Materials Repository (PSI-MR) and the PSI Structural Biology Knowledgebase (SBKB). The TB Structural Genomics Consortium was removed from the roster of supported research centers in the transition from PSI-1 to PSI-2.

Originally launched in February 2008, the SBKB is a free resource that provides protein sequence and keyword searching, as well as modules describing target selection, experimental protocols, structure models, functional annotation, metrics on overall progress, and updates on structure determination technology. Like the PDB, it is directed by Dr. Helen M. Berman and hosted at Rutgers University.

The PSI Materials Repository, established in 2006 at the Harvard Institute of Proteomics, stores and ships PSI-generated plasmid clones. Clones are sequence-verified, annotated and stored in the DNASU Plasmid Repository, currently located at the Biodesign Institute at Arizona State University. As of September 2011, there are over 50,000 PSI-generated plasmid clones and empty vectors available for request through DNASU, in addition to over 147,000 clones generated from non-PSI sources. Plasmids are distributed to researchers worldwide. Now called the PSI:Biology Materials Repository, this resource has a five-year budget of $5.4 million and is under the direction of Dr. Joshua LaBaer, who moved to Arizona State University in the middle of 2009, taking the PSI:Biology-MR with him.

Phase 3
The third phase of the PSI was called PSI:Biology, a name intended to reflect the emphasis on the biological relevance of the work. During this phase, highly organized networks of investigators applied the new paradigm of high-throughput structure determination, successfully developed during the earlier phases of the PSI, to study a broad range of important biological and biomedical problems. The network included centers for high-throughput structure determination, centers for membrane protein structure determination, consortia for high-throughput-enabled structural biology partnerships, the SBKB and the PSI-MR. In September 2013 NIH announced that the PSI would not be renewed after its third phase ended in 2015.

Impact
As of January 2006, about two thirds of worldwide structural genomics (SG) output was produced by PSI centers. Of these PSI contributions over 20% represented new Pfam families, compared to the non-SG average of 5%.
Pfam families represent structurally distinct groups of proteins as predicted from sequenced genomes. Avoiding homologs of known structure was accomplished by using sequence comparison tools like BLAST and PSI-BLAST. As with the difference in novelty as determined by the discovery of new Pfam families, the PSI also discovered more SCOP folds and superfamilies than non-SG efforts. In 2006, 16% of structures solved by the PSI represented new SCOP folds and superfamilies, while the non-SG average was 4%. Solving such novel structures reflects increased coverage of protein fold space, one of the PSI's main goals. Determining the structure of a novel protein allows homology modeling to more accurately predict the fold of other proteins in the same structural family.

While most of the structures solved by the four large-scale PSI centers lack functional annotation, many of the remaining PSI centers determined structures for proteins with known biological function. The TB Structural Genomics Consortium, for example, focused exclusively on functionally characterized proteins. During its term in PSI-1, it deposited structures for over 70 unique proteins from Mycobacterium tuberculosis, which represented more than 35% of total unique M. tuberculosis structures solved through 2007. In keeping with its biomedical theme of increasing coverage of phosphatomes, the NYSGXRC determined structures for about 10% of all human phosphatases.

The PSI consortia have provided the overwhelming majority of targets for the Critical Assessment of Techniques for Protein Structure Prediction (CASP), a community-wide, biennial experiment to determine the state and progress of protein structure prediction.

A major goal during the PSI:Biology phase was to utilize the high-throughput methods developed during the initiative's first decade to generate protein structures for functional studies, broadening the PSI's biomedical impact. It was also expected to advance knowledge and understanding of membrane proteins.

Criticism
The PSI received notable criticism from the structural biology community. Among these charges is that the main product of the PSI – PDB files of proteins' atomic coordinates as determined by X-ray crystallography or NMR spectroscopy – is not useful enough to biologists to justify the project's $764 million cost. Critics note that money spent on the PSI could have otherwise funded what they consider worthier causes. A short response to this criticism was published.

In October 2008 the NIGMS hosted a meeting concerning the future of structural genomics efforts and invited speakers from the PSI Advisory Committee, members of the NIGMS Advisory Council, and interested scientists who had no previous involvement with the PSI. Representatives of other genomics, proteomics, and structural genomics initiatives, as well as scientists from academia, government, and industry, were also included. Based on this meeting and the subsequent recommendations from the PSI Advisory Committee, a concept-clearance document was released in January 2009 describing what a third phase of the PSI might entail. Most notable was a large emphasis on partnerships and collaborations to ensure that the majority of PSI research is focused on proteins of interest to the broader research community, as well as efforts to make PSI products more accessible to the research community. Grant applications for PSI:Biology were submitted by October 29, 2009. See the Phase 3 section above.
External links Protein Structure Initiative (PSI) PSI:Biology Funded Centers and Grants Structural Biology Knowledgebase PSI:Biology-Materials Repository Open Protein Structure Annotation Network (TOPSAN), a wiki for annotation of protein structures determined by the PSI References Protein structure Genome projects
Protein Structure Initiative
[ "Chemistry", "Biology" ]
2,119
[ "Protein structure", "Genome projects", "Structural biology" ]
225,721
https://en.wikipedia.org/wiki/Honeywell
Honeywell International Inc. is an American publicly traded, multinational conglomerate corporation headquartered in Charlotte, North Carolina. It primarily operates in four areas of business: aerospace, building automation, industrial automation, and energy and sustainability solutions (ESS). Honeywell is a Fortune 500 company, ranked 115th in 2023. In 2023, the corporation had a global workforce of approximately 95,000 employees. The current chairman and chief executive officer (CEO) is Vimal Kapur. The corporation's name, Honeywell International Inc., is a product of the merger of Honeywell Inc. and AlliedSignal in 1999. The corporation headquarters were consolidated with AlliedSignal's headquarters in Morristown, New Jersey. The combined company chose the name "Honeywell" because of the considerable brand recognition. Honeywell was a component of the Dow Jones Industrial Average index from 1999 to 2008. Prior to 1999, its corporate predecessors were included dating back to 1925, including early entrants in the computing and thermostat industries. In 2020, Honeywell rejoined the Dow Jones Industrial Average index. In 2021, it moved its stock listing from the New York Stock Exchange to the Nasdaq. History The company traces its origins to 1885, when the Swiss-born Albert Butz invented the damper-flapper, a thermostat used to control coal furnaces, bringing automated heating system regulation into homes. In 1886, Butz founded the Butz Thermo-Electric Regulator Company to commercialize the invention. In 1888, after a falling out with his investors, Butz left the company and transferred the patents to the legal firm Paul, Sanford, and Merwin, who renamed the company the Consolidated Temperature Controlling Company (CTCC). As the years passed, CTCC struggled with debt, and the company underwent several name changes. After the company was renamed the Electric Heat Regulator Company in 1893, W.R. Sweatt, a stockholder, purchased "an extensive list of patents" and was named secretary-treasurer. By 1900, Sweatt had bought out the remaining shares of the company from the other stockholders. 1906 Honeywell Heating Specialty Company founded In 1906, Mark Honeywell founded the Honeywell Heating Specialty Company in Wabash, Indiana, to manufacture and market his invention, the mercury seal generator. 1922–1934 Mergers and acquisitions As Honeywell's company grew, thanks in part to the 1922 acquisition of the Jewell Manufacturing Company to better automate his heating system, it began to clash with the Electric Heat Regulator Company, by then renamed the Minneapolis Heat Regulator Company. In 1927, this led to the merging of both companies into the publicly held Minneapolis-Honeywell Regulator Company. Honeywell was named the company's first president, alongside W.R. Sweatt as its first chairman. In 1929, combined assets were valued at over $3.5 million, with less than $1 million in liabilities, just months before Black Monday. In 1931, Minneapolis-Honeywell began a period of expansion and acquisition when they purchased the Time-O-Stat Controls Company, giving the company access to a greater number of patents for their controls systems. W.R. Sweatt and his son Harold provided 75 years of uninterrupted leadership for the company. W.R. Sweatt survived rough spots and turned an innovative idea – thermostatic heating control – into a thriving business. 
1934–1941 International growth Harold took over in 1934, leading Honeywell through a period of growth and global expansion that set the stage for Honeywell to become a global technology leader. The merger into the Minneapolis-Honeywell Regulator Company proved to be a saving grace for the corporation. 1934 marked Minneapolis-Honeywell's first foray into the international market, when they acquired the Brown Instrument Company and inherited its relationship with the Yamatake Company of Tokyo, a Japan-based distributor. Later in 1934, Minneapolis-Honeywell started distributorships across Canada, as well as one in the Netherlands, their first European office. This expansion into international markets continued in 1936, with their first distributorship in London, as well as their first foreign assembly facility being established in Canada. By 1937, ten years after the merger, Minneapolis-Honeywell had over 3,000 employees, with $16 million in annual revenue. World War II With the outbreak of World War II, Minneapolis-Honeywell was approached by the US military for engineering and manufacturing projects. In 1941, Minneapolis-Honeywell developed a superior tank periscope, camera stabilizers, and the C-1 autopilot. The C-1 revolutionized precision bombing and was ultimately used on the two B-29 bombers that dropped atomic bombs on Japan in 1945. The success of these projects led Minneapolis-Honeywell to open an Aero division in Chicago on October 5, 1942. This division was responsible for the development of the formation stick to control autopilots, more accurate fuel quantity indicators for aircraft, and the turbo supercharger. In 1950, Minneapolis-Honeywell's Aero division was contracted for the controls on the first US nuclear submarine, USS Nautilus. In 1951, the company acquired the Intervox Company for its sonar, ultrasonic, and telemetry technologies. Honeywell also helped develop and manufacture the RUR-5 ASROC for the US Navy. 1950–1970s In 1953, in cooperation with the USAF Wright-Air Development Center, Honeywell developed an automated control unit that could control an aircraft through various stages of a flight, from taxiing to takeoff to the point where the aircraft neared its destination and the pilot took over for landing. Called the Automatic Master Sequence Selector, the onboard control operated similarly to a player piano to relay instructions to the aircraft's autopilot at certain waypoints during the flight, significantly reducing the pilot's workload. Technologically, this effort had parallels to contemporary efforts in missile guidance and numerical control. Honeywell also developed the Wagtail missile with the USAF. From the 1950s until the mid-1970s, Honeywell was the United States' importer of the Japanese company Asahi Optical's Pentax cameras and photographic equipment. These products were labeled "Heiland Pentax" and "Honeywell Pentax" in the U.S. In 1953, Honeywell introduced its most famous product, the T-86 Round thermostat. In 1961, James H. Binger became Honeywell's president and in 1965 its chairman. Binger revamped the company's sales approach, placing emphasis on profits rather than on volume. He stepped up the company's international expansion – it had six plants producing 12% of the company's revenue. He officially changed the company's corporate name from "Minneapolis-Honeywell Regulator Co." to "Honeywell" to better reflect its colloquial name. 
Throughout the 1960s, Honeywell continued to acquire other businesses, including the Security Burglar Alarm Company in 1969. In the 1970s, after one member of a group called FREE at the University of Minnesota's Minneapolis campus asked five major companies with local offices to explain their attitudes toward gay men and women, three responded quickly, insisting that they did not discriminate against gay people in their hiring policies. Only Honeywell objected to hiring gay people. Later in the 1970s, when faced with a denial of access to students, Honeywell "quietly [reversed] its hiring policy". The beginning of the 1970s saw Honeywell focus on process controls, with Honeywell merging their computer operations with GE's information systems in 1970, and later acquiring GE's process control business. With the acquisition, Honeywell took over responsibility for GE's ongoing Multics operating system project. The design and features of Multics greatly influenced the Unix operating system. Multics also influenced many of the features of Honeywell/GE's GECOS and GCOS8 General Comprehensive Operating System operating systems. Honeywell, Groupe Bull, and Control Data Corporation formed a joint venture in Magnetic Peripherals Inc., which became a major player in the hard disk drive market. Honeywell was the worldwide leader in 14-inch disk drive technology in the OEM marketplace in the 1970s and early 1980s, especially with its SMD (Storage Module Drive) and CMD (Cartridge Module Drive). In the second half of the 1970s, Honeywell started to look to international markets again, acquiring the French Compagnie Internationale pour l'Informatique in 1976. In 1984, Honeywell formed Honeywell High Tech Trading to lease their foreign marketing and distribution to other companies abroad, in order to establish a better position in those markets. Under Binger's stewardship from 1961 to 1978, he expanded the company into such fields as defense, aerospace, and computing. During and after the Vietnam era, Honeywell's defense division produced a number of products, including cluster bombs, missile guidance systems, napalm, and land mines. Minneapolis-Honeywell completed flight tests on an inertial guidance subsystem for the X-20 project at Eglin Air Force Base, Florida, utilizing an NF-101B Voodoo, by August 1963. The X-20 project was canceled in December 1963. The Honeywell Project, founded in 1968, organized protests against the company to persuade it to abandon weapons production. In 1980, Honeywell bought Incoterm Corporation to compete in both the airline reservations system networks and bank teller markets. Honeywell Information Systems In April 1955, Minneapolis-Honeywell started a joint venture with Raytheon called Datamatic to enter the computer market and compete with IBM. In 1957, their first computer, the DATAmatic 1000, was sold and installed. In 1960, just five years after embarking on this venture with Raytheon, Minneapolis-Honeywell bought Raytheon's interest in Datamatic and turned it into the Electronic Data Processing division, later Honeywell Information Systems (HIS), of Minneapolis-Honeywell. Honeywell purchased minicomputer pioneer Computer Control Corporation (3C's) in 1966, renaming it as Honeywell's Computer Control Division. Through most of the 1960s, Honeywell was one of the "Snow White and the Seven Dwarfs" of computing. 
IBM was "Snow White", while the dwarfs were the seven significantly smaller computer companies: Burroughs, Control Data Corporation, General Electric, Honeywell, NCR, RCA, and UNIVAC. Later, when their number had been reduced to five, they were known as "The BUNCH", after their initials: Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell. In 1970, Honeywell acquired GE's computer business, rebadging General Electric's 600-series mainframes as Honeywell 6000 series computers, supporting GCOS, Multics, and CP-6, while forming Honeywell Information Systems. In 1973, they shipped a high-speed non-impact printer called the Honeywell Page Printing System. In 1975, it purchased Xerox Data Systems, whose Sigma computers had a small but loyal customer base. Some of Honeywell's systems were minicomputers, such as their Series 60 Model 6 and Model 62 and their Honeywell 200. The latter was an attempt to penetrate the IBM 1401 market. In 1987, HIS was merged into Honeywell Bull, a global joint venture with Compagnie des Machines Bull of France and NEC Corporation of Japan. In 1988 Honeywell Bull was consolidated into Groupe Bull, and in 1989 it was renamed Bull, a Worldwide Information Systems Company. By 1991, Honeywell was no longer involved in the computer business. 1985–1999 integrations Aerospace and defense 1986 marked a new direction for Honeywell, beginning with the acquisition of the Sperry Aerospace Group from the Unisys Corporation. In 1990, Honeywell spun off its Defense and Marine Systems business into Alliant Techsystems, as well as its Test Instruments division and Signal Analysis Center, to streamline the company's focus. Honeywell continues to supply aerospace products including electronic guidance systems, cockpit instrumentation, lighting, and primary propulsion and secondary power turbine engines. In 1996, Honeywell acquired Duracraft and began marketing its products in the home comfort sector. Honeywell is in the consortium that runs the Pantex Plant that assembles all of the nuclear bombs in the United States arsenal. Honeywell Federal Manufacturing & Technologies, successor to the defense products of AlliedSignal, operates the Kansas City Plant, which produces and assembles 85 percent of the non-nuclear components of the bombs. Home and building controls Honeywell began the SmartHouse project to combine heating, cooling, security, lighting, and appliances into one easily controlled system. They continued the trend in 1987 by releasing new security systems and fire and radon detectors. In 1992, in another streamlining effort, Honeywell combined their Residential Controls, Commercial Systems, and Protection Services divisions into Home and Building Control, which then acquired the Enviracare air cleaner business. By 1995, Honeywell had condensed into three divisions: Space and Aviation Control, Home and Building Control, and Industrial Control. Industrial control Honeywell dissolved its partnership with the Yamatake Company and consolidated its Process Control Products Division, Process Management System Division, and Micro Switch Division into one Industrial Control Group in 1998. It had earlier acquired Measurex and Leeds & Northrup in 1997 to strengthen its portfolio. 1999–2002 merger, takeovers AlliedSignal and Pittway On June 7, 1999, Honeywell was acquired by AlliedSignal, which elected to retain the Honeywell name for its brand recognition. The former Honeywell moved its headquarters of 114 years to AlliedSignal's in Morristown, New Jersey. 
While "technically, the deal looks more like an acquisition than a merger...from a strategic standpoint, it is a merger of equals." AlliedSignal's 1998 revenue was reported at $15.1 billion to Honeywell's $8.4 billion, but together the companies share huge business interests in aerospace, chemical products, automotive parts, and building controls. The corporate headquarters were consolidated to AlliedSignal's headquarters in Morristown, New Jersey, rather than Honeywell's former headquarters in Minneapolis, Minnesota. When Honeywell closed its corporate headquarters in Minneapolis, over one thousand employees lost their jobs. A few moved to Morristown or other company locations, but the majority were forced to find new jobs or retire. Soon after the merger, the company's stock fell significantly, and did not return to its pre-merger level until 2007. In 2000, the new Honeywell acquired Pittway for $2.2 billion to gain a greater share of the fire-protection and security systems market, and merged it into their Home and Building Control division, taking on Pittway's $167 million in debt. Analyst David Jarrett commented that "while Honeywell offered a hefty premium, it's still getting Pittway for a bargain" at $45.50 per share, despite closing at $29 the week before. Pittway's Ademco products complemented Honeywell's existing unified controls systems. General Electric Company In October 2000, Honeywell, then valued at over $21 billion, accepted a takeover bid from then-CEO Jack Welch of General Electric. The American Department of Justice cleared the merger, while "GE teams swooped down on Honeywell" and "GE executives took over budget planning and employee reviews." However, on July 3, 2001, the European Commission's competition commissioner, Mario Monti, blocked the move. This decision was taken on the grounds that with GE's dominance of the large jet engine market, led by the General Electric CF34 turbofan engine, its leasing services (GECAS), and Honeywell's portfolio of regional jet engines and avionics, the new company would be able to "bundle" products and stifle competition through the creation of a horizontal monopoly. US regulators disagreed, finding that the merger would improve competition and reduce prices; United States Assistant Attorney General Charles James called the EU's decision "antithetical to the goals of antitrust law enforcement." This led to a drop in morale and general tumult throughout Honeywell. The then-CEO Michael Bonsignore was fired as Honeywell looked to turn their business around. 2002–2014 acquisitions and further expansion In January 2002, Knorr-Bremse —who had been operating in a joint venture with Honeywell International Inc. —assumed full ownership of its ventures in Europe, Brazil, and the USA. Bendix Commercial Vehicle Systems became a subsidiary of Knorr-Bremse AG. In February 2002, Honeywell's board appointed their next CEO and chairman, David M. Cote. Since 2002, Honeywell has made more than 80 acquisitions and 60 divestures, and increasing its labor force to 131,000 as a result of these acquisitions. Honeywell's stock nearly tripled from $35.23 in April 2002 to $99.39 in January 2015. Honeywell made a £1.2bn ($2.3bn) bid for Novar plc in December 2004. The acquisition was finalized in March 2005. In October 2005, Honeywell bought out Dow's 50% stake in UOP for $825 million, giving them complete control over the joint venture in petrochemical and refining technology. 
In May 2010, Honeywell outbid UK-based Cinven and acquired the French company Sperian Protection for $1.4 billion, which was then incorporated into its automation and controls safety unit. 2015–present In 2015, the headquarters were moved to Morris Plains, New Jersey. The headquarters in Morris Plains included a 475,000-square-foot building on 40 acres. In December 2015, Honeywell acquired Elster for US$5.1B, entering the space of gas, electricity, and water meters with a specific focus on smart meters. Honeywell International Inc. then acquired the 30% stake in UOP Russell LLC it did not already own for roughly $240 million in January 2016. In April 2016, Honeywell acquired Xtralis, a provider of aspirating smoke detection, perimeter security technologies, and video analytics software, for $480 million, from funds advised by Pacific Equity Partners and Blum Capital Partners. In May 2016, Honeywell International Inc. settled its patent dispute with Google subsidiary Nest Labs, whose thermostats Honeywell claimed infringed on several of its patents. Google parent Alphabet Inc. and Honeywell said they reached a "patent cross-license" agreement that "fully resolves" the long-standing dispute. Honeywell had sued Nest Labs in 2012. In 2017, Honeywell opened a new software center in Atlanta, Georgia. David Cote stepped down as CEO on April 1, 2017, and was succeeded by Darius Adamczyk, who had been promoted to president and chief operating officer (COO) in 2016. Cote served as executive chairman until April 2018. In October 2017, Honeywell announced plans to spin off its Homes, ADI Global Distribution, and Transportation Systems businesses into two separate, publicly traded companies by the end of 2018. In 2018, Honeywell spun off both Honeywell Turbo Technologies, now Garrett Advancing Motion, and its consumer products business, Resideo. Both companies are publicly traded on the New York Stock Exchange. For the fiscal year 2019, Honeywell reported net income of US$6.230 billion, with an annual revenue of US$36.709 billion, a decrease of 19.11% over the previous fiscal cycle. Honeywell's market capitalization was valued at over US$113.25 billion in September 2020. Honeywell relocated its corporate headquarters to Charlotte, North Carolina, in October 2019, having moved employees into a temporary headquarters building in Charlotte in July 2019, before the new building was complete. In 2020, Honeywell Forge launched as an analytics platform software for industrial and commercial applications such as aircraft, buildings, industrial operations, workers, and cyber-security. In collaboration with Carnegie Mellon University's National Robotics Engineering Center, Honeywell Robotics was created in Pittsburgh to focus on supply chain transformation. The Honeywell robotic unloader grabs packages in tractor-trailers and then places them on conveyor belts for handlers to sort. In May 2019, GoDirect Trade launched as an online marketplace for surplus aircraft parts such as engines, electronics, and APU parts. In March 2020, Honeywell announced a quantum computer based on trapped ions, with an expected quantum volume of at least 64; Honeywell's CEO called it the world's most powerful quantum computer. In November 2021, Honeywell announced the spinoff of its quantum division into a separate company named "Quantinuum". In March 2023, Honeywell announced Vimal Kapur as its next CEO, effective June 1, 2023. 
In December 2023, Honeywell acquired Carrier Global's security business for nearly $5 billion to boost its automation portfolio. In February 2024, Honeywell filed a lawsuit against Lone Star Aerospace, Inc., alleging that Lone Star's software products infringe five Honeywell patents. On October 1, 2024, Honeywell partnered with Google to integrate its data with generative AI, with the aim of streamlining autonomous operations for its customers. COVID-19 pandemic In response to the COVID-19 pandemic, Honeywell converted some of its manufacturing facilities in Rhode Island, Arizona, Michigan, and Germany to produce supplies of personal protective equipment for healthcare workers. In April 2020, Honeywell began production of N95 masks at the company's factories in Smithfield and Phoenix, aiming to produce 20 million masks a month. Honeywell's facilities in Muskegon and Germany were converted to produce hand sanitizer for government agencies. Several state governments contracted Honeywell to produce N95 particulate-filtering face masks during the pandemic. The North Carolina Task Force for Emergency Repurposing of Manufacturing (TFERM) awarded Honeywell a contract for the monthly delivery of 100,000 N95 masks. In April 2020, Los Angeles Mayor Eric Garcetti announced a deal with Honeywell to produce 24 million N95 masks to distribute to healthcare workers and first responders. In May 2020, United States President Donald Trump visited the Honeywell Aerospace Technologies facility in Phoenix, where he acknowledged the "incredibly patriotic and hard-working men and women of Honeywell" for making N95 masks and referred to the company's production as a "miraculous achievement". In April 2021, Will.i.am and Honeywell collaborated on Xupermask, a mask made of silicone and athletic mesh fabric that has LED lights, 3-speed fans, and noise-canceling headphones built in. In November 2024, Honeywell announced its intention to sell its personal protective equipment business to Protective Industrial Products for almost $1.33 billion in cash. The sale of the PPE business is expected to close by the first half of 2025. After the divestment, the company plans to retain its gas detection portfolio. Business groups The company operates four business groups – Honeywell Aerospace Technologies, Building Automation, Safety and Productivity Solutions (SPS), and Performance Materials and Technologies (PMT). Business units within the company are as follows: Honeywell Aerospace Technologies provides avionics, aircraft engines, flight management systems, and service solutions to manufacturers, airlines, airport operations, militaries, and space programs. It comprises Commercial Aviation, Defense & Space, and Business & General Aviation. In January 2014, Honeywell Aerospace Technologies launched its SmartPath Precision Landing System at Malaga-Costa del Sol Airport in Spain, which augments GPS signals to make them suitable for precision approach and landing before broadcasting the data to approaching aircraft. In July 2014, Honeywell's Transportation Systems merged with the Aerospace division due to similarities between the businesses. In April 2018, Honeywell announced plans to develop laser communication products for satellite communication in collaboration with Ball Aerospace, with future volume production planned. In June 2018, Honeywell spun off and rebranded its Transportation Systems as Garrett. 
Building Automation and Honeywell Safety and Productivity Solutions were created when Automation and Control Solutions was split in two in July 2016. Building Automation comprises Honeywell Building Solutions, Environmental and Energy Solutions, and Honeywell Security and Fire. In December 2017, Honeywell announced that it had acquired SCAME, an Italy-based company, to add new fire and gas safety capabilities to its portfolio. Honeywell Safety and Productivity Solutions comprises Scanning & Mobility, Sensing and Internet of Things, and Industrial Safety. Honeywell Performance Materials and Technologies comprises six business units: Honeywell UOP, Honeywell Process Solutions, Fluorine Products, Electronic Materials, Resins & Chemicals, and Specialty Materials. Products include process technology for oil and gas processing, fuels, films and additives, special chemicals, electronic materials, and renewable transport fuels. Corporate governance Honeywell's current chief executive officer is Vimal Kapur. Acquisitions since 2002 Honeywell's acquisitions have consisted largely of businesses aligned with the company's existing technologies. The acquired companies are integrated into one of Honeywell's four business groups (Aerospace Technologies (AT), Building Automation (BA), Safety and Productivity Solutions (SPS), or Performance Materials and Technologies (PMT)) but retain their original brand name. Environmental issues The United States Environmental Protection Agency states that no corporation has been linked to a greater number of Superfund toxic waste sites than Honeywell. In 2007, Honeywell ranked 44th in a list of US corporations most responsible for air pollution, releasing more than 4.25 million kg (9.4 million pounds) of toxins per year into the air. In 2001, Honeywell agreed to pay $150,000 in civil penalties and to perform $772,000 worth of reparations for environmental violations involving: failure to prevent or repair leaks of hazardous organic pollutants into the air; failure to repair or report refrigeration equipment containing chlorofluorocarbons; and inadequate reporting of benzene, ammonia, nitrogen oxide, dichlorodifluoromethane, sulfuric acid, sulfur dioxide, and caprolactam emissions. In 2003, a federal judge in Newark, New Jersey, ordered the company to perform an estimated $400 million environmental remediation of chromium waste, citing "a substantial risk of imminent damage to public health and safety and imminent and severe damage to the environment." In 2003, Honeywell paid $3.6 million to avoid a federal trial regarding its responsibility for trichloroethylene contamination in Lisle, Illinois. In 2004, the State of New York announced that it would require Honeywell to complete an estimated $448 million cleanup of more than 74,000 kg (165,000 lbs) of mercury and other toxic waste dumped into Onondaga Lake in Syracuse, New York, from a former Allied Chemical property. Honeywell established three water treatment plants by November 2014. The cleanup removed 7 tons of mercury from the site. In November 2015, Audubon New York gave the Thomas W. Keesee Jr. Conservation Award to Honeywell for its cleanup efforts in "one of the most ambitious environmental reclamation projects in the United States." By December 2017, Honeywell had completed dredging the lake. 
Later in December, the Department of Justice filed a settlement requiring Honeywell to pay a separate $9.5 million in damages, as well as build 20 restoration projects on the shore to help repair the greater area surrounding the lake. In 2005, the state of New Jersey sued Honeywell, Occidental Petroleum, and PPG to compel cleanup of more than 100 sites contaminated with chromium, a metal linked to lung cancer, ulcers, and dermatitis. In 2008, the state of Arizona reached a settlement with Honeywell to pay a $5 million fine and contribute $1 million to a local air-quality cleanup project, after allegations of breaking water-quality and hazardous-waste laws on hundreds of occasions between 1974 and 2004. In 2006, Honeywell announced that its decision to stop manufacturing mercury switches had resulted in reductions of more than 11,300 kg (24,900 lb) of mercury, 2,800 kg (6,200 lb) of lead, and 1,500 kg (3,300 lb) of chromic acid usage. The largest reduction represents 5% of mercury use in the United States. The EPA acknowledged Honeywell's leadership in reducing mercury use with a 2006 National Partnership for Environmental Priorities (NPEP) Achievement Award for discontinuing the manufacture of mercury switches. Carbon footprint Honeywell reported total CO2e emissions (direct plus indirect) for the twelve months ending 31 December 2020 of 2,248 kt, a year-over-year reduction of 89 kt (3.8%). Honeywell aims to reach net zero emissions by 2035. Criticism On March 10, 2013, The Wall Street Journal reported that Honeywell was one of sixty companies that shielded annual profits from U.S. taxes. In December 2011, the non-partisan organization Public Campaign criticized Honeywell International for spending $18.3 million on lobbying and not paying any taxes during 2008–2010, instead getting $34 million in tax rebates, despite making a profit of $4.9 billion, laying off 968 workers since 2008, and increasing executive pay by 15% to $54.2 million in 2010 for its top five executives. Honeywell has also been criticized in the past for its manufacture of deadly and maiming weapons, such as cluster bombs. Allegations of involvement in Gaza In June 2024, investigative reports from various sources alleged that components manufactured by Honeywell were used in a missile that targeted a school in Gaza. Al Jazeera's investigation traced the part's serial numbers back to Honeywell, raising concerns about U.S. involvement in these military operations. The attack resulted in numerous civilian casualties, sparking international condemnation. Honeywell has not provided a detailed response regarding these claims. 
See also List of Honeywell products and services Top 100 US Federal Contractors Explanatory notes References External links 1906 establishments in Minnesota Aerospace companies of the United States Aircraft component manufacturers of the United States Aircraft engine manufacturers of the United States Auto parts suppliers of the United States Avionics companies Companies based in Charlotte, North Carolina Companies formerly listed on the New York Stock Exchange Companies in the Dow Jones Industrial Average Companies listed on the Nasdaq Computer hardware companies Conglomerate companies established in 1906 Conglomerate companies of the United States Defense companies of the United States Computer companies of the United States Electrical wiring and construction supplies manufacturers Electronics companies established in 1906 Engineering companies of the United States Electronic design Home automation companies Instrument-making corporations Manufacturing companies based in North Carolina Manufacturing companies established in 1906 Technology companies established in 1906 Electronics companies of the United States
Honeywell
[ "Technology", "Engineering" ]
6,319
[ "Home automation", "Home automation companies", "Electronic design", "Computer hardware companies", "Electronic engineering", "Design", "Computers" ]
225,779
https://en.wikipedia.org/wiki/Program%20optimization
In computer science, program optimization, code optimization, or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or draw less power. Overview Although the term "optimization" is derived from "optimum", achieving a truly optimal system is rare in practice; the exhaustive search for a provably optimal program is referred to as superoptimization. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One popular example is the space-time tradeoff: reducing a program's execution time by increasing its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower algorithm to conserve space. There is rarely a single design that can excel in all situations, requiring engineers to prioritize attributes most relevant to the application at hand. Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually stop once sufficient improvements are achieved, without striving for perfection. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching diminishing returns. Levels of optimization Optimization can occur at a number of levels. Typically the higher levels have greater impact and are harder to change later in a project, requiring significant changes or a complete rewrite if they need to be changed. Thus optimization can typically proceed via refinement from higher to lower, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work. However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project (though this varies significantly), but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another, and these are typically curtailed when performance is acceptable or gains become too small or costly. Performance is part of the specification of a program: a program that is unusably slow is not fit for purpose (a video game rendering 60 frames per second is acceptable, but 6 frames per second is unacceptably choppy). Performance is therefore a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. 
This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow (often by an order of magnitude or more) and systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the Intel 432 (1981); or ones that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with HotSpot (1999). The degree to which performance changes between prototype and production system, and how amenable it is to optimization, can be a significant source of uncertainty and risk. Design level At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load. The architectural design of a system overwhelmingly affects its performance. For example, a system that is network latency-bound (where network latency is the main constraint on overall performance) would be optimized to minimize network trips, ideally making a single request (or no requests, as in a push protocol) rather than multiple roundtrips. Choice of design depends on the goals: when designing a compiler, if fast compilation is the key priority, a one-pass compiler is faster than a multi-pass compiler (assuming the same work), but if speed of output code is the goal, a slower multi-pass compiler fulfills the goal better, even though it takes longer itself. Choice of platform and programming language occur at this level, and changing them frequently requires a complete rewrite, though a modular system may allow rewriting only some component; for example, a Python program may have its performance-critical sections rewritten in C. In a distributed system, choice of architecture (client-server, peer-to-peer, etc.) occurs at the design level, and may be difficult to change, particularly if all components cannot be replaced in sync (e.g., old clients). Algorithms and data structures Given an overall design, a good choice of efficient algorithms and data structures, and efficient implementation of these algorithms and data structures, comes next. After design, the choice of algorithms and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data structure's assumptions and performance characteristics are relied on throughout the program, though this can be minimized by the use of abstract data types in function definitions, and keeping the concrete data structure definitions restricted to a few places. For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log n), linear O(n), or in some cases log-linear O(n log n) in the input (both in space and time). Algorithms with quadratic complexity O(n²) fail to scale, and even linear algorithms cause problems if repeatedly called; they are typically replaced with constant or logarithmic algorithms if possible. Beyond asymptotic order of growth, the constant factors matter: an asymptotically slower algorithm may be faster or smaller (because simpler) than an asymptotically faster algorithm when they are both faced with small input, which may be the case that occurs in reality. Often a hybrid algorithm will provide the best performance, due to this tradeoff changing with size. A general technique to improve performance is to avoid work. A good example is the use of a fast path for common cases, improving performance by avoiding unnecessary work. 
For example, using a simple text layout algorithm for Latin text, and only switching to a complex layout algorithm for complex scripts, such as Devanagari. Another important technique is caching, particularly memoization, which avoids redundant computations. Because of the importance of caching, there are often many levels of caching in a system, which can cause problems from memory use, and correctness issues from stale caches. Source code level Beyond general algorithms and their implementation on an abstract machine, concrete source code level choices can make a significant difference. For example, on early C compilers, while(1) was slower than for(;;) for an unconditional loop, because while(1) evaluated 1 and then had a conditional jump which tested if it was true, while for (;;) had an unconditional jump. Some optimizations (such as this one) can nowadays be performed by optimizing compilers. This depends on the source language, the target machine language, and the compiler, and can be difficult to understand or predict; it also changes over time. This is a key place where understanding of compilers and machine code can improve performance. Loop-invariant code motion and return value optimization are examples of optimizations that reduce the need for auxiliary variables and can even result in faster performance by avoiding round-about optimizations. Build level Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, optimizing for specific processor models or hardware capabilities, or predicting branching, for instance. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization. Compile level Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict. Assembly level At the lowest level, writing code using an assembly language designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have been traditionally written in assembler code for this reason. Programs (other than very small programs) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high-level language to assembly and hand-optimized from there. When efficiency and size are less important, large parts may be written in a high-level language. With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step. Much of the code written today is intended to run on as many machines as possible. As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor that expects a different tuning of the code. 
Today, rather than writing in assembly language, programmers will typically use a disassembler to analyze the output of a compiler and change the high-level source code so that it can be compiled more efficiently, or to understand why it is inefficient. Run time Just-in-time compilers can produce customized machine code based on run-time data, at the cost of compilation overhead. This technique dates to the earliest regular expression engines, and has become widespread with Java HotSpot and V8 for JavaScript. In some cases adaptive optimization may be able to perform run time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors. Profile-guided optimization is an ahead-of-time (AOT) compilation optimization technique based on run time profiles, and is similar to a static "average case" analog of the dynamic technique of adaptive optimization. Self-modifying code can alter itself in response to run time conditions in order to optimize code; this was more common in assembly language programs. Some CPU designs can perform some optimizations at run time. Some examples include out-of-order execution, speculative execution, instruction pipelines, and branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling. Platform dependent and independent optimizations Code optimization can also be broadly categorized as platform-dependent and platform-independent techniques. While the latter are effective on most or all platforms, platform-dependent techniques use specific properties of one platform, or rely on parameters depending on the single platform or even on the single processor. Writing or producing different versions of the same code for different processors might therefore be needed. For instance, in the case of compile-level optimization, platform-independent techniques are generic techniques (such as loop unrolling, reduction in function calls, memory-efficient routines, reduction in conditions, etc.) that impact most CPU architectures in a similar way. One reported example of platform-independent optimization concerns loop structure: a loop with an inner for loop was observed to perform more computations per unit time than the same loop without an inner for loop or with an inner while loop. Generally, these serve to reduce the total instruction path length required to complete the program and/or reduce total memory usage during the process. On the other hand, platform-dependent techniques involve instruction scheduling, instruction-level parallelism, data-level parallelism, and cache optimization techniques (i.e., parameters that differ among various platforms), and the optimal instruction scheduling might be different even on different processors of the same architecture. Strength reduction Computational tasks can be performed in several different ways with varying efficiency. A more efficient version with equivalent functionality is known as a strength reduction. 
For example, consider the following C code snippet whose intention is to obtain the sum of all integers from 1 to N:

int i, sum = 0;
for (i = 1; i <= N; ++i) {
    sum += i;
}
printf("sum: %d\n", sum);

This code can (assuming no arithmetic overflow) be rewritten using a mathematical formula like:

int sum = N * (1 + N) / 2;
printf("sum: %d\n", sum);

The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient, while retaining the same functionality. See algorithmic efficiency for a discussion of some of these techniques. However, a significant improvement in performance can often be achieved by removing extraneous functionality. Optimization is not always an obvious or intuitive process. In the example above, the "optimized" version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division. Trade-offs In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions. Beyond eliminating obvious antipatterns, some code level optimizations decrease maintainability. Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a trade-off, where one factor is optimized at the expense of others. For example, increasing the size of a cache improves run time performance, but also increases memory consumption. Other common trade-offs include code clarity and conciseness. There are instances where the programmer performing the optimization must decide to make the software better for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature, such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but comes perhaps with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations. Bottlenecks Optimization may include finding a bottleneck in a system: a component that is the limiting factor on performance. In terms of code, this will often be a hot spot (a critical part of the code that is the primary consumer of the needed resource), though it can be another factor, such as I/O latency or network bandwidth. In computer science, resource consumption often follows a form of power law distribution, and the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context). More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a hybrid algorithm or adaptive algorithm may be faster than any single algorithm. 
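To make the hybrid-algorithm point concrete, here is a minimal C sketch (not the method of any particular library; the CUTOFF value of 16 is an illustrative assumption that would in practice be tuned by measurement). It falls back to a simple insertion sort on small ranges, where low constant factors win, and uses a quicksort-style partition on large ranges, where asymptotic growth dominates:

#include <stdio.h>

#define CUTOFF 16  /* hypothetical threshold; tune by profiling */

static void insertion_sort(int *a, int lo, int hi) {
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

static void hybrid_sort(int *a, int lo, int hi) {
    if (hi - lo + 1 <= CUTOFF) {        /* small range: simple algorithm wins */
        insertion_sort(a, lo, hi);
        return;
    }
    int pivot = a[lo + (hi - lo) / 2];  /* large range: partition and recurse */
    int i = lo, j = hi;
    while (i <= j) {
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; j--; }
    }
    if (lo < j) hybrid_sort(a, lo, j);
    if (i < hi) hybrid_sort(a, i, hi);
}

int main(void) {
    int data[] = {9, 4, 7, 1, 8, 2, 6, 3, 5, 0};
    int n = (int)(sizeof data / sizeof data[0]);
    hybrid_sort(data, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", data[i]);
    printf("\n");
    return 0;
}

The cutoff embodies exactly the constant-factor tradeoff described above, and choosing it is an empirical question. 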
A performance profiler can be used to narrow down decisions about which functionality fits which conditions. In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Caching the result is similarly effective, though also requiring larger memory use. When to optimize Optimization can reduce readability and add code that is used only to improve the performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage. Donald Knuth made the following two statements on optimization: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%" (He also attributed the quote to Tony Hoare several years later, although this might have been an error as Hoare disclaims having coined the phrase.) "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering" "Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing. When deciding whether to optimize a specific part of the program, Amdahl's Law should always be considered: the impact on the overall program depends very much on how much time is actually spent in that specific part, which is not always clear from looking at the code without a performance analysis. A better approach is therefore to design first, code from the design and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization. In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization. Modern compilers and operating systems are so efficient that the intended performance increases often fail to materialize. As an example, caching data at the application level that is again cached at the operating system level does not yield improvements in execution. Even so, it is a rare case when the programmer will remove failed optimizations from production code. It is also true that advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated. Macros Optimization during code development using macros takes on different forms in different languages. In some procedural languages, such as C and C++, macros are implemented using token substitution. Nowadays, inline functions can be used as a type safe alternative in many cases. 
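To illustrate that contrast, consider the following minimal C sketch (the names SQR_MACRO and sqr_inline are made up for illustration, not taken from any real header):

#include <stdio.h>

#define SQR_MACRO(x) ((x) * (x))      /* pure token substitution */

static inline int sqr_inline(int x)   /* type-checked alternative */
{
    return x * x;                     /* argument evaluated exactly once */
}

int main(void) {
    int i = 3;
    printf("%d\n", SQR_MACRO(i));     /* expands to ((i) * (i)): prints 9 */
    printf("%d\n", sqr_inline(i));    /* also prints 9 */
    /* SQR_MACRO(i++) would expand to ((i++) * (i++)), which evaluates the
       argument twice and is undefined behavior; sqr_inline(i++) is well
       defined because the argument is evaluated exactly once. */
    return 0;
}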
In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including constant folding, which may move some computations to compile time. In many functional programming languages, macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since interpretation is used in many cases, parse-time expansion is one way to ensure that such computations are performed only at parse time, and sometimes the only way. Lisp originated this style of macro, and such macros are often called "Lisp-like macros". A similar effect can be achieved by using template metaprogramming in C++. In both cases, work is moved to compile-time. The difference between C macros on one side, and Lisp-like macros and C++ template metaprogramming on the other side, is that the latter tools allow performing arbitrary computations at compile-time/parse-time, while expansion of C macros does not perform any computation, and relies on the optimizer's ability to perform it. Additionally, C macros do not directly support recursion or iteration, so are not Turing complete. As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete. Automated and manual optimization See also :Category:Compiler optimizations Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm. Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations. Since many parameters influence the program performance, the program optimization space is large. Meta-heuristics and machine learning are used to address the complexity of program optimization. Use a profiler (or performance analyzer) to find the sections of the program that are taking the most resources (the bottleneck). Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong. Optimizing an unimportant piece of code will typically do little to help the overall performance. When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program. More often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine. After the programmer is reasonably sure that the best algorithm is selected, code optimization can start. Loops can be unrolled (for lower loop overhead, although this can often lead to lower speed if it overloads the CPU cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on. (See the algorithmic efficiency article for these and other techniques.) 
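As a sketch of the loop-unrolling technique just mentioned, here is a minimal C example (the function name sum_unrolled is hypothetical, and it assumes, purely for brevity, that n is a multiple of 4; real code would handle the remainder with a cleanup loop):

#include <stdio.h>
#include <stddef.h>

/* Sums n ints, processing four elements per iteration. */
static long sum_unrolled(const int *a, size_t n) {
    long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i < n; i += 4) {  /* one-quarter of the loop overhead */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    /* Four independent accumulators shorten the loop-carried dependency chain. */
    return s0 + s1 + s2 + s3;
}

int main(void) {
    int data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%ld\n", sum_unrolled(data, 8));  /* prints 36 */
    return 0;
}

Whether this beats the plain loop depends on the compiler and CPU; modern compilers often perform such unrolling themselves, which is why measurement should precede and follow any such change. 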
Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler. Rewriting sections "pays off" in these circumstances because of a general "rule of thumb" known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed if the correct part(s) can be located. Manual optimization sometimes has the side effect of undermining readability. Thus code optimizations should be carefully documented (preferably using in-line comments), and their effect on future development evaluated. The program that performs an automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors. Today, automated optimizations are almost exclusively limited to compiler optimization. However, because compiler optimizations are usually limited to a fixed set of rather general optimizations, there is considerable demand for optimizers which can accept descriptions of problem- and language-specific optimizations, allowing an engineer to specify custom optimizations. Tools that accept descriptions of optimizations are called program transformation systems and are beginning to be applied to real software systems such as C++. Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language. Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time. Time taken for optimization Sometimes, the time taken to undertake optimization may itself be an issue. Optimizing existing code usually does not add new features, and worse, it might add new bugs in previously working code (as any change might). Because manually optimized code might sometimes have less "readability" than unoptimized code, optimization might impact its maintainability as well. Optimization comes at a price and it is important to be sure that the investment is worthwhile. An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or else speed up its own operation. A compilation performed with optimization "turned on" usually takes longer, although this is usually only a problem when programs are quite large. In particular, for just-in-time compilers the performance of the run time compile component, executing together with its target code, is the key to improving overall execution speed. References Further reading Jon Bentley: Writing Efficient Programs. 
Donald Knuth: The Art of Computer Programming How To Write Fast Numerical Code: A Small Introduction "What Every Programmer Should Know About Memory" by Ulrich Drepper explains the structure of modern memory subsystems and suggests how to utilize them efficiently "Linux Multicore Performance Analysis and Optimization in a Nutshell", presentation slides by Philip Mucci Programming Optimization by Paul Hsieh Writing efficient programs ("Bentley's Rules") by Jon Bentley "Performance Anti-Patterns" by Bart Smaalders Programming language topics Articles with example C code
Program optimization
[ "Technology", "Engineering" ]
5,321
[ "Software engineering", "Programming language topics", "Computer optimization", "Computer performance" ]
225,873
https://en.wikipedia.org/wiki/Vienna%20Circle
The Vienna Circle (German: Wiener Kreis) of logical empiricism was a group of elite philosophers and scientists drawn from the natural and social sciences, logic and mathematics who met regularly from 1924 to 1936 at the University of Vienna, chaired by Moritz Schlick. The Vienna Circle had a profound influence on 20th-century philosophy, especially philosophy of science and analytic philosophy. The philosophical position of the Vienna Circle was called logical empiricism (German: logischer Empirismus), logical positivism or neopositivism. It was influenced by Ernst Mach, David Hilbert, French conventionalism (Henri Poincaré and Pierre Duhem), Gottlob Frege, Bertrand Russell, Ludwig Wittgenstein and Albert Einstein. The Vienna Circle was pluralistic and committed to the ideals of the Enlightenment. It was unified by the aim of making philosophy scientific with the help of modern logic. Main topics were foundational debates in the natural and social sciences, logic and mathematics; the modernization of empiricism by modern logic; the search for an empiricist criterion of meaning; the critique of metaphysics and the unification of the sciences in the unity of science. The Vienna Circle appeared in public with the publication of various book series – Schriften zur wissenschaftlichen Weltauffassung (Monographs on the Scientific World-Conception), Einheitswissenschaft (Unified Science) and the journal Erkenntnis – and the organization of international conferences in Prague; Königsberg (today known as Kaliningrad); Paris; Copenhagen; Cambridge, UK, and Cambridge, Massachusetts. Its public profile was provided by the Ernst Mach Society (German: Verein Ernst Mach), through which members of the Vienna Circle sought to popularize their ideas in the context of programmes for popular education in Vienna. During the era of Austrofascism and after the annexation of Austria by Nazi Germany, most members of the Vienna Circle were forced to emigrate. The murder of Schlick in 1936 by former student Johann Nelböck put an end to the Vienna Circle in Austria. History of the Vienna Circle The history and development of the Vienna Circle shows various stages: The "First Vienna Circle" (1907–1912) The pre-history of the Vienna Circle began with meetings on the philosophy of science and epistemology from 1908 on, promoted by Philipp Frank, Hans Hahn and Otto Neurath. Hans Hahn, the oldest of the three (1879–1934), was a mathematician. He received his degree in mathematics in 1902. Afterwards he studied under the direction of Ludwig Boltzmann in Vienna and of David Hilbert, Felix Klein and Hermann Minkowski in Göttingen. In 1905 he received the Habilitation in mathematics. He taught at Innsbruck (1905–1906) and Vienna (from 1909). Otto Neurath (1882–1945) studied mathematics, political economy, and history in Vienna and Berlin. From 1907 to 1914 he taught in Vienna at the Neue Wiener Handelsakademie (Viennese Commercial Academy). Neurath married Olga, Hahn's sister, in 1911. Philipp Frank, the youngest of the group (1884–1966), studied physics at Göttingen and Vienna with Ludwig Boltzmann, David Hilbert and Felix Klein. From 1912, he held the chair of theoretical physics in the German University in Prague. Their meetings were held in Viennese coffeehouses from 1907 onward. Frank later recalled these meetings in his memoirs. A number of further authors were discussed in the meetings, such as Brentano, Meinong, Helmholtz, Hertz, Husserl, Freud, Russell, Whitehead, Lenin and Frege.
Presumably the meetings stopped in 1912, when Frank went to Prague to hold the chair of theoretical physics left vacant by Albert Einstein. Hahn left Vienna during World War I and returned in 1921. The formative years (1918–1924) The formation of the Vienna Circle began with Hahn's return to Vienna in 1921. Together with the mathematician Kurt Reidemeister he organized seminars on Ludwig Wittgenstein's Tractatus logico-philosophicus and on Whitehead and Russell's Principia Mathematica. With the support of Hahn, Moritz Schlick was appointed to the chair of philosophy of the inductive sciences at the University of Vienna in 1922 – the chair formerly held by Ernst Mach and partly by Boltzmann. Schlick had already published two important works: Raum und Zeit in der gegenwärtigen Physik (Space and Time in Contemporary Physics) in 1917 and Allgemeine Erkenntnislehre (General Theory of Knowledge) in 1918. Immediately after Schlick's arrival in Vienna, he organized discussions with the mathematicians around Hahn. In 1924 Schlick's students Friedrich Waismann and Herbert Feigl suggested to their teacher a sort of regular "evening circle". From the winter term of 1924, regular meetings were held at the Institute of Mathematics in Vienna's Boltzmanngasse 5 on personal invitation by Schlick. These discussions can be seen as the beginning of the Vienna Circle. The non-public phase of the Vienna Circle – The Schlick Circle (1924–1928) The group that met from 1924 on was quite diverse and included not only recognized scientists such as Schlick, Hahn, Kraft, Philipp Frank, Neurath, Olga Hahn-Neurath, and Heinrich Gomperz, but also younger students and doctoral candidates. In addition, the group invited foreign visitors. In 1926 Schlick and Hahn arranged to bring Rudolf Carnap to the University of Vienna as a Privatdozent (private lecturer). Carnap's Logical Structure of the World was intensely discussed in the Circle. Also Wittgenstein's Tractatus logico-philosophicus was read aloud and discussed. From 1927 on, personal meetings were arranged between Wittgenstein and Schlick, Waismann, Carnap and Feigl. The public phase – Schlick Circle and Verein Ernst Mach (1928–1934) In 1928 the Verein Ernst Mach (Ernst Mach Society) was founded, with Schlick as its chairman. The aim of the society was the spreading of a "scientific world conception" through public lectures, which were in large part held by members of the Vienna Circle. In 1929 the Vienna Circle made its first public appearance under this name – invented by Neurath – with the publication of its manifesto Wissenschaftliche Weltauffassung. Der Wiener Kreis (The Scientific Conception of the World. The Vienna Circle, also known as Viewing the World Scientifically: The Vienna Circle). The pamphlet is dedicated to Schlick, and its preface was signed by Hahn, Neurath and Carnap. The manifesto was presented at the Tagung für Erkenntnislehre der exakten Wissenschaften (Conference on the Epistemology of the Exact Sciences) in autumn 1929, organized by the Vienna Circle together with the Berlin Circle. This conference was the first international appearance of logical empiricism and the first of a number of conferences: Königsberg (1930), Prague (1934), Paris (1935), Copenhagen (1936), Cambridge, UK (1938), Cambridge, Mass. (1939), and Chicago (1941). While primarily known for its views on the natural sciences and metaphysics, the public phase of the Vienna Circle was explicitly political.
Neurath and Hahn were both socialists and believed the rejection of magic was a necessary component for the liberation of the working classes. The manifesto linked Karl Marx and Friedrich Nietzsche to their political and anti-metaphysical views, indicating a blurring between what are now considered two separate schools of contemporary philosophy – analytic philosophy and continental philosophy. In 1930 the Vienna Circle and the Berlin Society took over the journal Annalen der Philosophie and made it the main journal of logical empiricism under the title Erkenntnis, edited by Carnap and Reichenbach. In addition, the Vienna Circle published a number of book series: Schriften zur wissenschaftlichen Weltauffassung (Monographs on the Scientific World-Conception, ed. by Schlick and Frank, 1928–1937), Einheitswissenschaft (Unified Science, edited by Neurath, 1933–1939), and later the International Encyclopedia of Unified Science (edited by Neurath, Carnap and Charles W. Morris, 1938–1970). Disintegration, emigration, internationalization (1934–1938) From the beginning of the 1930s the first signs of disintegration appeared, for political and racist reasons: Herbert Feigl left Austria in 1930. Carnap was appointed to a chair at Prague University in 1931 and left for Chicago in 1935. 1934 marks an important break: Hahn died after surgery; Neurath fled to Holland because of the victory of Austrofascism in the Austrian Civil War, following which the Ernst Mach Society was dissolved for political reasons by the Schuschnigg regime. The murder of Moritz Schlick by his former student Johann Nelböck for political and personal reasons in 1936 put an end to the meetings of the Schlick Circle. Some members of the circle, such as Kraft, Waismann, Zilsel, Menger and Gomperz, continued to meet occasionally. But the annexation of Austria to Nazi Germany in 1938 meant the definitive end of the activities of the Vienna Circle in Austria. Emigration brought with it the internationalization of logical empiricism: many former members of the Vienna Circle and the Berlin Circle emigrated to the English-speaking world, where they had an immense influence on the development of philosophy of science. The unity of science movement for the construction of an International Encyclopedia of Unified Science, promoted mainly by Neurath, Carnap, and Morris, is symptomatic of this internationalization, organizing numerous international conferences and the publication of the International Encyclopedia of Unified Science. Overview of the members of the Vienna Circle Apart from the central figures of the Schlick Circle, the question of membership in the Vienna Circle is in many cases unsettled. The partition into "members" and "those sympathetic to the Vienna Circle" produced in the manifesto of 1929 is representative only of a specific moment in the development of the Circle. Depending on the criteria used (regular attendance, philosophical affinities, etc.) there are different possible distributions into "inner circle" and "periphery". In the following list (in alphabetical order), the "inner circle" is defined using the criterion of regular attendance. The "periphery" comprises occasional visitors, foreign visitors and leading intellectual figures who stood in regular contact with the Circle (such as Wittgenstein and Popper).
Inner Circle: Gustav Bergmann, Rudolf Carnap, Herbert Feigl, Philipp Frank, Kurt Gödel, Hans Hahn, Olga Hahn-Neurath, Béla Juhos, Felix Kaufmann, Victor Kraft, Karl Menger, Richard von Mises, Otto Neurath, Rose Rand, Josef Schächter, Moritz Schlick, Friedrich Waismann, Edgar Zilsel. Periphery: Alfred Jules Ayer, Egon Brunswik, Karl Bühler, Josef Frank, Else Frenkel-Brunswik, Heinrich Gomperz, Carl Gustav Hempel, Eino Kaila, Hans Kelsen, Charles W. Morris, Arne Naess, Karl Raimund Popper, Willard Van Orman Quine, Frank P. Ramsey, Hans Reichenbach, Kurt Reidemeister, Alfred Tarski, Olga Taussky-Todd, Ludwig Wittgenstein. Reception in the United States and the United Kingdom The spread of logical positivism in the United States occurred throughout the 1920s and 1930s. In 1929 and in 1932, Schlick was a visiting professor at Stanford, while Feigl, who immigrated to the United States in 1930, became lecturer (1931) and professor (1933) at the University of Iowa. The lasting diffusion of logical positivism in the United States was due to Carl Hempel, Hans Reichenbach, Rudolf Carnap, Philipp Frank, and Herbert Feigl, who emigrated and taught in the United States. Another link to the United States is Willard Van Orman Quine, who traveled in 1932 and 1933 as a Sheldon Traveling Fellow to Vienna, Prague, and Warsaw. Moreover, the American semiotician and philosopher Charles W. Morris helped many German and Austrian philosophers emigrate to the United States, including Rudolf Carnap in 1936. In the United Kingdom it was Alfred Jules Ayer who acquainted British academia with the work of the Vienna Circle through his book Language, Truth, and Logic (1936). Karl Popper was also important for the reception and critique of their work, even though he never participated in the meetings of the Vienna Circle. Congresses and publications The Vienna Circle was very active in advertising its new philosophical ideas. Several congresses on epistemology and philosophy of science were organized with the help of the Berlin Circle. There were some preparatory congresses: Prague (1929), Königsberg (1930), Prague (1934), and then the first congress on scientific philosophy held in Paris (1935), followed by congresses in Copenhagen (1936), Paris (1937), Cambridge, UK (1938), and Cambridge, Massachusetts (1939). The Königsberg congress (1930) was very important, for Kurt Gödel announced there that he had proven the completeness of first-order logic and the incompleteness of formal arithmetic. Another notable congress was the one held in Copenhagen (1936), which was dedicated to quantum physics and causality. Between 1928 and 1937, the Vienna Circle published ten books in a collection named Schriften zur wissenschaftlichen Weltauffassung (Monographs on the Scientific World-Conception), edited by Schlick and Frank. Karl Raimund Popper's book Logik der Forschung was published in this collection. Seven works were published in another collection, called Einheitswissenschaft (Unified Science). In 1930 Rudolf Carnap and Hans Reichenbach undertook the editorship of the journal Erkenntnis, which was published between 1930 and 1940 (from 1939 the editors were Otto Neurath, Rudolf Carnap and Charles Morris). The following is the list of works published in the two collections edited by the Vienna Circle.
Schriften zur wissenschaftlichen Weltauffassung (Monographs on the Scientific World-Conception), edited by Schlick and Frank: Richard von Mises, Wahrscheinlichkeit, Statistik und Wahrheit, 1928 (Probability, Statistics, and Truth, New York: Macmillan company, 1939) Rudolf Carnap, Abriss der Logistik, 1929 Moritz Schlick, Fragen der Ethik, 1930 (Problems of Ethics, New York: Prentice-Hall, 1939) Otto Neurath, Empirische Soziologie, 1931 Philipp Frank, Das Kausalgesetz und seine Grenzen, 1932 (The Law of Causality and its Limits, Dordrecht; Boston: Kluwer, 1997) Otto Kant, Zur Biologie der Ethik, 1932 Rudolf Carnap, Logische Syntax der Sprache, 1934 (The Logical Syntax of Language, New York: Humanities, 1937) Karl Raimund Popper, Logik der Forschung, 1934 (The Logic of Scientific Discovery, New York: Basic Books, 1959) Josef Schächter, Prolegomena zu einer kritischen Grammatik, 1935 (Prolegomena to a Critical Grammar, Dordrecht; Boston: D. Reidel Pub. Co., 1973) Victor Kraft, Die Grundlagen einer wissenschaftlichen Wertlehre, 1937 (Foundations for a Scientific Analysis of Value, Dordrecht; Boston: D. Reidel Pub. Co., 1981) Einheitswissenschaft (Unified Science), edited by Carnap, Frank, Hahn, Neurath, Jørgensen (after Hahn's death), Morris (from 1938): Hans Hahn, Logik, Mathematik und Naturerkennen, 1933 Otto Neurath, Einheitswissenschaft und Psychologie, 1933 Rudolf Carnap, Die Aufgabe der Wissenschaftslogik, 1934 Philipp Frank, Das Ende der mechanistischen Physik, 1935 Otto Neurath, Was bedeutet rationale Wirtschaftsbetrachtung, 1935 Otto Neurath, E. Brunswik, C. Hull, G. Mannoury, J. Woodger, Zur Enzyklopädie der Einheitswissenschaft. Vorträge, 1938 Richard von Mises, Ernst Mach und die empiristische Wissenschaftauffassung, 1939 These works are translated in Unified Science: The Vienna Circle Monograph Series Originally Edited by Otto Neurath, Kluwer, 1987. Monographs, arranged in chronological order, published in the International Encyclopedia of Unified Science: Otto Neurath, Niels Bohr, John Dewey, Bertrand Russell, Rudolf Carnap, Charles Morris, Encyclopedia and unified science, 1938, vol.1 n.1 Charles Morris, Foundations of the theory of signs, 1938, vol.1 n.2 Victor Lenzen, Procedures of empirical sciences, 1938, vol.1 n.5 Rudolf Carnap, Foundations of logic and mathematics, 1939, vol.1 n.3 Leonard Bloomfield, Linguistic aspects of science, 1939, vol.1 n.4 Ernest Nagel, Principles of the theory of probability, 1939, vol.1 n.6 John Dewey, Theory of valuation, 1939, vol.2 n.4 Giorgio de Santillana and Edgar Zilsel, The development of rationalism and empiricism, 1941, vol.2 n.8 Otto Neurath, Foundations of social sciences, 1944, vol.2 n.1 Philipp Frank, Foundations of physics, 1946, vol.1 n.7 Joseph H. Woodger, The technique of theory construction, 1949, vol.2 n.5 Erwin Finlay-Freundlich, Cosmology, 1951, vol.1 n.8 Jørgen Jørgensen, The development of logical empiricism, 1951, vol.2 n.9 Egon Brunswik, The conceptual framework of psychology, 1952, vol.1 n.10 Carl Hempel, Fundamentals of concept formation in empirical science, 1952, vol.2 n.7 Felix Mainx, Foundations of biology, 1955, vol.1 n.9 Abraham Edel, Science and the structure of ethics, 1961, vol.2 n.3 Thomas S. Kuhn, The structure of scientific revolutions, 1962, vol.2 n.2 Gerhard Tintner, Methodology of mathematical economics and econometrics, 1968, vol.2 n.6 Herbert Feigl and Charles Morris, Bibliography and index, 1969, vol.2 n.10 Topics and debates The Vienna Circle cannot be assigned one single philosophy.
First, there existed a plurality of philosophical positions within the Circle, and second, members often changed their views fundamentally over the course of time and in reaction to discussions in the Circle. It thus seems more convenient to speak of "the philosophies (in the plural) of the Vienna Circle". However, some central topics and debates can be identified. The Manifesto (1929) This states the scientific world-conception of the Vienna Circle, which is characterized "essentially by two features. First it is empiricist and positivist: there is knowledge only from experience. Second, the scientific world-conception is marked by the application of a certain method, namely logical analysis." Logical analysis is the method of clarification of philosophical problems; it makes extensive use of symbolic logic and distinguishes the Vienna Circle's empiricism from earlier versions. The task of philosophy lies in the clarification—through the method of logical analysis—of problems and assertions. Logical analysis shows that there are two different kinds of statements: one kind includes statements reducible to simpler statements about the empirically given; the other kind includes statements which cannot be reduced to statements about experience and thus are devoid of meaning. Metaphysical statements belong to this second kind and therefore they are meaningless. Hence many philosophical problems are rejected as pseudo-problems which arise from logical mistakes, while others are re-interpreted as empirical statements and thus become the subject of scientific inquiries. One source of the logical mistakes that are at the origins of metaphysics is the ambiguity of natural language. "Ordinary language for instance uses the same part of speech, the substantive, for things ('apple') as well as for qualities ('hardness'), relations ('friendship'), and processes ('sleep'); therefore it misleads one into a thing-like conception of functional concepts". Another source of mistakes is "the notion that thinking can either lead to knowledge out of its own resources without using any empirical material, or at least arrive at new contents by an inference from given states of affairs". Synthetic knowledge a priori is rejected by the Vienna Circle. Mathematics, which at first sight seems an example of necessarily valid synthetic knowledge derived from pure reason alone, has instead a tautological character; that is, its statements are analytic statements, thus very different from Kantian synthetic statements. The only two kinds of statements accepted by the Vienna Circle are synthetic statements a posteriori (i.e., scientific statements) and analytic statements a priori (i.e., logical and mathematical statements). However, the persistence of metaphysics is connected not only with logical mistakes but also with "social and economical struggles". Metaphysics and theology are allied to traditional social forms, while the group of people who "faces modern times, rejects these views and takes its stand on the ground of empirical sciences". Thus the struggle between metaphysics and the scientific world-conception is not only a struggle between different kinds of philosophies, but is also—and perhaps primarily—a struggle between different political, social, and economical attitudes. Of course, as the manifesto itself acknowledged, "not every adherent of the scientific world-conception will be a fighter".
Many historians of the Vienna Circle see in the latter sentence an implicit reference to a contrast between the so-called 'left wing' of the Vienna Circle, mainly represented by Neurath and Carnap, and Moritz Schlick. The aim of the left wing was to facilitate the penetration of the scientific world-conception in "the forms of personal and public life, in education, upbringing, architecture, and the shaping of economic and social life". In contrast, Schlick was primarily interested in the theoretical study of science and philosophy. Perhaps the sentence "Some, glad of solitude, will lead a withdrawn existence on the icy slopes of logic" is an ironic reference to Schlick. The manifesto lists Walter Dubislav, Josef Frank, Kurt Grelling, Hasso Härlen, Eino Kaila, Heinrich Loewy, F. P. Ramsey, Hans Reichenbach, Kurt Reidemeister, and Edgar Zilsel as people "sympathetic to the Vienna Circle" and Albert Einstein, Bertrand Russell, and Ludwig Wittgenstein as "leading representatives of the scientific world-conception". Unified science The final goal pursued by the Vienna Circle was unified science, that is, the construction of a "constitutive system" in which every legitimate statement is reduced to concepts of a lower level which refer directly to the given experience. "The endeavour is to link and harmonise the achievements of individual investigators in their various fields of science". From this aim follows the search for clarity, neatness, and for a symbolic language that eliminates the problems arising from the ambiguity of natural language. The Vienna Circle published a collection, called Einheitswissenschaft (Unified Science), edited by Rudolf Carnap, Philipp Frank, Hans Hahn, Otto Neurath, Jørgen Jørgensen (after Hahn's death) and Charles W. Morris (from 1938), whose aim was to present a unified vision of science. After the publication in Europe of seven monographs from 1933 to 1939, the collection was discontinued because of the problems arising from World War II. In 1938 a new series of publications started in the United States. It was the International Encyclopedia of Unified Science, an ambitious, never-completed project devoted to unified science. Only the first section, Foundations of the Unity of Sciences, was published; it contains two volumes for a total of twenty monographs published from 1938 to 1969. Rudolf Carnap and Charles Morris recalled the history of the project in the Preface to the 1969 edition of the International Encyclopedia of Unified Science. Thomas Kuhn's well-known work, The Structure of Scientific Revolutions, was published in this Encyclopedia in 1962, as number two in the second volume. Critique of metaphysics The attitude of the Vienna Circle towards metaphysics is well expressed by Carnap in the article 'Überwindung der Metaphysik durch logische Analyse der Sprache' in Erkenntnis, vol. 2, 1932 (English translation 'The Elimination of Metaphysics Through Logical Analysis of Language' in Sarkar, Sahotra, ed., Logical empiricism at its peak: Schlick, Carnap, and Neurath, New York: Garland Pub., 1996, pp. 10–31). A language, says Carnap, consists of a vocabulary, i.e., a set of meaningful words, and a syntax, i.e., a set of rules governing the formation of sentences from the words of the vocabulary. Pseudo-statements, i.e., sequences of words that at first sight resemble statements but in reality have no meaning, are formed in two ways: either meaningless words occur in them, or they are formed in an invalid syntactical way.
According to Carnap, pseudo-statements of both kinds occur in metaphysics. A word W has a meaning if two conditions are satisfied. First, the mode of the occurrence of W in its elementary sentence form (i.e., the simplest sentence form in which W is capable of occurring) must be fixed. Secondly, if W occurs in an elementary sentence S, it is necessary to give an answer to the following questions (which are, according to Carnap, equivalent formulations of the same question): What sentences is S deducible from, and what sentences are deducible from S? Under what conditions is S supposed to be true, and under what conditions false? How is S verified? What is the meaning of S? (Carnap, "The Elimination of Metaphysics Through Logical Analysis of Language" in Sarkar, Sahotra 1996, p. 12) An example offered by Carnap concerns the word 'arthropod'. The sentence form "the thing x is an arthropod" is an elementary sentence form that is derivable from "x is an animal", "x has a segmented body" and "x has jointed legs". Conversely, these sentences are derivable from "the thing x is an arthropod". Thus the meaning of the word 'arthropod' is determined. According to Carnap, many words of metaphysics do not fulfill these requirements and thus they are meaningless. As an example, Carnap considers the word 'principle'. This word has a definite meaning if the sentence "x is the principle of y" is supposed to be equivalent to the sentence "y exists by virtue of x" or "y arises out of x". The latter sentence is perfectly clear: y arises out of x when x is invariably followed by y, and the invariable association between x and y is empirically verifiable. But, says Carnap, metaphysicians are not satisfied with this interpretation of the meaning of 'principle'. They assert that no empirical relation between x and y can completely explain the meaning of "x is the principle of y", because there is something that cannot be grasped by means of experience, something for which no empirical criterion can be specified. It is the lack of any empirical criterion, says Carnap, that deprives the word 'principle' of meaning when it occurs in metaphysics. Therefore, metaphysical pseudo-statements such as "water is the principle of the world" or "the spirit is the principle of the world" are void of meaning because a meaningless word occurs in them. However, there are pseudo-statements in which only meaningful words occur; these pseudo-statements are formed in a counter-syntactical way. An example is the word sequence "Caesar is a prime number"; every word has a definite meaning, but the sequence has no meaning. The problem is that "prime number" is a predicate of numbers, not a predicate of human beings. In the example the nonsense is evident; however, in natural language the rules of grammar do not prohibit the formation of analogous meaningless word sequences that are not so easily detectable. In the grammar of natural languages, every sequence of the kind "x is y", where x is a noun and y is a predicate, is acceptable. In fact, in the grammar there is no distinction between predicates which can be affirmed of human beings and predicates which can be affirmed of numbers. So "Caesar is a general" and "Caesar is a prime number" are both well-formed, in contrast for example with "Caesar is and", which is ill-formed. In a logically constructed language, says Carnap, a distinction between the various kinds of predicate is specified, and pseudo-statements such as "Caesar is a prime number" are ill-formed.
Now, and this is the main point of Carnap's argument, metaphysical statements in which meaningless words do not occur are indeed meaningless because they are formed in a way which is admissible in natural languages, but not in logically constructed languages. Carnap attempts to indicate the most frequent sources of errors from which metaphysical pseudo-statements can arise. One source of mistakes is the ambiguity of the verb "to be", which is sometimes used as a copula ("I am hungry") and sometimes to designate existence ("I am"). The latter statement incorrectly suggests a predicative form, and thus it suggests that existence is a predicate. Only modern logic, with the introduction of an explicit sign to designate existence (the sign $\exists$), which occurs only in statements such as $\exists x.\, P(x)$, never as a predicate, has shown that existence is not a predicate, and thus has revealed the logical error from which pseudo-statements such as "cogito, ergo sum" have arisen. Another source of mistakes is type confusion, in which a predicate of one kind is used as a predicate of another kind. For example, the pseudo-statement "we know the Nothing" is analogous to "we know the rain", but while the latter is well-formed, the former is ill-formed, at least in a logically constructed language, because "Nothing" is incorrectly used as a noun. In a formal language, "Nothing" only expresses negated existence, $\neg\exists$, as in "there is nothing which is outside", i.e., $\neg\exists x.\, \mathrm{Outside}(x)$, and thus "Nothing" never occurs as a noun or as a predicate. According to Carnap, although metaphysics has no theoretical content, it does have content: metaphysical pseudo-statements express the attitude of a person towards life, and this is the role of metaphysics. He compares it to an art like lyrical poetry; the metaphysician works with the medium of the theoretical; he confuses art with science, attitude towards life with knowledge, and thus produces an unsatisfactory and inadequate work. "Metaphysicians are musicians without musical ability". Institute Vienna Circle / Vienna Circle Society In 1991 the Institute Vienna Circle (IVC) was established as a society in Vienna. It is dedicated to studying the work and influence of the Vienna Circle. In 2011 it was integrated into the University of Vienna as a subunit of the Faculty of Philosophy and Education. Since 2016 the former society has continued its activities in close cooperation with the IVC under the changed name Vienna Circle Society (VCS). In 2015 the institute co-organized an exhibition on the Vienna Circle in the main building of the University of Vienna. See also Formalism (mathematics) Logical behaviorism Logicism List of Austrian intellectual traditions Notes Bibliography Primary literature Carnap, Rudolf. "Überwindung der Metaphysik durch logische Analyse der Sprache" in Erkenntnis, vol. 2, 1932 (English translation "The Elimination of Metaphysics Through Logical Analysis of Language" in Sarkar, Sahotra, ed., Logical empiricism at its peak: Schlick, Carnap, and Neurath, New York: Garland Pub., 1996, pp. 10–31) Neurath, Otto and Carnap, Rudolf and Morris, Charles W. Foundations of the Unity of Sciences, vol. 1, Chicago: The University of Chicago Press, 1969. Wissenschaftliche Weltauffassung. Der Wiener Kreis, 1929. English translation The Scientific Conception of the World. The Vienna Circle in Sarkar, Sahotra, ed., The Emergence of Logical Empiricism: from 1900 to the Vienna Circle, New York: Garland Publishing, 1996, pp. 321–340 Stadler, Friedrich and Uebel, Thomas (eds.): Wissenschaftliche Weltauffassung.
Der Wiener Kreis. Hrsg. vom Verein Ernst Mach (1929). Reprint of the first edition. With translations into English, French, Spanish and Italian. Vienna: Springer, 2012. Stöltzner, Michael and Uebel, Thomas (eds.). Wiener Kreis. Texte zur wissenschaftlichen Weltauffassung. Meiner, Hamburg, 2006, . (Anthology in German) Secondary literature Arnswald, Ulrich, Stadler, Friedrich and Weibel, Peter (ed.): Der Wiener Kreis – Aktualität in Wissenschaft, Literatur, Architektur und Kunst. Wien: LIT Verlag 2019. Ayer, Alfred Jules. Language, Truth and Logic. London, Victor Gollancz, 1936. Ayer, Alfred Jules. Logical Positivism. Glencoe, Ill: Free Press, 1959. Barone, Francesco. Il neopositivismo logico. Roma Bari: Laterza, 1986. Bergmann, Gustav. The Metaphysics of Logical Positivism. New York: Longmans Green, 1954. Cirera, Ramon. Carnap and the Vienna Circle: Empiricism and Logical Syntax. Atlanta, GA: Rodopi, 1994. Frank, Philipp: Modern Science and its Philosophy. Cambridge, 1949. Friedman, Michael, Reconsidering Logical Positivism. Cambridge, UK: Cambridge University Press, 1999. Gadol, Eugene T. Rationality and Science: A Memorial Volume for Moritz Schlick in Celebration of the Centennial of his Birth. Wien: Springer, 1982. Geymonat, Ludovico. La nuova filosofia della natura in Germania. Torino, 1934. Giere, Ronald N. and Richardson, Alan W. Origins of Logical Empiricism. Minneapolis: University of Minnesota Press, 1997. Haller, Rudolf. Neopositivismus. Eine historische Einführung in die Philosophie des Wiener Kreises. Wissenschaftliche Buchgesellschaft, Darmstadt, 1993, . (German) Holt, Jim. "Positive Thinking" (review of Karl Sigmund, Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science, Basic Books, 449 pp.), The New York Review of Books, vol. LXIV, no. 20 (21 December 2017), pp. 74–76. Kraft, Victor. The Vienna Circle: The Origin of Neo-positivism, a Chapter in the History of Recent Philosophy. New York: Greenwood Press, 1953. Limbeck, Christoph and Stadler, Friedrich (eds.). The Vienna Circle. Texts and Pictures of an Exhibition. Münster-Berlin-London 2015. McGuinness, Brian. Wittgenstein and the Vienna Circle: Conversations Recorded by Friedrich Waismann. Trans. by Joachim Schulte and Brian McGuinness. New York: Barnes & Noble Books, 1979. Parrini, Paolo; Salmon, Wesley C.; Salmon, Merrilee H. (ed.) Logical Empiricism – Historical and Contemporary Perspectives, Pittsburgh: University of Pittsburgh Press, 2003. Reisch, George. How the Cold War Transformed Philosophy of Science : To the Icy Slopes of Logic. New York: Cambridge University Press, 2005. Rescher, Nicholas (ed.). The Heritage of Logical Positivism. University Press of America, 1985. Richardson, Alan W. "The Scientific World Conception. Logical Positivism", in: T. Baldwin (Hg.), The Cambridge History of Philosophy, 1870–1945, 2003, 391–400. Richardson, Alan W. and Uebel, Thomas (ed.). The Cambridge Companion to Logical Empiricism. Cambridge, 2007. Salmon, Wesley and Wolters, Gereon (ed.), Logic, Language, and the Structure of Scientific Theories: Proceedings of the Carnap-Reichenbach Centennial, University of Konstanz, 21–24 May 1991, Pittsburgh: University of Pittsburgh Press, 1994. Sarkar, Sahotra. The Emergence of Logical Empiricism: From 1900 to the Vienna Circle. New York: Garland Publishing, 1996. Sarkar, Sahotra. Logical Empiricism at its Peak: Schlick, Carnap, and Neurath. New York: Garland Pub., 1996. Sarkar, Sahotra. 
Logical Empiricism and the Special Sciences: Reichenbach, Feigl, and Nagel. New York: Garland Pub., 1996. Sarkar, Sahotra. Decline and Obsolescence of Logical Empiricism: Carnap vs. Quine and the Critics. New York: Garland Pub., 1996. Sarkar, Sahotra. The Legacy of the Vienna Circle: Modern Reappraisals. New York: Garland Pub., 1996. Spohn, Wolfgang (ed.), Erkenntnis Orientated: A Centennial Volume for Rudolf Carnap and Hans Reichenbach, Boston: Kluwer Academic Publishers, 1991. Stadler, Friedrich. The Vienna Circle. Studies in the Origins, Development, and Influence of Logical Empiricism. New York: Springer, 2001. – 2nd Edition: Dordrecht: Springer, 2015. Stadler, Friedrich (ed.). The Vienna Circle and Logical Empiricism. Re-evaluation and Future Perspectives. Dordrecht – Boston London, Kluwer, 2003. Uebel, Thomas. Vernunftkritik und Wissenschaft: Otto Neurath und der erste Wiener Kreis. Wien-New York 2000. (German) Uebel, Thomas, "On the Austrian Roots of Logical Empiricism" in Logical Empiricism – Historical and contemporary Perspectives, ed. Paolo Parrini, Wesley C. Salmon, Merrilee H. Salmon, Pittsburgh : University of Pittsburgh Press, 2003, pp. 76–93. External links Institute Vienna Circle Vienna Circle Society Vienna Circle Foundation Amsterdam Thomas Uebel, "Vienna Circle", The Stanford Encyclopedia of Philosophy Logical positivism Science studies Philosophy of science Epistemology Epistemology of science 1924 establishments in Austria
Vienna Circle
[ "Mathematics" ]
8,307
[ "Mathematical logic", "Logical positivism" ]
225,982
https://en.wikipedia.org/wiki/Ground%20state
The ground state of a quantum-mechanical system is its stationary state of lowest energy; the energy of the ground state is known as the zero-point energy of the system. An excited state is any state with energy greater than the ground state. In quantum field theory, the ground state is usually called the vacuum state or the vacuum. If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator that acts non-trivially on a ground state and commutes with the Hamiltonian of the system. According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature. Absence of nodes in one dimension In one dimension, the ground state of the Schrödinger equation can be proven to have no nodes. Derivation Consider the average energy of a state with a node at $x = 0$; i.e., $\psi(0) = 0$. The average energy in this state would be $\langle H \rangle = \int \mathrm{d}x \left( -\frac{\hbar^2}{2m} \psi^* \frac{\mathrm{d}^2\psi}{\mathrm{d}x^2} + V(x)\,|\psi(x)|^2 \right),$ where $V(x)$ is the potential. With integration by parts: $\int_a^b \psi^* \frac{\mathrm{d}^2\psi}{\mathrm{d}x^2}\,\mathrm{d}x = \left[ \psi^* \frac{\mathrm{d}\psi}{\mathrm{d}x} \right]_a^b - \int_a^b \left| \frac{\mathrm{d}\psi}{\mathrm{d}x} \right|^2 \mathrm{d}x.$ Hence in case that the boundary term $\left[ \psi^* \frac{\mathrm{d}\psi}{\mathrm{d}x} \right]_{-\infty}^{+\infty}$ is equal to zero, one gets: $\langle H \rangle = \int \mathrm{d}x \left( \frac{\hbar^2}{2m} \left| \frac{\mathrm{d}\psi}{\mathrm{d}x} \right|^2 + V(x)\,|\psi(x)|^2 \right).$ Now, consider a small interval around $x = 0$; i.e., $x \in [-\varepsilon, \varepsilon]$. Take a new (deformed) wave function $\psi'(x)$ to be defined as $\psi'(x) = \psi(x)$, for $x < -\varepsilon$; and $\psi'(x) = -\psi(x)$, for $x > \varepsilon$; and constant for $x \in [-\varepsilon, \varepsilon]$. If $\varepsilon$ is small enough, this is always possible to do, so that $\psi'(x)$ is continuous. Assuming $\psi(x) \approx -cx$ around $x = 0$, one may write $\psi'(x) = N \begin{cases} |\psi(x)|, & |x| > \varepsilon, \\ c\varepsilon, & |x| \le \varepsilon, \end{cases}$ where $N = \left( 1 + \tfrac{4}{3}|c|^2 \varepsilon^3 \right)^{-1/2}$ is the norm. Note that the kinetic-energy densities satisfy $\frac{\hbar^2}{2m} \left| \frac{\mathrm{d}\psi'}{\mathrm{d}x} \right|^2 \le \frac{\hbar^2}{2m} \left| \frac{\mathrm{d}\psi}{\mathrm{d}x} \right|^2$ everywhere because of the normalization. More significantly, the average kinetic energy is lowered by $O(\varepsilon)$ by the deformation to $\psi'$. Now, consider the potential energy. For definiteness, let us choose $V(x) \ge 0$. Then it is clear that, outside the interval $x \in [-\varepsilon, \varepsilon]$, the potential energy density is smaller for the $\psi'$ because $|\psi'| = N|\psi| < |\psi|$ there. On the other hand, in the interval $x \in [-\varepsilon, \varepsilon]$ we have $\int_{-\varepsilon}^{\varepsilon} \mathrm{d}x\, V(x)\,|\psi'|^2 \simeq 2\varepsilon^3 |c|^2 V(0),$ which holds to order $\varepsilon^3$. However, the contribution to the potential energy from this region for the state $\psi$ with a node, $\int_{-\varepsilon}^{\varepsilon} \mathrm{d}x\, V(x)\,|c|^2 x^2 \simeq \tfrac{2}{3}\varepsilon^3 |c|^2 V(0),$ is lower, but still of the same lower order $O(\varepsilon^3)$ as for the deformed state $\psi'$, and subdominant to the lowering of the average kinetic energy. Therefore, the potential energy is unchanged up to order $\varepsilon^2$, if we deform the state $\psi$ with a node into a state $\psi'$ without a node, and the change can be ignored. We can therefore remove all nodes and reduce the energy by $O(\varepsilon)$, which implies that $\psi$ cannot be the ground state. Thus the ground-state wave function cannot have a node. This completes the proof. (The average energy may then be further lowered by eliminating undulations, to the variational absolute minimum.) Implication As the ground state has no nodes it is spatially non-degenerate, i.e. there are no two stationary quantum states with the energy eigenvalue of the ground state (let us name it $E_g$) and the same spin state which would therefore differ only in their position-space wave functions. The reasoning goes by contradiction: for if the ground state were degenerate then there would be two orthonormal stationary states $|\psi_1\rangle$ and $|\psi_2\rangle$ — later on represented by their complex-valued position-space wave functions $\psi_1(x,t)$ and $\psi_2(x,t)$ — and any superposition $|\psi_3\rangle = c_1|\psi_1\rangle + c_2|\psi_2\rangle$ with the complex numbers $c_1, c_2$ fulfilling the condition $|c_1|^2 + |c_2|^2 = 1$ would also be such a state, i.e. would have the same energy-eigenvalue $E_g$ and the same spin-state. Now let $x_0$ be some random point (where both wave functions are defined) and set $c_1 = \frac{\psi_2(x_0)}{\sqrt{|\psi_1(x_0)|^2 + |\psi_2(x_0)|^2}}$ and $c_2 = \frac{-\psi_1(x_0)}{\sqrt{|\psi_1(x_0)|^2 + |\psi_2(x_0)|^2}}$, with $|\psi_1(x_0)|^2 + |\psi_2(x_0)|^2 > 0$ (according to the premise, no nodes).
Therefore, the position-space wave function of $|\psi_3\rangle$ is $\psi_3(x,t) = c_1 \psi_1(x,t) + c_2 \psi_2(x,t).$ Hence $\psi_3(x_0,t) = c_1 \psi_1(x_0,t) + c_2 \psi_2(x_0,t) = 0$ for all $t$. But $\langle \psi_3 | \psi_3 \rangle = 1$; i.e., $x_0$ is a node of the ground state wave function, and that is in contradiction to the premise that this wave function cannot have a node. Note that the ground state could be degenerate because of different spin states like $|{\uparrow}\rangle$ and $|{\downarrow}\rangle$ while having the same position-space wave function: any superposition of these states would create a mixed spin state but leave the spatial part (as a common factor of both) unaltered. Examples The wave function of the ground state of a particle in a one-dimensional box is a half-period sine wave, which goes to zero at the two edges of the well. The energy of the particle is given by $E_n = \frac{n^2 h^2}{8 m L^2},$ where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well (a short numerical evaluation of this formula appears below). The wave function of the ground state of a hydrogen atom is a spherically symmetric distribution centred on the nucleus, which is largest at the center and decays exponentially at larger distances. The electron is most likely to be found at a distance from the nucleus equal to the Bohr radius. This function is known as the 1s atomic orbital. For hydrogen (H), an electron in the ground state has energy −13.6 eV, relative to the ionization threshold. In other words, 13.6 eV is the energy input required for the electron to no longer be bound to the atom. The exact definition of one second of time since 1997 has been the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest at a temperature of 0 K. Notes Bibliography Quantum mechanics
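As a numerical illustration of the particle-in-a-box formula referenced in the examples above, here is a minimal C++ sketch (not part of the original article; the 1 nm well width and the choice of an electron are assumed purely for illustration) that evaluates $E_n = n^2 h^2 / (8 m L^2)$ for the first few levels:

```cpp
#include <iostream>

int main() {
    const double h  = 6.62607015e-34;    // Planck constant, J*s
    const double me = 9.1093837015e-31;  // electron mass, kg
    const double L  = 1.0e-9;            // assumed well width: 1 nm
    const double eV = 1.602176634e-19;   // joules per electronvolt

    // E_n = n^2 h^2 / (8 m L^2); n = 1 is the nodeless ground state.
    // For these constants the n = 1 value comes out near 0.376 eV.
    for (int n = 1; n <= 3; ++n) {
        double E = n * n * h * h / (8.0 * me * L * L);
        std::cout << "n = " << n << "  E = " << E / eV << " eV\n";
    }
}
```

The output shows the n-squared spacing of levels characteristic of the infinite well, with the nodeless n = 1 state lowest, consistent with the no-node result derived above.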
Ground state
[ "Physics" ]
1,124
[ "Quantum states", "Quantum mechanics" ]
226,187
https://en.wikipedia.org/wiki/Prandtl%E2%80%93Glauert%20singularity
The Prandtl–Glauert singularity is a theoretical construct in flow physics, often incorrectly used to explain vapor cones in transonic flows. It is the prediction by the Prandtl–Glauert transformation that infinite pressures would be experienced by an aircraft as it approaches the speed of sound. Because it is invalid to apply the transformation at these speeds, the predicted singularity does not emerge. The incorrect association is related to the early-20th-century misconception of the impenetrability of the sound barrier. Reasons for invalidity around Mach 1 The Prandtl–Glauert transformation assumes linearity (i.e. a small change will have a small effect that is proportional to its size). This assumption becomes inaccurate toward Mach 1 and is entirely invalid in places where the flow reaches supersonic speeds, since sonic shock waves are instantaneous (and thus manifestly non-linear) changes in the flow. Indeed, one assumption in the Prandtl–Glauert transformation is an approximately constant Mach number throughout the flow, and the increasing slope of the transformation indicates that very small changes will have a very strong effect at higher Mach numbers, thus violating the assumption, which breaks down entirely at the speed of sound. This means that the singularity featured by the transformation near the sonic speed (M = 1) is not within the area of validity. The aerodynamic forces are calculated to approach infinity at the so-called Prandtl–Glauert singularity; in reality, the aerodynamic and thermodynamic perturbations do get amplified strongly near the sonic speed, but they remain finite and a singularity does not occur. The Prandtl–Glauert transformation is a linearized approximation of compressible, inviscid potential flow. As the flow approaches sonic speed, nonlinear phenomena dominate within the flow, which this transformation completely ignores for the sake of simplicity. Prandtl–Glauert transformation The Prandtl–Glauert transformation is found by linearizing the potential equations associated with compressible, inviscid flow. For two-dimensional flow, the linearized pressures in such a flow are equal to those found from incompressible flow theory multiplied by a correction factor. This correction factor is given below: $c_p = \frac{c_{p0}}{\sqrt{1 - M_\infty^2}},$ where $c_p$ is the compressible pressure coefficient, $c_{p0}$ is the incompressible pressure coefficient, and $M_\infty$ is the freestream Mach number. This formula is known as "Prandtl's rule", and works well up to low-transonic Mach numbers (M < ~0.7). However, note the limit: $\lim_{M_\infty \to 1} c_p = \infty.$ This obviously nonphysical result (of an infinite pressure) is known as the Prandtl–Glauert singularity (a short numerical illustration of this divergence appears below, after the section on condensation clouds). Reason for condensation clouds The reason that observable clouds sometimes form around high-speed aircraft is that humid air enters low-pressure regions, which reduces the local density and temperature sufficiently to cause water to supersaturate around the aircraft and condense in the air, thus creating clouds. The clouds vanish as soon as the pressure increases again to ambient levels. In the case of objects at transonic speeds, the local pressure increase happens at the shock wave location. Condensation in free flow does not require supersonic flow. Given sufficiently high humidity, condensation clouds can be produced in purely subsonic flow over wings, in the cores of wingtip vortices, and even within or around the vortices themselves. This can often be observed during humid days on aircraft approaching or departing airports.
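Prandtl's rule from the section above is simple enough to evaluate directly. The following C++ sketch (illustrative only; the incompressible coefficient cp0 = −0.3 is an assumed value, not from the article) shows the correction factor growing without bound as the freestream Mach number approaches 1, which is the nonphysical singularity discussed above:

```cpp
#include <cmath>
#include <iostream>

// Prandtl's rule: cp = cp0 / sqrt(1 - M^2). Only trustworthy well below
// Mach 1 (roughly M < 0.7); the divergence as M -> 1 is the nonphysical
// Prandtl-Glauert singularity, outside the rule's range of validity.
double prandtl_glauert(double cp0, double mach) {
    return cp0 / std::sqrt(1.0 - mach * mach);
}

int main() {
    const double cp0 = -0.3;  // assumed incompressible pressure coefficient
    const double machs[] = {0.3, 0.5, 0.7, 0.9, 0.99};
    for (double M : machs) {
        std::cout << "M = " << M << "  cp = " << prandtl_glauert(cp0, M) << '\n';
    }
}
```

At M = 0.99 the factor 1/sqrt(1 − 0.99²) is already about 7.1, inflating cp0 roughly sevenfold, far outside the stated range of validity of the rule.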
See also Prandtl–Glauert transformation Compressible flow Vapor cone Sonic boom References Aerodynamics Physical phenomena Shock waves Obsolete theories in physics
Prandtl–Glauert singularity
[ "Physics", "Chemistry", "Engineering" ]
751
[ "Physical phenomena", "Shock waves", "Theoretical physics", "Aerodynamics", "Waves", "Aerospace engineering", "Fluid dynamics", "Obsolete theories in physics" ]
226,309
https://en.wikipedia.org/wiki/Secondary%20metabolite
Secondary metabolites, also called specialised metabolites, secondary products, or natural products, are organic compounds produced by any lifeform, e.g. bacteria, archaea, fungi, animals, or plants, which are not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. Specific secondary metabolites are often restricted to a narrow set of species within a phylogenetic group. Secondary metabolites often play an important role in plant defense against herbivory and other interspecies defenses. Humans use secondary metabolites as medicines, flavourings, pigments, and recreational drugs. The term secondary metabolite was first coined by Albrecht Kossel, the 1910 Nobel Prize laureate for medicine and physiology. Thirty years later, the Polish botanist Friedrich Czapek described secondary metabolites as end products of nitrogen metabolism. Secondary metabolites commonly mediate antagonistic interactions, such as competition and predation, as well as mutualistic ones such as pollination and resource mutualisms. Usually, secondary metabolites are confined to a specific lineage or even species, though there is considerable evidence that horizontal transfer of entire pathways across species or genera plays an important role in bacterial (and, likely, fungal) evolution. Research also shows that secondary metabolism can affect different species in varying ways. In the same forest, four separate species of arboreal marsupial folivores reacted differently to a secondary metabolite in eucalypts. This shows that differing types of secondary metabolites can mark the split between two herbivore ecological niches. Additionally, certain species evolve to resist secondary metabolites and even use them for their own benefit. For example, monarch butterflies have evolved to be able to eat milkweed (Asclepias) despite the presence of toxic cardiac glycosides. The butterflies are not only resistant to the toxins, but are actually able to benefit by actively sequestering them, which can lead to the deterrence of predators. Plant secondary metabolites Plants are capable of synthesizing diverse groups of organic compounds, which are divided into two major groups: primary and secondary metabolites. Secondary metabolites are metabolic intermediates or products which are not essential to the growth and life of the producing plant but rather are required for its interaction with the environment, and are produced in response to stress. Their antibiotic, antifungal and antiviral properties protect the plant from pathogens. Some secondary metabolites such as phenylpropanoids protect plants from UV damage. The biological effects of plant secondary metabolites on humans have been known since ancient times. The herb Artemisia annua, which contains artemisinin, has been used in traditional Chinese medicine for more than two thousand years. Plant secondary metabolites are classified by their chemical structure and can be divided into four major classes: terpenes, phenylpropanoids (i.e. phenolics), polyketides, and alkaloids. Chemical classes Terpenoids Terpenes constitute a large class of natural products which are composed of isoprene units. Terpenes are pure hydrocarbons, while terpenoids are oxygenated hydrocarbons. The general molecular formula of terpenes is a multiple of (C5H8)n, where n is the number of linked isoprene units.
Hence, terpenes are also termed isoprenoid compounds. Classification is based on the number of isoprene units present in their structure. Some terpenoids (i.e. many sterols) are primary metabolites. Some terpenoids that may have originated as secondary metabolites have subsequently been recruited as plant hormones, such as gibberellins, brassinosteroids, and strigolactones. Examples of terpenoids built from hemiterpene oligomerization are: Azadirachtin, present in Azadirachta indica (the neem tree) Artemisinin, present in Artemisia annua, Chinese wormwood Tetrahydrocannabinol, present in Cannabis sativa, cannabis Saponins, glycosylated triterpenes present in e.g. Chenopodium quinoa, quinoa. Phenolic compounds Phenolics are chemical compounds characterized by the presence of an aromatic ring structure bearing one or more hydroxyl groups. Phenolics are the most abundant secondary metabolites of plants, ranging from simple molecules such as phenolic acids to highly polymerized substances such as tannins. Classes of phenolics have been characterized on the basis of their basic skeleton. An example of a plant phenol is: Resveratrol, a C14 stilbenoid produced by e.g. grapes. Alkaloids Alkaloids are a diverse group of nitrogen-containing basic compounds. They are typically derived from plant sources and contain one or more nitrogen atoms. Chemically they are very heterogeneous. Based on chemical structure, they may be classified into two broad categories: Non-heterocyclic or atypical alkaloids, for example hordenine or N-methyltyramine, colchicine, and taxol Heterocyclic or typical alkaloids, for example quinine, caffeine, and nicotine Examples of alkaloids produced by plants are: Hyoscyamine, present in Datura stramonium Atropine, present in Atropa belladonna, deadly nightshade Cocaine, present in Erythroxylum coca, the coca plant Scopolamine, present in the Solanaceae (nightshade) plant family Codeine and morphine, present in Papaver somniferum, the opium poppy Vincristine and vinblastine, mitotic inhibitors found in Catharanthus roseus, the rosy periwinkle Many alkaloids affect the central nervous system of animals by binding to neurotransmitter receptors. Glucosinolates Glucosinolates are secondary metabolites that include both sulfur and nitrogen atoms, and are derived from glucose, an amino acid and sulfate. An example of a glucosinolate in plants is glucoraphanin, from broccoli (Brassica oleracea var. italica). Plant secondary metabolites in medicine Many drugs used in modern medicine are derived from plant secondary metabolites. The two most commonly known terpenoids are artemisinin and paclitaxel. Artemisinin was widely used in traditional Chinese medicine and later rediscovered as a powerful antimalarial by the Chinese scientist Tu Youyou. She was later awarded the Nobel Prize for the discovery in 2015. Currently, the malaria parasite, Plasmodium falciparum, has become resistant to artemisinin alone and the World Health Organization recommends its use with other antimalarial drugs for a successful therapy. Paclitaxel, the active compound found in Taxol, is a chemotherapy drug used to treat many forms of cancer including ovarian cancer, breast cancer, lung cancer, Kaposi sarcoma, cervical cancer, and pancreatic cancer. Taxol was first isolated in 1973 from the bark of a coniferous tree, the Pacific yew. Morphine and codeine both belong to the class of alkaloids and are derived from opium poppies. Morphine was discovered in 1804 by the German pharmacist Friedrich Sertürner.
It was the first active alkaloid extracted from the opium poppy. It is mostly known for its strong analgesic effects; however, morphine is also used to treat shortness of breath and addiction to stronger opiates such as heroin. Despite its positive effects on humans, morphine has very strong adverse effects, such as addiction, hormone imbalance or constipation. Due to its highly addictive nature, morphine is a strictly controlled substance around the world, used only in very severe cases; some countries underuse it compared to the global average due to the social stigma around it. Codeine, also an alkaloid derived from the opium poppy, is considered the most widely used drug in the world according to the World Health Organization. It was first isolated in 1832 by the French chemist Pierre Jean Robiquet, also known for the discovery of caffeine and the widely used red dye alizarin. Primarily, codeine is used to treat mild pain and relieve coughing, although in some cases it is used to treat diarrhea and some forms of irritable bowel syndrome. Codeine has a potency of 0.1–0.15 relative to orally ingested morphine, hence it is much safer to use. Although codeine can be extracted from the opium poppy, the process is not economically feasible due to the low abundance of pure codeine in the plant. Methylation of the much more abundant morphine is the main method of production. Atropine is an alkaloid first found in Atropa belladonna, a member of the nightshade family. While atropine was first isolated in the 19th century, its medical use dates back to at least the fourth century B.C., when it was used for wounds, gout, and sleeplessness. Currently, atropine is administered intravenously to treat bradycardia and as an antidote to organophosphate poisoning. Overdosing on atropine may lead to atropine poisoning, which results in side effects such as blurred vision, nausea, lack of sweating, dry mouth and tachycardia. Resveratrol is a phenolic compound of the stilbenoid class. It is highly abundant in grapes, blueberries, raspberries and peanuts. It is commonly taken as a dietary supplement for extending life and reducing the risk of cancer and heart disease; however, there is no strong evidence supporting its efficacy. Nevertheless, phenolic compounds such as flavonoids are in general thought to have beneficial effects for humans. Certain studies have shown that flavonoids have direct antibiotic activity. A number of in vitro and limited in vivo studies have shown that flavonoids such as quercetin have synergistic activity with antibiotics and are able to suppress bacterial loads. Digoxin is a cardiac glycoside first derived by William Withering in 1785 from the foxglove (Digitalis) plant. It is typically used to treat heart conditions such as atrial fibrillation, atrial flutter or heart failure. Digoxin can, however, have side effects such as nausea, bradycardia, diarrhea or even life-threatening arrhythmia. Fungal secondary metabolites The three main classes of fungal secondary metabolites are: polyketides, nonribosomal peptides and terpenes. Although fungal secondary metabolites are not required for growth, they play an essential role in the survival of fungi in their ecological niche. The best-known fungal secondary metabolite is penicillin, discovered by Alexander Fleming in 1928. Later, in 1945, Fleming, alongside Ernst Chain and Howard Florey, received a Nobel Prize for its discovery, which was pivotal in reducing the number of deaths in World War II by over 100,000.
Examples of other fungal secondary metabolites are: Lovastatin, a polyketide from e.g. Pleurotus ostreatus, oyster mushrooms. Aflatoxin B1, a polyketide from Aspergillus flavus. Ciclosporin, a non-ribosomal cyclic peptide from Tolypocladium inflatum. Lovastatin was the first secondary metabolite approved by the FDA to lower cholesterol levels. Lovastatin occurs naturally in low concentrations in oyster mushrooms, red yeast rice, and Pu-erh tea. Lovastatin's mode of action is competitive inhibition of HMG-CoA reductase, the rate-limiting enzyme responsible for converting HMG-CoA to mevalonate. Fungal secondary metabolites can also be dangerous to humans. Claviceps purpurea, a member of the ergot group of fungi typically growing on rye, can cause death when ingested. The build-up of poisonous alkaloids found in C. purpurea leads to symptoms such as seizures and spasms, diarrhea, paresthesias, itching, psychosis, and gangrene. Currently, removal of ergot bodies is done by putting the rye in a brine solution: healthy grains sink, while infected ones float. Bacterial secondary metabolites Bacterial production of secondary metabolites starts in the stationary phase as a consequence of a lack of nutrients or in response to environmental stress. Secondary metabolite synthesis in bacteria is not essential for their growth; however, it allows them to interact better with their ecological niche. The main synthetic pathways of secondary metabolite production in bacteria are the β-lactam, oligosaccharide, shikimate, polyketide, and non-ribosomal pathways. Many bacterial secondary metabolites are toxic to mammals. When secreted, these poisonous compounds are known as exotoxins, whereas those found in the prokaryotic cell wall are endotoxins. Examples of bacterial secondary metabolites are: Phenazines Pyocyanin, from Pseudomonas aeruginosa. Other phenazines from Pseudomonas spp. and Streptomyces spp. Polyketides Avermectin, from Streptomyces avermitilis. Epothilones, macrolactones from the soil-dwelling myxobacterium Sorangium cellulosum. Erythromycin, from Saccharopolyspora erythraea. Nystatin, from Streptomyces noursei. Rifamycin, from Amycolatopsis rifamycinica. Nonribosomal peptides Bacitracin, from Bacillus subtilis (Tracy strain). Gramicidin, from Brevibacillus brevis. Polymyxin, from Paenibacillus polymyxa. Ramoplanin, from Actinoplanes strain ATCC 33076. Teicoplanins, from Actinoplanes teichomyceticus. Vancomycin, from the soil bacterium Amycolatopsis orientalis. Ribosomal peptides Microcins, bacteriocins such as microcin V from Escherichia coli. Thiostrepton, from several strains of streptomycetes, e.g. Streptomyces azureus. Glucosides Nojirimycin, an iminosugar from several Streptomyces species. Alkaloids Tetrodotoxin, a neurotoxin produced by Pseudoalteromonas and other bacteria living in symbiosis with animals such as e.g. pufferfish. Terpenoids Carotenoids, pigments produced by different species of bacteria, such as Micrococcus sp. Strepsesquitriol, a compound produced by Streptomyces sp. that can reduce inflammation without being toxic to cells, making it a promising candidate for developing anti-inflammatory medicines. Micromonohalimane B, a diterpene identified from Micromonospora sp., showing moderate antibacterial activity against some antibiotic-resistant Gram-positive bacteria. Cyclomarins, potent anti-inflammatory and antiviral compounds produced by a marine Streptomyces, with strong cytotoxicity against cancer cell lines and activity against herpesviruses.
Archaea secondary metabolites Archaea are capable of producing a variety of secondary metabolites, which may have significant biotechnological applications. Nevertheless, the biosynthetic pathways for secondary metabolites in archaea are less well understood than those in bacteria. Notably, archaea often lack some biosynthesis genes commonly present in bacteria, which suggests that they may possess unique metabolic pathways for synthesizing these compounds. Extracellular polymeric substances Extracellular polymeric substances can effectively adsorb and degrade hazardous organic chemicals. While these compounds are produced by various organisms, archaea are particularly promising for wastewater treatment due to their high tolerance to saline concentrations and their ability to grow anaerobically. Biotechnological approaches Selective breeding was one of the first biotechnological techniques used to reduce unwanted secondary metabolites in food, such as naringin, which causes bitterness in grapefruit. In some cases, increasing the content of secondary metabolites in a plant is the desired outcome. Traditionally this was done using in vitro plant tissue culture techniques, which allow control of growth conditions, mitigate the seasonality of plants, and protect them from parasites and harmful microbes. Synthesis of secondary metabolites can be further enhanced by introducing elicitors, such as jasmonic acid, UV-B, or ozone, into a plant tissue culture. These compounds induce stress in the plant, leading to increased production of secondary metabolites. To further increase the yield of secondary metabolites, new approaches have been developed. A novel approach used by Evolva uses recombinant yeast S. cerevisiae strains to produce secondary metabolites normally found in plants. The first chemical compound successfully synthesised with Evolva's platform was vanillin, widely used in the food and beverage industry as a flavouring. The process involves inserting the gene for the desired secondary metabolite into an artificial chromosome in the recombinant yeast, leading to synthesis of vanillin. Currently Evolva produces a wide array of chemicals, such as stevia, resveratrol, and nootkatone. Nagoya protocol With the development of recombinant technologies, the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity was signed in 2010. The protocol regulates the conservation and protection of genetic resources to prevent the exploitation of smaller and poorer countries. If genetic, protein, or small-molecule resources sourced from biodiverse countries become profitable, a compensation scheme ensures that benefits are shared with the countries of origin. See also Chemical ecology Hairy root culture, a strategy used in plant tissue culture to produce commercially viable quantities of valuable secondary metabolites Plant physiology Volatile organic compound Cetoniacytone A References External links Chemical ecology Ecology Evolutionary biology
Secondary metabolite
[ "Chemistry", "Biology" ]
3,843
[ "Evolutionary biology", "Chemical ecology", "Secondary metabolites", "Ecology", "Biochemistry", "Metabolism" ]
226,424
https://en.wikipedia.org/wiki/Four-vector
In special relativity, a four-vector (or 4-vector, sometimes Lorentz vector) is an object with four components, which transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the (½, ½) representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts (a change by a constant velocity to another inertial reference frame). Four-vectors describe, for instance, position in spacetime modeled as Minkowski space, a particle's four-momentum, the amplitude of the electromagnetic four-potential at a point in spacetime, and the elements of the subspace spanned by the gamma matrices inside the Dirac algebra. The Lorentz group may be represented by 4×4 matrices Λ. The action of a Lorentz transformation on a general contravariant four-vector A (like the examples above), regarded as a column vector with Cartesian coordinates with respect to an inertial frame in the entries, is given by (matrix multiplication) A′ = ΛA, where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the corresponding covariant vectors. These transform according to the rule A′ = (Λ⁻¹)ᵀA, where ᵀ denotes the matrix transpose. This rule is different from the above rule. It corresponds to the dual representation of the standard representation. However, for the Lorentz group the dual of any representation is equivalent to the original representation. Thus the objects with covariant indices are four-vectors as well. For an example of a well-behaved four-component object in special relativity that is not a four-vector, see bispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by a representation other than the standard representation. In this case, the rule reads ψ′ = S(Λ)ψ, where S(Λ) is a 4×4 matrix other than Λ. Similar remarks apply to objects with fewer or more components that are well-behaved under Lorentz transformations. These include scalars, spinors, tensors and spinor-tensors. The article considers four-vectors in the context of special relativity. Although the concept of four-vectors also extends to general relativity, some of the results stated in this article require modification in general relativity. Notation The notations in this article are: lowercase bold for three-dimensional vectors, hats for three-dimensional unit vectors, capital bold for four-dimensional vectors (except for the four-gradient), and tensor index notation. Four-vector algebra Four-vectors in a real-valued basis A four-vector A is a vector with a "timelike" component and three "spacelike" components, and can be written in various equivalent notations, for example $\mathbf{A} = (A^0, A^1, A^2, A^3) = A^\alpha \mathbf{E}_\alpha$, where Aα is the magnitude component and Eα is the basis vector component; note that both are necessary to make a vector, and that when Aα is seen alone, it refers strictly to the components of the vector. The upper indices indicate contravariant components. Here the standard convention is that Latin indices take values for spatial components, so that i = 1, 2, 3, and Greek indices take values for space and time components, so α = 0, 1, 2, 3, used with the summation convention.
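To make the component conventions above concrete, here is a minimal sketch in Python (not part of the original article; the function name and the NumPy dependency are illustrative assumptions) that stores the contravariant components Aα as a plain array, with index 0 timelike and indices 1–3 spacelike:

```python
# A minimal illustrative sketch, assuming NumPy is available.
import numpy as np

def four_vector(ct, x, y, z):
    """Contravariant components (A^0, A^1, A^2, A^3), index 0 timelike."""
    return np.array([ct, x, y, z], dtype=float)

A = four_vector(ct=5.0, x=1.0, y=2.0, z=3.0)
print(A[0])   # timelike component A^0
print(A[1:])  # spacelike components A^i, i = 1, 2, 3
```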
The split between the time component and the spatial components is a useful one to make when determining contractions of one four vector with other tensor quantities, such as for calculating Lorentz invariants in inner products (examples are given below), or raising and lowering indices. In special relativity, the spacelike basis E1, E2, E3 and components A1, A2, A3 are often Cartesian basis and components: although, of course, any other basis and components may be used, such as spherical polar coordinates or cylindrical polar coordinates, or any other orthogonal coordinates, or even general curvilinear coordinates. Note the coordinate labels are always subscripted as labels and are not indices taking numerical values. In general relativity, local curvilinear coordinates in a local basis must be used. Geometrically, a four-vector can still be interpreted as an arrow, but in spacetime, not just space. In relativity, the arrows are drawn as part of a Minkowski diagram (also called a spacetime diagram). In this article, four-vectors will be referred to simply as vectors. It is also customary to represent the bases by column vectors: so that: The relation between the covariant and contravariant coordinates is through the Minkowski metric tensor (referred to as the metric), η, which raises and lowers indices as follows: $A_\mu = \eta_{\mu\nu} A^\nu$, and in various equivalent notations the covariant components are: $\mathbf{A} = (A_0, A_1, A_2, A_3)$, where the lowered index indicates it to be covariant. Often the metric is diagonal, as is the case for orthogonal coordinates (see line element), but not in general curvilinear coordinates. The bases can be represented by row vectors: so that: The motivation for the above conventions is that the inner product is a scalar, see below for details. Lorentz transformation Given two inertial or rotated frames of reference, a four-vector is defined as a quantity which transforms according to the Lorentz transformation matrix Λ: $\mathbf{A}' = \Lambda \mathbf{A}$. In index notation, the contravariant and covariant components transform according to, respectively: $A'^\mu = \Lambda^\mu{}_\nu A^\nu$ and $A'_\mu = \Lambda_\mu{}^\nu A_\nu$, in which the matrix Λ has components $\Lambda^\mu{}_\nu$ in row μ and column ν, and the matrix $(\Lambda^{-1})^\mathrm{T}$ has components $\Lambda_\mu{}^\nu$ in row μ and column ν. For background on the nature of this transformation definition, see tensor. All four-vectors transform in the same way, and this can be generalized to four-dimensional relativistic tensors; see special relativity. Pure rotations about an arbitrary axis For two frames rotated by a fixed angle θ about an axis defined by the unit vector $\hat{\mathbf{n}}$, without any boosts, the matrix Λ has components given by: where δij is the Kronecker delta, and εijk is the three-dimensional Levi-Civita symbol. The spacelike components of four-vectors are rotated, while the timelike components remain unchanged. For the case of rotations about the z-axis only, the spacelike part of the Lorentz matrix reduces to the rotation matrix about the z-axis: Pure boosts in an arbitrary direction For two frames moving at constant relative three-velocity v (not four-velocity, see below), it is convenient to denote and define the relative velocity in units of c by: $\boldsymbol{\beta} = \mathbf{v}/c$. Then without rotations, the matrix Λ has components given by: $\Lambda^0{}_0 = \gamma$, $\Lambda^0{}_i = \Lambda^i{}_0 = -\gamma\beta_i$, $\Lambda^i{}_j = \delta_{ij} + \frac{\gamma - 1}{\beta^2}\beta_i\beta_j$, where the Lorentz factor is defined by: $\gamma = \frac{1}{\sqrt{1 - \boldsymbol{\beta}\cdot\boldsymbol{\beta}}}$, and δij is the Kronecker delta. Contrary to the case for pure rotations, the spacelike and timelike components are mixed together under boosts.
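As a hedged illustration of the boost components just given (a sketch only; the helper name and the example numbers are assumptions, not the article's notation), the general boost matrix can be assembled numerically:

```python
import numpy as np

def boost_matrix(beta):
    """Pure-boost Lorentz matrix for relative velocity v = beta*c (3-vector)."""
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta                      # beta . beta
    if not 0.0 <= b2 < 1.0:
        raise ValueError("|beta| must satisfy 0 <= |beta| < 1")
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta   # time-space mixing terms
    if b2 > 0.0:
        L[1:, 1:] += (gamma - 1.0) * np.outer(beta, beta) / b2
    return L

A = np.array([5.0, 1.0, 2.0, 3.0])           # contravariant components
A_prime = boost_matrix([0.6, 0.0, 0.0]) @ A  # components in the boosted frame
```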
For the case of a boost in the x-direction only, the matrix reduces to: $\Lambda = \begin{pmatrix} \cosh\phi & -\sinh\phi & 0 & 0 \\ -\sinh\phi & \cosh\phi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$, where the rapidity φ has been used, written in terms of the hyperbolic functions: $\gamma = \cosh\phi$, $\gamma\beta = \sinh\phi$. This Lorentz matrix illustrates the boost to be a hyperbolic rotation in four dimensional spacetime, analogous to the circular rotation above in three-dimensional space. Properties Linearity Four-vectors have the same linearity properties as Euclidean vectors in three dimensions. They can be added in the usual entrywise way: $\mathbf{A} + \mathbf{B} = (A^0 + B^0, A^1 + B^1, A^2 + B^2, A^3 + B^3)$, and similarly scalar multiplication by a scalar λ is defined entrywise by: $\lambda\mathbf{A} = (\lambda A^0, \lambda A^1, \lambda A^2, \lambda A^3)$. Then subtraction is the inverse operation of addition, defined entrywise by: $\mathbf{A} - \mathbf{B} = (A^0 - B^0, A^1 - B^1, A^2 - B^2, A^3 - B^3)$. Minkowski tensor Applying the Minkowski tensor ημν to two four-vectors A and B, writing the result in dot product notation, we have, using Einstein notation: $\mathbf{A}\cdot\mathbf{B} = A^\mu \eta_{\mu\nu} B^\nu$ in special relativity. The dot product of the basis vectors is the Minkowski metric, as opposed to the Kronecker delta as in Euclidean space. It is convenient to rewrite the definition in matrix form: $\mathbf{A}\cdot\mathbf{B} = \mathbf{A}^\mathrm{T}\eta\,\mathbf{B}$, in which case ημν above is the entry in row μ and column ν of the Minkowski metric as a square matrix. The Minkowski metric is not a Euclidean metric, because it is indefinite (see metric signature). A number of other expressions can be used because the metric tensor can raise and lower the components of A or B. For contra/co-variant components of A and co/contra-variant components of B, we have: $\mathbf{A}\cdot\mathbf{B} = A^\mu B_\mu = A_\nu B^\nu$, so in the matrix notation: while for A and B each in covariant components: with a similar matrix expression to the above. Applying the Minkowski tensor to a four-vector A with itself we get: $\mathbf{A}\cdot\mathbf{A} = A^\mu \eta_{\mu\nu} A^\nu$, which, depending on the case, may be considered the square, or its negative, of the length of the vector. Following are two common choices for the metric tensor in the standard basis (essentially Cartesian coordinates). If orthogonal coordinates are used, there would be scale factors along the diagonal part of the spacelike part of the metric, while for general curvilinear coordinates the entire spacelike part of the metric would have components dependent on the curvilinear basis used. Standard basis, (+−−−) signature The (+−−−) metric signature is sometimes called the "mostly minus" convention, or the "west coast" convention. In the (+−−−) metric signature, evaluating the summation over indices gives: $\mathbf{A}\cdot\mathbf{B} = A^0 B^0 - A^1 B^1 - A^2 B^2 - A^3 B^3$, while in matrix form: $\eta = \mathrm{diag}(1, -1, -1, -1)$. It is a recurring theme in special relativity to take the expression $\mathbf{A}\cdot\mathbf{B} = C$ in one reference frame, where C is the value of the inner product in this frame, and: $\mathbf{A}'\cdot\mathbf{B}' = C'$ in another frame, in which C′ is the value of the inner product in this frame. Then since the inner product is an invariant, these must be equal: that is: $C = C'$. Considering that physical quantities in relativity are four-vectors, this equation has the appearance of a "conservation law", but there is no "conservation" involved. The primary significance of the Minkowski inner product is that for any two four-vectors, its value is invariant for all observers; a change of coordinates does not result in a change in value of the inner product. The components of the four-vectors change from one frame to another; A and A′ are connected by a Lorentz transformation, and similarly for B and B′, although the inner products are the same in all frames. Nevertheless, this type of expression is exploited in relativistic calculations on a par with conservation laws, since the magnitudes of components can be determined without explicitly performing any Lorentz transformations.
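A quick numerical check of this frame-invariance, in the (+−−−) convention (a sketch, not the article's own; the x-boost values β = 0.6 and γ = 1.25 and the sample components are illustrative):

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, (+---) signature

def minkowski_dot(A, B):
    """Inner product A . B = A^T eta B."""
    return A @ ETA @ B

# x-direction boost with illustrative beta = 0.6, so gamma = 1.25
g, gb = 1.25, 1.25 * 0.6
L = np.array([[  g, -gb, 0.0, 0.0],
              [-gb,   g, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

A = np.array([4.0, 1.0, 0.5, -2.0])
B = np.array([2.0, -1.0, 3.0, 0.0])
# the inner product is unchanged by the Lorentz transformation:
assert np.isclose(minkowski_dot(A, B), minkowski_dot(L @ A, L @ B))

def classify(A):
    """Timelike/spacelike/null classification in the (+---) convention."""
    s = minkowski_dot(A, A)
    return "timelike" if s > 0 else ("spacelike" if s < 0 else "null")
```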
A particular example is with energy and momentum in the energy-momentum relation derived from the four-momentum vector (see also below). In this signature we have: $\|\mathbf{A}\|^2 = (A^0)^2 - (A^1)^2 - (A^2)^2 - (A^3)^2$. With the signature (+−−−), four-vectors may be classified as either spacelike if $\mathbf{A}\cdot\mathbf{A} < 0$, timelike if $\mathbf{A}\cdot\mathbf{A} > 0$, and null vectors if $\mathbf{A}\cdot\mathbf{A} = 0$. Standard basis, (−+++) signature The (−+++) metric signature is sometimes called the "east coast" convention. Some authors define η with the opposite sign, in which case we have the (−+++) metric signature. Evaluating the summation with this signature: $\mathbf{A}\cdot\mathbf{B} = -A^0 B^0 + A^1 B^1 + A^2 B^2 + A^3 B^3$, while the matrix form is: $\eta = \mathrm{diag}(-1, 1, 1, 1)$. Note that in this case, in one frame: $\mathbf{A}\cdot\mathbf{B} = -C$, while in another: $\mathbf{A}'\cdot\mathbf{B}' = -C'$, so that: $-C = -C'$, which is equivalent to the above expression for C in terms of A and B. Either convention will work. With the Minkowski metric defined in the two ways above, the only difference between covariant and contravariant four-vector components are signs, therefore the signs depend on which sign convention is used. We have: $A_0 = A^0$, $A_i = -A^i$ in the (+−−−) convention, and $A_0 = -A^0$, $A_i = A^i$ in the (−+++) convention. With the signature (−+++), four-vectors may be classified as either spacelike if $\mathbf{A}\cdot\mathbf{A} > 0$, timelike if $\mathbf{A}\cdot\mathbf{A} < 0$, and null if $\mathbf{A}\cdot\mathbf{A} = 0$. Dual vectors Applying the Minkowski tensor is often expressed as the effect of the dual vector of one vector on the other: $\mathbf{A}\cdot\mathbf{B} = A^*(\mathbf{B}) = A_\nu B^\nu$. Here the Aνs are the components of the dual vector A* of A in the dual basis and called the covariant coordinates of A, while the original Aν components are called the contravariant coordinates. Four-vector calculus Derivatives and differentials In special relativity (but not general relativity), the derivative of a four-vector with respect to a scalar λ (invariant) is itself a four-vector. It is also useful to take the differential of the four-vector, dA, and divide it by the differential of the scalar, dλ: $\frac{d\mathbf{A}}{d\lambda}$, where the contravariant components are: $\frac{dA^\alpha}{d\lambda}$, while the covariant components are: $\frac{dA_\alpha}{d\lambda}$. In relativistic mechanics, one often takes the differential of a four-vector and divides by the differential in proper time (see below). Fundamental four-vectors Four-position A point in Minkowski space is a time and spatial position, called an "event", or sometimes the position four-vector or four-position or 4-position, described in some reference frame by a set of four coordinates: $\mathbf{R} = (ct, \mathbf{r})$, where r is the three-dimensional space position vector. If r is a function of coordinate time t in the same frame, i.e. r = r(t), this corresponds to a sequence of events as t varies. The definition R0 = ct ensures that all the coordinates have the same dimension (of length) and units (in the SI, meters). These coordinates are the components of the position four-vector for the event. The displacement four-vector is defined to be an "arrow" linking two events: $\Delta\mathbf{R} = (c\Delta t, \Delta\mathbf{r})$. For the differential four-position on a world line we have, using a norm notation: $\|d\mathbf{R}\|^2 = (ds)^2 = (c\,d\tau)^2$, defining the differential line element ds and differential proper time increment dτ, but this "norm" is also: $\|d\mathbf{R}\|^2 = (c\,dt)^2 - d\mathbf{r}\cdot d\mathbf{r}$, so that: $(c\,d\tau)^2 = (c\,dt)^2 - d\mathbf{r}\cdot d\mathbf{r}$. When considering physical phenomena, differential equations arise naturally; however, when considering space and time derivatives of functions, it is unclear which reference frame these derivatives are taken with respect to. It is agreed that time derivatives are taken with respect to the proper time τ. As proper time is an invariant, this guarantees that the proper-time-derivative of any four-vector is itself a four-vector. It is then important to find a relation between this proper-time-derivative and another time derivative (using the coordinate time t of an inertial reference frame).
This relation is provided by taking the above differential invariant spacetime interval, then dividing by (cdt)2 to obtain: $\left(\frac{d\tau}{dt}\right)^2 = 1 - \frac{\mathbf{u}\cdot\mathbf{u}}{c^2}$, where u = dr/dt is the coordinate 3-velocity of an object measured in the same frame as the coordinates x, y, z, and coordinate time t, and $\gamma(\mathbf{u}) = \frac{1}{\sqrt{1 - \mathbf{u}\cdot\mathbf{u}/c^2}}$ is the Lorentz factor. This provides a useful relation between the differentials in coordinate time and proper time: $dt = \gamma(\mathbf{u})\,d\tau$. This relation can also be found from the time transformation in the Lorentz transformations. Important four-vectors in relativity theory can be defined by applying this differential relation. Four-gradient Considering that partial derivatives are linear operators, one can form a four-gradient from the partial time derivative ∂/∂t and the spatial gradient ∇. Using the standard basis, in index and abbreviated notations, the contravariant components are: $\partial^\alpha = \left(\frac{1}{c}\frac{\partial}{\partial t}, -\nabla\right)$. Note the basis vectors are placed in front of the components, to prevent confusion between taking the derivative of the basis vector, or simply indicating the partial derivative is a component of this four-vector. The covariant components are: $\partial_\alpha = \left(\frac{1}{c}\frac{\partial}{\partial t}, \nabla\right)$. Since this is an operator, it doesn't have a "length", but evaluating the inner product of the operator with itself gives another operator: $\partial^\alpha \partial_\alpha = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2$, called the D'Alembert operator. Kinematics Four-velocity The four-velocity of a particle is defined by: $\mathbf{U} = \frac{d\mathbf{R}}{d\tau} = \gamma(\mathbf{u})(c, \mathbf{u})$. Geometrically, U is a normalized vector tangent to the world line of the particle. Using the differential of the four-position, the magnitude of the four-velocity can be obtained: in short, the magnitude of the four-velocity for any object is always a fixed constant: $\|\mathbf{U}\|^2 = c^2$. The norm is also: $\|\mathbf{U}\|^2 = \gamma(\mathbf{u})^2\left(c^2 - \mathbf{u}\cdot\mathbf{u}\right)$, so that: $c^2 = \gamma(\mathbf{u})^2\left(c^2 - \mathbf{u}\cdot\mathbf{u}\right)$, which reduces to the definition of the Lorentz factor. Units of four-velocity are m/s in SI and 1 in the geometrized unit system. Four-velocity is a contravariant vector. Four-acceleration The four-acceleration is given by: $\mathbf{A} = \frac{d\mathbf{U}}{d\tau}$, where a = du/dt is the coordinate 3-acceleration. Since the magnitude of U is a constant, the four acceleration is orthogonal to the four velocity, i.e. the Minkowski inner product of the four-acceleration and the four-velocity is zero: $\mathbf{A}\cdot\mathbf{U} = 0$, which is true for all world lines. The geometric meaning of four-acceleration is the curvature vector of the world line in Minkowski space. Dynamics Four-momentum For a massive particle of rest mass (or invariant mass) m0, the four-momentum is given by: $\mathbf{P} = m_0\mathbf{U} = \left(\frac{E}{c}, \mathbf{p}\right)$, where the total energy of the moving particle is: $E = \gamma(\mathbf{u})m_0 c^2$, and the total relativistic momentum is: $\mathbf{p} = \gamma(\mathbf{u})m_0\mathbf{u}$. Taking the inner product of the four-momentum with itself: $\|\mathbf{P}\|^2 = m_0^2 c^2$, and also: $\|\mathbf{P}\|^2 = \frac{E^2}{c^2} - \mathbf{p}\cdot\mathbf{p}$, which leads to the energy–momentum relation: $E^2 = (|\mathbf{p}|c)^2 + (m_0 c^2)^2$. This last relation is useful in relativistic mechanics, essential in relativistic quantum mechanics and relativistic quantum field theory, all with applications to particle physics. Four-force The four-force acting on a particle is defined analogously to the 3-force as the time derivative of 3-momentum in Newton's second law: $\mathbf{F} = \frac{d\mathbf{P}}{d\tau}$, where P is the power transferred to move the particle, and f is the 3-force acting on the particle. For a particle of constant invariant mass m0, this is equivalent to $\mathbf{F} = m_0\mathbf{A}$. An invariant derived from the four-force is: $\mathbf{F}\cdot\mathbf{U} = 0$, from the above result. Thermodynamics Four-heat flux The four-heat flux vector field is essentially similar to the 3d heat flux vector field q = −k∇T, in the local frame of the fluid, where T is absolute temperature and k is thermal conductivity. Four-baryon number flux The flux of baryons is: $\mathbf{S} = n\mathbf{U}$, where n is the number density of baryons in the local rest frame of the baryon fluid (positive values for baryons, negative for antibaryons), and U the four-velocity field (of the fluid) as above.
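Returning to the four-momentum defined in the dynamics section above, the energy–momentum relation can be verified numerically; this is an illustrative sketch only (the electron rest mass and the 0.8c speed are example inputs, not from the article):

```python
import numpy as np

c = 299_792_458.0  # speed of light, m/s

def four_momentum(m0, u):
    """P = m0 * U = (E/c, p) for rest mass m0 and 3-velocity u (in m/s)."""
    u = np.asarray(u, dtype=float)
    gamma = 1.0 / np.sqrt(1.0 - (u @ u) / c**2)
    return m0 * gamma * np.array([c, u[0], u[1], u[2]])

m0 = 9.109e-31                        # illustrative: electron rest mass, kg
P = four_momentum(m0, [0.8 * c, 0.0, 0.0])
E, p = P[0] * c, P[1:]                # total energy and relativistic 3-momentum
# energy-momentum relation E^2 = (|p|c)^2 + (m0 c^2)^2
assert np.isclose(E**2, (p @ p) * c**2 + (m0 * c**2) ** 2)
```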
Four-entropy The four-entropy vector is defined by: where s is the entropy per baryon, and T the absolute temperature, in the local rest frame of the fluid. Electromagnetism Examples of four-vectors in electromagnetism include the following. Four-current The electromagnetic four-current (or more correctly a four-current density) is defined by $\mathbf{J} = (\rho c, \mathbf{j})$, formed from the current density j and charge density ρ. Four-potential The electromagnetic four-potential (or more correctly a four-EM vector potential) is defined by $\mathbf{A} = \left(\frac{\phi}{c}, \mathbf{a}\right)$, formed from the vector potential a and the scalar potential ϕ. The four-potential is not uniquely determined, because it depends on a choice of gauge. In the wave equation for the electromagnetic field: in vacuum, $\Box\mathbf{A} = 0$; with a four-current source and using the Lorenz gauge condition $\partial_\alpha A^\alpha = 0$, $\Box\mathbf{A} = \mu_0\mathbf{J}$. Waves Four-frequency A photonic plane wave can be described by the four-frequency, defined as $\mathbf{N} = f(1, \hat{\mathbf{n}})$, where f is the frequency of the wave and $\hat{\mathbf{n}}$ is a unit vector in the travel direction of the wave. Now: $\|\mathbf{N}\|^2 = f^2(1 - \hat{\mathbf{n}}\cdot\hat{\mathbf{n}}) = 0$, so the four-frequency of a photon is always a null vector. Four-wavevector The quantities reciprocal to time and space are the angular frequency ω and angular wave vector k, respectively. They form the components of the four-wavevector or wave four-vector: $\mathbf{K} = \left(\frac{\omega}{c}, \mathbf{k}\right)$. The wave four-vector has coherent derived unit of reciprocal meters in the SI. A wave packet of nearly monochromatic light can be described by: $\mathbf{K} = \frac{2\pi}{c}\mathbf{N}$. The de Broglie relations then showed that the four-wavevector applied to matter waves as well as to light waves: $\mathbf{P} = \hbar\mathbf{K}$, yielding $E = \hbar\omega$ and $\mathbf{p} = \hbar\mathbf{k}$, where ħ is the Planck constant divided by 2π. The square of the norm is: $\|\mathbf{K}\|^2 = \left(\frac{\omega}{c}\right)^2 - \mathbf{k}\cdot\mathbf{k}$, and by the de Broglie relation: $\|\mathbf{K}\|^2 = \frac{\|\mathbf{P}\|^2}{\hbar^2} = \left(\frac{m_0 c}{\hbar}\right)^2$, we have the matter wave analogue of the energy–momentum relation: $\left(\frac{\omega}{c}\right)^2 - \mathbf{k}\cdot\mathbf{k} = \left(\frac{m_0 c}{\hbar}\right)^2$. Note that for massless particles, in which case m0 = 0, we have: $\left(\frac{\omega}{c}\right)^2 = \mathbf{k}\cdot\mathbf{k}$, or $\omega = c|\mathbf{k}|$. Note this is consistent with the above case; for photons with a 3-wavevector of modulus ω/c in the direction of wave propagation defined by the unit vector $\hat{\mathbf{n}}$. Quantum theory Four-probability current In quantum mechanics, the four-probability current or probability four-current is analogous to the electromagnetic four-current: $\mathbf{J} = (\rho c, \mathbf{j})$, where ρ is the probability density function corresponding to the time component, and j is the probability current vector. In non-relativistic quantum mechanics, this current is always well defined because the expressions for density and current are positive definite and can admit a probability interpretation. In relativistic quantum mechanics and quantum field theory, it is not always possible to find a current, particularly when interactions are involved. Replacing the energy by the energy operator and the momentum by the momentum operator in the four-momentum, one obtains the four-momentum operator, used in relativistic wave equations. Four-spin The four-spin of a particle is defined in the rest frame of a particle to be $\mathbf{S} = (0, \mathbf{s})$, where s is the spin pseudovector. In quantum mechanics, not all three components of this vector are simultaneously measurable, only one component is. The timelike component is zero in the particle's rest frame, but not in any other frame. This component can be found from an appropriate Lorentz transformation. The norm squared is the (negative of the) magnitude squared of the spin, and according to quantum mechanics we have $\|\mathbf{S}\|^2 = -|\mathbf{s}|^2 = -\hbar^2 s(s+1)$. This value is observable and quantized, with s the spin quantum number (not the magnitude of the spin vector).
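The null property of the four-frequency and the de Broglie form of the wave four-vector, both defined in the Waves section above, can be checked numerically; a sketch under the (+−−−) convention (the frequency value is illustrative):

```python
import numpy as np

c = 299_792_458.0
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def wave_four_vector(omega, n):
    """K = (omega/c, k) with |k| = omega/c along the unit direction n (light)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.array([omega / c, *((omega / c) * n)])

K = wave_four_vector(omega=2 * np.pi * 5.0e14, n=[1.0, 1.0, 0.0])
# for a light wave |k| = omega/c, so K is a null vector:
assert np.isclose(K @ ETA @ K, 0.0)

hbar = 1.054_571_817e-34   # reduced Planck constant, J s
P = hbar * K               # de Broglie: four-momentum of the photon
```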
Other formulations Four-vectors in the algebra of physical space A four-vector A can also be defined using the Pauli matrices as a basis, again in various equivalent notations: $A = A^0\sigma_0 + A^1\sigma_1 + A^2\sigma_2 + A^3\sigma_3$, or explicitly: $A = \begin{pmatrix} A^0 + A^3 & A^1 - iA^2 \\ A^1 + iA^2 & A^0 - A^3 \end{pmatrix}$, and in this formulation, the four-vector is represented as a Hermitian matrix (the matrix transpose and complex conjugate of the matrix leaves it unchanged), rather than a real-valued column or row vector. The determinant of the matrix is the modulus of the four-vector, so the determinant is an invariant: $\det A = (A^0)^2 - (A^1)^2 - (A^2)^2 - (A^3)^2$. This idea of using the Pauli matrices as basis vectors is employed in the algebra of physical space, an example of a Clifford algebra. Four-vectors in spacetime algebra In spacetime algebra, another example of Clifford algebra, the gamma matrices can also form a basis. (They are also called the Dirac matrices, owing to their appearance in the Dirac equation). There is more than one way to express the gamma matrices, detailed in that main article. The Feynman slash notation is a shorthand for a four-vector A contracted with the gamma matrices: $A\!\!\!/ = A_\alpha\gamma^\alpha$. The four-momentum contracted with the gamma matrices is an important case in relativistic quantum mechanics and relativistic quantum field theory. In the Dirac equation and other relativistic wave equations, terms of the form: $P\!\!\!/ = P_\alpha\gamma^\alpha$ appear, in which the energy and momentum components are replaced by their respective operators. See also Basic introduction to the mathematics of curved spacetime Dust (relativity) for the number-flux four-vector Minkowski space Paravector Relativistic mechanics Wave vector References Rindler, W. Introduction to Special Relativity (2nd edn.) (1991) Clarendon Press Oxford Minkowski spacetime Theory of relativity Concepts in physics Vectors (mathematics and physics)
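To illustrate the Pauli-matrix representation described in the section above (a sketch; the helper names are assumptions), one can build the Hermitian matrix for a four-vector and confirm that its determinant reproduces the (+−−−) Minkowski modulus:

```python
import numpy as np

SIGMA = [np.eye(2, dtype=complex),                      # sigma_0 = identity
         np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

def as_matrix(A):
    """Hermitian 2x2 matrix A^0 sigma_0 + A^i sigma_i representing A."""
    return sum(a * s for a, s in zip(A, SIGMA))

A = np.array([4.0, 1.0, 0.5, -2.0])
M = as_matrix(A)
assert np.allclose(M, M.conj().T)   # Hermitian, as stated above
# det M equals the modulus (A^0)^2 - (A^1)^2 - (A^2)^2 - (A^3)^2:
assert np.isclose(np.linalg.det(M).real, A[0]**2 - (A[1:] @ A[1:]))
```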
Four-vector
[ "Physics" ]
4,688
[ "Physical quantities", "Four-vectors", "Vector physical quantities", "nan", "Theory of relativity" ]
226,644
https://en.wikipedia.org/wiki/Unit%20cell
In geometry, biology, mineralogy and solid state physics, a unit cell is a repeating unit formed by the vectors spanning the points of a lattice. Despite its suggestive name, the unit cell (unlike a unit vector, for example) does not necessarily have unit size, or even a particular size at all. Rather, the primitive cell is the closest analogy to a unit vector, since it has a determined size for a given lattice and is the basic building block from which larger cells are constructed. The concept is used particularly in describing crystal structure in two and three dimensions, though it makes sense in all dimensions. A lattice can be characterized by the geometry of its unit cell, which is a section of the tiling (a parallelogram or parallelepiped) that generates the whole tiling using only translations. There are two special cases of the unit cell: the primitive cell and the conventional cell. The primitive cell is a unit cell corresponding to a single lattice point; it is the smallest possible unit cell. In some cases, the full symmetry of a crystal structure is not obvious from the primitive cell, in which case a conventional cell may be used. A conventional cell (which may or may not be primitive) is a unit cell with the full symmetry of the lattice and may include more than one lattice point. The conventional unit cells are parallelotopes in n dimensions. Primitive cell A primitive cell is a unit cell that contains exactly one lattice point. For unit cells generally, lattice points that are shared by cells are counted as fractions of the lattice points contained in each of those cells; so for example a primitive unit cell in three dimensions which has lattice points only at its eight vertices is considered to contain 1/8 of each of them. An alternative conceptualization is to consistently pick only one of the lattice points to belong to the given unit cell (so the other lattice points belong to adjacent unit cells). The primitive translation vectors a1, a2, a3 span a lattice cell of smallest volume for a particular three-dimensional lattice, and are used to define a crystal translation vector T = u1a1 + u2a2 + u3a3, where u1, u2, u3 are integers, translation by which leaves the lattice invariant. That is, for a point r in the lattice, the arrangement of points appears the same from r + T as from r. Since the primitive cell is defined by the primitive axes (vectors) a1, a2, a3, the volume of the primitive cell is given by the parallelepiped from the above axes as V = |a1 · (a2 × a3)|. Usually, primitive cells in two and three dimensions are chosen to take the shape of parallelograms and parallelepipeds, with an atom at each corner of the cell. This choice of primitive cell is not unique, but the volume of primitive cells will always be given by the expression above. Wigner–Seitz cell In addition to the parallelepiped primitive cells, for every Bravais lattice there is another kind of primitive cell called the Wigner–Seitz cell. In the Wigner–Seitz cell, the lattice point is at the center of the cell, and for most Bravais lattices, the shape is not a parallelogram or parallelepiped. This is a type of Voronoi cell. The Wigner–Seitz cell of the reciprocal lattice in momentum space is called the Brillouin zone. Conventional cell For each particular lattice, a conventional cell has been chosen on a case-by-case basis by crystallographers based on convenience of calculation. These conventional cells may have additional lattice points located in the middle of the faces or body of the unit cell.
The number of lattice points, as well as the volume, of the conventional cell is an integer multiple (1, 2, 3, or 4) of that of the primitive cell. Two dimensions For any 2-dimensional lattice, the unit cells are parallelograms, which in special cases may have orthogonal angles, equal lengths, or both. Four of the five two-dimensional Bravais lattices are represented using conventional primitive cells, as shown below. The centered rectangular lattice also has a primitive cell in the shape of a rhombus, but in order to allow easy discrimination on the basis of symmetry, it is represented by a conventional cell which contains two lattice points. Three dimensions For any 3-dimensional lattice, the conventional unit cells are parallelepipeds, which in special cases may have orthogonal angles, or equal lengths, or both. Seven of the fourteen three-dimensional Bravais lattices are represented using conventional primitive cells, as shown below. The other seven Bravais lattices (known as the centered lattices) also have primitive cells in the shape of a parallelepiped, but in order to allow easy discrimination on the basis of symmetry, they are represented by conventional cells which contain more than one lattice point. See also Wigner–Seitz cell Bravais lattice Wallpaper group Space group Notes References Crystallography Mineralogy
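As a numeric illustration of the primitive-cell volume formula and the integer-multiple relation above (a sketch; the face-centered-cubic primitive vectors and the lattice constant of 4.05 Å are illustrative assumptions, not taken from the article):

```python
import numpy as np

def primitive_cell_volume(a1, a2, a3):
    """Volume |a1 . (a2 x a3)| of the cell spanned by the primitive vectors."""
    return abs(np.dot(a1, np.cross(a2, a3)))

# illustrative FCC primitive vectors for a cubic lattice constant a (Angstrom)
a = 4.05
a1 = 0.5 * a * np.array([0.0, 1.0, 1.0])
a2 = 0.5 * a * np.array([1.0, 0.0, 1.0])
a3 = 0.5 * a * np.array([1.0, 1.0, 0.0])
V_prim = primitive_cell_volume(a1, a2, a3)
# the FCC conventional (cubic) cell contains 4 lattice points, so its volume
# a^3 is an integer multiple (4x) of the primitive volume, as stated above:
assert np.isclose(a**3, 4.0 * V_prim)
```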
Unit cell
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
985
[ "Crystallography", "Condensed matter physics", "Materials science" ]
226,856
https://en.wikipedia.org/wiki/Norton%27s%20theorem
In direct-current circuit theory, Norton's theorem, also called the Mayer–Norton theorem, is a simplification that can be applied to networks made of linear time-invariant resistances, voltage sources, and current sources. At a pair of terminals of the network, it can be replaced by a current source and a single resistor in parallel. For alternating current (AC) systems the theorem can be applied to reactive impedances as well as resistances. The Norton equivalent circuit is used to represent any network of linear sources and impedances at a given frequency. Norton's theorem and its dual, Thévenin's theorem, are widely used for circuit analysis simplification and to study a circuit's initial-condition and steady-state response. Norton's theorem was independently derived in 1926 by Siemens & Halske researcher Hans Ferdinand Mayer (1895–1980) and Bell Labs engineer Edward Lawry Norton (1898–1983). To find the Norton equivalent of a linear time-invariant circuit, the Norton current Ino is calculated as the current flowing through the two terminals A and B of the original circuit when they are short-circuited (zero impedance between the terminals). The Norton resistance Rno is found by calculating the output voltage Vo produced at A and B with no resistance or load connected to them; then Rno = Vo / Ino. Equivalently, this is the resistance between the terminals with all (independent) voltage sources short-circuited and independent current sources open-circuited (i.e., each independent source is set to produce zero energy). This is equivalent to calculating the Thévenin resistance. When there are dependent sources, the more general method must be used: the voltage at the terminals is calculated for an injection of a 1 ampere test current at the terminals. This voltage divided by the 1 A current is the Norton impedance Rno (in ohms). This method must be used if the circuit contains dependent sources, but it can be used in all cases even when there are no dependent sources. Example of a Norton equivalent circuit In the example, the total current Itotal is given by: The current through the load is then, using the current divider rule: And the equivalent resistance looking back into the circuit is: So the equivalent circuit is a 3.75 mA current source in parallel with a 2 kΩ resistor. Conversion to a Thévenin equivalent A Norton equivalent circuit is related to the Thévenin equivalent by the equations Rth = Rno and Vth = Ino Rno. An original circuit and its Thévenin and Norton equivalents have the same voltage between the two open-circuited terminals, and the same short-circuited current in between. Queueing theory The passive circuit equivalent of "Norton's theorem" in queueing theory is called the Chandy–Herzog–Woo theorem. In a reversible queueing system, it is often possible to replace an uninteresting subset of queues by a single (FCFS or PS) queue with an appropriately chosen service rate. See also Ohm's law Millman's theorem Source transformation Superposition theorem Thévenin's theorem Maximum power transfer theorem Extra element theorem References External links Norton's theorem at allaboutcircuits.com Circuit theorems Eponymous theorems of physics Linear electronic circuits
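As a small numeric sketch of the procedure above (the helper name is an assumption; the 7.5 V open-circuit voltage is implied by the example's 3.75 mA and 2 kΩ values, since Vo = Ino·Rno):

```python
def norton_from_measurements(v_open, i_short):
    """Norton equivalent (I_no, R_no) from the open-circuit voltage at A-B
    and the short-circuit current between A and B."""
    return i_short, v_open / i_short

i_no, r_no = norton_from_measurements(v_open=7.5, i_short=3.75e-3)
assert abs(r_no - 2000.0) < 1e-9      # matches the 2 kOhm of the example

# conversion to the Thevenin equivalent: R_th = R_no, V_th = I_no * R_no
v_th, r_th = i_no * r_no, r_no
```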
Norton's theorem
[ "Physics" ]
672
[ "Eponymous theorems of physics", "Equations of physics", "Circuit theorems", "Physics theorems" ]
226,864
https://en.wikipedia.org/wiki/Th%C3%A9venin%27s%20theorem
As originally stated in terms of direct-current resistive circuits only, Thévenin's theorem states that "Any linear electrical network containing only voltage sources, current sources and resistances can be replaced at terminals A–B by an equivalent combination of a voltage source Vth in a series connection with a resistance Rth." The equivalent voltage Vth is the voltage obtained at terminals A–B of the network with terminals A–B open circuited. The equivalent resistance Rth is the resistance that the circuit between terminals A and B would have if all ideal voltage sources in the circuit were replaced by a short circuit and all ideal current sources were replaced by an open circuit (i.e., the sources are set to provide zero voltages and currents). If terminals A and B are connected to one another (shorted), then the current flowing from A to B will be Vth/Rth according to the Thévenin equivalent circuit. This means that Rth could alternatively be calculated as Vth divided by the short-circuit current between A and B when they are connected together. In circuit theory terms, the theorem allows any one-port network to be reduced to a single voltage source and a single impedance. The theorem also applies to frequency domain AC circuits consisting of reactive (inductive and capacitive) and resistive impedances. The theorem applies to AC in exactly the same way as to DC, except that resistances are generalized to impedances. The theorem was independently derived in 1853 by the German scientist Hermann von Helmholtz and in 1883 by Léon Charles Thévenin (1857–1926), an electrical engineer with France's national Postes et Télégraphes telecommunications organization. Thévenin's theorem and its dual, Norton's theorem, are widely used to make circuit analysis simpler and to study a circuit's initial-condition and steady-state response. Thévenin's theorem can be used to convert any circuit's sources and impedances to a Thévenin equivalent; use of the theorem may in some cases be more convenient than use of Kirchhoff's circuit laws. A proof of the theorem Various proofs have been given of Thévenin's theorem. Perhaps the simplest of these was the proof in Thévenin's original paper. Not only is that proof elegant and easy to understand, but a consensus exists that Thévenin's proof is both correct and general in its applicability. The proof goes as follows: Consider an active network containing impedances, (constant-) voltage sources and (constant-) current sources. The configuration of the network can be anything. Access to the network is provided by a pair of terminals. Designate the voltage measured between the terminals as V, as shown in the box on the left side of Figure 2. Suppose that the voltage sources within the box are replaced by short circuits, and the current sources by open circuits. If this is done, no voltage appears across the terminals, and it is possible to measure the impedance between the terminals. Call this impedance Z. Now suppose that one attaches some linear network to the terminals of the box, having impedance Z′, as in Figure 2a. We wish to find the current through Z′. The answer is not obvious, since the terminal voltage will not be V after Z′ is connected. Instead, we imagine that we attach, in series with impedance Z′, a source with electromotive force E1 equal to V but directed to oppose V, as shown in Figure 2b. No current will then flow through Z′ since E1 balances V. Next, we insert another source of electromotive force, E2, in series with Z′, where E2 has the same magnitude as E1 but is opposed in direction (see Figure 2c).
The current, i, can be determined as follows: it is the current that would result from E2 acting alone, with all other sources (within the active network and the external network) set to zero. This current is, therefore, i = E2/(Z + Z′), because Z′ is the impedance external to the box and Z the impedance looking into the box when its sources are zero. Finally, we note that E1 and E2 can be removed together without changing the current, and when they are removed, we are back to Figure 2a. Therefore, i is the current we are seeking, i.e. i = V/(Z + Z′), thus completing the proof. Figure 2d shows the Thévenin equivalent circuit. Helmholtz's proof As noted, Thévenin's theorem was first discovered and published by the German scientist Hermann von Helmholtz in 1853, four years before Thévenin's birth. Thévenin's 1883 proof, described above, is nearer in spirit to modern methods of electrical engineering, and this may explain why his name is more commonly associated with the theorem. Helmholtz's earlier formulation of the problem reflects a more general approach that is closer to physics. In his 1853 paper, Helmholtz was concerned with the electromotive properties of "physically extensive conductors", in particular, with animal tissue. He noted that earlier work by physiologist Emil du Bois-Reymond had shown that "every smallest part of a muscle that can be stimulated is capable of producing electrical currents." At this time, experiments were carried out by attaching a galvanometer at two points to a sample of animal tissue and measuring current flow through the external circuit. Since the goal of this work was to understand something about the internal properties of the tissue, Helmholtz wanted to find a way to relate those internal properties to the currents measured externally. Helmholtz's starting point was a result published by Gustav Kirchhoff in 1848. Like Helmholtz, Kirchhoff was concerned with three-dimensional, electrically conducting systems. Kirchhoff considered a system consisting of two parts, which he labelled parts A and B. Part A (which played the part of the "active network" in Fig. 2 above) consisted of a collection of conducting bodies connected end to end, each body characterized by an electromotive force and a resistance. Part B was assumed to be connected to the endpoints of A via two wires. Kirchhoff then showed (p. 195) that "without changing the flow at any point in B, one can substitute for A a conductor in which an electromotive force is located which is equal to the sum of the voltage differences in A, and which has a resistance equal to the summed resistances of the elements of A". In his 1853 paper, Helmholtz acknowledged Kirchhoff's result, but noted that it was only valid in the case that, "as in hydroelectric batteries", there are no closed current curves in A, but rather that all such curves pass through B. He therefore set out to generalize Kirchhoff's result to the case of an arbitrary, three-dimensional distribution of currents and voltage sources within system A. Helmholtz began by providing a more general formulation than had previously been published of the superposition principle, which he expressed (p. 212-213) as follows: If any system of conductors contains electromotive forces at various locations, the electrical voltage at every point in the system through which the current flows is equal to the algebraic sum of those voltages which each of the electromotive forces would produce independently of the others.
And similarly, the components of the current intensity that are parallel to three perpendicular axes are equal to the sum of the corresponding components that belong to the individual forces. Using this theorem, as well as Ohm's law, Helmholtz proved the following three theorems about the relation between the internal voltages and currents of "physical" system A, and the current flowing through the "linear" system B, which was assumed to be attached to A at two points on its surface: From these, Helmholtz derived his final result (p. 222): If a physical conductor with constant electromotive forces in two specific points on its surface is connected to any linear conductor, then in its place one can always substitute a linear conductor with a certain electromotive force and a certain resistance, which in all applied linear conductors would excite exactly the same currents as the physical one. ... The resistance of the linear conductor to be substituted is equal to that of the body when a current is passed through it from the two entry points of the linear conductor. He then noted that his result, derived for a general "physical system", also applied to "linear" (in a geometric sense) circuits like those considered by Kirchhoff: What applies to every physical conductor also applies to the special case of a branched linear current system. Even if two specific points of such a system are connected to any other linear conductors, it behaves compared to them like a linear conductor of certain resistance, the magnitude of which can be found according to the well-known rules for branched lines, and of certain electromotive force, which is given by the voltage difference of the derived points as it existed before the added circuit. This formulation of the theorem is essentially the same as Thévenin's, published 30 years later. Calculating the Thévenin equivalent The Thévenin-equivalent circuit of a linear electrical circuit is a voltage source with voltage Vth in series with a resistance Rth. The Thévenin-equivalent voltage Vth is the open-circuit voltage at the output terminals of the original circuit. When calculating a Thévenin-equivalent voltage, the voltage divider principle is often useful, by declaring one terminal to be Vout and the other terminal to be at the ground point. The Thévenin-equivalent resistance Rth is the resistance measured across points A and B "looking back" into the circuit. The resistance is measured after replacing all voltage- and current-sources with their internal resistances. That means an ideal voltage source is replaced with a short circuit, and an ideal current source is replaced with an open circuit. Resistance can then be calculated across the terminals using the formulae for series and parallel circuits. This method is valid only for circuits with independent sources. If there are dependent sources in the circuit, another method must be used, such as connecting a test source across A and B and calculating the voltage across or current through the test source. As a mnemonic, the Thévenin replacements for voltage and current sources can be remembered as setting the sources' values (meaning their voltage or current) to zero. A zero valued voltage source would create a potential difference of zero volts between its terminals, just like an ideal short circuit would do, with two leads touching; therefore the source is replaced with a short circuit. Similarly, a zero valued current source and an open circuit both pass zero current.
Example In the example, calculating the equivalent voltage: (Notice that R1 is not taken into consideration, as the above calculations are done in an open-circuit condition between A and B; therefore, no current flows through this part, which means there is no current through R1 and therefore no voltage drop along this part.) Calculating the equivalent resistance (the total resistance of two parallel resistors): Conversion to a Norton equivalent A Norton equivalent circuit is related to the Thévenin equivalent by Rno = Rth and Ino = Vth/Rth. Practical limitations Many circuits are only linear over a certain range of values, thus the Thévenin equivalent is valid only within this linear range. The Thévenin equivalent has the equivalent I–V characteristic of an original circuit only from the point of view of a load connecting to the circuit. The power dissipation of the Thévenin equivalent is not necessarily identical to the power dissipation of the real system. However, the power dissipated by an external resistor between the two output terminals is the same regardless of how the internal circuit is implemented. In three-phase circuits In 1933, A. T. Starr published a generalization of Thévenin's theorem in an article in the magazine Institute of Electrical Engineers Journal, titled A New Theorem for Active Networks, which states that any three-terminal active linear network can be substituted by three voltage sources with corresponding impedances, connected in wye or in delta. See also Extra element theorem Maximum power transfer theorem Millman's theorem Source transformation References Further reading First-Order Filters: Shortcut via Thévenin Equivalent Source – showing on p. 4 a complex circuit's simplification via Thévenin's theorem to a first-order low-pass filter and associated voltage divider, time constant and gain. External links Circuit theorems Eponymous theorems of physics Linear electronic circuits
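For a concrete illustration of these two steps (a sketch on a plain voltage divider, not the article's exact figure; the 15 V, 2 kΩ, and 1 kΩ values are illustrative):

```python
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def thevenin_of_divider(v_src, r1, r2):
    """Thevenin equivalent seen across R2 of a source V with series R1
    feeding R2: V_th from the open-circuit divider, R_th with the source
    shorted (R1 in parallel with R2)."""
    v_th = v_src * r2 / (r1 + r2)
    r_th = parallel(r1, r2)
    return v_th, r_th

v_th, r_th = thevenin_of_divider(v_src=15.0, r1=2000.0, r2=1000.0)
print(v_th, r_th)   # 5.0 V and about 666.7 Ohm for these illustrative values
```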
Thévenin's theorem
[ "Physics" ]
2,492
[ "Circuit theorems", "Eponymous theorems of physics", "Equations of physics", "Physics theorems" ]
227,100
https://en.wikipedia.org/wiki/Silver%20nitrate
Silver nitrate is an inorganic compound with chemical formula AgNO3. It is a versatile precursor to many other silver compounds, such as those used in photography. It is far less sensitive to light than the halides. It was once called lunar caustic because silver was called luna by ancient alchemists, who associated silver with the moon. In solid silver nitrate, the silver ions are three-coordinated in a trigonal planar arrangement. Synthesis and structure Albertus Magnus, in the 13th century, documented the ability of nitric acid to separate gold and silver by dissolving the silver. Indeed, silver nitrate can be prepared by dissolving silver in nitric acid followed by evaporation of the solution. The stoichiometry of the reaction depends upon the concentration of nitric acid used. 3 Ag + 4 HNO3 (cold and diluted) → 3 AgNO3 + 2 H2O + NO Ag + 2 HNO3 (hot and concentrated) → AgNO3 + H2O + NO2 The structure of silver nitrate has been examined by X-ray crystallography several times. In the common orthorhombic form stable at ordinary temperature and pressure, the silver atoms form pairs with Ag---Ag contacts of 3.227 Å. Each Ag+ center is bonded to six oxygen centers of both uni- and bidentate nitrate ligands. The Ag-O distances range from 2.384 to 2.702 Å. Reactions A typical reaction with silver nitrate is to suspend a rod of copper in a solution of silver nitrate and leave it for a few hours. The silver nitrate reacts with copper to form hairlike crystals of silver metal and a blue solution of copper nitrate: 2 AgNO3 + Cu → Cu(NO3)2 + 2 Ag Silver nitrate decomposes when heated: 2 AgNO3(l) → 2 Ag(s) + O2(g) + 2 NO2(g) Qualitatively, decomposition is negligible below the melting point, but becomes appreciable around 250 °C, and the compound decomposes fully at 440 °C. Most metal nitrates thermally decompose to the respective oxides, but silver oxide decomposes at a lower temperature than silver nitrate, so the decomposition of silver nitrate yields elemental silver instead. Uses Precursor to other silver compounds Silver nitrate is the least expensive salt of silver; it offers several other advantages as well. It is non-hygroscopic, in contrast to silver fluoroborate and silver perchlorate. In addition, it is relatively stable to light, and it dissolves in numerous solvents, including water. The nitrate can be easily replaced by other ligands, rendering AgNO3 versatile. Treatment with solutions of halide ions gives a precipitate of AgX (X = Cl, Br, I). When making photographic film, silver nitrate is treated with halide salts of sodium or potassium to form insoluble silver halide in situ in photographic gelatin, which is then applied to strips of tri-acetate or polyester. Similarly, silver nitrate is used to prepare some silver-based explosives, such as the fulminate, azide, or acetylide, through a precipitation reaction. Treatment of silver nitrate with base gives dark grey silver oxide: 2 AgNO3 + 2 NaOH → Ag2O + 2 NaNO3 + H2O Halide abstraction The silver cation, Ag+, reacts quickly with halide sources to produce the insoluble silver halide, which is a cream precipitate if bromide is used, a white precipitate if chloride is used and a yellow precipitate if iodide is used. This reaction is commonly used in inorganic chemistry to abstract halides: Ag+(aq) + X−(aq) → AgX(s), where X = Cl, Br, or I. Other silver salts with non-coordinating anions, namely silver tetrafluoroborate and silver hexafluorophosphate, are used for more demanding applications.
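A back-of-the-envelope sketch of the 1:1 halide-abstraction stoichiometry just described (the molar masses are standard values; the function is illustrative, not from the article), estimating how much silver nitrate is needed to precipitate a given mass of chloride as AgCl:

```python
M_AGNO3 = 169.87   # molar mass of AgNO3, g/mol
M_CL = 35.45       # molar mass of Cl, g/mol

def agno3_needed(grams_chloride):
    """Mass of AgNO3 supplying one Ag+ per Cl- (Ag+ + Cl- -> AgCl)."""
    moles_cl = grams_chloride / M_CL
    return moles_cl * M_AGNO3

print(round(agno3_needed(1.0), 2))   # ~4.79 g of AgNO3 per gram of Cl-
```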
Similarly, this reaction is used in analytical chemistry to confirm the presence of chloride, bromide, or iodide ions. Samples are typically acidified with dilute nitric acid to remove interfering ions, e.g. carbonate ions and sulfide ions. This step avoids confusion of silver sulfide or silver carbonate precipitates with that of silver halides. The color of precipitate varies with the halide: white (silver chloride), pale yellow/cream (silver bromide), yellow (silver iodide). AgBr and especially AgI photo-decompose to the metal, as evidenced by a grayish color on exposed samples. The same reaction was used on steamships in order to determine whether or not boiler feedwater had been contaminated with seawater. It is still used to determine if moisture on formerly dry cargo is a result of condensation from humid air, or of seawater leaking through the hull. Organic synthesis Silver nitrate is used in many ways in organic synthesis, e.g. for deprotection and oxidations. Ag+ binds alkenes reversibly, and silver nitrate has been used to separate mixtures of alkenes by selective absorption. The resulting adduct can be decomposed with ammonia to release the free alkene. Silver nitrate is highly soluble in water but is poorly soluble in most organic solvents, except acetonitrile (111.8 g/100 g, 25 °C). Biology In histology, silver nitrate is used for silver staining, for demonstrating reticular fibers, proteins and nucleic acids. For this reason it is also used to demonstrate proteins in PAGE gels. It can be used as a stain in scanning electron microscopy. Cut flower stems can be placed in a silver nitrate solution, which prevents the production of ethylene. This delays the ageing of the flower. Indelible ink Silver nitrate produces a long-lasting stain when applied to skin and is one of indelible ink's ingredients. An electoral stain makes use of this to mark a finger of people who have voted in an election, allowing easy identification to prevent double voting. In addition to staining skin, silver nitrate has a history of use in stained glass. In the 14th century, artists began using a "silver stain" (also known as a yellow stain) made from silver nitrate to create a yellow effect on clear glass. The stain would produce a stable color that could range from pale lemon to deep orange or gold. Silver stain was often used with glass paint, and was applied to the opposite side of the glass as the paint. It was also used to create a mosaic effect by reducing the number of pieces of glass in a window. Despite the age of the technique, this process of creating stained glass remains almost entirely unchanged. Medicine Silver salts have antiseptic properties. In 1881 Credé introduced a method known as Credé's prophylaxis, which used dilute (2%) solutions of silver nitrate in newborn babies' eyes at birth to prevent contraction of gonorrhea from the mother, which could cause blindness via ophthalmia neonatorum. (Modern antibiotics are now used instead.) Fused silver nitrate, shaped into sticks, was traditionally called "lunar caustic". It is used as a cauterizing agent, for example to remove granulation tissue around a stoma. General Sir James Abbott noted in his journals that in India in 1827 it was infused by a British surgeon into wounds in his arm resulting from the bite of a mad dog to cauterize the wounds and prevent the onset of rabies. Silver nitrate is used to cauterize superficial blood vessels in the nose to help prevent nosebleeds.
Dentists sometimes use silver nitrate-infused swabs to heal oral ulcers. Silver nitrate is used by some podiatrists to kill cells located in the nail bed. The Canadian physician C. A. Douglas Ringrose researched the use of silver nitrate for sterilization procedures, believing that silver nitrate could be used to block and corrode the fallopian tubes. The technique was ineffective. Disinfection Much research has been done in evaluating the ability of the silver ion to inactivate Escherichia coli, a microorganism commonly used as an indicator for fecal contamination and as a surrogate for pathogens in drinking water treatment. Concentrations of silver nitrate evaluated in inactivation experiments range from 10 to 200 micrograms per liter as Ag+. Silver's antimicrobial activity saw many applications prior to the discovery of modern antibiotics, after which it fell into near disuse. Its association with argyria made consumers wary and led them to turn away from it when given an alternative. Against warts Repeated daily application of silver nitrate can induce adequate destruction of cutaneous warts, but occasionally pigmented scars may develop. In a placebo-controlled study of 70 patients, silver nitrate given over nine days resulted in clearance of all warts in 43% of patients and improvement in warts in 26% one month after treatment, compared to 11% and 14%, respectively, in the placebo group. Safety As an oxidant, silver nitrate should be properly stored away from organic compounds. It reacts explosively with ethanol. Despite its common usage in extremely low concentrations to prevent gonorrhea and control nosebleeds, silver nitrate is still very toxic and corrosive. Brief exposure will not produce any immediate side effects other than purple, brown, or black stains on the skin, but upon constant exposure to high concentrations, side effects such as burns will become noticeable. Long-term exposure may cause eye damage. Silver nitrate is known to be a skin and eye irritant. Silver nitrate has not been thoroughly investigated for potential carcinogenic effect. Silver nitrate is currently unregulated in water sources by the United States Environmental Protection Agency. However, if more than 1 gram of silver is accumulated in the body, a condition called argyria may develop. Argyria is a permanent cosmetic condition in which the skin and internal organs turn a blue-gray color. The United States Environmental Protection Agency had a maximum contaminant limit for silver in water until 1990, when it was determined that argyria did not impact the function of any affected organs despite the discolouration. Argyria is more often associated with the consumption of colloidal silver solutions than with silver nitrate, since the latter is only used at extremely low concentrations to disinfect water. However, it is still important to be wary before ingesting any sort of silver-ion solution. References External links International Chemical Safety Card 1116 NIOSH Pocket Guide to Chemical Hazards History of Kodak: About Film and Imaging 13th century in science Antiseptics Electron microscopy stains Nitrates Photographic chemicals Silver compounds Staining dyes Alchemical substances Light-sensitive chemicals Oxidizing agents Chemical tests
Silver nitrate
[ "Chemistry" ]
2,272
[ "Light-sensitive chemicals", "Redox", "Alchemical substances", "Nitrates", "Oxidizing agents", "Salts", "Chemical tests", "Light reactions" ]
227,223
https://en.wikipedia.org/wiki/MicroStation
MicroStation is a CAD software platform for two- and three-dimensional design and drafting, developed and sold by Bentley Systems and used in the architectural and engineering industries. It generates 2D/3D vector graphics objects and elements and includes building information modeling (BIM) features. The current version is MicroStation CONNECT Edition. History MicroStation was initially developed by three individual developers and was sold and supported by Intergraph in the 1980s. The latest versions of the software are released solely for Microsoft Windows operating systems, but historically MicroStation was available for Macintosh platforms and a number of Unix-like operating systems. From its inception MicroStation was designed as an IGDS (Interactive Graphics Design System) file editor for the PC. Its initial development drew on the developers' experience with PseudoStation, released in 1984, a program designed to replace the use of proprietary Intergraph graphics workstations for editing DGN files by substituting much less expensive Tektronix-compatible graphics terminals. PseudoStation, as well as Intergraph's IGDS program, ran on a modified version of Digital Equipment Corporation's VAX superminicomputer. In 1985, MicroStation 1.0 was released as a read-only DGN file viewing and plotting program designed to run exclusively on the IBM PC-AT personal computer. In 1987, MicroStation 2.0 was released; it was the first version of MicroStation to read and write DGN files. Almost two years later, MicroStation 3.0 was released, which took advantage of the increasing processing power of the PC, particularly with respect to dynamics. Intergraph MicroStation 4.0 was released in late 1990 and added many features: reference file clipping and masking, a DWG translator, fence modes, and the ability to name levels, as well as GUI enhancements. The 1992 release of version 4 introduced the ability to write applications using the MicroStation Development Language (MDL). In 1993, MicroStation 5.0 was released. New capabilities included binary raster support, custom line styles, a settings manager, and dimension-driven design. One review noted that V5 for Power Macintosh "provided a comprehensive tool set for both 2-D and 3-D CAD ... added several truly useful features" and that "the high-end PowerPC-native CAD package runs on steroids." This was the last version to be supported on Unix, and the last to run on Intergraph CLIX. It was branded both Intergraph (on CLIX) and Bentley MicroStation (on PC); later versions were all branded Bentley. All platforms other than the PC used 32-bit processors. In 1995, Windows 95 was released, and Bentley soon followed with a release of MicroStation for that operating system. Aside from being the first version of MicroStation to not include the version number in its name (MicroStation 95 was actually MicroStation v5.5), MicroStation 95 could be driven largely by graphic icon buttons. This version introduced a host of new features: AccuDraw, dockable dialogs, SmartLine, revised view controls, movie generation, and the ability to use two application windows (similar to previous Unix-driven Intergraph terminals). Many of these features remain among the most popular in use today. MicroStation 95 was the first version of MicroStation for a PC platform to use 32-bit hardware.
The last multi-platform release, MicroStation SE (SE standing for special edition, though it was actually MicroStation 5.7), was released late in 1997 and was the first MicroStation release to include color button icons. These icons could also be made borderless, just like in Office 97. This version of MicroStation also included several features to enable more work over the internet, and it introduced enhanced precision and a very commonly used tool in MicroStation, PowerSelector. MicroStation/J (a.k.a. MicroStation 7.0, a.k.a. MicroStation V7) was released almost a year after SE. The J in the software title stood for Java, as this version introduced a Java-enhanced version of MDL, called JMDL. Other features included QuickvisionGL and a revised help system. MicroStation/J was the last version to be based upon the IGDS file format; since MicroStation/J was actually Version 7, the file format became known as "V7 DGN". That file format had been used for about 20 years. However, with the advent of MicroStation V8 in 2001 came a new IEEE-754 based 64-bit file format, referred to as V8 DGN. Along with the new file format came many new enhancements, including unlimited levels, a nearly limitless design plane, and no limits on file size. Other features that were added were AccuSnap, Design History, models, unlimited undo, VBA programming, .NET interoperability, True Scale, and standard definitions for working units (the new file format stores everything internally in meters, but can recognize rational unit conversions so that it can know the size of geometry). Some of these features were also available in MicroStation 95 through MicroStation/J. It also included the ability to work natively with DWG files. MicroStation V8 2004 Edition (V8.5) followed nearly three years later with support for newer DWG releases, multi-snaps, PDF creation, the Standards Checker, and feature modeling. MicroStation V8 XM Edition (V8.9) was released in May 2006. It builds upon the changes made by V8. The XM edition includes a completely revised Direct3D-based graphics subsystem, PDF references, task navigation, element templates, color books, support for the PANTONE and RAL color systems, and keyboard mapping. In MicroStation V8i (V8.11) (November 2008), the task navigation was overhauled and the then-newest DWG format was supported. MicroStation also gained a module for GPS data. MicroStation CONNECT Edition (V10.xx) was first released in September 2015. This version updated the application architecture to 64-bit and changed to a ribbon interface. Subsequent versions are being delivered as roughly quarterly updates. MicroStation 2023 (23.00.00.108) was released on June 28, 2023, and is the first major release adopting the new naming convention. New features include improved workflows and several user-experience enhancements, with a focus on new access to geospatial features and maps, improved issue resolution, and expanded data reporting. File format support Its native format is the DGN format, though it can also read and write a variety of standard CAD formats including DWG, DXF, SKP, and OBJ, and it can produce media output in such forms as rendered images (JPEG and BMP), animations (AVI), 3D web pages in Virtual Reality Modeling Language (VRML), and Adobe Systems PDF.
At its inception, MicroStation was used in the engineering and architecture fields primarily for creating construction drawings; however, it has evolved through its various versions to include advanced parametric modeling and rendering features, including Boolean solids, VUE rendering, ray tracing, path tracing, PBR materials, and keyframe animation. It can provide specialized environments for architecture, civil engineering, mapping, or plant design, among others. In 2000, Bentley made revisions to the DGN file format in V8 to add features like Digital Rights and Design History (a revision control capability that allows reinstating previous revisions either globally or by selection) and to better support import/export of Autodesk's DWG format. Additionally, the V8 DGN file format removed many data restrictions of earlier releases, such as limited design levels and drawing area. CONNECT Edition versions continue to use the V8 DGN file format. Criticism The software has been criticized many times, which is somewhat reflected in its steep decline in usage. Common issues include, but are not limited to: crashes during rendering, blank renders, arbitrary lighting disruption, and materials applying unpredictably. See also ProjectWise GenerativeComponents Comparison of computer-aided design editors Rendering (computer graphics) References External links MicroStation home page at Bentley CONNECT Edition Book Series at Bentley Computer-aided design software Computer-aided design software for Windows
MicroStation
[ "Engineering" ]
1,692
[ "Building engineering", "Building information modeling" ]
227,686
https://en.wikipedia.org/wiki/Lennard-Jones%20potential
In computational chemistry, molecular physics, and physical chemistry, the Lennard-Jones potential (also termed the LJ potential or 12-6 potential; named for John Lennard-Jones) is an intermolecular pair potential. Of all the intermolecular potentials, the Lennard-Jones potential is probably the one that has been most extensively studied. It is considered an archetype model for simple yet realistic intermolecular interactions. The Lennard-Jones potential is often used as a building block in molecular models (a.k.a. force fields) for more complex substances. Many studies of the idealized "Lennard-Jones substance" use the potential to understand the physical nature of matter. Overview The Lennard-Jones potential is a simple model that still manages to describe the essential features of interactions between simple atoms and molecules: two interacting particles repel each other at very close distance, attract each other at moderate distance, and eventually stop interacting at infinite distance, as shown in the figure. The Lennard-Jones potential is a pair potential, i.e. no three- or multi-body interactions are covered by the potential. The general Lennard-Jones potential combines a repulsive potential, A/r^n, with an attractive potential, −B/r^m, using empirically determined coefficients A and B: V(r) = A/r^n − B/r^m. In his 1931 review, Lennard-Jones suggested using m = 6 to match the London dispersion force, with n determined by matching experimental data. Setting n = 12 and m = 6 gives the widely used Lennard-Jones 12-6 potential: V(r) = 4ε[(σ/r)^12 − (σ/r)^6], where r is the distance between two interacting particles, ε is the depth of the potential well, and σ is the distance at which the particle-particle potential energy is zero. The Lennard-Jones 12-6 potential has its minimum at a distance of r_min = 2^(1/6) σ ≈ 1.122 σ, where the potential energy has the value V(r_min) = −ε. The Lennard-Jones potential is usually the standard choice for the development of theories for matter (especially soft matter) as well as for the development and testing of computational methods and algorithms. Numerous intermolecular potentials have been proposed in the past for the modeling of simple soft repulsive and attractive interactions between spherically symmetric particles, i.e. the general shape shown in the figure. Examples of other potentials are the Morse potential, the Mie potential, the Buckingham potential, and the Tang–Toennies potential. While some of these may be better suited to modelling real fluids, the simplicity of the Lennard-Jones potential, as well as its often surprising ability to accurately capture real fluid behavior, has historically made it the pair potential of greatest general importance. History In 1924, the year that Lennard-Jones received his PhD from Cambridge University, he published a series of landmark papers on the pair potentials that would ultimately be named for him. In these papers he adjusted the parameters of the potential and then used the result in a model of gas viscosity, seeking a set of values consistent with experiment. His initial results suggested repulsive and attractive exponents that differ from the now-standard choice of 12 and 6. Before Lennard-Jones, back in 1903, Gustav Mie had worked on effective field theories; Eduard Grüneisen built on Mie's work for solids, showing that the repulsive exponent must be larger than the attractive exponent for solids. As a result of this work, the Lennard-Jones potential is sometimes called the Mie–Grüneisen potential in solid-state physics. In 1930, after the discovery of quantum mechanics, Fritz London showed that theory predicts the long-range attractive term to vary as r^(−6), i.e. m = 6.
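The 12-6 potential and its minimum are easy to verify numerically; the following is a minimal sketch in Python (not from the source), working in reduced units where ε = σ = 1:

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 potential: V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

r_min = 2.0 ** (1.0 / 6.0)        # minimum at r_min = 2**(1/6) * sigma
print(lj_potential(1.0))          # -> 0.0: the potential crosses zero at r = sigma
print(lj_potential(r_min))        # -> -1.0, i.e. V(r_min) = -epsilon
```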
In 1931, Lennard-Jones applied this form of the potential to describe many properties of fluids, setting the stage for many subsequent studies. Dimensionless (reduced units) Dimensionless reduced units can be defined based on the Lennard-Jones potential parameters, which is convenient for molecular simulations. From a numerical point of view, the advantages of this unit system include computing values that are closer to unity, using simplified equations, and being able to easily scale the results. This reduced-units system requires the specification of the size parameter σ and the energy parameter ε of the Lennard-Jones potential and the mass of the particle m. All physical properties can be converted straightforwardly by taking the respective dimension into account, e.g. temperature T* = k_B T/ε, density ρ* = ρσ^3, pressure p* = pσ^3/ε, and time t* = t (ε/(m σ^2))^(1/2). The reduced units are often abbreviated and indicated by an asterisk. In general, reduced units can also be built up on other molecular interaction potentials that consist of a length parameter and an energy parameter. Long-range interactions The Lennard-Jones potential, cf. Eq. (1) and the figure at the top, has an infinite range; only when this infinite range is taken into account are the 'true' and 'full' Lennard-Jones potential examined. For the evaluation of an observable of an ensemble of particles interacting by the Lennard-Jones potential using molecular simulations, the interactions can only be evaluated explicitly up to a certain distance, simply due to the fact that the number of particles will always be finite. The maximum distance applied in a simulation is usually referred to as the 'cut-off' radius r_c (because the Lennard-Jones potential is radially symmetric). To obtain thermophysical properties (both macroscopic and microscopic) of the 'true' and 'full' Lennard-Jones (LJ) potential, the contribution of the potential beyond the cut-off radius has to be accounted for. Different correction schemes have been developed to account for the influence of the long-range interactions in simulations and to sustain a sufficiently good approximation of the 'full' potential. They are based on simplifying assumptions regarding the structure of the fluid. For simple cases, such as in studies of the equilibrium of homogeneous fluids, simple correction terms yield excellent results. In other cases, such as in studies of inhomogeneous systems with different phases, accounting for the long-range interactions is more tedious. These corrections are usually referred to as 'long-range corrections'. For most properties, simple analytical expressions are known and well established. For a given observable X, the 'corrected' simulation result is then simply computed from the actually sampled value X_sim and the long-range correction value X_LRC, e.g. for the internal energy U = U_sim + U_LRC. The hypothetical true value of the observable of the Lennard-Jones potential at truly infinite cut-off distance (thermodynamic limit) can in general only be estimated. Furthermore, the quality of the long-range correction scheme depends on the cut-off radius. The assumptions made with the correction schemes are usually not justified at (very) short cut-off radii. This is illustrated in the example shown in the figure on the right. The long-range correction scheme is said to be converged if the remaining error of the correction scheme is sufficiently small at a given cut-off distance, cf. figure.
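As a concrete example of such a correction, the sketch below (an illustration, not taken from the source) evaluates the standard analytic tail correction to the energy per particle of a homogeneous Lennard-Jones fluid, which assumes the pair correlation function is unity beyond the cut-off:

```python
import math

def lj_energy_tail(rho, rc):
    """Energy long-range correction per particle, reduced units (eps = sigma = 1):
       u_LRC = (8/3) * pi * rho * ((1/3) * rc**-9 - rc**-3)
    Valid under the assumption g(r) = 1 for r > rc."""
    return (8.0 / 3.0) * math.pi * rho * ((1.0 / 3.0) * rc**-9 - rc**-3)

# At a liquid-like density the correction is far from negligible:
print(lj_energy_tail(rho=0.8, rc=2.5))   # -> about -0.43 per particle
```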
Extensions and modifications The Lennard-Jones potential, as an archetype for intermolecular potentials, has been used numerous times as a starting point for the development of more elaborate or more generalized intermolecular potentials. Various extensions and modifications of the Lennard-Jones potential have been proposed in the literature; a more extensive list is given in the 'interatomic potential' article. The following list refers only to several example potentials that are directly related to the Lennard-Jones potential and are of both historic importance and continuing relevance for present research. Mie potential The Mie potential is the generalized version of the Lennard-Jones potential, i.e. the exponents 12 and 6 are introduced as parameters n and m. Thermodynamic derivative properties in particular, e.g. the compressibility and the speed of sound, are known to be very sensitive to the steepness of the repulsive part of the intermolecular potential, which can therefore be modeled more flexibly by the Mie potential. The first explicit formulation of the Mie potential is attributed to Eduard Grüneisen. Hence, the Mie potential was actually proposed before the Lennard-Jones potential. The Mie potential is named after Gustav Mie. Buckingham potential The Buckingham potential was proposed by Richard Buckingham. The repulsive part of the Lennard-Jones potential is therein replaced by an exponential function, and it incorporates an additional parameter. Stockmayer potential The Stockmayer potential is named after W. H. Stockmayer. The Stockmayer potential is a combination of a Lennard-Jones potential superimposed by a dipole. Hence, Stockmayer particles are not spherically symmetric, but rather have an important orientational structure. Two center Lennard-Jones potential The two center Lennard-Jones potential consists of two identical Lennard-Jones interaction sites (same σ, ε, m) that are bonded as a rigid body. It is often abbreviated as 2CLJ. Usually, the elongation (distance between the Lennard-Jones sites) is significantly smaller than the size parameter σ. Hence, the two interaction sites are significantly fused. Lennard-Jones truncated & splined potential The Lennard-Jones truncated & splined potential is a rarely used yet useful potential. Similar to the more popular LJTS potential, it is truncated at a certain 'end' distance, and no long-range interactions are considered beyond it. In contrast to the LJTS potential, which is shifted such that the potential is continuous, the Lennard-Jones truncated & splined potential is made continuous by using an arbitrary but favorable spline function. Lennard-Jones truncated & shifted (LJTS) potential The Lennard-Jones truncated & shifted (LJTS) potential is an often used alternative to the 'full' Lennard-Jones potential (see Eq. (1)). The 'full' and the 'truncated & shifted' Lennard-Jones potentials have to be kept strictly separate: they are simply two different intermolecular potentials yielding different thermophysical properties. The Lennard-Jones truncated & shifted potential is defined as u_LJTS(r) = u_LJ(r) − u_LJ(r_c) for r ≤ r_c and u_LJTS(r) = 0 for r > r_c, with u_LJ(r) = 4ε[(σ/r)^12 − (σ/r)^6]. Hence, the LJTS potential is truncated at r_c and shifted by the corresponding energy value u_LJ(r_c). The latter is applied to avoid a discontinuity jump of the potential at r_c. For the LJTS potential, no long-range interactions beyond r_c are required, neither explicitly nor implicitly. The most frequently used version of the Lennard-Jones truncated & shifted potential is the one with r_c = 2.5σ. Nevertheless, different r_c values have been used in the literature.
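A minimal sketch of the LJTS potential (reduced units, not from the source) makes the truncation and shift explicit:

```python
def lj(r):
    """Full Lennard-Jones potential, reduced units (eps = sigma = 1)."""
    sr6 = (1.0 / r) ** 6
    return 4.0 * (sr6 ** 2 - sr6)

def ljts(r, rc=2.5):
    """Truncated & shifted potential: u_LJ(r) - u_LJ(rc) inside rc, zero beyond."""
    return lj(r) - lj(rc) if r <= rc else 0.0

print(ljts(2.5))   # -> 0.0: the shift removes the jump at the cut-off
print(lj(2.5))     # -> about -0.0163, the magnitude of the shift (roughly eps/60)
```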
Each LJTS potential with a given truncation radius r_c has to be considered as a potential, and accordingly a substance, of its own. The LJTS potential is computationally significantly cheaper than the 'full' Lennard-Jones potential, but still covers the essential physical features of matter (the presence of a critical and a triple point, soft repulsive and attractive interactions, phase equilibria, etc.). Therefore, the LJTS potential is used for the testing of new algorithms, simulation methods, and new physical theories. Interestingly, for homogeneous systems, the intermolecular forces that are calculated from the LJ and the LJTS potential at a given distance are the same (since the derivative du/dr is the same), whereas the potential energy and the pressure are affected by the shifting. Also, the properties of the LJTS substance may furthermore be affected by the chosen simulation algorithm, i.e. MD or MC sampling (this is in general not the case for the 'full' Lennard-Jones potential). For the LJTS potential with r_c = 2.5σ, the potential energy shift is approximately 1/60 of the dispersion energy at the potential well: u_LJ(2.5σ) ≈ −0.0163 ε. The figure on the right shows the comparison of the vapor–liquid equilibrium of the 'full' Lennard-Jones potential and the 'Lennard-Jones truncated & shifted' potential. The 'full' Lennard-Jones potential results exhibit a significantly higher critical temperature and pressure compared to the LJTS potential results, but the critical density is very similar. The vapor pressure and the enthalpy of vaporization are influenced more strongly by the long-range interactions than the saturated densities. This is due to the fact that the potential is manipulated mainly energetically by the truncation and shifting. Applications The Lennard-Jones potential is not only of fundamental importance in computational chemistry and soft-matter physics, but also for the modeling of real substances. The Lennard-Jones potential is used for fundamental studies on the behavior of matter and for elucidating atomistic phenomena. It is also often used for somewhat special use cases, e.g. for studying thermophysical properties of two- or four-dimensional substances (instead of the classical three spatial directions of our universe). There are two main applications of the Lennard-Jones potential: (i) for studying the hypothetical Lennard-Jones substance and (ii) for modeling interactions in real substance models. These two applications are discussed in the following. Lennard-Jones substance A Lennard-Jones substance or "Lennard-Jonesium" is the name given to an idealized substance that would result from atoms or molecules interacting exclusively through the Lennard-Jones potential. Statistical mechanics and computer simulations can be used to study the Lennard-Jones potential and to obtain thermophysical properties of the 'Lennard-Jones substance'. The Lennard-Jones substance is often referred to as 'Lennard-Jonesium', suggesting that it is viewed as a (fictive) chemical element. Moreover, its energy and length parameters can be adjusted to fit many different real substances. Both the Lennard-Jones potential and, accordingly, the Lennard-Jones substance are simplified yet realistic models in that they accurately capture essential physical principles like the presence of a critical and a triple point, condensation, and freezing. Due in part to its mathematical simplicity, the Lennard-Jones potential has been extensively used in studies on matter since the early days of computer simulation.
Thermophysical properties of the Lennard-Jones substance Thermophysical properties of the Lennard-Jones substance, i.e. of particles interacting via the Lennard-Jones potential, can be obtained using statistical mechanics. Some properties can be computed analytically, i.e. with machine precision, whereas most properties can only be obtained by performing molecular simulations. The latter will in general be superimposed by both statistical and systematic uncertainties. The virial coefficients, for example, can be computed directly from the Lennard-Jones potential using algebraic expressions, and reported data therefore has no uncertainty. Molecular simulation results, e.g. the pressure at a given temperature and density, have both statistical and systematic uncertainties. Molecular simulations of the Lennard-Jones potential can in general be performed using either molecular dynamics (MD) simulations or Monte Carlo (MC) simulations. For MC simulations, the Lennard-Jones potential is used directly, whereas MD simulations are always based on the derivative of the potential, i.e. the force F = −dV/dr. These differences, in combination with differences in the treatment of the long-range interactions (see below), can influence computed thermophysical properties. Since Lennard-Jonesium is the archetype for the modeling of simple yet realistic intermolecular interactions, a large number of thermophysical properties have been studied and reported in the literature. Computer-experiment data for the Lennard-Jones potential is presently considered the most accurately known data in classical-mechanics computational chemistry. Hence, such data is also mostly used as a benchmark for validating and testing new algorithms and theories. The Lennard-Jones potential has been in constant use since the early days of molecular simulations. The first results from computer experiments for the Lennard-Jones potential were reported by Rosenbluth and Rosenbluth and by Wood and Parker after molecular simulations on "fast computing machines" became available in 1953. Since then, many studies have reported data for the Lennard-Jones substance; approximately 50,000 data points are publicly available. The current state of research on the thermophysical properties of the Lennard-Jones substance is summarized by Stephan et al. (which did not cover transport and mixture properties). The US National Institute of Standards and Technology (NIST) provides examples of molecular dynamics and Monte Carlo codes along with results obtained from them. Transport property data for Lennard-Jones fluids have been compiled by Bell et al. and by Lautenschaeger and Hasse. The figure on the right shows the phase diagram of the Lennard-Jones fluid. Phase equilibria of the Lennard-Jones potential have been studied numerous times and are accordingly known today with good precision. The figure shows correlations derived from computer-experiment results (hence, lines instead of data points are shown). The mean intermolecular interaction of a Lennard-Jones particle strongly depends on the thermodynamic state, i.e. temperature and pressure (or density). For solid states, the attractive Lennard-Jones interaction plays a dominant role, especially at low temperatures. For liquid states, no ordered structure is present compared to solid states. The mean potential energy per particle is negative. For gaseous states, attractive interactions of the Lennard-Jones potential play a minor role, since the particles are far apart.
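As noted above, MD sampling needs the force rather than the potential itself; the sketch below (not from the source, reduced units) gives the analytic Lennard-Jones pair force and cross-checks it against a finite difference of the potential:

```python
def lj(r):
    """Lennard-Jones potential, reduced units (eps = sigma = 1)."""
    sr6 = (1.0 / r) ** 6
    return 4.0 * (sr6 ** 2 - sr6)

def lj_force(r):
    """Magnitude of the pair force, F = -dV/dr = (24/r) * (2*(1/r)**12 - (1/r)**6)."""
    sr6 = (1.0 / r) ** 6
    return 24.0 / r * (2.0 * sr6 ** 2 - sr6)

r, h = 1.5, 1e-6
print(lj_force(r))                          # analytic force
print(-(lj(r + h) - lj(r - h)) / (2 * h))   # central difference, nearly identical
```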
The main part of the internal energy is stored as kinetic energy for gaseous states. At supercritical states, the attractive Lennard-Jones interaction plays a minor role. With increasing temperature, the mean kinetic energy of the particles increases and exceeds the energy well of the Lennard-Jones potential. Hence, the particles mainly interact by the potential's soft repulsive interactions, and the mean potential energy per particle is accordingly positive. Overall, because thermophysical property data for the Lennard-Jones potential has been reported over a long timespan, including periods when computational resources were insufficient for simulations that are accurate by modern standards, a noticeable amount of the reported data is known to be dubious. Nevertheless, in many studies such data is used as reference. The lack of data repositories and data assessment is a crucial element for future work in the long-running field of Lennard-Jones potential research. Characteristic points and curves The most important characteristic points of the Lennard-Jones potential are the critical point and the vapor–liquid–solid triple point. They have been studied numerous times and compiled in the literature. The critical point was thereby assessed to be located at approximately T_c* ≈ 1.32, ρ_c* ≈ 0.32, and p_c* ≈ 0.13 in reduced units. The associated uncertainties were calculated from the standard deviation of the critical parameters derived from the most reliable available vapor–liquid equilibrium data sets. These uncertainties can be assumed as a lower limit to the accuracy with which the critical point of a fluid can be obtained from molecular simulation results. The triple point is presently assumed to be located at approximately T_tr* ≈ 0.69 in reduced units. The uncertainties represent the scattering of data from different authors. The critical point of the Lennard-Jones substance has been studied far more often than the triple point. For both the critical point and the vapor–liquid–solid triple point, several studies have reported results outside the above stated ranges. The above stated data is the presently assumed correct and reliable data. Nevertheless, the precision with which the critical temperature and the triple point temperature are known is still unsatisfactory. Evidently, the phase coexistence curves (cf. figures) are of fundamental importance for characterizing the Lennard-Jones potential. Furthermore, Brown's characteristic curves yield an illustrative description of essential features of the Lennard-Jones potential. Brown's characteristic curves are defined as curves on which a certain thermodynamic property of the substance matches that of an ideal gas. For a real fluid, the compressibility factor Z and its derivatives can match the values of the ideal gas only for special temperature and pressure combinations, as a result of Gibbs' phase rule. The resulting points collectively constitute a characteristic curve. Four main characteristic curves are defined: one 0th-order curve (named the Zeno curve) and three 1st-order curves (named the Amagat, Boyle, and Charles curves). The characteristic curves are required to have a negative or zero curvature throughout and a single maximum in a double-logarithmic pressure–temperature diagram. Furthermore, Brown's characteristic curves and the virial coefficients are directly linked in the limit of the ideal gas and are therefore known exactly in the zero-density limit. Both computer simulation results and equation of state results have been reported in the literature for the Lennard-Jones potential. Points on the Zeno curve Z have a compressibility factor of unity, Z = p/(ρ k_B T) = 1.
The Zeno curve originates at the Boyle temperature T_B, surrounds the critical point, and has a slope of unity in the low-temperature limit. Points on the Boyle curve B have (∂Z/∂v)_T = 0. The Boyle curve originates with the Zeno curve at the Boyle temperature, faintly surrounds the critical point, and ends on the vapor pressure curve. Points on the Charles curve (a.k.a. the Joule–Thomson inversion curve) have (∂Z/∂T)_p = 0 and, more importantly, (∂T/∂p)_H = 0, i.e. no temperature change upon isenthalpic throttling. It originates in the ideal-gas limit, crosses the Zeno curve, and terminates on the vapor pressure curve. Points on the Amagat curve A have (∂Z/∂T)_v = 0. It also starts in the ideal-gas limit, surrounds the critical point and the other three characteristic curves, and passes into the solid phase region. A comprehensive discussion of the characteristic curves of the Lennard-Jones potential is given by Stephan and Deiters. Properties of the Lennard-Jones fluid Properties of the Lennard-Jones fluid have been studied extensively in the literature due to the outstanding importance of the Lennard-Jones potential in soft-matter physics and related fields. About 50 datasets of computer-experiment data for the vapor–liquid equilibrium have been published to date. Furthermore, more than 35,000 data points at homogeneous fluid states have been published over the years and have recently been compiled and assessed for outliers in an open-access database. The vapor–liquid equilibrium of the Lennard-Jones substance is presently known with a finite precision, i.e. a given level of mutual agreement of thermodynamically consistent data, for the vapor pressure, the saturated liquid density, the saturated vapor density, the enthalpy of vaporization, and the surface tension. This status quo cannot be considered satisfactory, considering the fact that statistical uncertainties usually reported for single data sets are significantly below the mutual deviations among data sets (even for far more complex molecular force fields). Both phase equilibrium properties and homogeneous state properties at arbitrary density can in general only be obtained from molecular simulations, whereas virial coefficients can be computed directly from the Lennard-Jones potential. Numerical data for the second and third virial coefficients are available in a wide temperature range. For higher virial coefficients (up to the sixteenth), the number of available data points decreases with increasing order of the virial coefficient. Also, transport properties (viscosity, heat conductivity, and the self-diffusion coefficient) of the Lennard-Jones fluid have been studied, but the database is significantly less dense than for homogeneous equilibrium properties like p–ρ–T or internal energy data. Moreover, a large number of analytical models (equations of state) have been developed for the description of the Lennard-Jones fluid (see below for details). Properties of the Lennard-Jones solid The database and knowledge for the Lennard-Jones solid are significantly poorer than for the fluid phases. It was realized early that the interactions in solid phases should not be approximated as pair-wise additive, especially for metals. Nevertheless, the Lennard-Jones potential is used in solid-state physics due to its simplicity and computational efficiency. Hence, the basic properties of the solid phases and the solid–fluid phase equilibria have been investigated several times in the literature.
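Returning to the virial coefficients mentioned above: since they follow directly from the potential, the second virial coefficient can be obtained by simple quadrature. The sketch below (illustrative only, reduced units, crude rectangle-rule integration) evaluates B2(T*) = −2π ∫ (exp(−V(r)/T) − 1) r² dr:

```python
import numpy as np

def lj(r):
    sr6 = (1.0 / r) ** 6
    return 4.0 * (sr6 ** 2 - sr6)

def b2(T, r_max=50.0, n=200_000):
    """Second virial coefficient of the LJ fluid in reduced units."""
    r = np.linspace(1e-4, r_max, n)
    dr = r[1] - r[0]
    integrand = (np.exp(-lj(r) / T) - 1.0) * r ** 2
    return -2.0 * np.pi * integrand.sum() * dr

print(b2(1.0))    # negative: attraction dominates at low temperature
print(b2(10.0))   # positive: repulsion dominates at high temperature
# B2 changes sign at the Boyle temperature, around T* = 3.4 for the LJ fluid.
```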
The Lennard-Jones substance forms fcc (face-centered cubic), hcp (hexagonal close-packed), and other close-packed polytype lattices, depending on temperature and pressure; cf. the figure above with the phase diagram. At low temperature and up to moderate pressure, the hcp lattice is energetically favored and is therefore the equilibrium structure. The fcc lattice structure is energetically favored at both high temperature and high pressure and is therefore overall the equilibrium structure in a wider state range. The coexistence line between the fcc and hcp phases starts at low temperature, passes through a temperature maximum, and then ends on the vapor–solid phase boundary, thereby forming a triple point. Hence, only the fcc solid phase exhibits phase equilibria with the liquid and supercritical phases; cf. the figure above with the phase diagram. The location of the triple point of the two solid phases (fcc and hcp) and the vapor phase has been reported in the literature, although some of its properties have not been reported yet. Note that other, significantly differing values have also been reported in the literature. Hence, the database for the fcc–hcp–vapor triple point should be further solidified in the future. Mixtures of Lennard-Jones substances Mixtures of Lennard-Jones particles are mostly used as a prototype for the development of theories and methods of solutions, but also to study properties of solutions in general. This dates back to the fundamental work on conformal solution theory by Longuet-Higgins and by Leland and Rowlinson and co-workers. Those are today the basis of most theories for mixtures. Mixtures of two or more Lennard-Jones components are set up by changing at least one potential interaction parameter (σ or ε) of one of the components with respect to the other. For a binary mixture, this yields three types of pair interactions that are all modeled by the Lennard-Jones potential: 1-1, 2-2, and 1-2 interactions. For the cross interactions 1-2, additional assumptions are required for the specification of the parameters σ_12 and ε_12 from σ_1, σ_2 and ε_1, ε_2. Various choices (all more or less empirical and not rigorously based on physical arguments) can be used for these so-called combination rules. The most widely used combination rule is that of Lorentz and Berthelot: σ_12 = η (σ_1 + σ_2)/2 and ε_12 = ξ (ε_1 ε_2)^(1/2). The parameter ξ is an additional state-independent interaction parameter for the mixture. The parameter η is usually set to unity, since the arithmetic mean can be considered physically plausible for the cross-interaction size parameter. The parameter ξ, on the other hand, is often used to adjust the geometric mean so as to reproduce the phase behavior of the model mixture. For analytical models, e.g. equations of state, the deviation parameter is usually written as ξ = 1 − k_12. For ξ > 1, the cross-interaction dispersion energy, and accordingly the attractive force between unlike particles, is intensified; the attractive forces between unlike particles are diminished for ξ < 1. For Lennard-Jones mixtures, both fluid and solid phase equilibria can be studied, i.e. vapor–liquid, liquid–liquid, gas–gas, solid–vapor, solid–liquid, and solid–solid. Accordingly, different types of triple points (three-phase equilibria) and critical points can exist, as well as different eutectic and azeotropic points. Binary Lennard-Jones mixtures in the fluid region (various types of equilibria of liquid and gas phases) have been studied more comprehensively than phase equilibria comprising solid phases. A large number of different Lennard-Jones mixtures have been studied in the literature.
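The combining rules are simple enough to state directly in code; the sketch below (not from the source) returns the cross-interaction parameters for a binary pair:

```python
import math

def lorentz_berthelot(sig1, eps1, sig2, eps2, eta=1.0, xi=1.0):
    """Modified Lorentz-Berthelot combining rules:
       sigma_12 = eta * (sig1 + sig2) / 2   (arithmetic mean, Lorentz)
       eps_12   = xi * sqrt(eps1 * eps2)    (geometric mean, Berthelot)
    eta is usually fixed at unity; xi is the adjustable binary parameter."""
    return eta * 0.5 * (sig1 + sig2), xi * math.sqrt(eps1 * eps2)

# Example: components differing in size and energy, with a weakened
# cross attraction (xi = 0.95):
print(lorentz_berthelot(1.0, 1.0, 1.2, 0.8, xi=0.95))
```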
To date, no standard for such mixtures has been established. Usually, the binary interaction parameters and the two component parameters are chosen such that a mixture with properties convenient for a given task is obtained. Yet, this often makes comparisons difficult. For the fluid phase behavior, mixtures exhibit practically ideal behavior (in the sense of Raoult's law) for ξ = 1. For ξ > 1, attractive interactions prevail, and the mixtures tend to form high-boiling azeotropes, i.e. a lower pressure than the pure components' vapor pressures is required to stabilize the vapor–liquid equilibrium. For ξ < 1, repulsive interactions prevail, and mixtures tend to form low-boiling azeotropes, i.e. a higher pressure than the pure components' vapor pressures is required to stabilize the vapor–liquid equilibrium, since the mean dispersive forces are decreased. Particularly low values of ξ will furthermore result in liquid–liquid miscibility gaps. Various types of phase equilibria comprising solid phases have also been studied in the literature, e.g. by Carol and co-workers. Cases also exist where the solid phase boundaries interrupt fluid phase equilibria. However, for phase equilibria that comprise solid phases, the amount of published data is sparse. Equations of state A large number of equations of state (EOS) for the Lennard-Jones potential/substance have been proposed since its characterization and evaluation became available with the first computer simulations. Due to the fundamental importance of the Lennard-Jones potential, most currently available molecular-based EOS are built around the Lennard-Jones fluid. They have been comprehensively reviewed by Stephan et al. Equations of state for the Lennard-Jones fluid are of particular importance in soft-matter physics and physical chemistry and are used as a starting point for the development of EOS for complex fluids, e.g. polymers and associating fluids. The monomer units of these models are usually directly adapted from Lennard-Jones EOS as a building block, e.g. the PHC EOS, the BACKONE EOS, and SAFT-type EOS. More than 30 Lennard-Jones EOS have been proposed in the literature. A comprehensive evaluation of such EOS showed that several EOS describe the Lennard-Jones potential with good and similar accuracy, but none of them is outstanding. Three of those EOS show unacceptable unphysical behavior in some fluid regions, e.g. multiple van der Waals loops, while being otherwise reasonably precise. Only the Lennard-Jones EOS of Kolafa and Nezbeda was found to be robust and precise for most thermodynamic properties of the Lennard-Jones fluid. Furthermore, the Lennard-Jones EOS of Johnson et al. was found to be less precise for practically all available reference data than the Kolafa and Nezbeda EOS. Lennard-Jones potential as building block for force fields The Lennard-Jones potential is extensively used for molecular modeling of real substances. There are essentially two ways the Lennard-Jones potential can be used for molecular modeling: (1) A real-substance atom or molecule is modeled directly by the Lennard-Jones potential, which yields very good results for noble gases and methane, i.e. dispersively interacting spherical particles. In the case of methane, the molecule is assumed to be spherically symmetric and the hydrogen atoms are fused with the carbon atom into a common unit. This simplification can in general also be applied to more complex molecules, but usually yields poor results.
(2) A real-substance molecule is built of multiple Lennard-Jones interaction sites, which can be connected either by rigid bonds or by flexible additional potentials (and possibly also other potential types, e.g. partial charges). Molecular models (often referred to as 'force fields') for practically all molecular and ionic particles can be constructed using this scheme, for example for alkanes. Upon using the first outlined approach, the molecular model has only the two parameters σ and ε of the Lennard-Jones potential that can be used for the fitting; e.g. ε/k_B ≈ 120 K and σ ≈ 0.34 nm are commonly quoted values for argon. Upon adjusting the model parameters σ and ε to real-substance properties, the Lennard-Jones potential can be used to describe simple substances (like noble gases) with good accuracy. Evidently, this approach is only a good approximation for spherical and simply dispersively interacting molecules and atoms. The direct use of the Lennard-Jones potential has the great advantage that simulation results and theories for the Lennard-Jones potential can be used directly. Hence, available results for the Lennard-Jones potential and substance can be directly scaled using the appropriate σ and ε (see reduced units). The Lennard-Jones potential parameters σ and ε can in general be fitted to any desired real-substance property. In soft-matter physics, usually experimental data for the vapor–liquid phase equilibrium or the critical point are used for the parametrization; in solid-state physics, the compressibility, heat capacity, or lattice constants are employed instead. The second outlined approach of using the Lennard-Jones potential as a building block of elongated and complex molecules is far more sophisticated. Molecular models are thereby tailor-made in the sense that simulation results are only applicable to that particular model. This development approach for molecular force fields is today mainly performed in soft-matter physics and associated fields such as chemical engineering, chemistry, and computational biology. A large number of force fields are based on the Lennard-Jones potential, e.g. the TraPPE force field, the OPLS force field, and the MolMod force field (an overview of molecular force fields is out of the scope of the present article). For the state-of-the-art modeling of solid-state materials, more elaborate multi-body potentials (e.g. EAM potentials) are used. The Lennard-Jones potential yields a good approximation of intermolecular interactions for many applications: the macroscopic properties computed using the Lennard-Jones potential are in good agreement with experimental data for simple substances like argon on one side, and the potential function is in fair agreement with results from quantum chemistry on the other side. The Lennard-Jones potential gives a good description of molecular interactions in fluid phases, whereas molecular interactions in solid phases are only roughly well described. This is mainly due to the fact that multi-body interactions play a significant role in solid phases, and these are not comprised in the Lennard-Jones potential. Therefore, the Lennard-Jones potential is extensively used in soft-matter physics and associated fields, whereas it is less frequently used in solid-state physics. Due to its simplicity, the Lennard-Jones potential is often used to describe the properties of gases and simple fluids and to model dispersive and repulsive interactions in molecular models. It is especially accurate for noble gas atoms and methane.
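To illustrate the scaling from reduced to real units, the sketch below (not from the source) converts the approximate reduced critical point of the Lennard-Jones fluid to argon, using the commonly quoted, assumed parameter values ε/k_B ≈ 120 K and σ ≈ 0.34 nm:

```python
K_B = 1.380649e-23        # Boltzmann constant, J/K
EPS_OVER_KB = 120.0       # assumed argon energy parameter, K
SIGMA = 0.34e-9           # assumed argon size parameter, m

T_C_RED, P_C_RED = 1.32, 0.13   # approximate reduced critical point of the LJ fluid

T_c = T_C_RED * EPS_OVER_KB                     # critical temperature, K
p_c = P_C_RED * EPS_OVER_KB * K_B / SIGMA ** 3  # critical pressure, Pa
print(T_c, p_c / 1e6)   # -> about 158 K and 5.5 MPa
# Experimental values for argon are about 151 K and 4.9 MPa; the deviation
# illustrates the 'good but not perfect' quality of the two-parameter fit.
```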
It is furthermore a good approximation for molecular interactions at long and short distances for neutral atoms and molecules. Therefore, the Lennard-Jones potential is very often used as a building block of molecular models of complex molecules, e.g. alkanes or water. The Lennard-Jones potential can also be used to model the adsorption interactions at solid–fluid interfaces, i.e. physisorption or chemisorption. It is well accepted that the main limitations of the Lennard-Jones potential lie in the fact that the potential is a pair potential (it does not cover multi-body interactions) and that the r^(−12) term is used for the repulsion. Results from quantum chemistry suggest that a higher exponent than 12 has to be used, i.e. a steeper potential. Furthermore, the Lennard-Jones potential has limited flexibility, i.e. only the two model parameters σ and ε can be used for the fitting to describe a real substance. See also Comparison of force-field implementations Embedded atom model Force field (chemistry) Molecular mechanics Morse potential and Morse/Long-range potential Virial expansion References External links Lennard-Jones model on SklogWiki. Chemical bonding Computational chemistry Intermolecular forces Quantum mechanical potentials Theoretical chemistry Thermodynamics
Lennard-Jones potential
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
7,200
[ "Molecular physics", "Quantum mechanics", "Intermolecular forces", "Materials science", "Quantum mechanical potentials", "Computational chemistry", "Theoretical chemistry", "Condensed matter physics", "Thermodynamics", "nan", "Chemical bonding", "Dynamical systems" ]
227,912
https://en.wikipedia.org/wiki/Rose%20hip
The rose hip or rosehip, also called rose haw and rose hep, is the accessory fruit of the various species of rose plant. It is typically red to orange, but ranges from dark purple to black in some species. Rose hips begin to form after pollination of flowers in spring or early summer, and ripen in late summer through autumn. Propagation Roses are propagated from rose hips by removing the achenes that contain the seeds from the hypanthium (the outer coating) and sowing them just beneath the surface of the soil. The seeds can take many months to germinate. Most species require chilling (stratification), with some such as Rosa canina only germinating after two winter chill periods. Uses Rose hips are used in bread and pies, jam, jelly, marmalade, syrup, soup, tea, wine, and other beverages. Rose hips can be eaten raw, like berries, if care is taken to avoid the hairs inside the fruit. These urticating hairs are used as itching powder. A few rose species are sometimes grown for the ornamental value of their hips, such as Rosa moyesii, which has prominent, large, red bottle-shaped fruits. Rosa macrophylla 'Master Hugh' has the largest hips of any readily available rose. Rose hips are commonly used in herbal tea, often blended with hibiscus. An oil is also extracted from the seeds. Rose hip soup, known as nyponsoppa in Swedish, is especially popular in Sweden. Rhodomel, a type of mead, is made with rose hips. Rose hips can be used to make pálinka, the traditional Hungarian fruit brandy popular in Hungary, Romania, and other countries sharing Austro-Hungarian history. Rose hips are also the central ingredient of cockta, the fruity-tasting national soft drink of Slovenia. Dried rose hips are also sold for crafts and home fragrance purposes. The Inupiat mix rose hips with wild redcurrant and highbush cranberries and boil them into a syrup. Nutrients and research Wild rose hip fruits are particularly rich in vitamin C, containing 426 mg per 100 g, or about 0.4% by weight (w/w). RP-HPLC assays of fresh rose hips and several commercially available products revealed a wide range of L-ascorbic acid (vitamin C) content, ranging from 0.03 to 1.3%. Rose hips contain the carotenoids beta-carotene, lutein, zeaxanthin, and lycopene. A meta-analysis of human studies examining the potential for rose hip extracts to reduce arthritis pain concluded there was a small effect requiring further analysis of safety and efficacy in clinical trials. Use of rose hips is not considered an effective treatment for knee osteoarthritis. See also Rose hip seed oil Rosa moschata Rosa rubiginosa Rosa gymnocarpa Rosa roxburghii References External links Fruit morphology Herbal teas Roses Food ingredients
Rose hip
[ "Technology" ]
612
[ "Food ingredients", "Components" ]
228,107
https://en.wikipedia.org/wiki/Stress%20%28mechanics%29
In continuum mechanics, stress is a physical quantity that describes forces present during deformation. For example, an object being pulled apart, such as a stretched elastic band, is subject to tensile stress and may undergo elongation. An object being pushed together, such as a crumpled sponge, is subject to compressive stress and may undergo shortening. The greater the force and the smaller the cross-sectional area of the body on which it acts, the greater the stress. Stress has dimension of force per area, with SI units of newtons per square meter (N/m2) or pascal (Pa). Stress expresses the internal forces that neighbouring particles of a continuous material exert on each other, while strain is the measure of the relative deformation of the material. For example, when a solid vertical bar is supporting an overhead weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a closed container under pressure, each particle gets pushed against by all the surrounding particles. The container walls and the pressure-inducing surface (such as a piston) push against them in (Newtonian) reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress is frequently represented by a lowercase Greek letter sigma (σ). Strain inside a material may arise by various mechanisms, such as stress as applied by external forces to the bulk material (like gravity) or to its surface (like contact forces, external pressure, or friction). Any strain (deformation) of a solid material generates an internal elastic stress, analogous to the reaction force of a spring, that tends to restore the material to its original non-deformed state. In liquids and gases, only deformations that change the volume generate persistent elastic stress. If the deformation changes gradually with time, even in fluids there will usually be some viscous stress, opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition, or by external electromagnetic fields (as in piezoelectric and magnetostrictive materials). The relation between mechanical stress, strain, and the strain rate can be quite complicated, although a linear approximation may be adequate in practice if the quantities are sufficiently small. Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition. History Humans have known about stress inside materials since ancient times. Until the 17th century, this understanding was largely intuitive and empirical, though this did not prevent the development of relatively advanced technologies like the composite bow and glass blowing. 
Over several millennia, architects and builders in particular learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as the capitals, arches, cupolas, trusses, and flying buttresses of Gothic cathedrals. Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries: Galileo Galilei's rigorous experimental method, René Descartes's coordinates and analytic geometry, and Newton's laws of motion and equilibrium and calculus of infinitesimals. With those tools, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model of a deformed elastic body by introducing the notions of stress and strain. Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum). The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces (shear stress) in parallel laminar flow. Definition Stress is defined as the force across a small boundary per unit area of that boundary, for all orientations of the boundary. Derived from a physical quantity (force) and a purely geometrical quantity (area), stress is also a physical quantity, like velocity, torque, or energy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes. Following the basic premises of continuum mechanics, stress is a macroscopic concept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignore quantum effects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them. Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of a metal rod or the fibers of a piece of wood. Quantitatively, the stress is expressed by the Cauchy traction vector T, defined as the traction force F between adjacent parts of the material across an imaginary separating surface S, divided by the area of S. In a fluid at rest the force is perpendicular to the surface, and is the familiar pressure. In a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S; hence the stress across a surface must be regarded as a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the (Cauchy) stress tensor, which is a linear function that relates the normal vector n of a surface S to the traction vector T across S. With respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric matrix of 3×3 real numbers.
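A minimal numerical sketch (illustrative values, not from the source) of how the stress tensor maps a surface normal to a traction vector:

```python
import numpy as np

# Symmetric 3x3 Cauchy stress tensor, components in MPa (illustrative values)
S = np.array([[50.0, 10.0,  0.0],
              [10.0, 20.0,  5.0],
              [ 0.0,  5.0, 30.0]])

n = np.array([1.0, 0.0, 0.0])   # unit normal of the surface of interest
T = S @ n                       # traction vector acting across that surface
print(T)                        # -> [50. 10.  0.] MPa
```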
Even within a homogeneous body, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varying tensor field. Normal and shear In general, the stress T that a particle P applies on another particle Q across a surface S can have any direction relative to S. The vector T may be regarded as the sum of two components: the normal stress (compression or tension) perpendicular to the surface, and the shear stress that is parallel to the surface. If the normal unit vector n of the surface (pointing from Q towards P) is assumed fixed, the normal component can be expressed by a single number, the dot product $T \cdot n$. This number will be positive if P is "pulling" on Q (tensile stress), and negative if P is "pushing" against Q (compressive stress). The shear component is then the vector $T - (T \cdot n)\,n$. Units The dimension of stress is that of pressure, and therefore its coordinates are measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system. Because mechanical stresses easily exceed a million pascals, MPa, which stands for megapascal, is a common unit of stress. Causes and effects Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines or points; and possibly also on very short time intervals (as in the impulses due to collisions). In active matter, self-propulsion of microscopic particles generates macroscopic stress profiles. In general, the stress distribution in a body is expressed as a piecewise continuous function of space and time. Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties like birefringence, polarization, and permeability. The imposition of stress by an external agent usually creates some strain (deformation) in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretched spring, tending to restore the material to its original undeformed state. Fluid materials (liquids, gases and plasmas) by definition can only oppose deformations that would change their volume. If the deformation changes with time, even in fluids there will usually be some viscous stress, opposing that change. Such stresses can be either shear or normal in nature. The molecular origin of shear stresses in fluids is given in the article on viscosity; that of normal viscous stresses can be found in Sharma (2019). The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although a linear approximation may be adequate in practice if the quantities are small enough). Stress that exceeds certain strength limits of the material will result in permanent deformation (such as plastic flow, fracture, cavitation) or even change its crystal structure and chemical composition. 
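The normal/shear decomposition just described is easy to check numerically. The following is a minimal sketch (not from any particular source; it assumes Python with numpy, and the traction values are made up) that splits a traction vector T into its normal component $T \cdot n$ and its shear component $T - (T \cdot n)\,n$:

```python
import numpy as np

def decompose_traction(T, n):
    """Split a traction vector T into normal and shear parts
    relative to the unit normal n of the surface."""
    n = n / np.linalg.norm(n)          # ensure n is unit length
    sigma_n = np.dot(T, n)             # signed normal stress: >0 tension, <0 compression
    shear = T - sigma_n * n            # shear component lies in the surface
    return sigma_n, shear

# Hypothetical traction (MPa) across a surface with normal along +z
T = np.array([3.0, -1.0, 5.0])
n = np.array([0.0, 0.0, 1.0])
sigma_n, tau = decompose_traction(T, n)
print(sigma_n)              # 5.0 -> positive: P is "pulling" on Q (tensile)
print(tau)                  # [ 3. -1.  0.] -> shear part, parallel to the surface
print(np.linalg.norm(tau))  # magnitude of the shear stress
```

The sign of the printed normal component reproduces the tensile/compressive convention stated above.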
Simple types In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three such simple stress situations, that are often encountered in engineering design, are the uniaxial normal stress, the simple shear stress, and the isotropic normal stress. Uniaxial normal A common situation with a simple stress pattern is when a straight rod, with uniform material and cross section, is subjected to tension by opposite forces of magnitude $F$ along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force, $F$, acting with continuity through the full cross-sectional area, $A$. Therefore, the stress σ throughout the bar, across any horizontal surface, can be expressed simply by the single number $\sigma = F/A$, where $F$ is the magnitude of those forces and $A$ the cross-sectional area. On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut. This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress. If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force F and the stress change sign, and the stress is called compressive stress. This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value $\sigma = F/A$ will be only the average stress, called engineering stress or nominal stress. If the bar's length L is many times its diameter D, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times D from both ends. (This observation is known as Saint-Venant's principle). Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resulting bending stress will still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is the hoop stress that occurs on the walls of a cylindrical pipe or vessel filled with pressurized fluid. Shear Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or a section of a soft metal bar that is being cut by the jaws of a scissors-like tool. Let F be the magnitude of those forces, and M be the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of M must pull the other part with the same force F. Assuming that the direction of the forces is known, the stress across M can be expressed simply by the single number $\tau = F/A$, where $F$ is the magnitude of those forces and $A$ the cross-sectional area. Unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it. 
For any plane S that is perpendicular to the layer, the net internal force across S, and hence the stress, will be zero. As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratio F/A will only be an average ("nominal", "engineering") stress. That average is often sufficient for practical purposes. Shear stress is observed also when a cylindrical bar such as a shaft is subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") of I-beams under bending loads, due to the web constraining the end plates ("flanges"). Isotropic Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected. In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but some liquids may withstand very large amounts of isotropic tensile stress under some circumstances (see Z-tube). Cylinder Parts with rotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or even cylindrical symmetry. The analysis of such cylinder stresses can take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor. General types Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximum for surfaces that are perpendicular to a certain direction $d$, and zero across any surfaces that are parallel to $d$. When the shear stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element. Cauchy tensor Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way. Cauchy observed that the stress vector across a surface will always be a linear function of the surface's normal vector $n$, the unit-length vector that is perpendicular to it. That is, $T = \boldsymbol{\sigma}(n)$, where the function $\boldsymbol{\sigma}$ satisfies $\boldsymbol{\sigma}(\alpha u + \beta v) = \alpha\,\boldsymbol{\sigma}(u) + \beta\,\boldsymbol{\sigma}(v)$ for any vectors $u, v$ and any real numbers $\alpha, \beta$. The function $\boldsymbol{\sigma}$, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. 
(Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use to describe the "tensions" (stresses) in a material.) In tensor calculus, $\boldsymbol{\sigma}$ is classified as a second-order tensor of type (0,2) or (1,1) depending on convention. Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered $x_1, x_2, x_3$ or named $x, y, z$, the matrix may be written as $\begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}$ or $\begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix}$. The stress vector $T = \boldsymbol{\sigma}(n)$ across a surface with normal vector $n$ (a covariant, "row", vector) with coordinates $n_1, n_2, n_3$ is then the matrix product $T = n\,\boldsymbol{\sigma}$, that is $T_j = \sum_i n_i \sigma_{ij}$ (see Cauchy stress tensor). The linear relation between $T$ and $n$ follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is $\sigma_{12} = \sigma_{21}$, $\sigma_{13} = \sigma_{31}$, and $\sigma_{23} = \sigma_{32}$. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written $\begin{bmatrix} \sigma_x & \tau_{xy} & \tau_{xz} \\ \tau_{xy} & \sigma_y & \tau_{yz} \\ \tau_{xz} & \tau_{yz} & \sigma_z \end{bmatrix}$ where the elements $\sigma_x, \sigma_y, \sigma_z$ are called the orthogonal normal stresses (relative to the chosen coordinate system), and $\tau_{xy}, \tau_{xz}, \tau_{yz}$ the orthogonal shear stresses. Change of coordinates The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle of stress distribution. As a symmetric 3×3 real matrix, the stress tensor has three mutually orthogonal unit-length eigenvectors $e_1, e_2, e_3$ and three real eigenvalues $\lambda_1, \lambda_2, \lambda_3$, such that $\boldsymbol{\sigma} e_i = \lambda_i e_i$. Therefore, in a coordinate system with axes $e_1, e_2, e_3$, the stress tensor is a diagonal matrix, and has only the three normal components $\lambda_1, \lambda_2, \lambda_3$, the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface, there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame. Tensor field In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore, the stress tensor must be defined for each point and each moment, by considering an infinitesimal particle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point. Thin plates Human-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies. In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, normal to (straight through) the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. 
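The matrix machinery above can be illustrated with a short numerical sketch (hypothetical stress values, assuming Python with numpy): the traction across a surface is the matrix product of the stress tensor with the unit normal, and the principal stresses are the eigenvalues of the symmetric matrix.

```python
import numpy as np

# Hypothetical Cauchy stress tensor (MPa), symmetric by construction
sigma = np.array([[ 50.0,  30.0,  0.0],
                  [ 30.0, -20.0,  0.0],
                  [  0.0,   0.0, 10.0]])
assert np.allclose(sigma, sigma.T)   # symmetry from angular-momentum balance

# Traction vector across the surface with unit normal n: T_i = sum_j sigma_ij n_j
n = np.array([1.0, 0.0, 0.0])
T = sigma @ n                        # -> [50., 30., 0.]

# Principal stresses and directions: eigenvalues/eigenvectors of sigma.
# eigh is appropriate because sigma is symmetric, so eigenvalues are real.
principal, directions = np.linalg.eigh(sigma)
print(T)
print(principal)     # three real principal stresses, ascending
print(directions)    # orthonormal principal axes, one per column
```

In the rotated frame given by the eigenvector columns, the same stress state is diagonal, exactly as described above.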
Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as a bending stress that tends to change the curvature of the plate. These simplifications may not hold at welds, at sharp bends and creases (where the radius of curvature is comparable to the thickness of the plate). Thin beams The analysis of stress can be considerably simplified also for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies, one may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must take into account also a bending stress (that tries to change the bar's curvature, in some direction perpendicular to the axis) and a torsional stress (that tries to twist or un-twist it about its axis). Analysis Stress analysis is a branch of applied physics that covers the determination of the internal distribution of forces in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena like plate tectonics, vulcanism and avalanches; and in biology, to understand the anatomy of living beings. Goals and assumptions Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopic static equilibrium. By Newton's laws of motion, any external forces being applied to such a system must be balanced by internal reaction forces, which are almost always surface contact forces between adjacent particles — that is, stress. Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle to particle, creating a stress distribution throughout the body. The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may be body forces (such as gravity or magnetic attraction), that act throughout the volume of a material; or concentrated loads (such as friction between an axle and a bearing, or the weight of a train wheel on a rail), that are imagined to act over a two-dimensional area, or along a line, or at a single point. In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by known constitutive equations. Methods Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to a scale model, and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. Most stress analysis, however, is done by mathematical methods, especially during design. 
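To make the mathematical route concrete before the boundary-value formulation described next, here is a deliberately minimal finite-element sketch for an axially loaded elastic bar (all material values and loads are made up; this is a toy illustration, not a production solver):

```python
import numpy as np

# Minimal 1-D finite-element sketch: a uniform elastic bar fixed at the left
# end and pulled by a force at the right end, discretized into n elements.
E, A, L = 200e9, 1e-4, 2.0       # Young's modulus (Pa), area (m^2), length (m) - made up
F = 1e4                          # end load (N)
n = 4                            # number of elements
k = E * A / (L / n)              # axial stiffness of each two-node element

# Assemble the global stiffness matrix from identical two-node elements.
K = np.zeros((n + 1, n + 1))
for e in range(n):
    K[e:e+2, e:e+2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n + 1)
f[-1] = F                        # the concentrated load enters as a boundary term

# Apply the fixed-end boundary condition (u[0] = 0) and solve the reduced system.
u = np.zeros(n + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

stress = E * np.diff(u) / (L / n)   # axial stress in each element
print(u)        # nodal displacements grow linearly along the bar
print(stress)   # equals F/A = 1e8 Pa in every element, as expected
```

Because the problem is linear, the computed stresses scale linearly with the applied load, mirroring the remark about linear elasticity in the next paragraph.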
The basic stress analysis problem can be formulated by Euler's equations of motion for continuous bodies (which are consequences of Newton's laws for conservation of linear momentum and angular momentum) and the Euler–Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system of partial differential equations involving the stress tensor field and the strain tensor field, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore a boundary-value problem. Stress analysis for elastic structures is based on the theory of elasticity and infinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow, fracture, phase change, etc.). Engineered structures are usually designed so the maximum expected stresses are well within the range of linear elasticity (the generalization of Hooke's law for continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear. Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc. Still, for two- or three-dimensional cases one must solve a partial differential equation problem. Analytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as the finite element method, the finite difference method, and the boundary element method. Measures Other useful stress measures include the first and second Piola–Kirchhoff stress tensors, the Biot stress tensor, and the Kirchhoff stress tensor. See also Bending Compressive strength Critical plane analysis Kelvin probe force microscope Mohr's circle Lamé's stress ellipsoid Reinforced solid Residual stress Shear strength Shot peening Strain Strain tensor Strain rate tensor Stress–energy tensor Stress–strain curve Stress concentration Transient friction loading Tensile strength Thermal stress Virial stress Yield (engineering) Yield surface Virial theorem References Further reading Dieter, G. E. (1989). Mechanical Metallurgy, 3rd ed. New York: McGraw-Hill. Landau, L. D. and Lifshitz, E. M. (1959). Theory of Elasticity. Love, A. E. H. (1944). A Treatise on the Mathematical Theory of Elasticity, 4th ed. New York: Dover Publications. Solid mechanics Tensors
Stress (mechanics)
[ "Physics", "Engineering" ]
5,500
[ "Solid mechanics", "Tensors", "Mechanics" ]
228,108
https://en.wikipedia.org/wiki/Young%27s%20modulus
Young's modulus (or Young modulus) is a mechanical property of solid materials that measures the tensile or compressive stiffness when the force is applied lengthwise. It is the modulus of elasticity for tension or axial compression. Young's modulus is defined as the ratio of the stress (force per unit area) applied to the object and the resulting axial strain (displacement or deformation) in the linear elastic region of the material. Although Young's modulus is named after the 19th-century British scientist Thomas Young, the concept was developed in 1727 by Leonhard Euler. The first experiments that used the concept of Young's modulus in its modern form were performed by the Italian scientist Giordano Riccati in 1782, pre-dating Young's work by 25 years. The term modulus is derived from the Latin root term modus, which means measure. Definition Young's modulus, $E$, quantifies the relationship between tensile or compressive stress $\sigma$ (force per unit area) and axial strain $\varepsilon$ (proportional deformation) in the linear elastic region of a material: $E = \frac{\sigma}{\varepsilon}.$ Young's modulus is commonly measured in the International System of Units (SI) in multiples of the pascal (Pa) and common values are in the range of gigapascals (GPa). Examples: Rubber (increasing pressure: length increases quickly, meaning low $E$) Aluminium (increasing pressure: length increases slowly, meaning high $E$) Linear elasticity A solid material undergoes elastic deformation when a small load is applied to it in compression or extension. Elastic deformation is reversible, meaning that the material returns to its original shape after the load is removed. At near-zero stress and strain, the stress–strain curve is linear, and the relationship between stress and strain is described by Hooke's law that states stress is proportional to strain. The coefficient of proportionality is Young's modulus. The higher the modulus, the more stress is needed to create the same amount of strain; an idealized rigid body would have an infinite Young's modulus. Conversely, a very soft material (such as a fluid) would deform without force, and would have zero Young's modulus. Related but distinct properties Material stiffness is a distinct property from the following: Strength: maximum amount of stress that material can withstand while staying in the elastic (reversible) deformation regime; Geometric stiffness: a global characteristic of the body that depends on its shape, and not only on the local properties of the material; for instance, an I-beam has a higher bending stiffness than a rod of the same material for a given mass per length; Hardness: relative resistance of the material's surface to penetration by a harder body; Toughness: amount of energy that a material can absorb before fracture. On a stress–strain diagram, the elastic limit (or yield point) is the point within which the stress is proportional to strain and the material regains its original shape after removal of the external force. Usage Young's modulus enables the calculation of the change in the dimension of a bar made of an isotropic elastic material under tensile or compressive loads. For instance, it predicts how much a material sample extends under tension or shortens under compression. The Young's modulus directly applies to cases of uniaxial stress; that is, tensile or compressive stress in one direction and no stress in the other directions. 
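As an illustration of the definition, the following sketch (synthetic, perfectly linear data chosen to resemble a steel-like material; assuming Python with numpy) recovers $E$ as the slope of the stress–strain curve in its linear region:

```python
import numpy as np

# Synthetic stress-strain data for the initial linear region (made-up values).
strain = np.array([0.0, 0.0005, 0.0010, 0.0015, 0.0020])   # dimensionless
stress = 200e9 * strain                                     # Pa; linear by construction

# Young's modulus is the slope of stress versus strain in this region.
E = np.polyfit(strain, stress, 1)[0]
print(E / 1e9, "GPa")   # -> ~200 GPa
```

With real test data the fit would be restricted to the initial linear portion of the curve, as the text specifies.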
Young's modulus is also used in order to predict the deflection that will occur in a statically determinate beam when a load is applied at a point in between the beam's supports. Other elastic calculations usually require the use of one additional elastic property, such as the shear modulus $G$, bulk modulus $K$, and Poisson's ratio $\nu$. Any two of these parameters are sufficient to fully describe elasticity in an isotropic material. For example, the elastic properties of cancerous skin tissue have been measured and found to be a Poisson's ratio of 0.43±0.12 and an average Young's modulus of 52 kPa. Defining the elastic properties of skin may become the first step in turning elasticity into a clinical tool. For homogeneous isotropic materials simple relations exist between elastic constants that allow calculating them all as long as two are known, for example $E = 2G(1+\nu) = 3K(1-2\nu)$. Linear versus non-linear Young's modulus represents the factor of proportionality in Hooke's law, which relates the stress and the strain. However, Hooke's law is only valid under the assumption of an elastic and linear response. Any real material will eventually fail and break when stretched over a very large distance or with a very large force; however, all solid materials exhibit nearly Hookean behavior for small enough strains or stresses. If the range over which Hooke's law is valid is large enough compared to the typical stress that one expects to apply to the material, the material is said to be linear. Otherwise (if the typical stress one would apply is outside the linear range), the material is said to be non-linear. Steel, carbon fiber and glass among others are usually considered linear materials, while other materials such as rubber and soils are non-linear. However, this is not an absolute classification: if very small stresses or strains are applied to a non-linear material, the response will be linear, but if very high stress or strain is applied to a linear material, the linear theory will not be enough. For example, as the linear theory implies reversibility, it would be absurd to use the linear theory to describe the failure of a steel bridge under a high load; although steel is a linear material for most applications, it is not in such a case of catastrophic failure. In solid mechanics, the slope of the stress–strain curve at any point is called the tangent modulus. It can be experimentally determined from the slope of a stress–strain curve created during tensile tests conducted on a sample of the material. Directional materials Young's modulus is not always the same in all orientations of a material. Most metals and ceramics, along with many other materials, are isotropic, and their mechanical properties are the same in all orientations. However, metals and ceramics can be treated with certain impurities, and metals can be mechanically worked to make their grain structures directional. These materials then become anisotropic, and Young's modulus will change depending on the direction of the force vector. Anisotropy can be seen in many composites as well. For example, carbon fiber has a much higher Young's modulus (is much stiffer) when force is loaded parallel to the fibers (along the grain). Other such materials include wood and reinforced concrete. Engineers can use this directional phenomenon to their advantage in creating structures. 
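The isotropic relations just quoted are easy to apply in code. A minimal sketch (the function name is illustrative, and the numbers are approximate textbook values for a generic steel):

```python
def from_E_nu(E, nu):
    """Derive shear and bulk moduli from (E, nu) for a homogeneous
    isotropic material, using E = 2G(1+nu) = 3K(1-2nu)."""
    G = E / (2.0 * (1.0 + nu))
    K = E / (3.0 * (1.0 - 2.0 * nu))
    return G, K

E, nu = 200e9, 0.30          # assumed values for a generic steel
G, K = from_E_nu(E, nu)
print(G / 1e9)               # ~76.9 GPa shear modulus
print(K / 1e9)               # ~166.7 GPa bulk modulus
```

Any other pair of the four constants could serve as the starting point, since two of them fully determine an isotropic material's elasticity.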
Temperature dependence The Young's modulus of metals varies with the temperature and can be realized through the change in the interatomic bonding of the atoms, and hence its change is found to be dependent on the change in the work function of the metal. Although classically, this change is predicted through fitting and without a clear underlying mechanism (for example, Wachtman's formula), the Rahemi-Li model demonstrates how the change in the electron work function leads to change in the Young's modulus of metals and predicts this variation with calculable parameters, using the generalization of the Lennard-Jones potential to solids. In general, as the temperature increases, the Young's modulus decreases via $E(T) = \beta (\varphi(T))^6$, where the electron work function varies with the temperature as $\varphi(T) = \varphi_0 - \gamma \frac{(k_B T)^2}{\varphi_0}$ and $\gamma$ is a calculable material property which is dependent on the crystal structure (for example, BCC, FCC). $\varphi_0$ is the electron work function at $T = 0$ and $\beta$ is constant throughout the change. Calculation Young's modulus is calculated by dividing the tensile stress, $\sigma(\varepsilon)$, by the engineering extensional strain, $\varepsilon$, in the elastic (initial, linear) portion of the physical stress–strain curve: $E = \frac{\sigma(\varepsilon)}{\varepsilon} = \frac{F/A}{\Delta L / L_0} = \frac{F L_0}{A \, \Delta L}$ where $E$ is the Young's modulus (modulus of elasticity); $F$ is the force exerted on an object under tension; $A$ is the actual cross-sectional area, which equals the area of the cross-section perpendicular to the applied force; $\Delta L$ is the amount by which the length of the object changes ($\Delta L$ is positive if the material is stretched, and negative when the material is compressed); $L_0$ is the original length of the object. Force exerted by stretched or contracted material Young's modulus of a material can be used to calculate the force it exerts under specific strain. $F = \frac{E A \, \Delta L}{L_0}$ where $F$ is the force exerted by the material when contracted or stretched by $\Delta L$. Hooke's law for a stretched wire can be derived from this formula: $F = \left(\frac{EA}{L_0}\right) \Delta L = kx$, where $k = \frac{EA}{L_0}$ and $x = \Delta L$. Note that the elasticity of coiled springs comes from shear modulus, not Young's modulus. When a spring is stretched, its wire's length doesn't change, but its shape does. This is why only the shear modulus of elasticity is involved in the stretching of a spring. Elastic potential energy The elastic potential energy stored in a linear elastic material is given by the integral of the Hooke's law: $U_e = \int kx \, dx = \tfrac{1}{2} k x^2.$ Now by explicating the intensive variables: $U_e = \int \frac{E A \, \Delta L}{L_0} \, d\Delta L = \frac{E A}{L_0} \int \Delta L \, d\Delta L = \frac{E A \, \Delta L^2}{2 L_0}.$ This means that the elastic potential energy density (that is, per unit volume) is given by: $\frac{U_e}{A L_0} = \frac{E \, \Delta L^2}{2 L_0^2} = \frac{1}{2} E \varepsilon^2$ or, in simple notation, for a linear elastic material: $u_e(\varepsilon) = \int E \varepsilon \, d\varepsilon = \tfrac{1}{2} E \varepsilon^2$, since the strain is defined $\varepsilon = \frac{\Delta L}{L_0}$. In a nonlinear elastic material the Young's modulus is a function of the strain, so the second equivalence no longer holds, and the elastic energy is not a quadratic function of the strain: $u_e(\varepsilon) = \int E(\varepsilon) \varepsilon \, d\varepsilon \neq \tfrac{1}{2} E \varepsilon^2.$ Examples Young's modulus can vary somewhat due to differences in sample composition and test method. The rate of deformation has the greatest impact on the data collected, especially in polymers. The values here are approximate and only meant for relative comparison. See also Bending stiffness Deflection Deformation Flexural modulus Impulse excitation technique List of materials properties Yield (engineering) References Further reading ASTM E 111, "Standard Test Method for Young's Modulus, Tangent Modulus, and Chord Modulus" The ASM Handbook (various volumes) contains Young's Modulus for various materials and information on calculations. 
Online version External links Matweb: free database of engineering properties for over 175,000 materials Young's Modulus for groups of materials, and their cost Elasticity (physics) Physical quantities Structural analysis
Young's modulus
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
2,133
[ "Structural engineering", "Physical phenomena", "Physical quantities", "Elasticity (physics)", "Deformation (mechanics)", "Quantity", "Structural analysis", "Mechanical engineering", "Aerospace engineering", "Physical properties" ]
12,522,713
https://en.wikipedia.org/wiki/Phycourobilin
Phycourobilin is an orange tetrapyrrole involved in photosynthesis in cyanobacteria and red algae. This chromophore is bound to the phycobiliprotein phycoerythrin, the distal component of the light-harvesting system of cyanobacteria and red algae (phycobilisome). When bound to phycoerythrin, phycourobilin shows an absorption maximum around 495 nm. This chromophore is always a donor chromophore of phycoerythrins, since their acceptor chromophore is always phycoerythrobilin. It can also be linked to the linker polypeptides of the phycobilisome, in which its precise role remains unclear. Phycourobilin is found in marine phycobilisome-containing organisms, allowing them to efficiently absorb blue-green light. In the ubiquitous marine cyanobacteria Synechococcus, the amount of phycourobilin in the phycobilisomes is correlated to the ecological niche the cells inhabit: offshore Synechococcus are quite phycourobilin-rich, while coastal Synechococcus contain very little or no phycourobilin. This represents a remarkable adaptation of the cyanobacterial light-harvesting system, as oceanic waters are relatively richer in blue light than onshore waters. References Photosynthetic pigments Tetrapyrroles
Phycourobilin
[ "Chemistry" ]
321
[ "Photosynthetic pigments", "Photosynthesis" ]
12,527,335
https://en.wikipedia.org/wiki/Cosmic%20time
Cosmic time, or cosmological time, is the time coordinate commonly used in the Big Bang models of physical cosmology. This concept of time avoids some issues related to relativity by being defined within a solution to the equations of general relativity widely used in cosmology. Problems with absolute time Albert Einstein's theory of special relativity showed that simultaneity is not absolute. An observer at rest may believe that two events separated in space (say, two lightning strikes 10 meters apart) occurred at the same time, while another observer in (relative) motion claims that one occurred after the other. This coupling of space and time, Minkowski spacetime, complicates scientific time comparisons. However, Einstein's theory of general relativity provides a partial solution. In general relativity, spacetime is defined in relation to the distribution of mass. A "clock" conceptually linked to a mass will provide a well defined time measurement for all co-moving masses. Cosmic time is based on this concept of a clock. Hermann Weyl provided a solution to this issue by postulating that the galaxies in the expanding universe define geodesics in spacetime. Each galaxy gets its own local clock synchronized at the single point in the past where the geodesics intersect. Hypersurfaces perpendicular to the geodesics become surfaces of constant cosmic time. Definition Cosmic time is a measure of time by a physical clock with zero peculiar velocity in the absence of matter over-/under-densities (to prevent time dilation due to relativistic effects or confusions caused by expansion of the universe). Unlike other measures of time such as temperature, redshift, particle horizon, or Hubble horizon, the cosmic time (similar and complementary to the co-moving coordinates) is blind to the expansion of the universe. Cosmic time is the standard time coordinate for specifying the Friedmann–Lemaître–Robertson–Walker solutions of Einstein field equations of general relativity. Such a time coordinate may be defined for a homogeneous, expanding universe so that the universe has the same density everywhere at each moment in time (the fact that this is possible means that the universe is, by definition, homogeneous). The clocks measuring cosmic time should move along the Hubble flow. Reference point There are two main ways for establishing a reference point for the cosmic time. Lookback time The present time can be used as the cosmic reference point creating lookback time. This can be described in terms of the time light has taken to arrive here from a distant object. Age of the universe Alternatively, the Big Bang may be taken as reference to define $t$ as the age of the universe, also known as time since the big bang. The current physical cosmology estimates the present age as 13.8 billion years. The moment $t = 0$ doesn't necessarily have to correspond to a physical event (such as the cosmological singularity) but rather it refers to the point at which the scale factor would vanish for a standard cosmological model such as ΛCDM. For technical purposes, concepts such as the average temperature of the universe (in units of eV) or the particle horizon are used when the early universe is the objective of a study since understanding the interaction among particles is more relevant than their time coordinate or age. In mathematical terms, a cosmic time on spacetime is a fibration $t : \mathcal{M} \to \mathbb{R}$. This fibration, having the parameter $t$, is made of three-dimensional manifolds $S_t$. 
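As a numerical illustration of these reference points, the sketch below evaluates the age of the universe at a given redshift and the corresponding lookback time, using the flat, matter-only relation quoted in the next section (a Hubble constant of 70 km/s/Mpc is an assumed value; because this toy model ignores dark energy, its present age comes out near 9.3 Gyr rather than the 13.8 Gyr of ΛCDM quoted above):

```python
import math

H0_km_s_Mpc = 70.0                        # assumed Hubble constant
Mpc_m = 3.0857e22                         # metres per megaparsec
Gyr_s = 3.156e16                          # seconds per gigayear
H0 = H0_km_s_Mpc * 1000.0 / Mpc_m         # H0 in 1/s
omega0 = 1.0                              # density parameter of a flat, matter-only universe

def cosmic_time_gyr(z):
    """Age at redshift z: t(z) = 2 / (3 H0 sqrt(Omega0) (1+z)^{3/2})."""
    t = 2.0 / (3.0 * H0 * math.sqrt(omega0) * (1.0 + z) ** 1.5)
    return t / Gyr_s

t0 = cosmic_time_gyr(0.0)
print(t0)                                 # ~9.3 Gyr: present age in this simplified model
print(t0 - cosmic_time_gyr(1.0))          # ~6.0 Gyr: lookback time to z = 1
```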
Relation to redshift Astronomical observations and theoretical models may use redshift as a time-like parameter. Cosmic time and redshift are related. In the case of a flat universe without dark energy, the cosmic time can be expressed as: $t(z) = \frac{2}{3 H_0 \sqrt{\Omega_0}} (1+z)^{-3/2}.$ Here $H_0$ is the Hubble constant and $\Omega_0$ is the density parameter, the ratio of the density of the universe $\rho$ to the critical density $\rho_c$ for the Friedmann equation for a flat universe: $\Omega_0 = \frac{\rho}{\rho_c}, \qquad \rho_c = \frac{3 H_0^2}{8 \pi G}.$ Uncertainties in the value of these parameters make the time values derived from redshift measurements model-dependent. See also Chronology of the universe Cosmic Calendar Cosmological horizon References Physical cosmology Coordinate systems Time
Cosmic time
[ "Physics", "Astronomy", "Mathematics" ]
815
[ "Astronomical sub-disciplines", "Physical quantities", "Time", "Quantity", "Theoretical physics", "Astrophysics", "Coordinate systems", "Spacetime", "Wikipedia categories named after physical quantities", "Physical cosmology" ]
12,528,651
https://en.wikipedia.org/wiki/Formation%20evaluation%20gamma%20ray
The formation evaluation gamma ray log is a record of the variation with depth of the natural radioactivity of earth materials in a wellbore. Measurement of natural emission of gamma rays in oil and gas wells is useful because shales and sandstones typically have different gamma ray levels. Shales and clays are responsible for most natural radioactivity, so the gamma ray log is often a good indicator of such rocks. In addition, the log is also used for correlation between wells, for depth correlation between open and cased holes, and for depth correlation between logging runs. Physics Natural radioactivity is the spontaneous decay of the atoms of certain isotopes into other isotopes. If the resultant isotope is not stable, it undergoes further decay until a stable isotope forms. The decay process is usually accompanied by emissions of alpha, beta, and gamma radiation. Natural gamma ray radiation is one form of spontaneous radiation emitted by unstable nuclei. Gamma (γ) radiation may be considered either as an electromagnetic wave similar to visible light or X-rays, or as a particle, the photon. Gamma rays are electromagnetic radiations emitted from an atomic nucleus during radioactive decay, with wavelengths in the range of 10⁻⁹ to 10⁻¹¹ cm. Natural radioactivity in rocks Isotopes naturally found on earth are usually those that are stable or have a decay time larger than, or at least a significant fraction of, the age of the Earth (about 5 × 10⁹ years). Isotopes with shorter half-lives mainly exist as decay products from longer-lived isotopes, and, as in ¹⁴C, from irradiation of the upper atmosphere. Radioisotopes with a sufficiently long half-life, and whose decay produces an appreciable amount of gamma rays, are: Potassium (⁴⁰K), with a half-life of 1.3 × 10⁹ years, which emits 0 α, 1 β, and 1 γ-ray Thorium (²³²Th), with a half-life of 1.4 × 10¹⁰ years, which emits 7 α, 5 β, and numerous γ-rays with different energies Uranium (²³⁸U), with a half-life of 4.4 × 10⁹ years, which emits 8 α, 6 β, and numerous γ-rays with different energies Each of these elements emits gamma rays with distinctive energies. Potassium 40 decays directly to stable argon 40 with the emission of a 1.46 MeV gamma ray. Uranium 238 and thorium 232 decay sequentially through a long sequence of various isotopes until a final stable isotope. The spectrum of the gamma rays emitted by these two isotopes consists of gamma rays of many different energies and forms a complete spectrum. The peak of the thorium series can be found at 2.62 MeV and that of the uranium series at 1.76 MeV. Applications The most common sources of natural gamma rays are potassium, thorium, and uranium. These elements are found in feldspars (e.g. granites and other feldspathic rocks), volcanic and igneous rocks, sands containing volcanic ash, and clays. Gamma-ray measurement has the following applications: Well to well correlation: the gamma-ray log fluctuates with changes in formation mineralogy. As such, gamma-ray logs from different wells within the same field or region can be very useful for correlation purposes, because similar formations show similar features. Logging runs correlation: a gamma-ray tool is typically included in every logging-tool run in a well. Being a common measurement, logging runs can be put on depth with each other by correlating the gamma-ray features of each run. 
Quantitative evaluation of shaliness: since natural radioactive elements tend to have greater concentration in shales than in other sedimentary lithologies, the total gamma ray measurement is frequently used to derive a shale volume (Ellis 1987, Rider 1996). This method, however, is only likely to be used in a simple sandstone-shale formation, and is subject to error when radioactive elements are present in the sand. Interpretation The gamma radiation detected by a gamma-ray detector in an oil or gas well is not only a function of the radioactivity of the formations, but also of other factors, as follows: Borehole Fluid: the influence of borehole fluid depends on its volume (i.e. hole size), the position of the tool, its density, and composition. Potassium chloride (KCl) in mud, for example, flows into permeable sections, resulting in an increase in gamma ray activity. Tubing, Casing, etc.: their effect depends on the thickness, density, and nature of the materials (e.g. steel, aluminum). Steel reduces the gamma-ray level, but this can be corrected for once the density and thickness of the casing, cement sheath and borehole fluid are known. Cement: its impact is determined by the type of cement, additives, density and thickness. Bed Thickness: the gamma-ray reading does not reflect the true value in a bed with a thickness less than the diameter of the sphere of investigation. In a series of thin beds, the log reading is a volume average of the contributions within the sphere. In addition, all radioactive phenomena are random in nature. Count rates vary about a mean value, and counts must be averaged over time to obtain a reasonable estimate of the mean. The longer the averaged period and the higher the count rate, the more precise the estimate. Samples of the corrections required for different gamma-ray tools are available from Schlumberger. Gamma-ray log interpretation shows different peaks in a well. Shales produce the sharp peaks, typically in the range of 40–140 API, and contain high amounts of potassium. Measurement technique Older gamma-ray detectors use the Geiger-Mueller counter principle, but have been mostly replaced by thallium-doped sodium-iodide (NaI) scintillation detectors, which have a higher efficiency. NaI detectors are usually composed of a NaI crystal coupled with a photomultiplier. When a gamma ray from the formation enters the crystal, it undergoes successive collisions with the atoms of the crystal, resulting in short flashes of light when the gamma ray is absorbed. The light is detected by the photomultiplier, which converts the energy into an electric pulse with amplitude proportional to the gamma-ray energy. The number of electric pulses is recorded in counts per second (CPS). The higher the gamma-ray count rate, the larger the clay content and vice versa. The primary calibration of gamma-ray tools is the test pit at the University of Houston. The artificial formation simulates about twice the radioactivity of a shale, which generates 200 API units of gamma radiation. The detector crystal is affected by hydration and its response changes with time. Consequently, a secondary calibration and a field calibration are achieved with a portable jig carrying a small radioactive source. See also Gamma ray logging Gamma ray Formation evaluation References Ellis, Darwin V. (1987). Well Logging for Earth Scientists. Amsterdam: Elsevier. Rider, Malcolm (1996). The Geological Interpretation of Well Logs. 2nd edition. Caithness: Whittles Publishing. Schlumberger Limited (1999). Log Interpretation Principles/Applications. NY: Schlumberger Limited. 
Serra, Oberto; Serra, Lorenzo. (2004). Well Logging: Data Acquisition and Applications. Méry Corbon, France: Serralog. Radioactivity
Formation evaluation gamma ray
[ "Physics", "Chemistry" ]
1,520
[ "Radioactivity", "Nuclear physics" ]
12,528,703
https://en.wikipedia.org/wiki/C6H10O8
The molecular formula C6H10O8 (molar mass: 210.14 g/mol) may refer to: Saccharic acid, or glucaric acid Mucic acid, also known as galactaric acid or meso-galactaric acid Molecular formulas
C6H10O8
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
12,528,723
https://en.wikipedia.org/wiki/C6H10O2
The molecular formula C6H10O2 may refer to: Adipaldehyde Allyl glycidyl ether Caprolactone Cyclopentanecarboxylic acid Ethyl methacrylate Hexane-2,5-dione (2R)-2-Methylpent-4-enoic acid Molecular formulas
C6H10O2
[ "Physics", "Chemistry" ]
90
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
12,529,188
https://en.wikipedia.org/wiki/Weak%20value
In quantum mechanics (and computation), a weak value is a quantity related to a shift of a measuring device's pointer when usually there is pre- and postselection. It should not be confused with a weak measurement, which is often defined in conjunction. The weak value was first defined by Yakir Aharonov, David Albert, and Lev Vaidman, published in Physical Review Letters in 1988, and is related to the two-state vector formalism. There is also a way to obtain weak values without postselection. Definition and Derivation There are many excellent review articles on weak values; here we briefly cover the basics. Definition We will denote the initial state of a system as $|\psi_i\rangle$, while the final state of the system is denoted as $|\psi_f\rangle$. We will refer to the initial and final states of the system as the pre- and post-selected quantum mechanical states. With respect to these states, the weak value of the observable $A$ is defined as: $A_w = \frac{\langle\psi_f|A|\psi_i\rangle}{\langle\psi_f|\psi_i\rangle}.$ Notice that if $|\psi_f\rangle = |\psi_i\rangle$ then the weak value is equal to the usual expected value in the initial state $\langle\psi_i|A|\psi_i\rangle$ or the final state $\langle\psi_f|A|\psi_f\rangle$. In general the weak value quantity is a complex number. The weak value of the observable becomes large when the post-selected state, $|\psi_f\rangle$, approaches being orthogonal to the pre-selected state, $|\psi_i\rangle$, i.e. $|\langle\psi_f|\psi_i\rangle| \to 0$. If $A_w$ is larger than the largest eigenvalue of $A$ or smaller than the smallest eigenvalue of $A$ the weak value is said to be anomalous. As an example consider a spin-1/2 particle and take $A$ to be the Pauli Z operator $\sigma_z$, with eigenvalues $\pm 1$. For pre- and post-selected states chosen nearly orthogonal to each other, the weak value of $\sigma_z$ can lie far outside the eigenvalue interval $[-1, +1]$; in that regime the weak value is anomalous. Derivation Here we follow the presentation given by Duck, Stevenson, and Sudarshan (with some notational updates from Kofman et al.), which makes explicit when the approximations used to derive the weak value are valid. Consider a quantum system that you want to measure by coupling an ancillary (also quantum) measuring device. The observable to be measured on the system is $A$. The system and ancilla are coupled via the Hamiltonian $H = \gamma(t)\, A \otimes p,$ where the coupling constant is integrated over an interaction time, $\int \gamma(t)\, dt = \gamma$, and $[q, p] = i$ is the canonical commutator. The Hamiltonian generates the unitary $U = \exp(-i \gamma\, A \otimes p).$ Take the initial state of the ancilla to have a Gaussian distribution; the position wavefunction of this state is $\phi(q) \propto \exp\!\left(-\frac{q^2}{4\sigma^2}\right).$ The initial state of the system is given by $|\psi_i\rangle$ above; the state $|\Psi\rangle$, jointly describing the initial state of the system and ancilla, is given then by: $|\Psi\rangle = |\psi_i\rangle \otimes |\phi\rangle.$ Next the system and ancilla interact via the unitary $U$. After this one performs a projective measurement of the projectors $\{|\psi_f\rangle\langle\psi_f|,\; I - |\psi_f\rangle\langle\psi_f|\}$ on the system. If we postselect (or condition) on getting the outcome $|\psi_f\rangle$, then the (unnormalized) final state of the meter is $\langle\psi_f| U |\Psi\rangle \approx \langle\psi_f| (I - i\gamma A \otimes p) |\psi_i\rangle |\phi\rangle \quad \text{(I)}$ $= \langle\psi_f|\psi_i\rangle (1 - i\gamma A_w p) |\phi\rangle \approx \langle\psi_f|\psi_i\rangle\, e^{-i\gamma A_w p} |\phi\rangle. \quad \text{(II)}$ To arrive at this conclusion, we use the first order series expansion of $U$ on line (I), and we require that the coupling $\gamma$ be small enough for the higher-order terms to be negligible. On line (II) we use the approximation that $1 - i\gamma A_w p \approx e^{-i\gamma A_w p}$ for small $\gamma$. This final approximation is only valid when the pointer shift $\gamma A_w$ is small compared with the width $\sigma$ of the meter wavefunction. As $p$ is the generator of translations, the ancilla's wavefunction is now given by $\phi(q - \gamma A_w)$. This is the original wavefunction, shifted by an amount $\gamma A_w$. By Busch's theorem the system and meter wavefunctions are necessarily disturbed by the measurement. There is a certain sense in which the protocol that allows one to measure the weak value is minimally disturbing, but there is still disturbance. Applications Quantum metrology and tomography At the end of the original weak value paper the authors suggested weak values could be used in quantum metrology. This suggestion was followed by Hosten and Kwiat and later by Dixon et al. 
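The definition above is straightforward to evaluate numerically; the sketch below (hypothetical pre- and post-selected states chosen for illustration, assuming Python with numpy) computes the weak value of $\sigma_z$ and shows how a nearly orthogonal state pair produces an anomalous value, before the metrology discussion continues.

```python
import numpy as np

# Weak value A_w = <psi_f|A|psi_i> / <psi_f|psi_i> for a spin-1/2 particle.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])   # Pauli Z, eigenvalues +1 and -1

def weak_value(A, psi_i, psi_f):
    return (psi_f.conj() @ A @ psi_i) / (psi_f.conj() @ psi_i)

psi_i = np.array([1.0, 1.0]) / np.sqrt(2)        # hypothetical pre-selected state
phi = np.deg2rad(40.0)                           # post-selection angle (made up)
psi_f = np.array([np.cos(phi), -np.sin(phi)])    # nearly orthogonal to psi_i

Aw = weak_value(sigma_z, psi_i, psi_f)
print(Aw)                        # ~11.4, far outside [-1, 1]: anomalous
print(np.tan(phi + np.pi / 4))   # same number, from the closed form tan(phi + pi/4)
```

For this family of states the weak value works out analytically to $\tan(\phi + \pi/4)$, which diverges as the two states approach orthogonality at $\phi = \pi/4$, matching the qualitative statement above.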
It appears to be an interesting line of research that could result in improved quantum sensing technology. Additionally in 2011, weak measurements of many photons prepared in the same pure state, followed by strong measurements of a complementary variable, were used to perform quantum tomography (i.e. reconstruct the state in which the photons were prepared). Quantum foundations Weak values have been used to examine some of the paradoxes in the foundations of quantum theory. This relies to a large extent on whether weak values are deemed to be relevant to describe properties of quantum systems, a point which is not obvious since weak values are generally different from eigenvalues. For example, the research group of Aephraim M. Steinberg at the University of Toronto confirmed Hardy's paradox experimentally using joint weak measurement of the locations of entangled pairs of photons. Building on weak measurements, Howard M. Wiseman proposed a weak value measurement of the velocity of a quantum particle at a precise position, which he termed its "naïvely observable velocity". In 2010, a first experimental observation of trajectories of a photon in a double-slit interferometer was reported, which displayed the qualitative features predicted in 2001 by Partha Ghose for photons in the de Broglie-Bohm interpretation. Following up on Wiseman's weak velocity measurement, Johannes Fankhauser and Patrick Dürr suggest in a paper that weak velocity measurements constitute no new arguments, let alone empirical evidence, in favor of or against standard de Broglie-Bohm theory. According to the authors such measurements could not provide direct experimental evidence displaying the shape of particle trajectories, even if it is assumed that some deterministic particle trajectories exist. Quantum computation Weak values have been implemented into quantum computing to obtain a large speed-up in time complexity. In a paper, Arun Kumar Pati describes a new kind of quantum computer using weak value amplification and post-selection (WVAP), and implements a search algorithm which (given a successful postselection) can find the target state in a single run, beating out the well-known Grover's algorithm. Criticisms Criticisms of weak values include philosophical and practical ones. Some noted researchers such as Asher Peres, Tony Leggett, David Mermin, and Charles H. Bennett are critical of weak values. Recently, it has been shown that the pre- and postselection of a quantum system recovers a completely hidden interference phenomenon in the measurement apparatus. Studying the interference pattern shows that what is interpreted as an amplification using the weak value is a pure phase effect and the weak value plays no role in its interpretation. This phase effect increases the degree of the entanglement which lies behind the effectiveness of the pre- and postselection in the parameter estimation. Further reading References Quantum information science Quantum measurement
Weak value
[ "Physics" ]
1,343
[ "Quantum measurement", "Quantum mechanics" ]
11,607,100
https://en.wikipedia.org/wiki/Steam%20separator
A steam separator, sometimes referred to as a moisture separator or steam drier, is a device for separating water droplets from steam. The simplest type of steam separator is the steam dome on a steam locomotive. Stationary boilers and nuclear reactors may have more complex devices which impart a "spin" to the steam so that water droplets are thrown outwards by centrifugal force and collected. All separators require steam traps to collect the water droplets that they remove. It is important to remove water droplets from steam because: In all engines, wet steam reduces the thermal efficiency In piston engines, water can accumulate in the cylinders and cause a hydraulic lock which will damage the engine In thermal power stations, water droplets in high velocity steam coming from nozzles (or vanes) in a steam turbine can impinge on and erode turbine internals such as turbine blades. In other steam-using industrial machinery, water can accumulate in piping and cause steam hammer: a form of water hammer caused by water build up 'plugging' a pipe then being accelerated by the steam flowing through the pipe until it reaches a sharp bend and results in catastrophic failure of the pipe. Steam drier is also sometimes applied to a drier which operates as a low-temperature superheater, adding heat to the steam. Applications Atomizers Boilers Catalyzing systems Cooking processes using a steamer Engines Other steam systems Rubber vulcanizing machines Steam-powered irons Turbines See also Steam dryer, a device for drying another material, such as laundry or a biomass fuel, with the use of hot steam, rather than for drying steam. Sources Nuclear Encyclopedia article; Separators and their Role in the Steam System Boilers Nuclear reactors Steam boiler components Steam power
Steam separator
[ "Physics", "Chemistry" ]
355
[ "Physical quantities", "Steam power", "Power (physics)", "Boilers", "Pressure vessels" ]
11,610,647
https://en.wikipedia.org/wiki/Tail%20lift
A tail lift (term used in the UK, also called a "liftgate" in North America) is a mechanical device permanently installed on the rear of a work truck, van, or lorry, and is designed to facilitate the handling of goods from ground level or a loading dock to the level of the vehicle bed, or vice versa. The majority of tail lifts are hydraulic or pneumatic in operation, although they can be mechanical, and are controlled by an operator using an electric relay switch. Using a tail lift can make it unnecessary to use machinery such as a forklift truck to load heavy items on to a vehicle. A tail lift can also bridge the difference in height between a loading dock and the vehicle load bed. Tail lifts are available for many sizes of vehicle, from standard vans to articulated lorries, and standard models can lift anywhere up to 2,500 kg. Some heavy-duty models can even exceed this limit, making them suitable for industrial applications where extreme loads need to be transported. Types Tail lifts are most often categorized by design type. Tail lift design types include Parallel Arm, Railgate, Column, Cantilever, Tuckunder, and Slider. Parallel Arm Parallel arm lifts support lower lifting capacities and are commonly installed on pickup trucks and service truck bodies. The parallel "arms" attach to both sides of the lifting platform and guide the platform out and away from the liftgate mainframe. Parallel arm designs can either feature two hydraulic cylinders applying force directly to the lifting platform or a single hydraulic cylinder using some version of a cable-pulley system. This type is particularly popular among small businesses and service industries due to its cost-effectiveness and simplicity in design. Railgate Railgate lifts are very similar in design to column lifts but (generally) support lower lifting capacities. Railgate lifts get their name from the "outrails" which install directly to the vehicle body and serve as the guides for the liftgate platform. Platforms on railgates are larger than those of parallel arm lifts and, like column lifts, fix at a 90° angle from the outrails and lift completely vertically. Railgate lifts are often preferred for deliveries that require stable, consistent vertical motion, such as fragile or sensitive goods. Column Column lifts are "beefier" versions of railgates, supporting some of the highest lifting capacities of any type of hydraulic lift. Like railgate lifts, column lifts feature "tracks" that install directly onto the vehicle body. From the tracks a folding platform extends and lifts completely vertically. Column lifts have the advantage of being able to lift to a higher level than the load bed, also known as "above bed travel," and are therefore preferable for vehicles with bed heights lower than standard dock height. The disadvantages of column lifts include that the platform is only usually able to operate at a 90° angle from the track, meaning that on uneven surfaces, the lift will not meet the ground properly. Their robust design makes them a common choice for logistics companies handling large-scale operations. Cantilever Cantilever lifts work by a set of rams attached to the chassis of the vehicle. These rams are on hinges, allowing them to change angle as they expand or contract. By using the rams in sequence, the working platform can either be tilted, or raised and lowered. Cantilever lifts have the advantage of being able to tilt, which means they can often form a ramp arrangement, which may be more appropriate for some applications. 
It also means that it can be easier to load or unload on uneven ground. Tuckunder On Tuckunder lifts, the lifting platform may be folded and stored underneath the load bed of the vehicle, so that it need not be used when the vehicle is at a loading dock, and operators have access and egress without needing to operate the lift. Common tuckunder designs are either single- or dual-cylinder, with dual-cylinder designs supporting higher lifting capacities. The Maxon company claims to have invented the first tuckunder lift in 1957 under the brand name Tuk-A-Way. Slider Slider lift designs, like tuckunders, are characterized by folding and storing directly underneath the vehicle bed. However, slider designs feature lifting platforms that "slide" out from underneath the vehicle bed (instead of lowering and unfolding). Slider lift designs support some of the highest lifting capacities of any type of hydraulic lift. Liftgate In North America, "liftgate" is the commonly used term for a hydraulic lift installed at the rear of a vehicle that can be used to mechanically load or unload cargo. In the automobile industry, "liftgate" is also used to refer to the automatic rear door of a van, minivan, or crossover SUV type vehicle. This opening system is also sometimes called a "rear hatch." Modern liftgates often come with advanced features such as remote control operation, automatic height adjustment, and integrated safety mechanisms, making them more user-friendly and efficient.
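Since the raw force available from a hydraulic lift follows directly from cylinder pressure and piston area, a back-of-the-envelope check can be sketched in code. This is a minimal sketch only: the bore diameter, working pressure, and function name below are invented illustrative values, not figures from any manufacturer, and linkage geometry, friction, and safety factors (which reduce real platform ratings well below raw cylinder force) are ignored.

```python
# Minimal sketch: static force of a single hydraulic tail-lift cylinder.
# Force = pressure x piston area; all numbers here are illustrative.

import math

def cylinder_force_newtons(bore_diameter_m: float, pressure_pa: float) -> float:
    """Raw axial force of a single-acting hydraulic cylinder."""
    piston_area = math.pi * (bore_diameter_m / 2) ** 2
    return pressure_pa * piston_area

# Example: an 80 mm bore at 175 bar (1 bar = 100 kPa).
force_n = cylinder_force_newtons(0.080, 175e5)
payload_kg = force_n / 9.81  # static load the bare cylinder could hold
print(f"force: {force_n / 1000:.0f} kN, roughly {payload_kg:.0f} kg static")
# Rated platform capacities (e.g. 2,500 kg) are far below this raw figure
# because of lever arms, de-rating for off-centre loads, and safety margins.
```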
Tail lift
[ "Physics", "Chemistry", "Engineering" ]
1,014
[ "Applied and interdisciplinary physics", "Physical systems", "Hydraulics", "Mechanical engineering", "Fluid dynamics" ]
11,611,098
https://en.wikipedia.org/wiki/Locally%20finite%20collection
A collection of subsets of a topological space is said to be locally finite if each point in the space has a neighbourhood that intersects only finitely many of the sets in the collection. In the mathematical field of topology, local finiteness is a property of collections of subsets of a topological space. It is fundamental in the study of paracompactness and topological dimension. Note that the term locally finite has different meanings in other mathematical fields. Examples and properties A finite collection of subsets of a topological space is locally finite. Infinite collections can also be locally finite: for example, the collection of all subsets of ℝ of the form (n, n + 2) for an integer n. A countable collection of subsets need not be locally finite, as shown by the collection of all subsets of ℝ of the form (−n, n) for a natural number n. Every locally finite collection of sets is point finite, meaning that every point of the space belongs to only finitely many sets in the collection. Point finiteness is a strictly weaker notion, as illustrated by the collection of intervals (0, 1/n) in ℝ, which is point finite, but not locally finite at the point 0. The two concepts are used in the definitions of paracompact space and metacompact space, and this is the reason why every paracompact space is metacompact. If a collection of sets is locally finite, the collection of the closures of these sets is also locally finite. The reason for this is that if an open set containing a point intersects the closure of a set, it necessarily intersects the set itself; hence a neighbourhood can intersect at most the same number of closures (it may intersect fewer, since two distinct, indeed disjoint, sets can have the same closure). The converse, however, can fail if the closures of the sets are not distinct. For example, in the finite complement topology on ℝ, the collection of all open sets is not locally finite, but the collection of all closures of these sets is locally finite (since the only closures are ℝ and the empty set). An arbitrary union of closed sets is not closed in general. However, the union of a locally finite collection of closed sets is closed. To see this, note that if x is a point outside the union of this locally finite collection of closed sets, we merely choose a neighbourhood V of x that intersects only finitely many of these sets; a bijective map from the collection of sets that V intersects to {1, ..., k} gives an index to each of these sets, say A_1, ..., A_k. Then for each A_i, choose an open set U_i containing x that doesn't intersect it. The intersection of all the U_i for 1 ≤ i ≤ k, intersected with V, is a neighbourhood of x that does not intersect the union of this collection of closed sets. In compact spaces Every locally finite collection of sets in a compact space is finite. Indeed, let F be a locally finite family of subsets of a compact space X. For each point x, choose an open neighbourhood U_x that intersects only a finite number of the subsets in F. Clearly the family of sets {U_x : x ∈ X} is an open cover of X, and therefore has a finite subcover U_{x_1}, ..., U_{x_n}. Since each U_{x_i} intersects only a finite number of subsets in F, the union of all such U_{x_i} intersects only a finite number of subsets in F. Since this union is the whole space X, it follows that X intersects only a finite number of subsets in the collection F. And since F is composed of subsets of X, every member of F must intersect X; thus F is finite. In Lindelöf spaces Every locally finite collection of sets in a Lindelöf space, in particular in a second-countable space, is countable.
This is proved by a similar argument as in the result above for compact spaces. Countably locally finite collections A collection of subsets of a topological space is called σ-locally finite, or countably locally finite, if it is a countable union of locally finite collections. The σ-locally finite notion is a key ingredient in the Nagata–Smirnov metrization theorem, which states that a topological space is metrizable if and only if it is regular, Hausdorff, and has a σ-locally finite base. In a Lindelöf space, in particular in a second-countable space, every σ-locally finite collection of sets is countable.
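For reference, the two definitions above can be restated symbolically. The following LaTeX fragment is only a compact restatement of what is already stated in prose; the names 𝒜 and X for the collection and the ambient space are chosen here for the sketch, and the amssymb package is assumed.

```latex
% Symbolic restatement of the definitions above; \mathcal{A} denotes the
% collection and X the ambient topological space (names chosen for this
% sketch). Assumes \usepackage{amssymb}.
A collection $\mathcal{A}$ of subsets of $X$ is \emph{locally finite} if
\[
  \forall x \in X \;\exists\, U \ni x \text{ open}: \quad
  \#\{\, A \in \mathcal{A} : A \cap U \neq \varnothing \,\} < \infty ,
\]
and \emph{$\sigma$-locally finite} (countably locally finite) if
$\mathcal{A} = \bigcup_{n \in \mathbb{N}} \mathcal{A}_n$ with each
$\mathcal{A}_n$ locally finite.
```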
Locally finite collection
[ "Mathematics" ]
860
[ "General topology", "Properties of topological spaces", "Space (mathematics)", "Combinatorics", "Topological spaces", "Basic concepts in set theory", "Topology", "Families of sets" ]
17,030,615
https://en.wikipedia.org/wiki/Transposon%20mutagenesis
Transposon mutagenesis, or transposition mutagenesis, is a biological process that allows genes to be transferred to a host organism's chromosome, interrupting or modifying the function of an extant gene on the chromosome and causing mutation. Transposon mutagenesis is much more effective than chemical mutagenesis, with a higher mutation frequency and a lower chance of killing the organism. Other advantages include being able to induce single-hit mutations, being able to incorporate selectable markers in strain construction, and being able to recover genes after mutagenesis. Disadvantages include the low frequency of transposition in living systems, and the inaccuracy of most transposition systems. History Transposon mutagenesis was first studied by Barbara McClintock in the mid-20th century during her Nobel Prize-winning work with corn. McClintock received her BSc in 1923 from Cornell's College of Agriculture. By 1927 she had her PhD in botany, and she immediately began working on the topic of maize chromosomes. In the early 1940s, McClintock was studying the progeny of self-pollinated maize plants which resulted from crosses having a broken chromosome 9. These plants were missing their telomeres. This research led to the first discovery of a transposable element, and since then transposon mutagenesis has been exploited as a biological tool. Dynamics In the case of bacteria, transposition mutagenesis is usually accomplished by way of a plasmid from which a transposon is extracted and inserted into the host chromosome. This usually requires a set of enzymes, including transposase, to be translated. The transposase can be expressed either on a separate plasmid or on the plasmid containing the gene to be integrated. Alternatively, an injection of transposase mRNA into the host cell can induce translation and expression. Early transposon mutagenesis experiments relied on bacteriophages and conjugative bacterial plasmids for the insertion of sequences. These were very non-specific, and made it difficult to incorporate specific genes. A newer technique called shuttle mutagenesis uses specific cloned genes from the host species to incorporate genetic elements. Another effective approach is to deliver transposons through viral capsids. This facilitates integration into the chromosome and long-term transgene expression. Tn5 transposon system The Tn5 transposon system is a model system for the study of transposition and for the application of transposon mutagenesis. Tn5 is a bacterial composite transposon in which genes (in the original system, antibiotic resistance genes) are flanked by two nearly identical insertion sequences, named IS50R and IS50L, corresponding to the right and left sides of the transposon respectively. The IS50R sequence codes for two proteins, Tnp and Inh. These two proteins are identical in sequence, except that Inh lacks the 55 N-terminal amino acids. Tnp is the transposase for the entire system, and Inh is an inhibitor of the transposase. The DNA-binding domain of Tnp resides in the 55 N-terminal amino acids, and so these residues are essential for function. The IS50R and IS50L sequences are both flanked by 19-base-pair elements on the inside and outside ends of the transposon, labelled IE and OE respectively. Mutation of these regions results in an inability of the transposase to bind to the sequences.
The binding interactions between the transposase and these sequences are very complicated, and are affected by DNA methylation and other epigenetic marks. In addition, other proteins, such as DnaA, seem to be able to bind with and affect the transposition of the IS50 elements. The most likely pathway of Tn5 transposition is the common pathway for all transposon systems. It begins with Tnp binding the OE and IE sequences of each IS50 sequence. The two ends are brought together by oligomerization of the bound transposase, and the sequence is cut out of the chromosome. After staggered cuts introduce 9-base-pair 5' overhangs in the target DNA, the transposon and its incorporated genes are inserted into the target DNA, duplicating the regions on either end of the transposon. Genes of interest can be genetically engineered into the transposon system between the IS50 sequences. By placing the transposon under the control of a host promoter, the genes will be expressed. Incorporated genes usually include, in addition to the gene of interest, a selectable marker to identify transformants, a eukaryotic promoter/terminator (if expressing in a eukaryote), and 3' UTR sequences to separate genes in a polycistronic stretch of sequence. Sleeping Beauty transposon system The Sleeping Beauty transposon system (SBTS) is the first successful non-viral vector for incorporation of a gene cassette into a vertebrate genome. Until the development of this system, the major problems with non-viral gene therapy had been the intracellular breakdown of the transgene, due to it being recognized as prokaryotic DNA, and the inefficient delivery of the transgene into organ systems. The SBTS addressed these issues by combining the advantages of viruses and naked DNA. It consists of a transposon containing the cassette of genes to be expressed, as well as its own transposase enzyme. By transposing the cassette directly into the genome of the organism from the plasmid, sustained expression of the transgene can be attained. This can be further refined by enhancing the transposon sequences and the transposase enzymes used. SB100X is a hyperactive transposase active in mammalian cells, roughly 100-fold more efficient than the typical first-generation transposase. Incorporation of this enzyme into the cassette results in even more sustained transgene expression (over one year). Additionally, transgenesis frequencies can be as high as 45% when using pronuclear injection into mouse zygotes. The mechanism of the SBTS is similar to that of the Tn5 transposon system; however, the enzyme and gene sequences are eukaryotic in nature as opposed to prokaryotic. The system's transposase can act in trans as well as in cis, allowing a diverse collection of transposon structures. The transposon itself is flanked by inverted repeat sequences, which are each repeated twice in a direct fashion, designated IR/DR sequences. The internal region consists of the gene or sequence to be transposed, and could also contain the transposase gene. Alternatively, the transposase can be encoded on a separate plasmid or injected in its protein form. Yet another approach is to incorporate both the transposon and the transposase genes into a viral vector, which can target a cell or tissue of choice. The transposase protein is extremely specific in the sequences that it binds, and is able to distinguish its IR/DR sequences from similar sequences differing by as few as three base pairs.
Once the enzyme is bound to both ends of the transposon, the IR/DR sequences are brought together and held by the transposase in a synaptic complex formation (SCF). The SCF serves as a checkpoint ensuring proper cleavage. HMGB1 is a non-histone host protein associated with eukaryotic chromatin. It enhances the preferential binding of the transposase to the IR/DR sequences and is likely essential for SCF formation and stability. The transposase cleaves the DNA at both ends of the transposon, generating 3' overhangs. The enzyme then targets TA dinucleotides in the host genome as integration sites. The same enzymatic catalytic site which cleaved the DNA is responsible for integrating the DNA into the genome, duplicating the TA target site in the process. Although the transposase is specific for TA dinucleotides, the high frequency of these pairs in the genome means that the transposon undergoes fairly random integration. Practical applications As a result of the capacity of transposon mutagenesis to incorporate genes into most areas of target chromosomes, the process has a number of applications. Virulence genes in viruses and bacteria can be discovered by disrupting genes and observing changes in phenotype. This has importance in antibiotic production and disease control. Non-essential genes can be discovered by inducing transposon mutagenesis in an organism. The transformed genes can then be identified by performing PCR on the organism's recovered genome using an ORF-specific primer and a transposon-specific primer. Since transposons can incorporate themselves into non-coding regions of DNA, the ORF-specific primer ensures that the transposon interrupted a gene. Because the organism survived with the gene interrupted, the gene was clearly non-essential. Cancer-causing genes can be identified by genome-wide mutagenesis and screening of mutants containing tumours. Based on the mechanism and results of the mutation, cancer-causing genes can be identified as oncogenes or tumour-suppressor genes. Specific examples Mycobacterium tuberculosis virulence gene cluster identification In 1999, the virulence genes associated with Mycobacterium tuberculosis were identified through transposon mutagenesis-mediated gene knockout. A plasmid named pCG113, containing kanamycin resistance genes and the IS1096 insertion sequence, was engineered to contain variable 80-base-pair tags. The plasmids were then transformed into M. tuberculosis cells by electroporation. Colonies were plated on kanamycin to select for resistant cells. Colonies that underwent random transposition events were identified by BamHI digestion and Southern blotting using an internal IS1096 DNA probe. Colonies were screened for attenuated multiplication to identify mutations in candidate virulence genes. Mutations leading to an attenuated phenotype were mapped by amplification of the regions adjacent to the IS1096 sequences and compared with the published M. tuberculosis genome. In this instance, transposon mutagenesis identified 13 pathogenic loci in the M. tuberculosis genome which were not previously associated with disease. This is essential information in understanding the infectious cycle of the bacterium. PiggyBac (PB) transposon mutagenesis for cancer gene discovery The PiggyBac (PB) transposon from the cabbage looper moth Trichoplusia ni was engineered to be highly active in mammalian cells, and is capable of genome-wide mutagenesis.
Transposons contained both PB and Sleeping Beauty inverted repeats, in order to be recognized by both transposases and increase the frequency of transposition. In addition, the transposon contained promoter and enhancer elements, splice donor and acceptor sites to allow gain- or loss-of-function mutations depending on the transposon's orientation, and bidirectional polyadenylation signals. The transposons were transformed into mouse cells in vitro, and mutants containing tumours were analyzed. The mechanism of the mutation leading to tumour formation determined whether the gene was classified as an oncogene or a tumour-suppressor gene. Oncogenes tended to be characterized by insertions in regions leading to overexpression of a gene, whereas tumour-suppressor genes were classified as such based on loss-of-function mutations. Since the mouse is a model organism for the study of human physiology and disease, this research will contribute to an improved understanding of cancer-causing genes and potential therapeutic targets. See also Transposons as a genetic tool Transposable element Transposase Barbara McClintock
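The TA-targeted integration and target-site duplication described above are simple enough to sketch as sequence arithmetic. The following Python fragment is only an illustration: the host sequence, the cassette string, and the function name are invented for this sketch, and real integration-site choice is more biased than a uniform draw over TA sites.

```python
# Minimal sketch of Sleeping Beauty-style insertion as described above:
# the element integrates at a TA dinucleotide, and the TA target site is
# duplicated so that the insert ends up flanked by TA on both sides.
# All sequences here are invented for illustration.

import random

def insert_at_ta(genome: str, transposon: str, rng: random.Random) -> str:
    """Insert `transposon` at a randomly chosen TA site, duplicating the TA."""
    sites = [i for i in range(len(genome) - 1) if genome[i:i + 2] == "TA"]
    if not sites:
        raise ValueError("no TA dinucleotide available for integration")
    i = rng.choice(sites)
    # TA ... transposon ... TA : the target-site duplication flanks the insert
    return genome[:i + 2] + transposon + "TA" + genome[i + 2:]

rng = random.Random(0)
host = "GGCATACGGTTAGCCTATGC"
cassette = "[IR/DR-L]GENE[IR/DR-R]"  # placeholder for the gene cassette
print(insert_at_ta(host, cassette, rng))
```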
Transposon mutagenesis
[ "Chemistry", "Biology" ]
2,491
[ "Biochemistry", "Molecular genetics", "Mobile genetic elements", "Molecular biology" ]
17,033,211
https://en.wikipedia.org/wiki/Quantitative%20models%20of%20the%20action%20potential
In neurophysiology, several mathematical models of the action potential have been developed, which fall into two basic types. The first type seeks to model the experimental data quantitatively, i.e., to reproduce the measurements of current and voltage exactly. The renowned Hodgkin–Huxley model of the axon from the Loligo squid exemplifies such models. Although qualitatively correct, the H-H model does not describe every type of excitable membrane accurately, since it considers only two ions (sodium and potassium), each with only one type of voltage-sensitive channel. However, other ions such as calcium may be important, and there is a great diversity of channels for all ions. As an example, the cardiac action potential illustrates how differently shaped action potentials can be generated on membranes with voltage-sensitive calcium channels and different types of sodium/potassium channels. The second type of mathematical model is a simplification of the first type; the goal is not to reproduce the experimental data, but to understand qualitatively the role of action potentials in neural circuits. For such a purpose, detailed physiological models may be unnecessarily complicated and may obscure the "forest for the trees". The FitzHugh–Nagumo model is typical of this class, which is often studied for its entrainment behavior. Entrainment is commonly observed in nature, for example in the synchronized lighting of fireflies, which is coordinated by a burst of action potentials; entrainment can also be observed in individual neurons. Both types of models may be used to understand the behavior of small biological neural networks, such as the central pattern generators responsible for some automatic reflex actions. Such networks can generate a complex temporal pattern of action potentials that is used to coordinate muscular contractions, such as those involved in breathing or fast swimming to escape a predator. Hodgkin–Huxley model Figure: equivalent electrical circuit for the Hodgkin–Huxley model of the action potential. I_m and V_m represent the current through, and the voltage across, a small patch of membrane, respectively. C_m represents the capacitance of the membrane patch, whereas the four g's represent the conductances of four types of ions. The two conductances on the left, for potassium (K) and sodium (Na), are shown with arrows to indicate that they can vary with the applied voltage, corresponding to the voltage-sensitive ion channels. In 1952 Alan Lloyd Hodgkin and Andrew Huxley developed a set of equations to fit their experimental voltage-clamp data on the axonal membrane. The model assumes that the membrane capacitance C is constant; thus, the transmembrane voltage V changes with the total transmembrane current I_tot according to the equation C dV/dt = I_tot = I_ext − I_Na − I_K − I_L, where I_Na, I_K, and I_L are currents conveyed through the local sodium channels, potassium channels, and "leakage" channels (a catch-all), respectively. The term I_ext represents the current arriving from external sources, such as excitatory postsynaptic potentials from the dendrites or a scientist's electrode. The model further assumes that a given ion channel is either fully open or closed; if closed, its conductance is zero, whereas if open, its conductance is some constant value g. Hence, the net current through an ion channel depends on two variables: the probability p_open of the channel being open, and the difference in voltage from that ion's equilibrium voltage, V − V_eq.
For example, the current through the potassium channel may be written as I_K = g_K p_open,K (V − E_K), which is equivalent to Ohm's law. By definition, no net current flows (I_K = 0) when the transmembrane voltage equals the equilibrium voltage of that ion (when V = E_K). To fit their data accurately, Hodgkin and Huxley assumed that each type of ion channel had multiple "gates", so that the channel was open only if all the gates were open and closed otherwise. They also assumed that the probability of a gate being open was independent of the other gates being open; this assumption was later validated for the inactivation gate. Hodgkin and Huxley modeled the voltage-sensitive potassium channel as having four gates; letting n denote the probability of a single such gate being open, the probability of the whole channel being open is the product of four such probabilities, i.e., p_open,K = n^4. Similarly, the voltage-sensitive sodium channel was modeled as having three similar gates of probability m and a fourth gate, associated with inactivation, of probability h; thus, p_open,Na = m^3 h. The probabilities for each gate are assumed to obey first-order kinetics, dm/dt = (m_eq − m)/τ_m, where both the equilibrium value m_eq and the relaxation time constant τ_m depend on the instantaneous voltage V across the membrane. If V changes on a time-scale more slowly than τ_m, the m probability will always roughly equal its equilibrium value m_eq; however, if V changes more quickly, then m will lag behind m_eq. By fitting their voltage-clamp data, Hodgkin and Huxley were able to model how these equilibrium values and time constants varied with temperature and transmembrane voltage. The formulae are complex and depend exponentially on the voltage and temperature. For example, the time constant for the sodium-channel inactivation probability h varies as 3^((θ−6.3)/10) with the Celsius temperature θ, and with voltage V as τ_h = 1/(α_h(V) + β_h(V)), where the opening and closing rates α_h and β_h are empirically fitted, exponentially voltage-dependent functions. In summary, the Hodgkin–Huxley equations are complex, non-linear ordinary differential equations in four state variables: the transmembrane voltage V, and the probabilities m, h and n. No general solution of these equations has been discovered. A less ambitious but generally applicable method for studying such non-linear dynamical systems is to consider their behavior in the vicinity of a fixed point. This analysis shows that the Hodgkin–Huxley system undergoes a transition from stable quiescence to bursting oscillations as the stimulating current I_ext is gradually increased; remarkably, the axon becomes stably quiescent again as the stimulating current is increased further still. A more general study of the types of qualitative behavior of axons predicted by the Hodgkin–Huxley equations has also been carried out. FitzHugh–Nagumo model Because of the complexity of the Hodgkin–Huxley equations, various simplifications have been developed that exhibit qualitatively similar behavior. The FitzHugh–Nagumo model is a typical example of such a simplified system. Based on the tunnel diode, the FHN model has only two state variables, but exhibits a similar stability behavior to the full Hodgkin–Huxley equations. The equations are, in one common form, dV/dt = g(V) − w + I_ext and dw/dt = (V − γw)/τ_w (with τ_w a comparatively slow time constant), where g(V) is a function of the voltage V that has a region of negative slope in the middle, flanked by one maximum and one minimum (Figure FHN). A much-studied simple case of the FitzHugh–Nagumo model is the Bonhoeffer–van der Pol nerve model, which is described by the equations dV/dt = ε (w + V − V^3/3) and dw/dt = −V/ε, where the coefficient ε is assumed to be small.
These equations can be combined into a second-order differential equation, d²V/dt² − ε (1 − V²) dV/dt + V = 0. This van der Pol equation has stimulated much research in the mathematics of nonlinear dynamical systems. Op-amp circuits that realize the FHN and van der Pol models of the action potential have been developed by Keener. A hybrid of the Hodgkin–Huxley and FitzHugh–Nagumo models was developed by Morris and Lecar in 1981, and applied to the muscle fiber of barnacles. True to the barnacle's physiology, the Morris–Lecar model replaces the voltage-gated sodium current of the Hodgkin–Huxley model with a voltage-dependent calcium current. There is no inactivation (no h variable) and the calcium current equilibrates instantaneously, so that again, there are only two time-dependent variables: the transmembrane voltage V and the potassium gate probability n. The bursting, entrainment and other mathematical properties of this model have been studied in detail. The simplest models of the action potential are the "flush and fill" models (also called "integrate-and-fire" models), in which the input signal is summed (the "fill" phase) until it reaches a threshold, firing a pulse and resetting the summation to zero (the "flush" phase). All of these models are capable of exhibiting entrainment, which is commonly observed in nervous systems. Extracellular potentials and currents Whereas the above models simulate the transmembrane voltage and current at a single patch of membrane, other mathematical models pertain to the voltages and currents in the ionic solution surrounding the neuron. Such models are helpful in interpreting data from extracellular electrodes, which were common prior to the invention of the glass pipette electrode that allowed intracellular recording. The extracellular medium may be modeled as a normal isotropic ionic solution; in such solutions, the current follows the electric field lines, according to the continuum form of Ohm's law, j = σE, where j and E are vectors representing the current density and electric field, respectively, and where σ is the conductivity. Thus, j can be found from E, which in turn may be found using Maxwell's equations. Maxwell's equations can be reduced to a relatively simple problem of electrostatics, since the ionic concentrations change too slowly (compared to the speed of light) for magnetic effects to be important. The electric potential φ(x) at any extracellular point x can be solved for using Green's identities; one standard form is 4π σ_outside φ(x) = ∮ (σ_inside φ_inside − σ_outside φ_outside) ∇(1/r) · dS, where the integration is over the complete surface of the membrane; y is a position on the membrane and r = |x − y|; σ_inside and φ_inside are the conductivity and potential just within the membrane, and σ_outside and φ_outside the corresponding values just outside the membrane. Thus, given these σ and φ values on the membrane, the extracellular potential φ(x) can be calculated for any position x; in turn, the electric field E and current density j can be calculated from this potential field. See also Biological neuron models GHK current equation Models of neural computation Saltatory conduction Bioelectronics Cable theory
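Because the FitzHugh–Nagumo system above has only two state variables, its repetitive-firing regime is easy to reproduce numerically. The following Python fragment is a minimal sketch, assuming the common cubic choice g(V) = V − V³/3 and purely illustrative parameter values; it is not fitted to any physiological data.

```python
# Minimal sketch: forward-Euler integration of the FitzHugh-Nagumo system
# in the form quoted above, dV/dt = g(V) - w + I_ext and
# dw/dt = (V - gamma*w)/tau_w, with g(V) = V - V**3/3.
# Parameter values are illustrative only.

def fitzhugh_nagumo(i_ext=0.5, gamma=0.8, tau_w=12.5, dt=0.01, steps=50_000):
    v, w = -1.0, 0.0                      # initial state
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + i_ext   # fast, voltage-like variable
        dw = (v - gamma * w) / tau_w      # slow recovery variable
        v, w = v + dt * dv, w + dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# A sustained i_ext in the right range destabilizes the rest state and the
# model fires repetitively (the regime in which entrainment is studied);
# count upward threshold crossings as a crude spike count.
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 1.0 <= b)
print(f"upward crossings of V = 1.0: {spikes}")
```

With these illustrative values the fixed point sits on the unstable middle branch of the cubic nullcline, so the trajectory settles onto a limit cycle; reducing i_ext toward zero restores a stable resting state.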
Quantitative models of the action potential
[ "Physics", "Mathematics" ]
2,173
[ "Mathematical modeling", "Physical quantities", "Applied mathematics", "Capacitors", "Capacitance" ]