Dataset fields: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
763,553
https://en.wikipedia.org/wiki/Lunar%20space%20elevator
A lunar space elevator or lunar spacelift is a proposed transportation system for moving a mechanical climbing vehicle up and down a ribbon-shaped tethered cable that is set between the surface of the Moon at the bottom and a docking port suspended tens of thousands of kilometers above in space at the top. It is similar in concept to the better-known Earth-based space elevator idea, but because the Moon's surface gravity is much lower than the Earth's, the engineering requirements for constructing a lunar elevator system can be met using materials and technology already available. For a lunar elevator, the cable or tether extends considerably farther out from the lunar surface into space than one that would be used in an Earth-based system. However, the main function of a space elevator system is the same in either case: both allow for a reusable, controlled means of transporting payloads of cargo, or possibly people, between a base station at the bottom of a gravity well and a docking port in outer space.

A lunar elevator could significantly reduce the costs and improve the reliability of soft-landing equipment on the lunar surface. For example, it would permit the use of mass-efficient (high specific impulse), low-thrust drives such as ion drives, which otherwise cannot land on the Moon. Since the docking port would be connected to the cable in a microgravity environment, these and other drives can reach the cable from low Earth orbit (LEO) with minimal fuel launched from Earth. With conventional rockets, the fuel needed to reach the lunar surface from LEO is many times the landed mass, so the elevator can reduce launch costs for payloads bound for the lunar surface by a similar factor.

Location

There are two points in space where an elevator's docking port could maintain a stable, lunar-synchronous position: the Earth-Moon Lagrange points L1 and L2. The 0.055 eccentricity of the lunar orbit means that these points are not fixed relative to the lunar surface: L1 is 56,315 ± 3,183 km away from the Earth-facing side of the Moon (at the lunar equator), and L2 is 62,851 ± 3,539 km from the center of the Moon's far side, in the opposite direction. At these points, the effect of the Moon's gravity and the effect of the centrifugal force resulting from the elevator system's synchronous, rigid-body rotation cancel each other out. The Lagrangian points L1 and L2 are points of unstable gravitational equilibrium, meaning that small inertial adjustments will be needed to ensure that any object positioned there can remain stationary relative to the lunar surface. Both of these positions are substantially farther up than the 36,000 km from Earth to geostationary orbit.

Furthermore, the weight of the limb of the cable system extending down to the Moon would have to be balanced by the cable extending farther up, and the Moon's slow rotation means the upper limb would have to be much longer than for an Earth-based system, or be topped by a much more massive counterweight. To suspend a kilogram of cable or payload just above the surface of the Moon would require 1,000 kg of counterweight 26,000 km beyond L1. (A smaller counterweight on a longer cable, e.g., 100 kg at a distance of 230,000 km (more than halfway to Earth) would have the same balancing effect.) Without the Earth's gravity to attract it, an L2 cable's lowest kilogram would require 1,000 kg of counterweight at a distance of 120,000 km from the Moon. The average Earth-Moon distance is 384,400 km.
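These counterweight figures can be checked to first order with a short calculation. The sketch below (my own, not from the article) computes the net acceleration on a counterweight in the rotating Earth-Moon frame and converts it into the payload mass it can hold at the lunar surface. It assumes the quoted distances are measured from the Moon's center (26,000 km beyond L1 being roughly 82,000 km from the Moon) and uses rounded mean constants, ignoring the orbital-eccentricity variation.

```python
# Sketch (not from the article): check the quoted counterweight equivalence by
# computing the net acceleration, in the rotating Earth-Moon frame, on a mass
# placed on the Earth side of lunar L1. Constants are rounded mean values.
G_ME = 3.986e14          # Earth's GM, m^3/s^2
G_MM = 4.905e12          # Moon's GM, m^3/s^2
D = 3.844e8              # mean Earth-Moon distance, m
OMEGA = 2.662e-6         # sidereal lunar angular rate, rad/s
D_BARY = D * G_MM / (G_ME + G_MM)   # barycenter offset from Earth's center, m
G_MOON = 1.62            # lunar surface gravity, m/s^2

def net_accel_toward_earth(r_from_moon):
    """Net acceleration (m/s^2, toward Earth) for a point on the Earth-Moon
    line: Earth's gravity minus the Moon's gravity minus the centrifugal
    acceleration about the barycenter."""
    p = D - r_from_moon              # distance from Earth's center
    return G_ME / p**2 - G_MM / r_from_moon**2 - OMEGA**2 * (p - D_BARY)

# 1,000 kg at ~26,000 km beyond L1 (~82,000 km from the Moon), and
# 100 kg at 230,000 km from the Moon:
for mass, r in ((1000.0, 82_000e3), (100.0, 230_000e3)):
    pull = mass * net_accel_toward_earth(r)       # net outward force, N
    print(f"{mass:6.0f} kg at {r / 1e3:7.0f} km holds ~{pull / G_MOON:.2f} kg "
          "on the lunar surface")
# Both cases print close to 1 kg, matching the quoted equivalence.
```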
The anchor point of a space elevator is normally considered to be at the equator. However, there are several cases to be made for locating a lunar base at one of the Moon's poles instead: a base on a peak of eternal light could take advantage of near-continuous solar power, for example, and small quantities of water and other volatiles may be trapped in permanently shaded crater bottoms. A space elevator could be anchored near a lunar pole, though not directly at it. A tramway could be used to bring the cable the rest of the way to the pole, with the Moon's low gravity allowing much taller support towers and wider spans between them than would be possible on Earth.

Fabrication

Because of the Moon's lower gravity and lack of atmosphere, a lunar elevator would have less stringent requirements for the tensile strength of the material making up its cable than an Earth-tethered cable. An Earth-based elevator would require high strength-to-weight materials that are theoretically possible but not yet fabricated in practice (e.g., carbon nanotubes). A lunar elevator, however, could be constructed using commercially available, mass-produced high-strength para-aramid fibres (such as Kevlar and M5) or ultra-high-molecular-weight polyethylene fibre.

Compared to an Earth space elevator, there would be fewer geographic and political restrictions on the location of the surface connection. The connection point of a lunar elevator would not necessarily have to be directly under its center of gravity, and could even be near the poles, where evidence suggests there might be frozen water in deep craters that never see sunlight; if so, this might be collected and converted into rocket fuel.

Cross-section profile

Space elevator designs for Earth typically have a taper of the tether that provides a uniform stress profile rather than a uniform cross-section. Because the strength requirement of a lunar space elevator is much lower than that of an Earth space elevator, a uniform cross-section is possible for the lunar space elevator. The study done for the NASA Institute for Advanced Concepts states: "Current composites have characteristic heights of a few hundred kilometers, which would require taper ratios of about 6 for Mars, 4 for the Moon, and about 6000 for the Earth. The mass of the Moon is small enough that a uniform cross-section lunar space elevator could be constructed, without any taper at all."

A uniform cross-section could make it possible for a lunar space elevator to be built in a double-tether pulley configuration. This configuration would greatly simplify repairs compared to a tapered elevator configuration. However, a pulley configuration would require a strut at the counterweight, hundreds of kilometers long, to separate the up-tether from the down-tether and keep them from tangling. A pulley configuration might also allow the system capacity to be gradually expanded by stitching new tether material on at the Lagrange point as the tether rotated.
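The taper ratios in the quoted study follow from the standard uniform-stress analysis of a hanging tether. The relation below is the textbook derivation, not a formula taken from the study itself:

```latex
% For tether density \rho, working stress \sigma, and effective gravity
% g_{\mathrm{eff}}(r) along the tether, constant stress requires the
% cross-section A to grow with the effective potential \Phi:
\sigma\, dA = \rho\, A\, g_{\mathrm{eff}}(r)\, dr
\;\Longrightarrow\;
A(r) = A_0 \exp\!\Big( \frac{\rho}{\sigma}\, \Delta\Phi(r) \Big)
     = A_0 \exp\!\Big( \frac{\Delta\Phi(r)}{g_0 h} \Big),
\qquad h \equiv \frac{\sigma}{\rho g_0}.
```

Here h is the material's characteristic height (breaking length) mentioned in the quote. Because the taper ratio is exponential in the depth of the effective potential well measured in units of g0·h, a characteristic height of a few hundred kilometres gives a modest ratio of about 4 for the Moon's shallow well but a ratio in the thousands for Earth's.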
History

The idea of space elevators has been around since 1960, when Yuri Artsutanov wrote an article in a Sunday supplement to Pravda on how to build such a structure and the utility of geosynchronous orbit. His article, however, was not known in the West. Then in 1966, John Isaacs, leader of a group of American oceanographers at the Scripps Institution of Oceanography, published an article in Science about the concept of using thin wires hanging from a geostationary satellite. In that concept, the wires were to be thin (thin wires and tethers are now understood to be more susceptible to micrometeoroid damage). Like Artsutanov's, Isaacs' article was not well known in the aerospace community.

In 1972, James Cline submitted a paper to NASA describing a "mooncable" concept similar to a lunar elevator. NASA responded negatively to the idea, citing technical risk and lack of funds. In 1975, Jerome Pearson independently came up with the space elevator concept and published it in Acta Astronautica, making the aerospace community at large aware of the space elevator for the first time. His article inspired Arthur C. Clarke to write the novel The Fountains of Paradise (published in 1979, almost simultaneously with Charles Sheffield's novel on the same topic, The Web Between the Worlds). In 1978, Pearson extended his theory to the Moon, anchoring the system at the Lagrangian points instead of in geostationary orbit. In 1977, some papers of Soviet space pioneer Friedrich Zander were posthumously published, revealing that he had conceived of a lunar space tower in 1910.

In 2005, Jerome Pearson completed a study for the NASA Institute for Advanced Concepts which showed the concept to be technically feasible within the prevailing state of the art, using existing commercially available materials. In October 2011, Michael Laine announced on the LiftPort website that LiftPort was pursuing a lunar space elevator as an interim goal before attempting a terrestrial elevator. At the 2011 Annual Meeting of the Lunar Exploration Analysis Group (LEAG), LiftPort CTO Marshall Eubanks presented a paper on the prototype lunar elevator, co-authored by Laine. In August 2012, LiftPort announced that the project could start around 2020. In April 2019, LiftPort CEO Michael Laine reported no progress beyond the lunar elevator company's conceptualized design.

Materials

Unlike Earth-anchored space elevators, a lunar space elevator does not require materials of extreme strength; it can be made with materials available today. Carbon nanotubes are not required to build the structure, which would make it possible to build the elevator much sooner, since carbon nanotube materials in sufficient quantities are still years away. One material with great potential is M5 fiber, a synthetic fiber that is lighter than Kevlar or Spectra. According to Pearson, Levin, Oldson, and Wykes in their 2005 article "The Lunar Space Elevator", an M5 ribbon 30 mm wide and 0.023 mm thick would be able to support 2,000 kg on the lunar surface. It would also be able to hold 100 cargo vehicles, each with a mass of 580 kg, evenly spaced along the length of the elevator. Other materials that could be used are T1000G carbon fiber, Spectra 200, Dyneema (used on the YES2 spacecraft), or Zylon. All of these materials have breaking lengths of several hundred kilometers under 1 g.

The materials would be used to manufacture the ribbon-shaped, tethered cable connecting the L1 or L2 balance points to the surface of the Moon. The climbing vehicles that travel the length of these cables in a finished elevator system would not move very fast, simplifying some of the challenges of transferring cargo and maintaining the structural integrity of the system.
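As a quick plausibility check on the quoted ribbon figure, the sketch below computes the tensile stress that a 2,000 kg surface load puts on the stated cross-section. The fiber strength used for comparison is an assumed round number for M5-class fiber, not a value from the article:

```python
# Sketch: back-of-envelope check of the quoted M5 ribbon figure. The assumed
# fiber strength (~5 GPa, a round number for M5-class fiber) is my input, not
# a value from the article.
WIDTH = 30e-3           # ribbon width, m
THICKNESS = 0.023e-3    # ribbon thickness, m
G_MOON = 1.62           # lunar surface gravity, m/s^2
PAYLOAD = 2000.0        # mass supported at the lunar surface, kg
ASSUMED_STRENGTH = 5e9  # assumed M5-class fiber strength, Pa

area = WIDTH * THICKNESS              # cross-sectional area, m^2
stress = PAYLOAD * G_MOON / area      # stress from the payload alone, Pa
print(f"cross-section: {area * 1e6:.2f} mm^2")
print(f"payload stress: {stress / 1e9:.2f} GPa "
      f"({stress / ASSUMED_STRENGTH:.0%} of assumed strength)")
# ~4.7 GPa: most of the assumed strength is taken by the surface load alone,
# so the quoted ribbon would be sized close to its working limit before the
# tether's own weight is even counted.
```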
However, any small object suspended in space for an extended period of time, as the tethered cables would be, is vulnerable to damage from micrometeoroids, so one possible method of improving survivability would be to design a "multi-ribbon" system instead of a single tethered cable. Such a system would have interconnections at regular intervals, so that if one section of ribbon were damaged, parallel sections could carry the load until robotic vehicles arrived to replace the severed ribbon. The interconnections would be spaced about 100 km apart, close enough that a robotic climber could carry the mass of the replacement 100 km of ribbon.

Climbing vehicles

One method of getting materials from the Moon into orbit would be the use of robotic climbing vehicles. These vehicles would consist of two large wheels pressing against the ribbons of the elevator to provide enough friction for lift. The climbers could be set for horizontal or vertical ribbons. The wheels would be driven by electric motors, which would obtain their power from solar energy or beamed energy. The power required to climb the ribbon would depend upon the lunar gravity field, which drops off rapidly over the first few percent of the distance to L1, and the power a climber requires drops as it nears the L1 point. If a 540 kg climber traveled at a velocity of fifteen meters per second, by the time it was seven percent of the way to L1 the required power would drop to less than a hundred watts, versus 10 kilowatts at the surface. One problem with using a solar-powered vehicle is the lack of sunlight during some parts of the trip: for half of every month, the solar arrays on the lower part of the ribbon would be in shade. One way to address this would be to launch the vehicle from the base with a certain velocity and attach it to the ribbon at the peak of its trajectory.
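The climb-power claim can be illustrated with the relation P = m·v·g_net, where g_net is the net downward acceleration along the tether in the rotating Earth-Moon frame. The sketch below is my own simplified model with rounded constants, not the authors' detailed calculation, so the intermediate figures quoted above are not exactly reproduced; it does show the power falling from roughly 13 kW at the surface toward zero at L1.

```python
# Sketch (my own simplified model): climb power P = m * v * g_net(r), with
# g_net the net toward-Moon acceleration in the rotating Earth-Moon frame
# along the Earth-facing tether. Constants are rounded.
G_ME, G_MM = 3.986e14, 4.905e12     # Earth's and Moon's GM, m^3/s^2
D, OMEGA = 3.844e8, 2.662e-6        # Earth-Moon distance (m), angular rate (rad/s)
D_BARY = D * G_MM / (G_ME + G_MM)   # barycenter offset from Earth's center, m
R_MOON = 1.737e6                    # lunar radius, m
R_L1 = R_MOON + 56.3e6              # approximate L1 distance from Moon's center, m

def g_net(r):
    """Net toward-Moon acceleration at r metres from the Moon's center."""
    p = D - r                        # distance from Earth's center
    return G_MM / r**2 + OMEGA**2 * (p - D_BARY) - G_ME / p**2

m, v = 540.0, 15.0      # climber mass (kg) and speed (m/s) from the text
for frac in (0.0, 0.25, 0.5, 1.0):
    r = R_MOON + frac * (R_L1 - R_MOON)
    p_kw = m * v * max(g_net(r), 0.0) / 1e3
    print(f"{frac:4.0%} of the way to L1: ~{p_kw:7.3f} kW")
# ~13 kW at the surface (the same order as the 10 kW quoted above), dropping
# under 100 W partway out and vanishing at L1, where g_net goes to zero.
```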
Possible uses

Materials from Earth may be sent into orbit and then down to the Moon to be used by lunar bases and installations. Former U.S. President George W. Bush, in an address about his Vision for Space Exploration, suggested that the Moon may serve as a cost-effective construction, launching and fueling site for future space exploration missions. As President Bush noted, "(Lunar) soil contains raw materials that might be harvested and processed into rocket fuel or breathable air." For example, the proposed Ares V heavy-lift rocket system could cost-effectively deliver raw materials from Earth to a docking station (connected to the lunar elevator as a counterweight), where future spacecraft could be built and launched, while extracted lunar resources could be shipped up from a base on the Moon's surface near the elevator's anchoring point. If the elevator were connected to a lunar base built near the Moon's north pole, workers could also mine the water ice known to exist there, providing an ample source of readily accessible water for the crew at the elevator's docking station. Also, since the total energy needed for transit between the Moon and Mars is considerably less than that between Earth and Mars, this concept could lower some of the engineering obstacles to sending humans to Mars. The lunar elevator could also be used to transport supplies and materials from the surface of the Moon into Earth orbit and vice versa.

According to Jerome Pearson, many of the Moon's material resources could be extracted and sent into Earth orbit more easily than if they were launched from the Earth's surface. For example, lunar regolith itself could be used as massive material to shield space stations or crewed spacecraft on long missions from solar flares, Van Allen radiation, and other kinds of cosmic radiation. The Moon's naturally occurring metals and minerals could be mined and used for construction. Lunar deposits of silicon, which could be used to build solar panels for massive satellite solar power stations, seem particularly promising.

One disadvantage of the lunar elevator is that its climbing vehicles may be too slow to serve efficiently as a human transportation system. In contrast to an Earth-based elevator, the longer distance from the docking station to the lunar surface would mean that any "elevator car" would need to be able to sustain a crew for several days, even weeks, before it reached its destination.

See also
Colonization of the Moon
Non-rocket spacelaunch
Skyhook (structure)
Space tether

References

External links
An introductory review of the whole gamut of skyhook contraptions, by Arthur C. Clarke
Universe Today article on a lunar space elevator
Elevator proposal by Jerome Pearson
LiftPort company website and forums
Jerome Pearson's company website on space elevators
Simulations of lunar space elevators in samples 95 and 96 of the spacetethers.com simulator

Space elevator Exploration of the Moon
Lunar space elevator
[ "Astronomy", "Technology" ]
3,095
[ "Exploratory engineering", "Astronomical hypotheses", "Space elevator" ]
763,646
https://en.wikipedia.org/wiki/Piebald
A piebald or pied animal is one that has a pattern of unpigmented spots (white) on a pigmented background of hair, feathers or scales. Thus a piebald black and white dog is a black dog with white spots. The animal's skin under the white background is not pigmented. The location of the unpigmented spots depends on the migration of melanoblasts (primordial pigment cells) from the neural crest to paired bilateral locations in the skin of the early embryo. The resulting pattern appears symmetrical only if melanoblasts migrate to both locations of a pair and proliferate to the same degree in both locations. The appearance of symmetry can be obliterated if the proliferation of the melanocytes (pigment cells) within the developing spots is so great that the spots enlarge to the point that some of them merge, leaving only small areas of the white background among the spots and at the tips of the extremities. Animals with this pattern may include birds, cats, cattle, dogs, foxes, horses, cetaceans, deer, pigs, and snakes. Some animals also exhibit colouration of the irises of the eye that matches the surrounding skin (blue eyes for pink skin, brown for dark). The underlying genetic cause is related to a condition known as leucism. In medieval English, "pied" indicated alternating contrasting colours making up the quarters of an item of costume or a livery device in heraldry.

Etymology

The word "piebald" originates from a combination of "pie", from "magpie", and "bald", meaning "white patch" or spot. The reference is to the distinctive black-and-white plumage of the magpie.

Horses

In British English, piebald (black and white) and skewbald (white and any colour other than black) are together known as coloured. In North American English, the term for this colouring pattern is pinto, with the specialized term "paint" referring specifically to a breed of horse with American Quarter Horse or Thoroughbred bloodlines in addition to being spotted, whereas pinto refers to a spotted horse of any breed. In American usage, horse enthusiasts usually do not use the term "piebald", but rather describe the colour shade of a pinto literally, with terms such as "black and white" for a piebald, "brown and white" or "bay and white" for skewbalds, or color-specific modifiers such as "bay pinto", "sorrel pinto", "buckskin pinto", and so on.

Genetically, a piebald horse begins with a black base coat colour; the horse then also has an allele for one of the basic spotting patterns overlaying the base colour. The most common coloured spotting pattern, tobiano, is produced by a dominant gene. Tobiano creates spots that are large and rounded, usually with a somewhat vertical orientation, with white that usually crosses the back of the horse, white on the legs, and a head that is mostly dark. Three less common spotting genes are the sabino, frame, and splash overo genes, which create various patterns that are mostly dark, with jagged spotting, often with a horizontal orientation, and white on the head. The frame variant has dark or minimally marked legs. The sabino pattern can be very minimal, usually adding white that runs up the legs onto the belly or flanks, with "lacy" edges or roaning at the edge of the white, plus white on the head that either extends past the eye, over the chin, or both. The genetics of overo and sabino are not yet fully understood, but they can appear in the offspring of two solid-coloured parents, whereas a tobiano must always have at least one tobiano parent.
Dogs

In many dog breeds, the piebald gene is common. The white parts of the fur interrupt the pigmented coat patterns. Dogs with a spotted or multicolored coat are often called piebald if the body is almost entirely white or another solid color, with spotting and patches on the head and neck. The allele, called sP, is located at the S locus, which corresponds to the MITF gene. It is recessive: homozygous individuals show this coat pattern, whereas heterozygous carriers can be of solid color.

Other animals

Many other animal species may also be "pied" or piebald, including, but not limited to, birds and squirrels. A piebald Eastern gray squirrel named Pinto Bean gained prominence at the University of Illinois Urbana-Champaign after many students shared pictures and videos of it online. Snakes, especially ball pythons and corn snakes, may also exhibit seemingly varying patches of completely pigmentless scales along with patches of pigmented scales. In 2013, a piebald blood python was discovered in Sumatra. Some domesticated silver foxes bred at the Russian Institute of Cytology and Genetics also carry this coloring. Bicolor cats carry the white spotting gene (incorrectly called the piebald gene). The same pattern that applies to cats also applies to dogs, when the gene involved is indeed piebald and not another white-causing gene found in dogs. The piebald gene is also found in cows, ferrets, domestic goats, goldfish, guinea pigs, hamsters, rabbits, and fancy rats. The Holstein and Simmental breeds of cattle typically exhibit piebaldism.

See also
Horse coat
Equine coat color
Equine coat color genetics
Pinto horse
Tricoloured (horse)
Pigmentation
Albinism
Amelanism
Dyschromia
Erythrism
Heterochromia iridum
Leucism
Melanism
Piebaldism
Skewbald
Vitiligo
Xanthochromism

References

External links
Photographs of an extremely rare wild piebald moose in northwest Alberta, Canada

Animal coat colors Bird colours Cat coat types Disturbances of pigmentation Dog anatomy Horse coat colors
Piebald
[ "Biology" ]
1,267
[ "Disturbances of pigmentation", "Pigmentation" ]
763,780
https://en.wikipedia.org/wiki/Occupational%20noise
Occupational noise is the amount of acoustic energy received by an employee's auditory system while working. Occupational noise, or industrial noise, is a term often used in occupational safety and health, as sustained exposure can cause permanent hearing damage. Occupational noise is considered an occupational hazard traditionally linked to loud industries such as ship-building, mining, railroad work, welding, and construction, but it can be present in any workplace where hazardous noise exists.

Regulation

In the US, the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) work together to provide standards and regulations for noise in the workplace. NIOSH, OSHA, the Mine Safety and Health Administration (MSHA), and the Federal Railroad Administration (FRA) have all set standards on hazardous occupational noise in their respective industries. Each industry is different, as workers' tasks and equipment differ, but most regulations agree that noise becomes hazardous when it exceeds 85 decibels for an 8-hour exposure (a typical work shift). This relationship between allotted noise level and exposure time is known as an exposure action value (EAV) or permissible exposure limit (PEL). The EAV or PEL can be expressed as an equation that adjusts the allotted exposure time according to the intensity of the industrial noise. The relationship is an inverse, exponential one: as the noise intensity increases, the exposure time that still remains safe decreases. Thus, a worker exposed to a noise level of 100 decibels for 15 minutes would be at the same risk level as a worker exposed to 85 decibels for 8 hours. Using this mathematical relationship, an employer can calculate whether employees are being overexposed to noise. When it is suspected that an employee will reach or exceed the PEL, the employer should implement a monitoring program for that employee.

The above calculations of the PEL and EAV are based on measurements taken to determine the intensity of the particular industrial noise. A-weighted measurements are commonly used to determine noise levels that can cause harm to the human ear. There are also special exposure meters available that integrate noise over a period of time to give an Leq value (equivalent continuous sound level), defined by standards.

Regulations in different countries

These numerical values do not fully reflect the real situation. For example, the OSHA standard sets the action level at 85 dBA and the PEL at 90 dBA, but in practice a Compliance Safety and Health Officer must record an exceedance of these values with a margin that accounts for potential measurement error. Instead of a 90 dBA PEL, the effective threshold becomes 92 dBA, and instead of an 85 dBA action level, it becomes 87 dBA.
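The exchange-rate relationship described above can be written as a one-line formula. The sketch below uses NIOSH's recommended criteria (85 dBA reference level over an 8-hour shift, with a 3-dB exchange rate); OSHA's PEL instead uses a 90 dBA reference and a 5-dB exchange rate.

```python
# Allowable daily exposure time under an exchange-rate rule: every `exchange`
# decibels above the reference level halves the allowed time. Defaults follow
# the NIOSH recommended exposure limit (85 dBA, 8 h, 3-dB exchange rate).
def allowed_hours(level_dba, reference=85.0, exchange=3.0, shift_hours=8.0):
    return shift_hours / 2 ** ((level_dba - reference) / exchange)

for level in (85, 88, 94, 100):
    t = allowed_hours(level)
    print(f"{level} dBA -> {t:5.2f} h ({t * 60:4.0f} min)")
# 85 dBA allows the full 8-hour shift, 88 dBA allows 4 h, and 100 dBA allows
# 0.25 h (15 min) -- the equivalence quoted in the Regulation text above.
```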
Risks of occupational hearing loss

Occupational noise, if experienced repeatedly at high intensity for an extended period of time, can cause noise-induced hearing loss (NIHL), which is then classified as occupational hearing loss. Most often, this is a type of sensorineural hearing loss. Industrial noise is hazardous to a person's hearing because of its loud intensity and repeated long-term exposure. For noise to cause hearing impairment for the worker, it has to be close enough, loud enough, and sustained long enough to damage the hair cells in the auditory system. These factors have been taken into account by the governing occupational health and safety organizations in determining the unsafe noise exposure levels and durations for their respective industries.

Noise can also affect the safety of the employee and others. Noise can be a causal factor in work accidents, as it may mask hazards and warning signals and impede concentration. High-intensity noise interferes with vital workplace communication, which increases the chance of accidents and decreases productivity. Noise may also act synergistically with other hazards to increase the risk of harm to workers. In particular, some toxic materials (e.g., some solvents, metals, asphyxiants and pesticides) have ototoxic properties that may affect hearing function. Modern thinking in occupational safety and health further identifies noise as hazardous to workers' safety and health; this hazard is experienced in various places of employment and through a variety of sources.

Reduction

There are several ways to limit exposure to hazardous occupational noise; the hierarchy of controls is a guideline for reducing it. Before starting a noise reduction program, baseline noise levels should first be recorded. After this, the company can try to eliminate the noise source. If the noise source cannot be eliminated, the company must try to reduce the noise by alternative methods. This process is called acoustic quieting: making machinery quieter by damping vibrations to prevent them from reaching the observer. The company can isolate a particular piece of machinery by placing materials on the machine, or between the machine and the worker, to decrease the sound intensity that reaches the worker's ear.

If elimination and substitution are not sufficient to reduce the noise exposure, the employer should put engineering controls in place. An engineering control usually changes the physical environment of a workplace. For noise reduction, an engineering control might be as simple as putting barriers between the noise source and the employee to disrupt the transmission path. It might also involve changing the machine that produces the noise. Ideally, most machines would be designed with noise reduction in mind, but this does not always happen. Changing the machinery involved in an industrial process may not be possible, but it is a good way to reduce the noise at its source.

To decrease an employee's exposure to hazardous noise, the company can also apply administrative controls that limit the employee's exposure time. This can be done by changing work shifts and rotating employees out of the noise exposure area. An employer might also implement a training program so that employees can learn about the hazards of occupational noise. Other administrative controls might include restricting access to noisy areas and placing warning signs around those areas.

If all other controls fail to decrease the occupational noise exposure to an acceptable level, hearing protection should be used. There are several types of earplugs that can attenuate the noise to a safe level, including single-use earplugs, multiple-use earplugs, and banded earplugs.
Depending on the type of work being done and the needs of the employees, earmuffs might also be a good option. While earmuffs might not have as high a noise reduction rating (NRR) as earplugs, they can be useful if the noise exposure is not very high, or if an employee cannot wear earplugs. Unfortunately, the ability of HPDs to decrease the risk of health damage is close to zero in practice.

Initiatives

Since the hazards of occupational noise exposure were recognized, programs and initiatives such as the US Buy Quiet program have been set up to regulate or discourage noise exposure. The Buy Quiet initiative promotes the purchase of quieter tools and equipment and encourages manufacturers to design quieter machines. Additionally, the Safe-In-Sound Award was created to recognize successes in hearing loss prevention programs and initiatives.

See also
Hearing conservation program
Occupational hearing loss
Noise-induced hearing loss
Buy Quiet
Earplug
Earmuffs
Protective clothing
A-weighting
ITU-R 468 noise weighting
Weighting filter
Equal-loudness contour
Safe-In-Sound Award Excellence in Hearing Loss Prevention
General: Health effects from noise, Noise control, Noise pollution, Noise regulation

Notes

References

External links
Occupational Safety & Health Administration
NIOSH Buy Quiet Topic Page
Establishing an OSHA-Compliant Occupational Hearing Testing Program

Noise Noise pollution Occupational hazards Industrial hygiene Sound Acoustics
Occupational noise
[ "Physics" ]
1,576
[ "Classical mechanics", "Acoustics" ]
763,951
https://en.wikipedia.org/wiki/Simultaneous%20localization%20and%20mapping
Simultaneous localization and mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. While this initially appears to be a chicken-and-egg problem, there are several algorithms known to solve it, at least approximately, in tractable time for certain environments. Popular approximate solution methods include the particle filter, extended Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on concepts in computational geometry and computer vision, and are used in robot navigation, robotic mapping and odometry for virtual reality or augmented reality.

SLAM algorithms are tailored to the available resources and are not aimed at perfection but at operational compliance. Published approaches are employed in self-driving cars, unmanned aerial vehicles, autonomous underwater vehicles, planetary rovers, newer domestic robots and even inside the human body.

Mathematical description of the problem

Given a series of controls $u_t$ and sensor observations $o_t$ over discrete time steps $t$, the SLAM problem is to compute an estimate of the agent's state $x_t$ and a map of the environment $m_t$. All quantities are usually probabilistic, so the objective is to compute

$$P(m_{t+1}, x_{t+1} \mid o_{1:t+1}, u_{1:t}).$$

Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function $P(x_t \mid x_{t-1})$:

$$P(x_t \mid o_{1:t}, u_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t \mid x_t, m_t, u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) P(x_{t-1} \mid m_t, o_{1:t-1}, u_{1:t}) / Z.$$

Similarly, the map can be updated sequentially by

$$P(m_t \mid x_t, o_{1:t}, u_{1:t}) = \sum_{x_t} \sum_{m_t} P(m_t \mid x_t, m_{t-1}, o_t, u_{1:t}) P(m_{t-1}, x_t \mid o_{1:t-1}, m_{t-1}, u_{1:t}).$$

Like many inference problems, a locally optimal joint estimate of the two variables can be found by alternating updates of the two beliefs, in the form of an expectation-maximization algorithm.

Algorithms

Statistical techniques used to approximate the above equations include Kalman filters and particle filters (the algorithm behind Monte Carlo localization). They provide an estimate of the posterior probability distribution for the pose of the robot and for the parameters of the map. Methods which conservatively approximate the above model using covariance intersection are able to avoid reliance on statistical independence assumptions, reducing algorithmic complexity for large-scale applications. Other approximation methods achieve improved computational efficiency by using simple bounded-region representations of uncertainty. Set-membership techniques are mainly based on interval constraint propagation; they provide a set which encloses the pose of the robot and a set approximation of the map.

Bundle adjustment, and more generally maximum a posteriori (MAP) estimation, is another popular technique for SLAM using image data. It jointly estimates poses and landmark positions, increasing map fidelity, and is used in commercialized SLAM systems such as Google's ARCore, which replaced Google's prior augmented reality computing platform Tango (formerly Project Tango). MAP estimators compute the most likely explanation of the robot poses and the map given the sensor data, rather than trying to estimate the entire posterior probability.

New SLAM algorithms remain an active research area, and are often driven by differing requirements and assumptions about the types of maps, sensors and models, as detailed below. Many SLAM systems can be viewed as combinations of choices from each of these aspects.
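The alternating update just described can be made concrete with a toy example. The sketch below is my own minimal construction (a histogram filter on a 1-D cyclic corridor, with made-up noise values), not a published SLAM system: it alternates a pose-belief update holding the map fixed with a per-cell map update holding the pose belief fixed.

```python
import random

# Minimal sketch (mine, not a published system) of the alternating update
# described above. World: a 1-D cyclic corridor whose cells are marked (True)
# or unmarked (False); the robot senses the cell it stands on, noisily.
N = 20
random.seed(0)
true_map = [random.random() < 0.3 for _ in range(N)]
P_HIT = 0.9                         # P(observe marked | cell marked)

pose_belief = [1.0 / N] * N         # P(x_t): uniform prior over cells
map_belief = [0.5] * N              # independent P(cell marked) per cell

def step(u, obs):
    global pose_belief
    # 1) Pose update: push the belief through the motion model (80% exact,
    #    10% slip either way), then weight by the observation likelihood
    #    under the current map belief and renormalize.
    pred = [0.0] * N
    for x, p in enumerate(pose_belief):
        for dx, w in ((u, 0.8), (u - 1, 0.1), (u + 1, 0.1)):
            pred[(x + dx) % N] += p * w
    def lik(x):
        pm = map_belief[x]
        p_marked = pm * P_HIT + (1 - pm) * (1 - P_HIT)
        return p_marked if obs else 1.0 - p_marked
    post = [pred[x] * lik(x) for x in range(N)]
    z = sum(post)
    pose_belief = [p / z for p in post]
    # 2) Map update: per-cell Bayes rule, weighted by the probability that
    #    the robot is actually at that cell.
    for x in range(N):
        pm = map_belief[x]
        num = pm * (P_HIT if obs else 1 - P_HIT)
        den = num + (1 - pm) * ((1 - P_HIT) if obs else P_HIT)
        map_belief[x] = pose_belief[x] * (num / den) + (1 - pose_belief[x]) * pm

x_true = 0
for _ in range(50):                 # drive around the loop, sensing each step
    x_true = (x_true + 1) % N
    o = true_map[x_true] if random.random() < P_HIT else not true_map[x_true]
    step(1, o)
print("estimated pose mode:", max(range(N), key=lambda x: pose_belief[x]),
      "| true pose:", x_true)
```

Real systems replace the histogram with particle filters or an EKF, and the binary cells with landmarks or occupancy grids, but the alternating pose/map structure is the same.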
Mapping

Topological maps are a method of environment representation which captures the connectivity (i.e., topology) of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have been used to enforce global consistency in metric SLAM algorithms. In contrast, grid maps use arrays (typically square or hexagonal) of discretized cells to represent a topological world, and make inferences about which cells are occupied. Typically, the cells are assumed to be statistically independent to simplify computation. Under that assumption, the per-cell values $P(m_t \mid x_t, m_{t-1}, o_t)$ are set to 1 if the new map's cells are consistent with the observation at location $x_t$ and 0 if inconsistent.

Modern self-driving cars mostly simplify the mapping problem to almost nothing by making extensive use of highly detailed map data collected in advance. This can include map annotations down to the level of marking the locations of individual white-line segments and curbs on the road. Location-tagged visual data such as Google's Street View may also be used as part of maps. Essentially, such systems reduce the SLAM problem to a simpler localization-only task, perhaps allowing moving objects such as cars and people to be updated in the map at runtime.

Sensing

SLAM will always use several different types of sensors, and the powers and limits of the various sensor types have been a major driver of new algorithms. Statistical independence is the mandatory requirement to cope with metric bias and with noise in measurements. Different types of sensors give rise to different SLAM algorithms whose assumptions are most appropriate to those sensors. At one extreme, laser scans or visual features provide details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite extreme, tactile sensors are extremely sparse, as they contain only information about points very close to the agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks fall somewhere between these visual and tactile extremes.

Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely identifiable objects in the world whose location can be estimated by a sensor, such as Wi-Fi access points or radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model $P(o_t \mid x_t)$ directly as a function of the location.

Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) laser rangefinders, 3D high-definition light detection and ranging (lidar), 3D flash lidar, 2D or 3D sonar sensors, and one or more 2D cameras. Since the invention of local features such as SIFT, there has been intense research into visual SLAM (VSLAM) using primarily visual (camera) sensors, because of the increasing ubiquity of cameras such as those in mobile devices. Both visual and lidar sensors are informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include tactile SLAM (sensing by local touch only), radar SLAM, acoustic SLAM, and Wi-Fi SLAM (sensing by the strengths of nearby Wi-Fi access points). Recent approaches apply quasi-optical wireless ranging for multilateration (real-time locating systems, RTLS) or multiangulation in conjunction with SLAM, to cope with erratic wireless measurements. A kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main sensor and relies on the fact that pedestrians are able to avoid walls to automatically build floor plans of buildings with an indoor positioning system.
For some outdoor applications, the need for SLAM has been almost entirely removed by high-precision differential GPS sensors. From a SLAM perspective, these may be viewed as location sensors whose likelihoods are so sharp that they completely dominate the inference. However, GPS sensors may occasionally be degraded or fail entirely, e.g. during times of military conflict, situations which are of particular interest to some robotics applications.

Kinematics modeling

The term $P(x_t \mid x_{t-1})$ represents the kinematics of the model, which usually includes information about the action commands given to a robot. As a part of the model, the kinematics of the robot are included to improve estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the contributions from the various sensors and the various partial error models, and finally combines them into a sharp virtual depiction: a map with the location and heading of the robot represented as some cloud of probability. Mapping is the final depiction of such a model; the map is either this depiction or the abstract term for the model.

For 2D robots, the kinematics are usually given by a mixture of rotation and "move forward" commands, which are implemented with additional motor noise. Unfortunately, the distribution formed by independent noise in the angular and linear directions is non-Gaussian, but it is often approximated by a Gaussian. An alternative approach is to ignore the kinematic term and read odometry data from the robot's wheels after each command; such data may then be treated as one of the sensors rather than as kinematics.

Moving objects

Non-static environments, such as those containing other vehicles or pedestrians, continue to present research challenges. SLAM with DATMO is a model which tracks moving objects in a similar way to the agent itself.

Loop closure

Loop closure is the problem of recognizing a previously visited location and updating beliefs accordingly. This can be a problem because model or algorithm errors can assign low priors to the location. Typical loop closure methods apply a second algorithm to compute some type of sensor-measure similarity and reset the location priors when a match is detected. For example, this can be done by storing and comparing bag-of-words vectors of scale-invariant feature transform (SIFT) features from each previously visited location.

Exploration

Active SLAM studies the combined problem of SLAM with deciding where to move next in order to build the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing regimes such as tactile SLAM. Active SLAM is generally performed by approximating the entropy of the map under hypothetical actions. "Multi-agent SLAM" extends this problem to the case of multiple robots coordinating themselves to explore optimally.

Biological inspiration

In neuroscience, the hippocampus appears to be involved in SLAM-like computations, giving rise to place cells, and has formed the basis for bio-inspired SLAM systems such as RatSLAM.

Collaborative SLAM

Collaborative SLAM combines sensors from multiple robots or users to generate 3D maps. This capability was demonstrated by a number of teams in the 2021 DARPA Subterranean Challenge.

Specialized SLAM methods

Acoustic SLAM

An extension of the common SLAM problem has been applied to the acoustic domain, where environments are represented by the three-dimensional (3D) positions of sound sources, termed aSLAM (Acoustic Simultaneous Localization and Mapping).
Early implementations of this technique have used direction-of-arrival (DoA) estimates of the sound source location, and rely on principal techniques of sound localization to determine source locations. An observer, or robot, must be equipped with a microphone array to enable the use of acoustic SLAM, so that DoA features are properly estimated. Acoustic SLAM has paved foundations for further studies in acoustic scene mapping, and can play an important role in human-robot interaction through speech. To map multiple, and occasionally intermittent, sound sources, an acoustic SLAM system uses foundations in random finite set theory to handle the varying presence of acoustic landmarks. However, the nature of acoustically derived features leaves acoustic SLAM susceptible to problems of reverberation, inactivity, and noise within an environment.

Audiovisual SLAM

Originally designed for human-robot interaction, audio-visual SLAM is a framework that provides for the fusion of landmark features obtained from both the acoustic and visual modalities within an environment. Human interaction is characterized by features perceived in not only the visual modality but the acoustic modality as well; as such, SLAM algorithms for human-centered robots and machines must account for both sets of features. An audio-visual framework estimates and maps positions of human landmarks through the use of visual features like human pose and audio features like human speech, and fuses the beliefs for a more robust map of the environment. For applications in mobile robotics (e.g., drones, service robots), it is valuable to use low-power, lightweight equipment such as monocular cameras or microelectronic microphone arrays. Audio-visual SLAM can also allow for complementary function of such sensors, by compensating for the narrow field of view, feature occlusions, and optical degradations common to lightweight visual sensors with the full field of view and unobstructed feature representations inherent to audio sensors. The susceptibility of audio sensors to reverberation, sound source inactivity, and noise can also be accordingly compensated through fusion of landmark beliefs from the visual modality. Complementary function between the audio and visual modalities in an environment can prove valuable for the creation of robotics and machines that fully interact with human speech and human movement.

Implementation methods

Various SLAM algorithms are implemented in the open-source Robot Operating System (ROS) libraries, often used together with the Point Cloud Library for 3D maps, or visual features from OpenCV.

EKF SLAM

In robotics, EKF SLAM is a class of algorithms which uses the extended Kalman filter (EKF) for SLAM. Typically, EKF SLAM algorithms are feature-based, and use the maximum likelihood algorithm for data association. In the 1990s and 2000s, EKF SLAM was the de facto method for SLAM, until the introduction of FastSLAM. Associated with the EKF is the Gaussian noise assumption, which significantly impairs EKF SLAM's ability to deal with uncertainty. With a greater amount of uncertainty in the posterior, the linearization in the EKF fails.

GraphSLAM

In robotics, GraphSLAM is a SLAM algorithm which uses sparse information matrices produced by generating a factor graph of observation interdependencies (two observations are related if they contain data about the same landmark). It is based on optimization algorithms.

History

A seminal work in SLAM is the research of R.C. Smith and P.
Cheeseman on the representation and estimation of spatial uncertainty in 1986. Other pioneering work in this field was conducted by the research group of Hugh F. Durrant-Whyte in the early 1990s, which showed that solutions to SLAM exist in the infinite-data limit. This finding motivates the search for algorithms which are computationally tractable and approximate the solution. The acronym SLAM was coined within the paper "Localization of Autonomous Guided Vehicles", which first appeared in ISR in 1995.

The self-driving STANLEY and JUNIOR cars, led by Sebastian Thrun, won the DARPA Grand Challenge and came second in the DARPA Urban Challenge in the 2000s, and included SLAM systems, bringing SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot vacuum cleaners and virtual reality headsets such as the Meta Quest 2 and PICO 4, for markerless inside-out tracking.

See also
Computational photography
Kalman filter
Inverse depth parametrization
Mobile Robot Programming Toolkit
Monte Carlo localization
Multi Autonomous Ground-robotic International Challenge
Neato Robotics
Particle filter
Recursive Bayesian estimation
Robotic mapping
Stanley (vehicle), DARPA Grand Challenge
Stereophotogrammetry
Structure from motion
Tango (platform)
Visual odometry

References

External links
Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard and Dieter Fox, with a clear overview of SLAM
SLAM For Dummies (a tutorial approach to simultaneous localization and mapping)
Andrew Davison's research page at the Department of Computing, Imperial College London, about SLAM using vision
openslam.org, a collection of open-source code and explanations of SLAM
Matlab toolbox of Kalman filtering applied to simultaneous localization and mapping: vehicle moving in 1D, 2D and 3D
FootSLAM research page at the German Aerospace Center (DLR), including the related Wi-Fi SLAM and PlaceSLAM approaches
SLAM lecture: online SLAM lecture based on Python

Computational geometry Robot navigation Applied machine learning Motion in computer vision Positioning
Simultaneous localization and mapping
[ "Physics", "Mathematics" ]
3,045
[ "Physical phenomena", "Point (geometry)", "Computational mathematics", "Position", "Motion (physics)", "Motion in computer vision", "Computational geometry", "Positioning" ]
8,097,563
https://en.wikipedia.org/wiki/LRK
Long Range Kinematic (LRK) technology is a kinematic positioning method developed by Magellan (formerly Thales) Navigation that exploits dual-frequency GPS operation throughout; other conventional methods use dual-frequency measurements only during initialisation. LRK makes solving the ambiguities during initialisation easy and makes continuous dual-frequency kinematic operation possible at distances of up to 40 kilometres.

Conventional dual-frequency kinematic operation is limited to about 10 kilometres. It uses a combined observation of the GPS L1 and L2 frequencies to produce an initial wide-lane solution, ambiguous to around 86 centimetres. During a second phase, the conventional kinematic method uses measurements from the L1 frequency only. This method only allows kinematic operation as long as the decorrelation of atmospheric errors remains compatible with a pure-phase, single-frequency solution.

Similar to the KART process, LRK is a simple and reliable method that allows any initialisation mode, from a static or fixed reference point to on-the-fly ambiguity resolution, when performing dual-frequency GPS positioning. LRK technology reduces initialisation times to a few seconds by efficiently using L2 measurements in every mode of operation. LRK maintains optimal real-time positioning accuracy, to within a centimetre, at ranges of up to 40-50 kilometres, even with a reduced number of visible satellites.

External links
https://web.archive.org/web/20060821080822/http://products.thalesnavigation.com/en/products/aboutgps/rtk.asp

Global Positioning System Navigation Navigational equipment
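The 86 cm figure quoted above is the wavelength of the wide-lane combination of the two GPS carriers. The sketch below recomputes it from the standard L1 and L2 frequencies:

```python
# Sketch: the ~86 cm wide-lane ambiguity quoted above corresponds to the
# wavelength of the L1-L2 difference combination of the GPS carriers.
C = 299_792_458.0      # speed of light, m/s
F_L1 = 1575.42e6       # GPS L1 carrier frequency, Hz
F_L2 = 1227.60e6       # GPS L2 carrier frequency, Hz

lambda_wide = C / (F_L1 - F_L2)   # wide-lane wavelength, m
print(f"wide-lane wavelength: {lambda_wide * 100:.1f} cm")   # ~86.2 cm
# Versus ~19 cm for L1 alone: the longer effective wavelength is what makes
# the integer ambiguities easier to fix during initialisation.
```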
LRK
[ "Technology", "Engineering" ]
334
[ "Global Positioning System", "Aerospace engineering", "Wireless locating", "Aircraft instruments" ]
8,097,912
https://en.wikipedia.org/wiki/Mosher%27s%20acid
Mosher's acid, or α-methoxy-α-trifluoromethylphenylacetic acid (MTPA), is a carboxylic acid that was first used by Harry Stone Mosher as a chiral derivatizing agent. It is a chiral molecule, existing as R and S enantiomers.

Applications

As a chiral derivatizing agent, it reacts with an alcohol or amine of unknown stereochemistry to form an ester or amide. The absolute configuration of the ester or amide is then determined by proton and/or 19F NMR spectroscopy. Mosher's acid chloride, the acid chloride form, is sometimes used because it is more reactive.

See also
Pirkle's alcohol

References

Stereochemistry Carboxylic acids Trifluoromethyl compounds Phenyl compounds
Mosher's acid
[ "Physics", "Chemistry" ]
183
[ "Stereochemistry", "Carboxylic acids", "Functional groups", "Space", "nan", "Spacetime" ]
8,098,322
https://en.wikipedia.org/wiki/Biodrying
Biodrying is the process by which biodegradable waste is rapidly heated through the initial stages of composting to remove moisture from a waste stream and hence reduce its overall weight. In biodrying processes, the drying rates are augmented by biological heat in addition to forced aeration. The major portion of the biological heat, naturally available through the aerobic degradation of organic matter, is utilized to evaporate surface and bound water associated with the mixed sludge. This heat generation reduces the moisture content of the biomass without the need for supplementary fossil fuels and with minimal electricity consumption. It can take as little as 8 days to dry waste in this manner, which reduces disposal costs where landfill is charged on a cost-per-tonne basis. Biodrying may be used as part of the production process for refuse-derived fuels.

Biodrying does not, however, greatly affect the biodegradability of the waste, so the waste is not stabilised: biodried waste will still break down in a landfill to produce landfill gas and hence potentially contribute to climate change. In the UK, this waste will still count against councils' Landfill Allowance Trading Scheme (LATS) allowances. While biodrying is increasingly applied within commercial mechanical biological treatment (MBT) plants, it is also still subject to ongoing research and development.

See also
List of solid waste treatment technologies
Waste management

References

Biodegradable waste management Industrial composting Waste treatment technology
Biodrying
[ "Chemistry", "Engineering" ]
288
[ "Water treatment", "Biodegradable waste management", "Biodegradation", "Environmental engineering", "Waste treatment technology" ]
8,098,634
https://en.wikipedia.org/wiki/CEN/TC%20125
CEN/TC 125 (CEN Technical Committee 125) is a technical decision-making body within the CEN system working on standardization in the field of masonry, including natural stone, in the European Union.

External links
CEN/TC 125 Masonry

Building materials Construction standards Building engineering EN standards CEN technical committees
CEN/TC 125
[ "Physics", "Engineering" ]
63
[ "Masonry", "Construction standards", "Building engineering", "Architecture", "Construction", "Materials", "Civil engineering", "Matter", "Building materials" ]
8,099,018
https://en.wikipedia.org/wiki/Riesz%20space
In mathematics, a Riesz space, lattice-ordered vector space or vector lattice is a partially ordered vector space where the order structure is a lattice. Riesz spaces are named after Frigyes Riesz, who first defined them in his 1928 paper Sur la décomposition des opérations fonctionelles linéaires.

Riesz spaces have wide-ranging applications. They are important in measure theory, in that important results are special cases of results for Riesz spaces. For example, the Radon-Nikodym theorem follows as a special case of the Freudenthal spectral theorem. Riesz spaces have also seen application in mathematical economics through the work of the Greek-American economist and mathematician Charalambos D. Aliprantis.

Definition

Preliminaries

If $X$ is an ordered vector space (which by definition is a vector space over the reals) and if $S$ is a subset of $X$, then an element $b \in X$ is an upper bound (resp. lower bound) of $S$ if $s \leq b$ (resp. $s \geq b$) for all $s \in S$. An element $a$ in $X$ is the least upper bound or supremum (resp. greatest lower bound or infimum) of $S$ if it is an upper bound (resp. a lower bound) of $S$ and if $a \leq b$ for any upper bound $b$ of $S$ (resp. $b \leq a$ for any lower bound $b$ of $S$).

Definitions

Preordered vector lattice

A preordered vector lattice is a preordered vector space in which every pair of elements has a supremum. More explicitly, a preordered vector lattice is a vector space $X$ endowed with a preorder $\leq$ such that for any $x, y, z \in X$:
Translation invariance: $x \leq y$ implies $x + z \leq y + z$.
Positive homogeneity: for any scalar $r \geq 0$, $x \leq y$ implies $r x \leq r y$.
For any pair of vectors $x, y \in X$ there exists a supremum (denoted $x \vee y$) in $X$ with respect to the preorder.

The preorder, together with items 1 and 2, which make it "compatible with the vector space structure", makes $X$ a preordered vector space. Item 3 says that the preorder is a join semilattice. Because the preorder is compatible with the vector space structure, one can show that any pair $x, y$ also has an infimum $x \wedge y$, making $X$ also a meet semilattice, hence a lattice.

A preordered vector space $X$ is a preordered vector lattice if and only if it satisfies any of the following equivalent properties:
For any $x, y \in X$, their supremum exists in $X$.
For any $x, y \in X$, their infimum exists in $X$.
For any $x, y \in X$, their infimum and their supremum exist in $X$.
For any $x \in X$, $\sup \{x, 0\}$ exists in $X$.

Riesz space and vector lattices

A Riesz space or a vector lattice is a preordered vector lattice whose preorder is a partial order. Equivalently, it is an ordered vector space for which the ordering is a lattice. Note that many authors require that a vector lattice be a partially ordered vector space (rather than merely a preordered vector space), while others only require that it be a preordered vector space. We will henceforth assume that every Riesz space and every vector lattice is an ordered vector space, but that a preordered vector lattice is not necessarily partially ordered.

If $X$ is an ordered vector space over $\mathbb{R}$ whose positive cone $C$ (the set of elements $\geq 0$) is generating (that is, such that $X = C - C$), and if for every $x, y \in C$ either $\sup \{x, y\}$ or $\inf \{x, y\}$ exists, then $X$ is a vector lattice.

Intervals

An order interval in a partially ordered vector space is a convex set of the form $[a, b] = \{x : a \leq x \leq b\}$. In an ordered real vector space, every interval of the form $[-x, x]$ is balanced. From axioms 1 and 2 above it follows that $x, y \in [a, b]$ and $t \in (0, 1)$ imply $t x + (1 - t) y \in [a, b]$. A subset is said to be order bounded if it is contained in some order interval. An order unit of a preordered vector space is any element $x$ such that the set $[-x, x]$ is absorbing.
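As a concrete illustration (my example, not from the text): $\mathbb{R}^2$ with the coordinatewise order is a Riesz space, and the lattice operations and order intervals can be computed componentwise.

```latex
% In \mathbb{R}^2 with the coordinatewise order, for x=(x_1,x_2), y=(y_1,y_2):
x \vee y = (\max\{x_1,y_1\},\ \max\{x_2,y_2\}), \qquad
x \wedge y = (\min\{x_1,y_1\},\ \min\{x_2,y_2\}).
% For instance, with x=(1,-2) and y=(-3,4):
x \vee y = (1,4), \qquad x \wedge y = (-3,-2),
% and the order interval [x \wedge y,\ x \vee y] = [-3,1] \times [-2,4]
% is convex, as required. The element u=(1,1) is an order unit, since
% [-u,u] = [-1,1]^2 absorbs every point of \mathbb{R}^2 under scaling.
```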
The set of all linear functionals on a preordered vector space $X$ that map every order interval into a bounded set is called the order bound dual of $X$ and denoted by $X^{\mathrm{b}}$. If a space is ordered then its order bound dual is a vector subspace of its algebraic dual.

A subset $A$ of a vector lattice $X$ is called order complete if for every non-empty subset $B \subseteq A$ that is order bounded in $A$, both $\sup B$ and $\inf B$ exist and are elements of $A$. We say that a vector lattice $X$ is order complete if $X$ is an order complete subset of itself.

Classification

Finite-dimensional Riesz spaces are entirely classified by the Archimedean property:

Theorem: Suppose that $X$ is a vector lattice of finite dimension $n$. If $X$ is Archimedean ordered then it is (a vector lattice) isomorphic to $\mathbb{R}^n$ under its canonical order. Otherwise, there exists an integer $k$ satisfying $2 \leq k \leq n$ such that $X$ is isomorphic to $\mathbb{R}^{n-k} \times \mathbb{R}^k_{\mathrm{lex}}$, where $\mathbb{R}^{n-k}$ has its canonical order, $\mathbb{R}^k_{\mathrm{lex}}$ is $\mathbb{R}^k$ with the lexicographical order, and the product of these two spaces has the canonical product order.

The same result does not hold in infinite dimensions. For an example due to Kaplansky, consider the vector space of functions on $[0, 1]$ that are continuous except at finitely many points, where they have a pole of second order. This space is lattice-ordered by the usual pointwise comparison, but cannot be written as $\mathbb{R}^\kappa$ for any cardinal $\kappa$. On the other hand, epi-mono factorization in the category of $\mathbb{R}$-vector spaces also applies to Riesz spaces: every lattice-ordered vector space injects into a quotient of $\mathbb{R}^\kappa$ by a solid subspace.

Basic properties

Every Riesz space is a partially ordered vector space, but not every partially ordered vector space is a Riesz space.

Note that for any subset $A$ of $X$, $\sup A = -\inf (-A)$ whenever either the supremum or infimum exists (in which case they both exist). If $x \geq 0$ and $y \geq 0$, then $[0, x] + [0, y] = [0, x + y]$. For all $x, y$ in a Riesz space $X$, $x + y = (x \vee y) + (x \wedge y)$.

Absolute value

For every element $x$ in a Riesz space $X$, the absolute value of $x$, denoted by $|x|$, is defined to be $|x| := x \vee (-x)$; this satisfies $-|x| \leq x \leq |x|$ and $|x| \geq 0$. For any $x, y \in X$ and any real number $r$, we have $|r x| = |r| \, |x|$ and $|x + y| \leq |x| + |y|$.

Disjointness

Two elements $x, y$ in a vector lattice $X$ are said to be lattice disjoint or disjoint if $|x| \wedge |y| = 0$, in which case we write $x \perp y$. Two elements are disjoint if and only if $|x| \vee |y| = |x| + |y|$. If $x$ and $y$ are disjoint, then $|x + y| = |x| + |y|$ and $(x + y)^+ = x^+ + y^+$, where for any element $z$, $z^+ := z \vee 0$ and $z^- := (-z) \vee 0$. We say that two sets $A$ and $B$ are disjoint if $a$ and $b$ are disjoint for all $a \in A$ and all $b \in B$, in which case we write $A \perp B$. If $A$ is the singleton set $\{a\}$ then we will write $a \perp B$ in place of $\{a\} \perp B$. For any set $A$, we define the disjoint complement to be the set $A^\perp := \{x \in X : x \perp A\}$. Disjoint complements are always bands, but the converse is not true in general. If $A$ is a subset of $X$ such that $x = \sup A$ exists, and if $B$ is a subset lattice in $X$ that is disjoint from $A$, then $B$ is a lattice disjoint from $\{x\}$.

Representation as a disjoint sum of positive elements

For any $x \in X$, let $x^+ := x \vee 0$ and $x^- := (-x) \vee 0$; note that both of these elements are $\geq 0$, with $x = x^+ - x^-$ and $|x| = x^+ + x^-$. Then $x^+$ and $x^-$ are disjoint, and $x = x^+ - x^-$ is the unique representation of $x$ as the difference of disjoint elements that are $\geq 0$. For all $x, y \in X$, $|x^+ - y^+| \leq |x - y|$ and $x + y = (x \vee y) + (x \wedge y)$. If $y \geq 0$ and $x \leq y$ then $x^+ \leq y$. Moreover, $x \leq y$ if and only if $x^+ \leq y^+$ and $y^- \leq x^-$.

Every Riesz space is a distributive lattice; that is, it has the following equivalent properties: for all $x, y, z \in X$,
$x \vee (y \wedge z) = (x \vee y) \wedge (x \vee z)$;
$x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$;
$x \vee z = y \vee z$ and $x \wedge z = y \wedge z$ always imply $x = y$.
Every Riesz space has the Riesz decomposition property.

Order convergence

There are a number of meaningful non-equivalent ways to define convergence of sequences or nets with respect to the order structure of a Riesz space. A sequence $\{x_n\}$ in a Riesz space $X$ is said to converge monotonely if it is a monotone decreasing (resp. increasing) sequence and its infimum (supremum) $x$ exists in $X$, denoted $x_n \downarrow x$ (resp. $x_n \uparrow x$).
A sequence in a Riesz space is said to converge in order to if there exists a monotone converging sequence in such that If is a positive element of a Riesz space then a sequence in is said to converge u-uniformly to if for any there exists an such that for all Subspaces The extra structure provided by these spaces provide for distinct kinds of Riesz subspaces. The collection of each kind structure in a Riesz space (for example, the collection of all ideals) forms a distributive lattice. Sublattices If is a vector lattice then a vector sublattice is a vector subspace of such that for all belongs to (where this supremum is taken in ). It can happen that a subspace of is a vector lattice under its canonical order but is a vector sublattice of Ideals A vector subspace of a Riesz space is called an ideal if it is solid, meaning if for and implies that The intersection of an arbitrary collection of ideals is again an ideal, which allows for the definition of a smallest ideal containing some non-empty subset of and is called the ideal generated by An Ideal generated by a singleton is called a principal ideal. Bands and σ-Ideals A band in a Riesz space is defined to be an ideal with the extra property, that for any element for which its absolute value is the supremum of an arbitrary subset of positive elements in that is actually in -Ideals are defined similarly, with the words 'arbitrary subset' replaced with 'countable subset'. Clearly every band is a -ideal, but the converse is not true in general. The intersection of an arbitrary family of bands is again a band. As with ideals, for every non-empty subset of there exists a smallest band containing that subset, called A band generated by a singleton is called a principal band. Projection bands A band in a Riesz space, is called a projection band, if meaning every element can be written uniquely as a sum of two elements, with and There then also exists a positive linear idempotent, or , such that The collection of all projection bands in a Riesz space forms a Boolean algebra. Some spaces do not have non-trivial projection bands (for example, ), so this Boolean algebra may be trivial. Completeness A vector lattice is complete if every subset has both a supremum and an infimum. A vector lattice is Dedekind complete if each set with an upper bound has a supremum and each set with a lower bound has an infimum. An order complete, regularly ordered vector lattice whose canonical image in its order bidual is order complete is called minimal and is said to be of minimal type. Subspaces, quotients, and products Sublattices If is a vector subspace of a preordered vector space then the canonical ordering on induced by 's positive cone is the preorder induced by the pointed convex cone where this cone is proper if is proper (that is, if ). A sublattice of a vector lattice is a vector subspace of such that for all belongs to (importantly, note that this supremum is taken in and not in ). If with then the 2-dimensional vector subspace of defined by all maps of the form (where ) is a vector lattice under the induced order but is a sublattice of This despite being an order complete Archimedean ordered topological vector lattice. 
Furthermore, there exist vector a vector sublattice of this space such that has empty interior in but no positive linear functional on can be extended to a positive linear functional on Quotient lattices Let be a vector subspace of an ordered vector space having positive cone let be the canonical projection, and let Then is a cone in that induces a canonical preordering on the quotient space If is a proper cone in then makes into an ordered vector space. If is -saturated then defines the canonical order of Note that provides an example of an ordered vector space where is not a proper cone. If is a vector lattice and is a solid vector subspace of then defines the canonical order of under which is a vector lattice and the canonical map is a vector lattice homomorphism. Furthermore, if is order complete and is a band in then is isomorphic with Also, if is solid then the order topology of is the quotient of the order topology on If is a topological vector lattice and is a closed solid sublattice of then is also a topological vector lattice. Product If is any set then the space of all functions from into is canonically ordered by the proper cone Suppose that is a family of preordered vector spaces and that the positive cone of is Then is a pointed convex cone in which determines a canonical ordering on ; is a proper cone if all are proper cones. Algebraic direct sum The algebraic direct sum of is a vector subspace of that is given the canonical subspace ordering inherited from If are ordered vector subspaces of an ordered vector space then is the ordered direct sum of these subspaces if the canonical algebraic isomorphism of onto (with the canonical product order) is an order isomorphism. Spaces of linear maps A cone in a vector space is said to be generating if is equal to the whole vector space. If and are two non-trivial ordered vector spaces with respective positive cones and then is generating in if and only if the set is a proper cone in which is the space of all linear maps from into In this case the ordering defined by is called the canonical ordering of More generally, if is any vector subspace of such that is a proper cone, the ordering defined by is called the canonical ordering of A linear map between two preordered vector spaces and with respective positive cones and is called positive if If and are vector lattices with order complete and if is the set of all positive linear maps from into then the subspace of is an order complete vector lattice under its canonical order; furthermore, contains exactly those linear maps that map order intervals of into order intervals of Positive functionals and the order dual A linear function on a preordered vector space is called positive if implies The set of all positive linear forms on a vector space, denoted by is a cone equal to the polar of The order dual of an ordered vector space is the set, denoted by defined by Although there do exist ordered vector spaces for which set equality does hold. Vector lattice homomorphism Suppose that and are preordered vector lattices with positive cones and and let be a map. Then is a preordered vector lattice homomorphism if is linear and if any one of the following equivalent conditions hold: preserves the lattice operations for all for all for all for all and is a solid subset of if then is order preserving. A pre-ordered vector lattice homomorphism that is bijective is a pre-ordered vector lattice isomorphism. 
A pre-ordered vector lattice homomorphism between two Riesz spaces is called a vector lattice homomorphism; if it is also bijective, then it is called a vector lattice isomorphism. If is a non-zero linear functional on a vector lattice with positive cone then the following are equivalent: is a surjective vector lattice homomorphism. for all and is a solid hyperplane in <li> generates an extreme ray of the cone in An extreme ray of the cone is a set where is non-zero, and if is such that then for some such that A vector lattice homomorphism from into is a topological homomorphism when and are given their respective order topologies. Projection properties There are numerous projection properties that Riesz spaces may have. A Riesz space is said to have the (principal) projection property if every (principal) band is a projection band. The so-called main inclusion theorem relates the following additional properties to the (principal) projection property: A Riesz space is... Dedekind Complete (DC) if every nonempty set, bounded above, has a supremum; Super Dedekind Complete (SDC) if every nonempty set, bounded above, has a countable subset with identical supremum; Dedekind -complete if every countable nonempty set, bounded above, has a supremum; and Archimedean property if, for every pair of positive elements and , whenever the inequality holds for all integers , . Then these properties are related as follows. SDC implies DC; DC implies both Dedekind -completeness and the projection property; Both Dedekind -completeness and the projection property separately imply the principal projection property; and the principal projection property implies the Archimedean property. None of the reverse implications hold, but Dedekind -completeness and the projection property together imply DC. Examples The space of continuous real valued functions with compact support on a topological space with the pointwise partial order defined by when for all is a Riesz space. It is Archimedean, but usually does not have the principal projection property unless satisfies further conditions (for example, being extremally disconnected). Any space with the (almost everywhere) pointwise partial order is a Dedekind complete Riesz space. The space with the lexicographical order is a non-Archimedean Riesz space. Properties Riesz spaces are lattice ordered groups Every Riesz space is a distributive lattice See also Notes References Bibliography Bourbaki, Nicolas; Elements of Mathematics: Integration. Chapters 1–6; Riesz, Frigyes; Sur la décomposition des opérations fonctionelles linéaires, Atti congress. internaz. mathematici (Bologna, 1928), 3, Zanichelli (1930) pp. 143–148 External links Riesz space at the Encyclopedia of Mathematics Functional analysis Ordered groups
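The positive-part decomposition and the band projections discussed above can also be checked numerically. The following is an illustrative sketch, again in $\mathbb{R}^n$ with the componentwise order, where the band generated by a set of coordinates consists of the vectors supported on those coordinates; all function names are mine.

```python
import numpy as np

def pos(x):  # x⁺ = x ∨ 0
    return np.maximum(x, 0.0)

def neg(x):  # x⁻ = (−x) ∨ 0
    return np.maximum(-x, 0.0)

x = np.array([3.0, -1.0, 0.0, 2.5, -4.0])

# Unique representation of x as a difference of disjoint positive elements:
assert np.allclose(pos(x) - neg(x), x)                # x = x⁺ − x⁻
assert np.allclose(pos(x) + neg(x), np.abs(x))        # |x| = x⁺ + x⁻
assert np.allclose(np.minimum(pos(x), neg(x)), 0.0)   # x⁺ ∧ x⁻ = 0

# In R^n every band is a projection band: projecting onto the coordinates
# that generate the band is a positive linear idempotent.
def band_projection(x, coords):
    p = np.zeros_like(x)
    p[coords] = x[coords]
    return p

b = band_projection(x, [0, 1])   # component inside the band
d = x - b                        # component in the disjoint complement
assert np.allclose(np.minimum(np.abs(b), np.abs(d)), 0.0)  # b and d disjoint
```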
Riesz space
[ "Mathematics" ]
3,533
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Ordered groups", "Order theory" ]
8,099,272
https://en.wikipedia.org/wiki/Acoustic%20contrast%20factor
In acoustics, the acoustic contrast factor is a number that describes the relationship between the densities and the sound velocities of two media, or equivalently (because of the form of the expression), the relationship between the densities and compressibilities of two media. It is most often used in the context of biomedical ultrasonic imaging techniques using acoustic contrast agents and in the field of ultrasonic manipulation of particles (acoustophoresis) much smaller than the wavelength using ultrasonic standing waves. In the latter context, the acoustic contrast factor is the number which, depending on its sign, tells whether a given type of particle in a given medium will be attracted to the pressure nodes or anti-nodes. Example - particle in a medium In an ultrasonic standing wave field, a small spherical particle ($a \ll \lambda$, where $a$ is the particle radius and $\lambda$ is the wavelength) suspended in an inviscid fluid will move under the effect of an acoustic radiation force. The direction of its movement is governed by the physical properties of the particle and the surrounding medium, expressed in the form of an acoustophoretic contrast factor $\Phi$. Given the compressibilities $\kappa_0$ and $\kappa_p$ and densities $\rho_0$ and $\rho_p$ of the medium and particle, respectively, the acoustic contrast factor $\Phi$ can be expressed as: $$\Phi = \frac{1}{3}\left[\frac{5\tilde{\rho} - 2}{2\tilde{\rho} + 1} - \tilde{\kappa}\right], \qquad \tilde{\rho} = \frac{\rho_p}{\rho_0}, \quad \tilde{\kappa} = \frac{\kappa_p}{\kappa_0}.$$ For a positive value of $\Phi$, the particles will be attracted to the pressure nodes. For a negative value of $\Phi$, the particles will be attracted to the pressure anti-nodes. See also Acoustic impedance Acoustic tweezers References Acoustics
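As a small hedged sketch of the computation: the function below codes the contrast factor in the standard small-sphere, inviscid-fluid form given above; the material parameters for a polystyrene bead in water are rough, literature-style values assumed here purely for demonstration.

```python
def acoustic_contrast_factor(rho_p, kappa_p, rho_0, kappa_0):
    """Contrast factor Φ for a small sphere (a << λ) in an inviscid fluid.

    Φ = (1/3) * [(5ρ̃ − 2)/(2ρ̃ + 1) − κ̃], with ρ̃ = ρp/ρ0 and κ̃ = κp/κ0.
    """
    rho_t = rho_p / rho_0        # density ratio ρ̃
    kappa_t = kappa_p / kappa_0  # compressibility ratio κ̃
    return ((5.0 * rho_t - 2.0) / (2.0 * rho_t + 1.0) - kappa_t) / 3.0

# Assumed, approximate values: polystyrene bead suspended in water.
phi = acoustic_contrast_factor(rho_p=1050.0, kappa_p=2.4e-10,
                               rho_0=998.0, kappa_0=4.5e-10)
print(f"phi = {phi:+.3f} ->",
      "pressure node" if phi > 0 else "pressure anti-node")
```

With these inputs the factor comes out positive, matching the usual observation that rigid polymer beads in water collect at the pressure nodes of the standing wave.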
Acoustic contrast factor
[ "Physics" ]
307
[ "Classical mechanics", "Acoustics" ]
4,694,434
https://en.wikipedia.org/wiki/Boolean%20model%20of%20information%20retrieval
The (standard) Boolean model of information retrieval (BIR) is a classical information retrieval (IR) model and, at the same time, the first and most-adopted one. The BIR is based on Boolean logic and classical set theory in that both the documents to be searched and the user's query are conceived as sets of terms (a bag-of-words model). Retrieval is based on whether or not the documents contain the query terms and whether they satisfy the boolean conditions described by the query. Definitions An index term is a word or expression, which may be stemmed, describing or characterizing a document, such as a keyword given for a journal article. Letbe the set of all such index terms. A document is any subset of . Letbe the set of all documents. is a series of words or small phrases (index terms). Each of those words or small phrases are named , where is the number of the term in the series/list. You can think of as "Terms" and as "index term n". The words or small phrases (index terms ) can exist in documents. These documents then form a series/list where each individual documents are called . These documents () can contain words or small phrases (index terms ) such as could contain the terms and from . There is an example of this in the following section. Index terms generally want to represent words which have more meaning to them and corresponds to what the content of an article or document could talk about. Terms like "the" and "like" would appear in nearly all documents whereas "Bayesian" would only be a small fraction of documents. Therefor, rarer terms like "Bayesian" are a better choice to be selected in the sets. This relates to Entropy (information theory). There are multiple types of operations that can be applied to index terms used in queries to make them more generic and more relevant. One such is Stemming. A query is a Boolean expression in normal form:where is true for when . (Equivalently, could be expressed in disjunctive normal form.) Any queries are a selection of index terms ( or ) picked from a set of terms which are combined using Boolean operators to form a set of conditions. These conditions are then applied to a set of documents which contain the same index terms () from the set . We seek to find the set of documents that satisfy . This operation is called retrieval and consists of the following two steps: 1. For each in , find the set of documents that satisfy :2. Then the set of documents that satisfy Q is given by:Where means OR and means AND as Boolean operators. Example Let the set of original (real) documents be, for example where = "Bayes' principle: The principle that, in estimating a parameter, one should initially assume that each possible value has equal probability (a uniform prior distribution)." = "Bayesian decision theory: A mathematical theory of decision-making which presumes utility and probability functions, and according to which the act to be chosen is the Bayes act, i.e. the one with highest subjective expected utility. If one had unlimited time and calculating power with which to make every decision, this procedure would be the best way to make any decision." = "Bayesian epistemology: A philosophical theory which holds that the epistemic status of a proposition (i.e. how well proven or well established it is) is best measured by a probability and that the proper way to revise this probability is given by Bayesian conditionalisation or similar procedures. 
A Bayesian epistemologist would use probability to define, and explore the relationship between, concepts such as epistemic status, support or explanatory power." Let the set of terms be: Then, the set of documents is as follows: where Let the query be ("probability" AND "decision-making"): Then to retrieve the relevant documents: Firstly, the following sets and of documents are obtained (retrieved):Where corresponds to the documents which contain the term "probability" and contain the term "decision-making". Finally, the following documents are retrieved in response to : Where the query looks for documents that are contained in both sets using the intersection operator. This means that the original document is the answer to . If there is more than one document with the same representation (the same subset of index terms ), every such document is retrieved. Such documents are indistinguishable in the BIR (in other words, equivalent). Advantages Clean formalism Easy to implement Intuitive concept If the resulting document set is either too small or too big, it is directly clear which operators will produce respectively a bigger or smaller set. It gives (expert) users a sense of control over the system. It is immediately clear why a document has been retrieved given a query. Disadvantages Exact matching may retrieve too few or too many documents Hard to translate a query into a Boolean expression Ineffective for Search-Resistant Concepts All terms are equally weighted More like data retrieval than information retrieval Retrieval based on binary decision criteria with no notion of partial matching No ranking of the documents is provided (absence of a grading scale) Information need has to be translated into a Boolean expression, which most users find awkward The Boolean queries formulated by the users are most often too simplistic The model frequently returns either too few or too many documents in response to a user query Data structures and algorithms From a pure formal mathematical point of view, the BIR is straightforward. From a practical point of view, however, several further problems should be solved that relate to algorithms and data structures, such as, for example, the choice of terms (manual or automatic selection or both), stemming, hash tables, inverted file structure, and so on. Hash sets Another possibility is to use hash sets. Each document is represented by a hash table which contains every single term of that document. Since hash table size increases and decreases in real time with the addition and removal of terms, each document will occupy much less space in memory. However, it will have a slowdown in performance because the operations are more complex than with bit vectors. On the worst-case performance can degrade from O(n) to O(n2). On the average case, the performance slowdown will not be that much worse than bit vectors and the space usage is much more efficient. Signature file Each document can be summarized by Bloom filter representing the set of words in that document, stored in a fixed-length bitstring, called a signature. The signature file contains one such superimposed code bitstring for every document in the collection. Each query can also be summarized by a Bloom filter representing the set of words in the query, stored in a bitstring of the same fixed length. The query bitstring is tested against each signature. The signature file approached is used in BitFunnel. 
Inverted file An inverted index file contains two parts: a vocabulary containing all the terms used in the collection, and for each distinct term an inverted index that lists every document that mentions that term. References Mathematical modeling Information retrieval techniques
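As a sketch of the retrieval procedure over an inverted file, the fragment below mirrors the article's Bayes example with simplified stand-in term strings and document identifiers; a conjunctive query is answered by intersecting the posting sets of its terms.

```python
# Documents are modeled as sets of index terms, exactly as in the BIR.
documents = {
    "D1": {"bayes-principle", "probability"},
    "D2": {"probability", "decision-making"},
    "D3": {"probability", "bayesian-epistemology"},
}

# Build the inverted file: term -> set of documents containing it.
inverted = {}
for doc_id, terms in documents.items():
    for term in terms:
        inverted.setdefault(term, set()).add(doc_id)

def retrieve_and(*terms):
    """Documents satisfying t1 AND t2 AND ...; empty if a term is unknown."""
    postings = [inverted.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

# The article's query ("probability" AND "decision-making"):
print(retrieve_and("probability", "decision-making"))  # -> {'D2'}
```

An OR query would take the union of the posting sets instead of the intersection; both operations run over the (usually short) posting lists rather than over the whole collection, which is the point of the inverted file.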
Boolean model of information retrieval
[ "Mathematics" ]
1,472
[ "Applied mathematics", "Mathematical modeling" ]
4,695,005
https://en.wikipedia.org/wiki/Space%20Nursing%20Society
The Space Nursing Society is an international space advocacy organization devoted to space nursing and space exploration by registered nurses. The society is an affiliated, non-profit special interest group associated with the National Space Society. The society was founded in 1991 and has members from around the world including Australia, Canada, Czech Republic, England, Germany, Greece, Scotland and the United States. The society serves as a forum for the discussion and study of issues related to nursing in space and the impact of these studies on nursing on Earth. See also Space colonization Vision for Space Exploration References External links National Space Society official website Ad Astra online edition Organizations established in 1991 Non-profit organizations based in California Space organizations Nursing organizations Space nursing
Space Nursing Society
[ "Astronomy" ]
142
[ "Outer space", "Astronomy stubs", "Astronomy organizations", "Space organizations", "Outer space stubs" ]
4,700,845
https://en.wikipedia.org/wiki/Entropy%20%28classical%20thermodynamics%29
In classical thermodynamics, entropy () is a property of a thermodynamic system that expresses the direction or outcome of spontaneous changes in the system. The term was introduced by Rudolf Clausius in the mid-19th century to explain the relationship of the internal energy that is available or unavailable for transformations in form of heat and work. Entropy predicts that certain processes are irreversible or impossible, despite not violating the conservation of energy. The definition of entropy is central to the establishment of the second law of thermodynamics, which states that the entropy of isolated systems cannot decrease with time, as they always tend to arrive at a state of thermodynamic equilibrium, where the entropy is highest. Entropy is therefore also considered to be a measure of disorder in the system. Ludwig Boltzmann explained the entropy as a measure of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) which correspond to the macroscopic state (macrostate) of the system. He showed that the thermodynamic entropy is , where the factor has since been known as the Boltzmann constant. Concept Differences in pressure, density, and temperature of a thermodynamic system tend to equalize over time. For example, in a room containing a glass of melting ice, the difference in temperature between the warm room and the cold glass of ice and water is equalized by energy flowing as heat from the room to the cooler ice and water mixture. Over time, the temperature of the glass and its contents and the temperature of the room achieve a balance. The entropy of the room has decreased. However, the entropy of the glass of ice and water has increased more than the entropy of the room has decreased. In an isolated system, such as the room and ice water taken together, the dispersal of energy from warmer to cooler regions always results in a net increase in entropy. Thus, when the system of the room and ice water system has reached thermal equilibrium, the entropy change from the initial state is at its maximum. The entropy of the thermodynamic system is a measure of the progress of the equalization. Many irreversible processes result in an increase of entropy. One of them is mixing of two or more different substances, occasioned by bringing them together by removing a wall that separates them, keeping the temperature and pressure constant. The mixing is accompanied by the entropy of mixing. In the important case of mixing of ideal gases, the combined system does not change its internal energy by work or heat transfer; the entropy increase is then entirely due to the spreading of the different substances into their new common volume. From a macroscopic perspective, in classical thermodynamics, the entropy is a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. Entropy is a key ingredient of the Second law of thermodynamics, which has important consequences e.g. for the performance of heat engines, refrigerators, and heat pumps. Definition According to the Clausius equality, for a closed homogeneous system, in which only reversible processes take place, With being the uniform temperature of the closed system and the incremental reversible transfer of heat energy into that system. That means the line integral is path-independent. 
A state function , called entropy, may be defined which satisfies Entropy measurement The thermodynamic state of a uniform closed system is determined by its temperature and pressure . A change in entropy can be written as The first contribution depends on the heat capacity at constant pressure through This is the result of the definition of the heat capacity by and . The second term may be rewritten with one of the Maxwell relations and the definition of the volumetric thermal-expansion coefficient so that With this expression the entropy at arbitrary and can be related to the entropy at some reference state at and according to In classical thermodynamics, the entropy of the reference state can be put equal to zero at any convenient temperature and pressure. For example, for pure substances, one can take the entropy of the solid at the melting point at 1 bar equal to zero. From a more fundamental point of view, the third law of thermodynamics suggests that there is a preference to take at (absolute zero) for perfectly ordered materials such as crystals. is determined by followed a specific path in the P-T diagram: integration over at constant pressure , so that , and in the second integral one integrates over at constant temperature , so that . As the entropy is a function of state the result is independent of the path. The above relation shows that the determination of the entropy requires knowledge of the heat capacity and the equation of state (which is the relation between P,V, and T of the substance involved). Normally these are complicated functions and numerical integration is needed. In simple cases it is possible to get analytical expressions for the entropy. In the case of an ideal gas, the heat capacity is constant and the ideal gas law gives that , with the number of moles and R the molar ideal-gas constant. So, the molar entropy of an ideal gas is given by In this expression CP now is the molar heat capacity. The entropy of inhomogeneous systems is the sum of the entropies of the various subsystems. The laws of thermodynamics hold rigorously for inhomogeneous systems even though they may be far from internal equilibrium. The only condition is that the thermodynamic parameters of the composing subsystems are (reasonably) well-defined. Temperature-entropy diagrams Entropy values of important substances may be obtained from reference works or with commercial software in tabular form or as diagrams. One of the most common diagrams is the temperature-entropy diagram (TS-diagram). For example, Fig.2 shows the TS-diagram of nitrogen, depicting the melting curve and saturated liquid and vapor values with isobars and isenthalps. Entropy change in irreversible transformations We now consider inhomogeneous systems in which internal transformations (processes) can take place. If we calculate the entropy S1 before and S2 after such an internal process the Second Law of Thermodynamics demands that S2 ≥ S1 where the equality sign holds if the process is reversible. The difference is the entropy production due to the irreversible process. The Second law demands that the entropy of an isolated system cannot decrease. Suppose a system is thermally and mechanically isolated from the environment (isolated system). For example, consider an insulating rigid box divided by a movable partition into two volumes, each filled with gas. If the pressure of one gas is higher, it will expand by moving the partition, thus performing work on the other gas. 
Also, if the gases are at different temperatures, heat can flow from one gas to the other provided the partition allows heat conduction. Our above result indicates that the entropy of the system as a whole will increase during these processes. There exists a maximum amount of entropy the system may possess under the circumstances. This entropy corresponds to a state of stable equilibrium, since a transformation to any other equilibrium state would cause the entropy to decrease, which is forbidden. Once the system reaches this maximum-entropy state, no part of the system can perform work on any other part. It is in this sense that entropy is a measure of the energy in a system that cannot be used to do work. An irreversible process degrades the performance of a thermodynamic system, designed to do work or produce cooling, and results in entropy production. The entropy generation during a reversible process is zero. Thus entropy production is a measure of the irreversibility and may be used to compare engineering processes and machines. Thermal machines Clausius' identification of S as a significant quantity was motivated by the study of reversible and irreversible thermodynamic transformations. A heat engine is a thermodynamic system that can undergo a sequence of transformations which ultimately return it to its original state. Such a sequence is called a cyclic process, or simply a cycle. During some transformations, the engine may exchange energy with its environment. The net result of a cycle is mechanical work done by the system (which can be positive or negative, the latter meaning that work is done on the engine), heat transferred from one part of the environment to another. In the steady state, by the conservation of energy, the net energy lost by the environment is equal to the work done by the engine. If every transformation in the cycle is reversible, the cycle is reversible, and it can be run in reverse, so that the heat transfers occur in the opposite directions and the amount of work done switches sign. Heat engines Consider a heat engine working between two temperatures TH and Ta. With Ta we have ambient temperature in mind, but, in principle it may also be some other low temperature. The heat engine is in thermal contact with two heat reservoirs which are supposed to have a very large heat capacity so that their temperatures do not change significantly if heat QH is removed from the hot reservoir and Qa is added to the lower reservoir. Under normal operation TH > Ta and QH, Qa, and W are all positive. As our thermodynamical system we take a big system which includes the engine and the two reservoirs. It is indicated in Fig.3 by the dotted rectangle. It is inhomogeneous, closed (no exchange of matter with its surroundings), and adiabatic (no exchange of heat with its surroundings). It is not isolated since per cycle a certain amount of work W is produced by the system given by the first law of thermodynamics We used the fact that the engine itself is periodic, so its internal energy has not changed after one cycle. The same is true for its entropy, so the entropy increase S2 − S1 of our system after one cycle is given by the reduction of entropy of the hot source and the increase of the cold sink. The entropy increase of the total system S2 - S1 is equal to the entropy production Si due to irreversible processes in the engine so The Second law demands that Si ≥ 0. 
Eliminating Qa from the two relations above (the first law, $W = Q_H - Q_a$, and the entropy production, $S_i = \frac{Q_a}{T_a} - \frac{Q_H}{T_H}$) gives $$W = \left(1 - \frac{T_a}{T_H}\right) Q_H - T_a S_i.$$ The first term is the maximum possible work for a heat engine, given by a reversible engine, as one operating along a Carnot cycle. Finally, $$W = W_{\text{max}} - T_a S_i \quad\text{with}\quad W_{\text{max}} = \left(1 - \frac{T_a}{T_H}\right) Q_H.$$ This equation tells us that the production of work is reduced by the generation of entropy. The term TaSi gives the lost work, or dissipated energy, by the machine. Correspondingly, the amount of heat discarded to the cold sink is increased by the entropy generation: $$Q_a = \frac{T_a}{T_H} Q_H + T_a S_i.$$ These important relations can also be obtained without the inclusion of the heat reservoirs. See the article on entropy production. Refrigerators The same principle can be applied to a refrigerator working between a low temperature TL and ambient temperature. The schematic drawing is exactly the same as Fig.3 with TH replaced by TL, QH by QL, and the sign of W reversed. In this case the entropy production is $$S_i = \frac{Q_L + W}{T_a} - \frac{Q_L}{T_L}$$ and the work needed to extract heat QL from the cold source is $$W = Q_L \frac{T_a - T_L}{T_L} + T_a S_i.$$ The first term is the minimum required work, which corresponds to a reversible refrigerator, so we have $$W \geq Q_L \frac{T_a - T_L}{T_L},$$ i.e., the refrigerator compressor has to perform extra work to compensate for the dissipated energy due to irreversible processes which lead to entropy production. See also Entropy Enthalpy Entropy production Fundamental thermodynamic relation Thermodynamic free energy History of entropy Entropy (statistical views) References Further reading E.A. Guggenheim, Thermodynamics, an advanced treatment for chemists and physicists, North-Holland Publishing Company, Amsterdam, 1959. C. Kittel and H. Kroemer, Thermal Physics, W.H. Freeman and Company, New York, 1980. Goldstein, Martin, and Inge F., 1993. The Refrigerator and the Universe. Harvard Univ. Press. A gentle introduction at a lower level than this entry. Thermodynamic entropy
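A short numerical sketch of the heat-engine relations above; the temperatures and heats are illustrative values chosen for this example, not figures from the article.

```python
# Heat engine between a hot source T_H and ambient T_a: the work delivered
# equals the reversible (Carnot) work minus the lost work T_a * S_i.

T_H, T_a = 600.0, 300.0   # hot-source and ambient temperatures (K)
Q_H = 1000.0              # heat drawn from the hot source per cycle (J)
Q_a = 550.0               # heat discarded to the cold sink per cycle (J)

W = Q_H - Q_a                   # first law for the cyclic engine
S_i = Q_a / T_a - Q_H / T_H     # entropy production per cycle (J/K)
assert S_i >= 0                 # second law

W_max = (1.0 - T_a / T_H) * Q_H  # maximum (Carnot) work
W_lost = T_a * S_i               # dissipated energy

print(f"W = {W:.0f} J, Carnot limit = {W_max:.0f} J, lost work = {W_lost:.0f} J")
assert abs(W - (W_max - W_lost)) < 1e-9   # W = W_max − T_a S_i
```

With these numbers the engine delivers 450 J per cycle against a Carnot limit of 500 J; the 50 J difference is exactly the lost work TaSi.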
Entropy (classical thermodynamics)
[ "Physics" ]
2,496
[ "Statistical mechanics", "Entropy", "Physical quantities", "Thermodynamic entropy" ]
4,701,125
https://en.wikipedia.org/wiki/Entropy%20%28statistical%20thermodynamics%29
The concept entropy was first developed by German physicist Rudolf Clausius in the mid-nineteenth century as a thermodynamic property that predicts that certain spontaneous processes are irreversible or impossible. In statistical mechanics, entropy is formulated as a statistical property using probability theory. The statistical entropy perspective was introduced in 1870 by Austrian physicist Ludwig Boltzmann, who established a new field of physics that provided the descriptive linkage between the macroscopic observation of nature and the microscopic view based on the rigorous treatment of large ensembles of microscopic states that constitute thermodynamic systems. Boltzmann's principle Ludwig Boltzmann defined entropy as a measure of the number of possible microscopic states (microstates) of a system in thermodynamic equilibrium, consistent with its macroscopic thermodynamic properties, which constitute the macrostate of the system. A useful illustration is the example of a sample of gas contained in a container. The easily measurable parameters volume, pressure, and temperature of the gas describe its macroscopic condition (state). At a microscopic level, the gas consists of a vast number of freely moving atoms or molecules, which randomly collide with one another and with the walls of the container. The collisions with the walls produce the macroscopic pressure of the gas, which illustrates the connection between microscopic and macroscopic phenomena. A microstate of the system is a description of the positions and momenta of all its particles. The large number of particles of the gas provides an infinite number of possible microstates for the sample, but collectively they exhibit a well-defined average configuration, which is exhibited as the macrostate of the system, to which each individual microstate's contribution is negligibly small. The ensemble of microstates comprises a statistical distribution of probability for each microstate, and the group of most probable configurations accounts for the macroscopic state. Therefore, the system can be described as a whole by only a few macroscopic parameters, called the thermodynamic variables: the total energy E, volume V, pressure P, temperature T, and so forth. However, this description is relatively simple only when the system is in a state of equilibrium. Equilibrium may be illustrated with a simple example of a drop of food coloring falling into a glass of water. The dye diffuses in a complicated manner, which is difficult to predict precisely. However, after sufficient time has passed, the system reaches a uniform color, a state much easier to describe and explain. Boltzmann formulated a simple relationship between entropy and the number of possible microstates of a system, which is denoted by the symbol Ω. The entropy S is proportional to the natural logarithm of this number: $$S = k_{\mathrm{B}} \ln \Omega.$$ The proportionality constant kB is one of the fundamental constants of physics and is named the Boltzmann constant in honor of its discoverer. Boltzmann's entropy describes the system when all the accessible microstates are equally likely. It is the configuration corresponding to the maximum of entropy at equilibrium. The randomness or disorder is maximal, and so is the lack of distinction (or information) of each microstate. Entropy is a thermodynamic property just like pressure, volume, or temperature. Therefore, it connects the microscopic and the macroscopic world view. Boltzmann's principle is regarded as the foundation of statistical mechanics.
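A minimal sketch of Boltzmann's formula in code, counting microstates for a toy system of two-state particles (the "coins" used again later in the article); the function names are mine.

```python
from math import comb, log

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(omega):
    """S = k_B * ln(Omega) for Omega equally likely microstates."""
    return K_B * log(omega)

# Toy system: N two-state particles ("coins"). A macrostate fixes how many
# are "heads"; Omega is the number of arrangements realizing that macrostate.
N = 100
for heads in (0, 25, 50):
    omega = comb(N, heads)
    print(f"heads={heads:3d}  Omega={omega:.3e}  S={boltzmann_entropy(omega):.3e} J/K")

# The 50/50 macrostate has by far the most microstates, hence the highest
# entropy; the all-heads macrostate has Omega = 1 and S = 0.
```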
Gibbs entropy formula The macroscopic state of a system is characterized by a distribution on the microstates. The entropy of this distribution is given by the Gibbs entropy formula, named after J. Willard Gibbs. For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if is the energy of microstate i, and is the probability that it occurs during the system's fluctuations, then the entropy of the system is Entropy changes for systems in a canonical state A system with a well-defined temperature, i.e., one in thermal equilibrium with a thermal reservoir, has a probability of being in a microstate i given by Boltzmann's distribution. Changes in the entropy caused by changes in the external constraints are then given by: where we have twice used the conservation of probability, . Now, is the expectation value of the change in the total energy of the system. If the changes are sufficiently slow, so that the system remains in the same microscopic state, but the state slowly (and reversibly) changes, then is the expectation value of the work done on the system through this reversible process, dwrev. But from the first law of thermodynamics, . Therefore, In the thermodynamic limit, the fluctuation of the macroscopic quantities from their average values becomes negligible; so this reproduces the definition of entropy from classical thermodynamics, given above. The quantity is the Boltzmann constant. The remaining factor of the equation, the entire summation is dimensionless, since the value is a probability and therefore dimensionless, and is the natural logarithm. Hence the SI derived units on both sides of the equation are same as heat capacity: This definition remains meaningful even when the system is far away from equilibrium. Other definitions assume that the system is in thermal equilibrium, either as an isolated system, or as a system in exchange with its surroundings. The set of microstates (with probability distribution) on which the sum is done is called a statistical ensemble. Each type of statistical ensemble (micro-canonical, canonical, grand-canonical, etc.) describes a different configuration of the system's exchanges with the outside, varying from a completely isolated system to a system that can exchange one or more quantities with a reservoir, like energy, volume or molecules. In every ensemble, the equilibrium configuration of the system is dictated by the maximization of the entropy of the union of the system and its reservoir, according to the second law of thermodynamics (see the statistical mechanics article). Neglecting correlations (or, more generally, statistical dependencies) between the states of individual particles will lead to an incorrect probability distribution on the microstates and hence to an overestimate of the entropy. Such correlations occur in any system with nontrivially interacting particles, that is, in all systems more complex than an ideal gas. This S is almost universally called simply the entropy. It can also be called the statistical entropy or the thermodynamic entropy without changing the meaning. Note the above expression of the statistical entropy is a discretized version of Shannon entropy. The von Neumann entropy formula is an extension of the Gibbs entropy formula to the quantum mechanical case. 
It has been shown that the Gibbs Entropy is equal to the classical "heat engine" entropy characterized by , and the generalized Boltzmann distribution is a sufficient and necessary condition for this equivalence. Furthermore, the Gibbs Entropy is the only entropy that is equivalent to the classical "heat engine" entropy under the following postulates: Ensembles The various ensembles used in statistical thermodynamics are linked to the entropy by the following relations: is the microcanonical partition function is the canonical partition function is the grand canonical partition function Order through chaos and the second law of thermodynamics We can think of Ω as a measure of our lack of knowledge about a system. To illustrate this idea, consider a set of 100 coins, each of which is either heads up or tails up. In this example, let us suppose that the macrostates are specified by the total number of heads and tails, while the microstates are specified by the facings of each individual coin (i.e., the exact order in which heads and tails occur). For the macrostates of 100 heads or 100 tails, there is exactly one possible configuration, so our knowledge of the system is complete. At the opposite extreme, the macrostate which gives us the least knowledge about the system consists of 50 heads and 50 tails in any order, for which there are (100 choose 50) ≈ 1029 possible microstates. Even when a system is entirely isolated from external influences, its microstate is constantly changing. For instance, the particles in a gas are constantly moving, and thus occupy a different position at each moment of time; their momenta are also constantly changing as they collide with each other or with the container walls. Suppose we prepare the system in an artificially highly ordered equilibrium state. For instance, imagine dividing a container with a partition and placing a gas on one side of the partition, with a vacuum on the other side. If we remove the partition and watch the subsequent behavior of the gas, we will find that its microstate evolves according to some chaotic and unpredictable pattern, and that on average these microstates will correspond to a more disordered macrostate than before. It is possible, but extremely unlikely, for the gas molecules to bounce off one another in such a way that they remain in one half of the container. It is overwhelmingly probable for the gas to spread out to fill the container evenly, which is the new equilibrium macrostate of the system. This is an example illustrating the second law of thermodynamics: the total entropy of any isolated thermodynamic system tends to increase over time, approaching a maximum value. Since its discovery, this idea has been the focus of a great deal of thought, some of it confused. A chief point of confusion is the fact that the Second Law applies only to isolated systems. For example, the Earth is not an isolated system because it is constantly receiving energy in the form of sunlight. In contrast, the universe may be considered an isolated system, so that its total entropy is constantly increasing. (Needs clarification. See: Second law of thermodynamics#cite note-Grandy 151-21) Counting of microstates In classical statistical mechanics, the number of microstates is actually uncountably infinite, since the properties of classical systems are continuous. For example, a microstate of a classical ideal gas is specified by the positions and momenta of all the atoms, which range continuously over the real numbers. 
If we want to define Ω, we have to come up with a method of grouping the microstates together to obtain a countable set. This procedure is known as coarse graining. In the case of the ideal gas, we count two states of an atom as the "same" state if their positions and momenta are within δx and δp of each other. Since the values of δx and δp can be chosen arbitrarily, the entropy is not uniquely defined. It is defined only up to an additive constant. (As we will see, the thermodynamic definition of entropy is also defined only up to a constant.) To avoid coarse graining one can take the entropy as defined by the H-theorem. However, this ambiguity can be resolved with quantum mechanics. The quantum state of a system can be expressed as a superposition of "basis" states, which can be chosen to be energy eigenstates (i.e. eigenstates of the quantum Hamiltonian). Usually, the quantum states are discrete, even though there may be an infinite number of them. For a system with some specified energy E, one takes Ω to be the number of energy eigenstates within a macroscopically small energy range between E and . In the thermodynamical limit, the specific entropy becomes independent on the choice of δE. An important result, known as Nernst's theorem or the third law of thermodynamics, states that the entropy of a system at zero absolute temperature is a well-defined constant. This is because a system at zero temperature exists in its lowest-energy state, or ground state, so that its entropy is determined by the degeneracy of the ground state. Many systems, such as crystal lattices, have a unique ground state, and (since ) this means that they have zero entropy at absolute zero. Other systems have more than one state with the same, lowest energy, and have a non-vanishing "zero-point entropy". For instance, ordinary ice has a zero-point entropy of , because its underlying crystal structure possesses multiple configurations with the same energy (a phenomenon known as geometrical frustration). The third law of thermodynamics states that the entropy of a perfect crystal at absolute zero () is zero. This means that nearly all molecular motion should cease. The oscillator equation for predicting quantized vibrational levels shows that even when the vibrational quantum number is 0, the molecule still has vibrational energy: where is the Planck constant, is the characteristic frequency of the vibration, and is the vibrational quantum number. Even when (the zero-point energy), does not equal 0, in adherence to the Heisenberg uncertainty principle. See also Boltzmann constant Configuration entropy Conformational entropy Enthalpy Entropy Entropy (classical thermodynamics) Entropy (energy dispersal) Entropy of mixing Entropy (order and disorder) Entropy (information theory) History of entropy Information theory Thermodynamic free energy Tsallis entropy References Thermodynamic entropy
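A small sketch of the closing formula: the zero-point vibrational energy from the quantized oscillator levels; the frequency value is an assumed, order-of-magnitude figure for a diatomic molecule, used only for demonstration.

```python
from math import isclose

H = 6.62607015e-34  # Planck constant, J·s

def vibrational_energy(nu, v):
    """Quantized oscillator levels E_v = h * nu * (v + 1/2)."""
    return H * nu * (v + 0.5)

nu = 6.4e13  # assumed vibrational frequency (Hz), order of a light diatomic

# Nonzero energy even in the ground state v = 0, per the uncertainty principle.
print(f"zero-point energy E_0 = {vibrational_energy(nu, 0):.3e} J")

# The level spacing is uniform: E_{v+1} - E_v = h * nu for every v.
assert isclose(vibrational_energy(nu, 3) - vibrational_energy(nu, 2), H * nu)
```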
Entropy (statistical thermodynamics)
[ "Physics" ]
2,723
[ "Statistical mechanics", "Entropy", "Physical quantities", "Thermodynamic entropy" ]
4,701,197
https://en.wikipedia.org/wiki/Entropy%20as%20an%20arrow%20of%20time
Entropy is one of the few quantities in the physical sciences that require a particular direction for time, sometimes called an arrow of time. As one goes "forward" in time, the second law of thermodynamics says, the entropy of an isolated system can increase, but not decrease. Thus, entropy measurement is a way of distinguishing the past from the future. In thermodynamic systems that are not isolated, local entropy can decrease over time, accompanied by a compensating entropy increase in the surroundings; examples include objects undergoing cooling, living systems, and the formation of typical crystals. Much like temperature, despite being an abstract concept, everyone has an intuitive sense of the effects of entropy. For example, it is often very easy to tell the difference between a video being played forwards or backwards. A video may depict a wood fire that melts a nearby ice block; played in reverse, it would show a puddle of water turning a cloud of smoke into unburnt wood and freezing itself in the process. Surprisingly, in either case, the vast majority of the laws of physics are not broken by these processes, with the second law of thermodynamics being one of the only exceptions. When a law of physics applies equally when time is reversed, it is said to show T-symmetry; in this case, entropy is what allows one to decide if the video described above is playing forwards or in reverse as intuitively we identify that only when played forwards the entropy of the scene is increasing. Because of the second law of thermodynamics, entropy prevents macroscopic processes showing T-symmetry. When studying at a microscopic scale, the above judgements cannot be made. Watching a single smoke particle buffeted by air, it would not be clear if a video was playing forwards or in reverse, and, in fact, it would not be possible as the laws which apply show T-symmetry. As it drifts left or right, qualitatively it looks no different; it is only when the gas is studied at a macroscopic scale that the effects of entropy become noticeable (see Loschmidt's paradox). On average it would be expected that the smoke particles around a struck match would drift away from each other, diffusing throughout the available space. It would be an astronomically improbable event for all the particles to cluster together, yet the movement of any one smoke particle cannot be predicted. By contrast, certain subatomic interactions involving the weak nuclear force violate the conservation of parity, but only very rarely. According to the CPT theorem, this means they should also be time irreversible, and so establish an arrow of time. This, however, is neither linked to the thermodynamic arrow of time, nor has anything to do with the daily experience of time irreversibility. Overview The second law of thermodynamics allows for the entropy to remain the same regardless of the direction of time. If the entropy is constant in either direction of time, there would be no preferred direction. However, the entropy can only be a constant if the system is in the highest possible state of disorder, such as a gas that always was, and always will be, uniformly spread out in its container. The existence of a thermodynamic arrow of time implies that the system is highly ordered in one time direction only, which would by definition be the "past". Thus this law is about the boundary conditions rather than the equations of motion. 
The second law of thermodynamics is statistical in nature, and therefore its reliability arises from the huge number of particles present in macroscopic systems. It is not impossible, in principle, for all 6 × 1023 atoms in a mole of a gas to spontaneously migrate to one half of a container; it is only fantastically unlikely—so unlikely that no macroscopic violation of the Second Law has ever been observed. The thermodynamic arrow is often linked to the cosmological arrow of time, because it is ultimately about the boundary conditions of the early universe. According to the Big Bang theory, the Universe was initially very hot with energy distributed uniformly. For a system in which gravity is important, such as the universe, this is a low-entropy state (compared to a high-entropy state of having all matter collapsed into black holes, a state to which the system may eventually evolve). As the Universe grows, its temperature drops, which leaves less energy [per unit volume of space] available to perform work in the future than was available in the past. Additionally, perturbations in the energy density grow (eventually forming galaxies and stars). Thus the Universe itself has a well-defined thermodynamic arrow of time. But this does not address the question of why the initial state of the universe was that of low entropy. If cosmic expansion were to halt and reverse due to gravity, the temperature of the Universe would once again grow hotter, but its entropy would also continue to increase due to the continued growth of perturbations and the eventual black hole formation, until the latter stages of the Big Crunch when entropy would be lower than now. An example of apparent irreversibility Consider the situation in which a large container is filled with two separated liquids, for example a dye on one side and water on the other. With no barrier between the two liquids, the random jostling of their molecules will result in them becoming more mixed as time passes. However, if the dye and water are mixed then one does not expect them to separate out again when left to themselves. A movie of the mixing would seem realistic when played forwards, but unrealistic when played backwards. If the large container is observed early on in the mixing process, it might be found only partially mixed. It would be reasonable to conclude that, without outside intervention, the liquid reached this state because it was more ordered in the past, when there was greater separation, and will be more disordered, or mixed, in the future. Now imagine that the experiment is repeated, this time with only a few molecules, perhaps ten, in a very small container. One can easily imagine that by watching the random jostling of the molecules it might occur—by chance alone—that the molecules became neatly segregated, with all dye molecules on one side and all water molecules on the other. That this can be expected to occur from time to time can be concluded from the fluctuation theorem; thus it is not impossible for the molecules to segregate themselves. However, for a large number of molecules it is so unlikely that one would have to wait, on average, many times longer than the current age of the universe for it to occur. Thus a movie that showed a large number of molecules segregating themselves as described above would appear unrealistic and one would be inclined to say that the movie was being played in reverse. See Boltzmann's second law as a law of disorder. 
Mathematics of the arrow The mathematics behind the arrow of time, entropy, and the basis of the second law of thermodynamics derive from the following set-up, as detailed by Carnot (1824), Clapeyron (1832), and Clausius (1854): Here, as common experience demonstrates, when a hot body T1, such as a furnace, is put into physical contact, such as being connected via a body of fluid (working body), with a cold body T2, such as a stream of cold water, energy will invariably flow from hot to cold in the form of heat Q, and given time the system will reach equilibrium. Entropy, defined as Q/T, was conceived by Rudolf Clausius as a function to measure the molecular irreversibility of this process, i.e. the dissipative work the atoms and molecules do on each other during the transformation. In this arrangement, one can calculate the entropy change ΔS for the passage of the quantity of heat Q from the temperature T1, through the "working body" of fluid (see heat engine), which was typically a body of steam, to the temperature T2. Moreover, one could assume, for the sake of argument, that the working body contains only two molecules of water. Next, if we make the assignment, as originally done by Clausius: $$S = \frac{Q}{T},$$ then the entropy change or "equivalence-value" for this transformation is: $$\Delta S = S_{\text{final}} - S_{\text{initial}},$$ which equals: $$\Delta S = \frac{Q}{T_2} - \frac{Q}{T_1},$$ and by factoring out Q, we have the following form, as was derived by Clausius: $$\Delta S = Q\left(\frac{1}{T_2} - \frac{1}{T_1}\right).$$ Thus, for example, if Q was 50 units, T1 was initially 100 degrees, and T2 was 1 degree, then the entropy change for this process would be ΔS = 50 × (1/1 − 1/100) = 49.5. Hence, entropy increased for this process, the process took a certain amount of "time", and one can correlate entropy increase with the passage of time. For this system configuration, subsequently, it is an "absolute rule". This rule is based on the fact that all natural processes are irreversible by virtue of the fact that molecules of a system, for example two molecules in a tank, not only do external work (such as to push a piston), but also do internal work on each other, in proportion to the heat used to do work (see: Mechanical equivalent of heat) during the process. Entropy accounts for the fact that internal inter-molecular friction exists. Correlations An important difference between the past and the future is that in any system (such as a gas of particles) its initial conditions are usually such that its different parts are uncorrelated, but as the system evolves and its different parts interact with each other, they become correlated. For example, whenever dealing with a gas of particles, it is always assumed that its initial conditions are such that there is no correlation between the states of different particles (i.e. the speeds and locations of the different particles are completely random, up to the need to conform with the macrostate of the system). This is closely related to the second law of thermodynamics: for example, in a finite system interacting with finite heat reservoirs, entropy is equivalent to system-reservoir correlations, and thus both increase together. Take for example (experiment A) a closed box that is, at the beginning, half-filled with ideal gas. As time passes, the gas obviously expands to fill the whole box, so that the final state is a box full of gas. This is an irreversible process, since if the box is full at the beginning (experiment B), it does not become only half-full later, except for the very unlikely situation where the gas particles have very special locations and speeds.
But this is precisely because we always assume that the initial conditions in experiment B are such that the particles have random locations and speeds. This is not correct for the final conditions of the system in experiment A, because the particles have interacted between themselves, so that their locations and speeds have become dependent on each other, i.e. correlated. This can be understood if we look at experiment A backwards in time, which we'll call experiment C: now we begin with a box full of gas, but the particles do not have random locations and speeds; rather, their locations and speeds are so particular, that after some time they all move to one half of the box, which is the final state of the system (this is the initial state of experiment A, because now we're looking at the same experiment backwards!). The interactions between particles now do not create correlations between the particles, but in fact turn them into (at least seemingly) random, "canceling" the pre-existing correlations. The only difference between experiment C (which defies the Second Law of Thermodynamics) and experiment B (which obeys the Second Law of Thermodynamics) is that in the former the particles are uncorrelated at the end, while in the latter the particles are uncorrelated at the beginning. In fact, if all the microscopic physical processes are reversible (see discussion below), then the Second Law of Thermodynamics can be proven for any isolated system of particles with initial conditions in which the particles states are uncorrelated. To do this, one must acknowledge the difference between the measured entropy of a system—which depends only on its macrostate (its volume, temperature etc.)—and its information entropy, which is the amount of information (number of computer bits) needed to describe the exact microstate of the system. The measured entropy is independent of correlations between particles in the system, because they do not affect its macrostate, but the information entropy does depend on them, because correlations lower the randomness of the system and thus lowers the amount of information needed to describe it. Therefore, in the absence of such correlations the two entropies are identical, but otherwise the information entropy is smaller than the measured entropy, and the difference can be used as a measure of the amount of correlations. Now, by Liouville's theorem, time-reversal of all microscopic processes implies that the amount of information needed to describe the exact microstate of an isolated system (its information-theoretic joint entropy) is constant in time. This joint entropy is equal to the marginal entropy (entropy assuming no correlations) plus the entropy of correlation (mutual entropy, or its negative mutual information). If we assume no correlations between the particles initially, then this joint entropy is just the marginal entropy, which is just the initial thermodynamic entropy of the system, divided by the Boltzmann constant. However, if these are indeed the initial conditions (and this is a crucial assumption), then such correlations form with time. In other words, there is a decreasing mutual entropy (or increasing mutual information), and for a time that is not too long—the correlations (mutual information) between particles only increase with time. 
Therefore, the thermodynamic entropy, which is proportional to the marginal entropy, must also increase with time (note that "not too long" in this context is relative to the time needed, in a classical version of the system, for it to pass through all its possible microstates, a time that can be roughly estimated as $\tau e^{S}$, where $\tau$ is the time between particle collisions and $S$ is the system's entropy; in any practical case this time is huge compared to everything else). Note that the correlation between particles is not a fully objective quantity. One cannot measure the mutual entropy directly; one can only measure its change, assuming one can measure a microstate. Thermodynamics is restricted to the case where microstates cannot be distinguished, which means that only the marginal entropy, proportional to the thermodynamic entropy, can be measured, and, in a practical sense, it always increases. Arrow of time in various phenomena Phenomena that occur differently according to their time direction can ultimately be linked to the second law of thermodynamics: for example, ice cubes melt in hot coffee rather than assembling themselves out of the coffee, and a block sliding on a rough surface slows down rather than speeds up. The idea that we can remember the past and not the future is called the "psychological arrow of time", and it has deep connections with Maxwell's demon and the physics of information; memory is linked to the second law of thermodynamics if one views it as correlation between brain cells (or computer bits) and the outer world: since such correlations increase with time, memory is linked to past events, rather than to future events. Current research Current research focuses mainly on describing the thermodynamic arrow of time mathematically, either in classical or quantum systems, and on understanding its origin from the point of view of cosmological boundary conditions. Dynamical systems Some current research in dynamical systems indicates a possible "explanation" for the arrow of time. There are several ways to describe the time evolution of a dynamical system. In the classical framework, one considers an ordinary differential equation, where the parameter is explicitly time. By the very nature of differential equations, the solutions to such systems are inherently time-reversible. However, many of the interesting cases are either ergodic or mixing, and it is strongly suspected that mixing and ergodicity somehow underlie the fundamental mechanism of the arrow of time. While this suspicion may rest on little more than intuition, it is true that when there are multiple parameters, the field of partial differential equations comes into play. In such systems the Feynman–Kac formula is in play, which assures, for specific cases, a one-to-one correspondence between a specific linear stochastic differential equation and a partial differential equation. Therefore, any partial differential equation system is tantamount to a random system of a single parameter, which is not reversible, due to the aforementioned correspondence. Mixing and ergodic systems do not have exact solutions, and thus proving time irreversibility in a mathematical sense is, to date, impossible. The concept of "exact" solutions is an anthropic one: does "exact" mean a closed form in terms of already known expressions, or simply a single finite sequence of symbols?
Myriad systems known to humanity are abstract and have recursive definitions, but no non-self-referential notation currently exists for them. As a result of this complexity, it is natural to look elsewhere for different examples and perspectives. Some progress can be made by studying discrete-time models or difference equations. Many discrete-time models, such as the iterated functions considered in popular fractal-drawing programs, are explicitly not time-reversible, as any given point "in the present" may have several different "pasts" associated with it: indeed, the set of all pasts is known as the Julia set. Since such systems have a built-in irreversibility, it is inappropriate to use them to explain why time is not reversible. There are other systems that are chaotic, and are also explicitly time-reversible: among these is the baker's map, which is also exactly solvable (a numerical sketch of this map appears below). An interesting avenue of study is to examine solutions to such systems not by iterating the dynamical system over time, but instead by studying the corresponding Frobenius–Perron operator or transfer operator for the system. For some of these systems, it can be explicitly, mathematically shown that the transfer operators are not trace-class. This means that these operators do not have a unique eigenvalue spectrum that is independent of the choice of basis. In the case of the baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time. That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not. Furthermore, the transfer operator can be diagonalized in one of two inequivalent ways: one that describes the forward-time evolution of the system, and one that describes the backwards-time evolution. As of 2006, this type of time-symmetry breaking has been demonstrated for only a very small number of exactly-solvable, discrete-time systems. The transfer operator for more complex systems has not been consistently formulated, and its precise definition is mired in a variety of subtle difficulties. In particular, it has not been shown that it has a broken symmetry for the simplest exactly-solvable continuous-time ergodic systems, such as Hadamard's billiards, or the Anosov flow on the tangent space of PSL(2,R). Quantum mechanics Research on irreversibility in quantum mechanics takes several different directions. One avenue is the study of rigged Hilbert spaces, and in particular, how discrete and continuous eigenvalue spectra intermingle. For example, the rational numbers are completely intermingled with the real numbers, and yet have a unique, distinct set of properties. It is hoped that the study of Hilbert spaces with a similar inter-mingling will provide insight into the arrow of time. Another distinct approach is through the study of quantum chaos, in which attempts are made to quantize systems that are classically chaotic, ergodic or mixing. The results obtained are not dissimilar from those that come from the transfer operator method.
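Returning to the baker's map mentioned above, the following sketch (illustrative only, not from the original text) shows the sense in which the iterated map itself is time-symmetric: the map is a bijection of the unit square, so every point has exactly one future and one past, and composing the map with its inverse recovers the starting point up to floating-point rounding.

```python
def baker(x, y):
    """One forward step of the baker's map on the unit square."""
    if x < 0.5:
        return 2 * x, y / 2           # stretch the left half, squash down
    return 2 * x - 1, (y + 1) / 2     # stretch the right half, squash up

def baker_inverse(x, y):
    """One backward step: the unique preimage of (x, y)."""
    if y < 0.5:
        return x / 2, 2 * y
    return (x + 1) / 2, 2 * y - 1

pt = (0.1234, 0.5678)
fwd = pt
for _ in range(10):
    fwd = baker(*fwd)                # ten steps forward in time
back = fwd
for _ in range(10):
    back = baker_inverse(*back)      # and ten steps backward
print(pt, back)                      # identical up to rounding
```

The irreversibility discussed above therefore does not reside in individual trajectories, which are perfectly reversible, but in the transfer operator that describes ensembles of trajectories.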
As an example of this quantum-chaos approach, the quantization of the Boltzmann gas, that is, a gas of hard (elastic) point particles in a rectangular box, reveals that the eigenfunctions are space-filling fractals that occupy the entire box, and that the energy eigenvalues are very closely spaced and have an "almost continuous" spectrum (for a finite number of particles in a box, the spectrum must be, of necessity, discrete). If the initial conditions are such that all of the particles are confined to one side of the box, the system very quickly evolves into one where the particles fill the entire box. Even when all of the particles are initially on one side of the box, their wave functions do, in fact, permeate the entire box: they constructively interfere on one side, and destructively interfere on the other. Irreversibility is then argued by noting that it is "nearly impossible" for the wave functions to be "accidentally" arranged in some unlikely state: such arrangements are a set of zero measure. Because the eigenfunctions are fractals, much of the language and machinery of entropy and statistical mechanics can be imported to discuss and argue the quantum case. Cosmology Some processes that involve high energy particles and are governed by the weak force (such as K-meson decay) defy the symmetry between time directions. However, all known physical processes do preserve a more complicated symmetry (CPT symmetry), and are therefore unrelated to the second law of thermodynamics, or to the day-to-day experience of the arrow of time. A notable exception is the wave function collapse in quantum mechanics, an irreversible process which is considered either real (by the Copenhagen interpretation) or apparent only (by the many-worlds interpretation of quantum mechanics). In either case, the wave function collapse always follows quantum decoherence, a process which is understood to be a result of the second law of thermodynamics. The universe was in a uniform, high density state at its very early stages, shortly after the Big Bang. The hot gas in the early universe was near thermodynamic equilibrium (see Horizon problem); in systems where gravitation plays a major role, this is a state of low entropy, due to the negative heat capacity of such systems (in contrast to non-gravitational systems, where thermodynamic equilibrium is a state of maximum entropy). Moreover, because the universe's volume was small compared to future epochs, the entropy was lower still, since gas expansion increases entropy. Thus the early universe can be considered to be highly ordered. Note that the uniformity of this early near-equilibrium state has been explained by the theory of cosmic inflation. According to this theory the universe (or, rather, its accessible part, a radius of 46 billion light years around Earth) evolved from a tiny, totally uniform volume (a portion of a much bigger universe), which expanded greatly; hence it was highly ordered. Fluctuations were then created by quantum processes related to its expansion, in a manner supposed to be such that these fluctuations went through quantum decoherence, so that they became uncorrelated for any practical purpose. This is supposed to give the desired initial conditions needed for the Second Law of Thermodynamics; different decoherent states ultimately evolved to different specific arrangements of galaxies and stars.
The universe is apparently an open universe, so that its expansion will never terminate, but it is an interesting thought experiment to imagine what would have happened had the universe been closed. In such a case, its expansion would stop at a certain time in the distant future, and then begin to shrink. Moreover, a closed universe is finite. It is unclear what would happen to the second law of thermodynamics in such a case. One could imagine at least two different scenarios, though in fact only the first one is plausible, as the other requires a highly smooth cosmic evolution, contrary to what is observed: The broad consensus among the scientific community today is that smooth initial conditions lead to a highly non-smooth final state, and that this is in fact the source of the thermodynamic arrow of time. Gravitational systems tend to gravitationally collapse to compact bodies such as black holes (a phenomenon unrelated to wavefunction collapse), so the universe would end in a Big Crunch that is very different from a Big Bang run in reverse, since the distribution of the matter would be highly non-smooth; as the universe shrinks, such compact bodies merge to larger and larger black holes. It may even be that it is impossible for the universe to have both a smooth beginning and a smooth ending. Note that in this scenario the energy density of the universe in the final stages of its shrinkage is much larger than in the corresponding initial stages of its expansion (there is no destructive interference, unlike in the second scenario described below), and consists of mostly black holes rather than free particles. A highly controversial view is that instead, the arrow of time will reverse. The quantum fluctuations (which in the meantime have evolved into galaxies and stars) will be in superposition in such a way that the whole process described above is reversed: the fluctuations are erased by destructive interference and total uniformity is achieved once again. Thus the universe ends in a Big Crunch, which is similar to its beginning in the Big Bang. Because the two are totally symmetric, and the final state is very highly ordered, entropy must decrease close to the end of the universe, so that the second law of thermodynamics reverses when the universe shrinks. This can be understood as follows: in the very early universe, interactions between fluctuations created entanglement (quantum correlations) between particles spread all over the universe; during the expansion, these particles became so distant that these correlations became negligible (see quantum decoherence). At the time the expansion halts and the universe starts to shrink, such correlated particles arrive once again at contact (after circling around the universe), and the entropy starts to decrease, because highly correlated initial conditions may lead to a decrease in entropy. Another way of putting it is that as distant particles arrive, more and more order is revealed, because these particles are highly correlated with particles that arrived earlier. In this scenario, the cosmological arrow of time is the reason for both the thermodynamic arrow of time and the quantum arrow of time. Both will slowly disappear as the universe comes to a halt, and will later be reversed. In the first and more widely accepted scenario, it is the difference between the initial state and the final state of the universe that is responsible for the thermodynamic arrow of time. This is independent of the cosmological arrow of time.
See also Arrow of time Cosmic inflation Entropy H-theorem History of entropy Loschmidt's paradox References External links Thermodynamic Asymmetry in Time at the online Stanford Encyclopedia of Philosophy Thermodynamic entropy Asymmetry
Entropy as an arrow of time
[ "Physics" ]
5,730
[ "Physical quantities", "Thermodynamic entropy", "Entropy", "Asymmetry", "Statistical mechanics", "Symmetry" ]
1,831,103
https://en.wikipedia.org/wiki/Foamcore
Foamcore, foam board, or paper-faced foam board is a lightweight and easily cut material used for mounting photographic prints, as backing for picture framing, for making scale models, and in painting. It consists of a board of polystyrene foam clad with an outer facing of paper on either side, typically white clay-coated paper or brown kraft paper. History The original white foamcore board was made in thicknesses for the graphic arts industry by Monsanto Company under the trade name "Fome-Cor®" starting in 1961. Construction, variants and composition The surface of the regular board, like many other types of paper, is slightly acidic. However, for modern archival picture framing and art mounting purposes it can be produced in a neutral, acid-free version with a buffered surface paper, in a wide range of sizes and thicknesses. Foam-cored materials are also now available with a cladding of solid (non-foamed) polystyrene and other rigid plastic sheeting, some with a textured finish. Foamcore does not adhere well to some glues, such as superglue, and certain types of paint; the foam tends to melt away and dissolve. Some glues work well in casual settings; however, the water in the glue can warp the fibers in the outer layers. Best results are typically obtained from higher-end spray adhesives. A hot glue gun can be used as a substitute, although the high viscosity of hot glues can affect finished projects in the form of board warping, bubbles, or other unsightly blemishes. Self-adhesive foam boards, intended for art and document mounting, are also available, though these can be tricky to use properly because the glue sets very fast. It is often considered cheaper to buy plain foam board and use re-positionable spray mount adhesive. Specialty constructions have been developed for engineering uses. Uses Foamcore is commonly used to produce architectural models, prototype small objects and to produce patterns for casting. Scenery for scale model displays, dioramas, and computer games is often produced by hobbyists from foamcore. Foamcore is also often used by photographers as a reflector to bounce light, in the design industry to mount presentations of new products, and in picture framing as a backing material; the latter use includes some archival picture framing methods, which utilize the acid-free versions of the material. Another use is by aero-modellers for building radio-controlled aircraft. Researchers at the University of Manchester created their Giant Foamboard Quadcopter (GFQ), claimed to be the largest possible Civil Aviation Authority licensed drone, with an all-up weight (AUW) just below the maximum of 25 kg (c. 55 lb). See also Corrugated fiberboard (Cardboard) Closed-cell PVC foamboard Arts and crafts Mat (picture framing) References Visual arts materials Composite materials
Foamcore
[ "Physics" ]
595
[ "Materials", "Composite materials", "Matter" ]
1,831,357
https://en.wikipedia.org/wiki/Hyetograph
A hyetograph is a graphical representation of the distribution of rainfall intensity over time. For instance, in the 24-hour rainfall distributions as developed by the Soil Conservation Service (now the NRCS, or Natural Resources Conservation Service), rainfall intensity progressively increases until it reaches a maximum and then gradually decreases. Where this maximum occurs and how quickly it is reached are what differentiate one distribution from another. One important aspect to understand is that the distributions are for design storms, not necessarily actual storms; in other words, a real storm may not behave in this same fashion, and the maximum intensity may not be reached as uniformly as shown in the SCS hyetographs. See also Voronoi diagram - a method adaptable for calculating the average precipitation over an area External links Precipitation Maps for USA Hydrology
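As an illustration of the single-peak shape described above, the sketch below builds a synthetic design-storm hyetograph; the numbers are arbitrary illustrative values, not an actual SCS distribution.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic 24-hour design storm: intensity ramps up to a peak and
# then decays. The peak time, width and amplitude are illustrative only.
t = np.linspace(0, 24, 97)            # time in hours (15-minute steps)
peak_time, spread = 12.0, 3.0         # hypothetical peak hour and width
intensity = 50 * np.exp(-((t - peak_time) / spread) ** 2)  # mm/h

plt.bar(t, intensity, width=0.25, align="edge")
plt.xlabel("time (h)")
plt.ylabel("rainfall intensity (mm/h)")
plt.title("Synthetic design-storm hyetograph")
plt.show()
```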
Hyetograph
[ "Chemistry", "Engineering", "Environmental_science" ]
164
[ "Hydrology", "Hydrology stubs", "Environmental engineering" ]
1,831,487
https://en.wikipedia.org/wiki/Eddy%20current%20brake
An eddy current brake, also known as an induction brake, Faraday brake, electric brake or electric retarder, is a device used to slow or stop a moving object by generating eddy currents and thus dissipating its kinetic energy as heat. Unlike friction brakes, where the drag force that stops the moving object is provided by friction between two surfaces pressed together, the drag force in an eddy current brake is an electromagnetic force between a magnet and a nearby conductive object in relative motion, due to eddy currents induced in the conductor through electromagnetic induction. A conductive surface moving past a stationary magnet develops circular electric currents called eddy currents induced in it by the magnetic field, as described by Faraday's law of induction. By Lenz's law, the circulating currents create their own magnetic field that opposes the field of the magnet. Thus the moving conductor experiences a drag force from the magnet that opposes its motion, proportional to its velocity. The kinetic energy of the moving object is dissipated as heat generated by the current flowing through the electrical resistance of the conductor. In an eddy current brake the magnetic field may be created by a permanent magnet or an electromagnet. With an electromagnet system, the braking force can be turned on and off (or varied) by varying the electric current in the electromagnet windings. Another advantage is that since the brake does not work by friction, there are no brake shoe surfaces to wear, eliminating the periodic replacement required with friction brakes. A disadvantage is that since the braking force is proportional to the relative velocity of the brake, the brake has no holding force when the moving object is stationary, as is provided by static friction in a friction brake; hence in vehicles it must be supplemented by a friction brake. In some cases, rotational kinetic energy stored within a motor or other machine is used to energize any electromagnets involved. The result is a motor or other machine that rapidly comes to rest when power is removed. Care must be taken in such designs to ensure that components involved are not stressed beyond operational limits during such deceleration, which may greatly exceed design forces of acceleration during normal operation. Eddy current brakes are used to slow high-speed trains and roller coasters, as a complement for friction brakes in semi-trailer trucks to help prevent brake wear and overheating, to stop powered tools quickly when power is turned off, and in electric meters used by electric utilities. Mechanism and principle An eddy current brake consists of a conductive piece of metal, either a straight bar or a disk, which moves through the magnetic field of a magnet, either a permanent magnet or an electromagnet. When it moves past the stationary magnet, the magnet exerts a drag force on the metal which opposes its motion, due to circular electric currents called eddy currents induced in the metal by the magnetic field. Note that the conductive sheet is not made of ferromagnetic metal such as iron or steel; usually copper or aluminum are used, which are not attracted to a magnet. The brake does not work by the simple attraction of a ferromagnetic metal to the magnet. Consider a metal sheet (C) moving to the right under a magnet. The magnetic field (B, green arrows) of the magnet's north pole N passes down through the sheet. Since the metal is moving, the magnetic flux through the sheet is changing.
At the part of the sheet under the leading edge of the magnet (left side) the magnetic field through the sheet is increasing as it gets nearer the magnet. From Faraday's law of induction, this field induces a counterclockwise flow of electric current (I, red) in the sheet. This is the eddy current. In contrast, at the trailing edge of the magnet (right side) the magnetic field through the sheet is decreasing, inducing a clockwise eddy current in the sheet. Another way to understand the action is to see that the free charge carriers (electrons) in the metal sheet are moving to the right, so the magnetic field exerts a sideways force on them due to the Lorentz force. Since the velocity v of the charges is to the right and the magnetic field B is directed down, from the right hand rule the Lorentz force on positive charges qv×B is toward the rear in the diagram (to the left when facing in the direction of motion of the sheet). This causes a current I toward the rear under the magnet, which circles around through parts of the sheet outside the magnetic field in two currents, clockwise to the right and counterclockwise to the left, to the front of the magnet again. The mobile charge carriers in the metal, the electrons, actually have a negative charge, so their motion is opposite in direction to the conventional current shown. As described by Ampere's circuital law, each of these circular currents creates a counter magnetic field (blue arrows), which in accordance with Lenz's law opposes the change in magnetic field, causing a drag force on the sheet which is the braking force exerted by the brake. At the leading edge of the magnet (left side), by the right hand rule, the counterclockwise current creates a magnetic field pointed up, opposing the magnet's field, causing a repulsive force between the sheet and the leading edge of the magnet. In contrast, at the trailing edge (right side), the clockwise current causes a magnetic field pointed down, in the same direction as the magnet's field, creating an attractive force between the sheet and the trailing edge of the magnet. Both of these forces oppose the motion of the sheet. The kinetic energy which is consumed overcoming this drag force is dissipated as heat by the currents flowing through the resistance of the metal, so the metal gets warm under the magnet. The braking force of an eddy current brake is exactly proportional to the velocity V, so it acts similarly to viscous friction in a liquid (a numerical sketch of the resulting exponential slowdown appears below). The braking force decreases as the velocity decreases. When the conductive sheet is stationary, the magnetic field through each part of it is constant, not changing with time, so no eddy currents are induced, and there is no force between the magnet and the conductor. Thus an eddy current brake has no holding force. Eddy current brakes come in two geometries: In a linear eddy current brake, the conductive piece is a straight rail or track that the magnet moves along. In a circular, disk or rotary eddy current brake, the conductor is a flat disk rotor that turns between the poles of the magnet. The physical working principle is the same for both. Disk eddy current brakes Disk electromagnetic brakes are used on vehicles such as trains, and power tools such as circular saws, to stop the blade quickly when the power is turned off.
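Returning to the velocity-proportional drag described above: because the force scales with speed, an object slowed only by an eddy current brake loses speed exponentially and never quite reaches zero. A minimal numerical sketch (the constants are hypothetical, not data for any real brake):

```python
import math

def eddy_brake_speed(v0, k, m, t):
    """Speed at time t under drag F = -k*v alone: v(t) = v0*exp(-k*t/m)."""
    return v0 * math.exp(-k * t / m)

v0 = 30.0    # initial speed, m/s (hypothetical)
k = 120.0    # drag coefficient, N*s/m (hypothetical)
m = 1200.0   # vehicle mass, kg (hypothetical)
for t in [0, 10, 20, 40]:
    print(f"t = {t:>3} s   v = {eddy_brake_speed(v0, k, m, t):6.2f} m/s")
# The speed decays toward zero but never reaches it, which is why a
# friction brake is still needed to hold the vehicle at rest.
```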
A disk eddy current brake consists of a conductive non-ferromagnetic metal disc (rotor) attached to the axle of the vehicle's wheel, with an electromagnet located with its poles on each side of the disk, so the magnetic field passes through the disk. The electromagnet allows the braking force to be varied. When no current is passed through the electromagnet's winding, there is no braking force. When the driver steps on the brake pedal, current is passed through the electromagnet windings, creating a magnetic field. The greater the current in the winding, the greater the eddy currents and the stronger the braking force. Power tool brakes use permanent magnets, which are moved adjacent to the disk by a linkage when the power is turned off. The kinetic energy of the vehicle's motion is dissipated in Joule heating by the eddy currents passing through the disk's resistance, so like conventional friction disk brakes, the disk becomes hot. Unlike in the linear brake below, the metal of the disk passes repeatedly through the magnetic field, so disk eddy current brakes get hotter than linear eddy current brakes. Japanese Shinkansen trains employed a circular eddy current brake system on trailer cars from the 100 Series Shinkansen onward. The N700 Series Shinkansen abandoned eddy current brakes in favour of regenerative brakes, since 14 of the 16 cars in the trainset used electric motors. In regenerative brakes, the motor that drives the wheel is used as a generator to produce electric current, which can be used to charge a battery, enabling the energy to be reused. Dynamometer eddy current absorbers Most chassis dynamometers and many engine dynos use an eddy-current brake as a means of providing an electrically adjustable load on the engine. They are often referred to as an "absorber" in such applications. Inexpensive air-cooled versions are typically used on chassis dynamometers, where their inherently high-inertia steel rotors are an asset rather than a liability. Conversely, performance engine dynamometers tend to utilize low-inertia, high-RPM, liquid-cooled configurations. Downsides of eddy-current absorbers in such applications, compared to expensive AC-motor-based dynamometers, are their inability to provide stall-speed (zero RPM) loading or to motor the engine for starting or downhill simulation. Since the absorbed power is converted to heat rather than returned to the grid, provisions to transfer the radiated heat out of the test cell area must be made; either a high-volume air-ventilation system or a water-to-air heat exchanger adds cost and complexity. In contrast, high-end AC-motor dynamometers cleanly return the engine's power to the grid. Linear eddy current brakes Linear eddy current brakes are used on some rail vehicles, such as trains. They are used on roller coasters, to stop cars smoothly at the end of the ride. The linear eddy current brake consists of a magnetic yoke with electrical coils positioned along the rail, which are magnetized alternately as south and north magnetic poles. This magnet does not touch the rail, but is held at a constant small distance from the rail of approximately 7 mm (the eddy current brake should not be confused with another device, the magnetic brake, which exerts its braking force by friction of a brake shoe with the rail). Unlike mechanical brakes, which are based on friction, eddy current brakes rely on electromagnetism to stop objects from moving.
It works in the same way as a disk eddy current brake, by inducing closed loops of eddy current in the conductive rail, which generate counter magnetic fields that oppose the motion of the train. The kinetic energy of the moving vehicle is converted to heat by the eddy current flowing through the electrical resistance of the rail, which leads to a warming of the rail. An advantage of the linear brake is that, since each section of rail passes only once through the magnetic field of the brake, in contrast to the disk brake in which each section of the disk passes repeatedly through the brake, the rail does not get as hot as a disk, so the linear brake can dissipate more energy and have a higher power rating than disk brakes. The eddy current brake does not have any mechanical contact with the rail, and thus no wear, and creates neither noise nor odor. The eddy current brake is unusable at low speeds, but can be used at high speeds both for emergency braking and for service braking. The TSI (Technical Specifications for Interoperability) of the EU for trans-European high-speed rail recommends that all newly built high-speed lines should make the eddy current brake possible. Modern roller coasters use this type of braking. To avoid the risk posed by power outages, they utilize permanent magnets instead of electromagnets, thus not requiring a power supply. This application lacks the possibility of adjusting the braking strength as easily as with electromagnets. Lab experiment In physics education a simple experiment is sometimes used to illustrate eddy currents and the principle behind magnetic braking. When a strong magnet is dropped down a vertical, non-ferrous, conducting pipe, eddy currents are induced in the pipe, and these retard the descent of the magnet, so it falls slower than it would if free-falling. In typical experiments, students measure the slower time of fall of the magnet through a copper tube compared with a cardboard tube, and may use an oscilloscope to observe the pulse of eddy current induced in a loop of wire wound around the pipe when the magnet falls through. See also Dynamic braking - either rheostatic (dissipating the train's energy as heat in resistor banks within the train) or regenerative (where the energy is returned to the electrical supply system) Electromagnetic brakes (or electro-mechanical brakes) – use the magnetic force to press the brake mechanically on the rail Linear induction motor can be used as a regenerative brake Notes References Brakes Electromagnetic components Electrodynamics Railway brakes Physics experiments Electromagnetic brakes and clutches
Eddy current brake
[ "Physics", "Mathematics", "Engineering" ]
2,652
[ "Physics experiments", "Electromagnetic brakes and clutches", "Experimental physics", "Mechanical engineering", "Electrodynamics", "Dynamical systems" ]
1,832,184
https://en.wikipedia.org/wiki/MacConkey%20agar
MacConkey agar is a selective and differential culture medium for bacteria. It is designed to selectively isolate gram-negative and enteric (normally found in the intestinal tract) bacteria and differentiate them based on lactose fermentation. Lactose fermenters turn red or pink on MacConkey agar, and nonfermenters do not change color. The medium inhibits growth of gram-positive organisms with crystal violet and bile salts, allowing for the selection and isolation of gram-negative bacteria. The medium detects lactose fermentation by enteric bacteria with the pH indicator neutral red. Contents It contains bile salts (to inhibit most gram-positive bacteria), crystal violet dye (which also inhibits certain gram-positive bacteria), and neutral red dye (which turns pink if the microbes are fermenting lactose). Composition:
Peptone – 17 g
Proteose peptone – 3 g
Lactose – 10 g
Bile salts – 1.5 g
Sodium chloride – 5 g
Neutral red – 0.03 g
Crystal violet – 0.001 g
Agar – 13.5 g
Water – add to make 1 litre; adjust pH to 7.1 +/− 0.2
Sodium taurocholate
There are many variations of MacConkey agar depending on the need. If the spreading or swarming of Proteus species is not required, sodium chloride is omitted. Crystal violet at a concentration of 0.0001% (0.001 g per litre) is included when inhibition of gram-positive bacteria is needed. MacConkey agar with sorbitol is used to isolate E. coli O157, an enteric pathogen. History The medium was developed by Alfred Theodore MacConkey while working as a bacteriologist for the Royal Commission on Sewage Disposal. Uses Using the neutral red pH indicator, the agar distinguishes those gram-negative bacteria that can ferment the sugar lactose (Lac+) from those that cannot (Lac−). This medium is also known as an "indicator medium" and a "low selective medium". The presence of bile salts inhibits swarming by Proteus species. Lac positive By utilizing the lactose available in the medium, Lac+ bacteria such as Escherichia coli, Enterobacter and Klebsiella will produce acid, which lowers the pH of the agar below 6.8 and results in the appearance of pink colonies. The bile salts precipitate in the immediate neighborhood of the colony, causing the medium surrounding the colony to become hazy. Lac negative Organisms unable to ferment lactose will form normal-colored (i.e., un-dyed) colonies. The medium may also turn yellow. Examples of non-lactose fermenting bacteria include Salmonella, Proteus, and Shigella spp. Slow Some organisms ferment lactose slowly or weakly, and are sometimes put in their own category. These include Serratia and Citrobacter. Mucoid colonies Some organisms, especially Klebsiella and Enterobacter, produce mucoid colonies which appear very moist, sticky and slimy. This phenomenon happens because the organism is producing a capsule, which is predominantly made from the lactose sugar in the agar. Variant A variant, sorbitol-MacConkey agar (with the addition of additional selective agents), can assist in the isolation and differentiation of enterohemorrhagic E. coli serotype O157:H7, by the presence of colorless circular colonies that are non-sorbitol-fermenting. See also R2a agar MRS agar (culture medium designed to grow gram-positive bacteria and differentiate them for lactose fermentation) References Biochemistry detection reactions Microbiological media Cell culture media
MacConkey agar
[ "Chemistry", "Biology" ]
790
[ "Biochemistry detection reactions", "Microbiology equipment", "Biochemical reactions", "Microbiology techniques", "Microbiological media" ]
1,832,436
https://en.wikipedia.org/wiki/Phase%20synchronization
Phase synchronization is the process by which two or more cyclic signals tend to oscillate with a repeating sequence of relative phase angles. Phase synchronisation is usually applied to two waveforms of the same frequency whose phase angles coincide in each cycle. However, it can also be applied when there is an integer relationship between the frequencies, such that the cyclic signals share a repeating sequence of phase angles over consecutive cycles. These integer relationships are associated with the Arnold tongues, which follow from bifurcation of the circle map. One example of phase synchronization of multiple oscillators can be seen in the behavior of Southeast Asian fireflies. At dusk, the flies begin to flash periodically with random phases and a Gaussian distribution of native frequencies. As night falls, the flies, sensitive to one another's behavior, begin to synchronize their flashing. After some time all the fireflies within a given tree (or even larger area) will begin to flash simultaneously in a burst. Thinking of the fireflies as biological oscillators, we can define the phase to be 0° during the flash and ±180° exactly halfway until the next flash. Thus, when they begin to flash in unison, they synchronize in phase. One way to keep a local oscillator "phase synchronized" with a remote transmitter uses a phase-locked loop. See also Algebraic connectivity Coherence (physics) Kuramoto model Synchronization (alternating current) References Sync by S. H. Strogatz (2002). Synchronization - A universal concept in nonlinear sciences by A. Pikovsky, M. Rosenblum, J. Kurths (2001) External links A tutorial on calculating Phase locking and Phase synchronization in Matlab. Wave mechanics Synchronization
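The firefly behaviour described above is the classic motivating example for the Kuramoto model listed under "See also". The sketch below (illustrative parameters only) integrates that model for a population of oscillators and prints the standard order parameter, which rises toward 1 as the phases synchronize once the coupling K is strong enough.

```python
import cmath
import math
import random

def kuramoto(n=50, K=2.0, steps=2000, dt=0.01, seed=1):
    """Euler integration of the Kuramoto model:
    d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]  # native frequencies
    for _ in range(steps):
        theta = [
            th + dt * (om + K / n * sum(math.sin(tj - th) for tj in theta))
            for th, om in zip(theta, omega)
        ]
    # Order parameter r = |mean of exp(i*theta)|: 0 = incoherent, 1 = synced.
    return abs(sum(cmath.exp(1j * th) for th in theta) / n)

print(kuramoto(K=0.1))  # weak coupling: r stays small
print(kuramoto(K=2.0))  # strong coupling: r approaches 1
```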
Phase synchronization
[ "Physics", "Engineering" ]
373
[ "Physical phenomena", "Telecommunications engineering", "Classical mechanics", "Waves", "Wave mechanics", "Synchronization" ]
1,832,781
https://en.wikipedia.org/wiki/Superhelix
A superhelix is a molecular structure in which a helix is itself coiled into a helix. This is significant to both proteins and genetic material, such as overwound circular DNA. The earliest significant reference in molecular biology is from 1971, by F. B. Fuller: "A geometric invariant of a space curve, the writhing number, is defined and studied. For the central curve of a twisted cord the writhing number measures the extent to which coiling of the central curve has relieved local twisting of the cord. This study originated in response to questions that arise in the study of supercoiled double-stranded DNA rings." About the writhing number, mathematician W. F. Pohl says: "It is well known that the writhing number is a standard measure of the global geometry of a closed space curve." Contrary to intuition, a topological property, the linking number, arises from the geometric properties twist and writhe according to the following relationship: Lk = T + W, where Lk is the linking number, W is the writhe and T is the twist of the coil. The linking number refers to the number of times that one strand wraps around the other. In DNA this property cannot change by deformation alone; it can only be modified by specialized enzymes called topoisomerases. See also DNA supercoil (superhelical DNA) Coiled coil Knot theory References External links DNA Structure and Topology at Molecular Biochemistry II: The Bello Lectures. Helices Molecular biology Molecular topology
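A quick numeric illustration of the relation above (the values are chosen for illustration, not measured): relaxed B-form DNA has about one helical turn per 10.5 base pairs, so a hypothetical 5,250 bp circle has a twist of about 500; if an enzyme changes the linking number while the twist stays near 500, the writhe must absorb the difference.

```python
def writhe(linking_number, twist):
    """Solve Lk = T + W for the writhe W."""
    return linking_number - twist

# Hypothetical plasmid: 5250 bp at 10.5 bp/turn gives twist ~ 500 turns.
twist = 5250 / 10.5
lk_relaxed = 500       # relaxed circle: Lk = T, so W = 0
lk_supercoiled = 475   # after a topoisomerase removes 25 links

print(writhe(lk_relaxed, twist))      # 0.0   (no supercoiling)
print(writhe(lk_supercoiled, twist))  # -25.0 (negative supercoils)
```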
Superhelix
[ "Chemistry", "Mathematics", "Biology" ]
296
[ "Biochemistry", "Molecular topology", "Topology", "Molecular biology" ]
1,834,678
https://en.wikipedia.org/wiki/Fuel%20element%20failure
A fuel element failure is a rupture in a nuclear reactor's fuel cladding that allows the nuclear fuel or fission products, either in the form of dissolved radioisotopes or hot particles, to enter the reactor coolant or storage water. The de facto standard nuclear fuel is uranium dioxide or a mixed uranium/plutonium dioxide. This has a higher melting point than the actinide metals. Uranium dioxide resists corrosion in water and provides a stable matrix for many of the fission products; however, to prevent fission products (such as the noble gases) from leaving the uranium dioxide matrix and entering the coolant, the pellets of fuel are normally encased in tubes of a corrosion-resistant metal alloy (normally Zircaloy for water-cooled reactors). Those elements are then assembled into bundles to allow good handling and cooling. As the fuel fissions, the radioactive fission products are also contained by the cladding, and the entire fuel element can then be disposed of as nuclear waste when the reactor is refueled. If, however, the cladding is damaged, those fission products (which are not immobile in the uranium dioxide matrix) can enter the reactor coolant or storage water and can be carried out of the core, into the rest of the primary cooling circuit, increasing contamination levels there. In the EU, some work has been done in which fuel is overheated in a special research reactor named PHEBUS. During these experiments the emissions of radioactivity from the fuel are measured and afterwards the fuel is subjected to Post Irradiation Examination to discover more about what happened to it. References Nuclear safety and security Nuclear chemistry Radioactive waste
Fuel element failure
[ "Physics", "Chemistry", "Technology" ]
340
[ "Nuclear chemistry", "Radioactive waste", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Environmental impact of nuclear power", "nan", "Nuclear physics", "Hazardous waste", "Radioactivity" ]
1,834,808
https://en.wikipedia.org/wiki/Hardenability
Hardenability is the depth to which a steel is hardened after putting it through a heat treatment process. It should not be confused with hardness, which is a measure of a sample's resistance to indentation or scratching. Hardenability is an important property for welding, since it is inversely proportional to weldability, that is, the ease of welding a material. Process When a hot steel work-piece is quenched, the area in contact with the water immediately cools and its temperature equilibrates with the quenching medium. The inner depths of the material, however, do not cool so rapidly, and in work-pieces that are large, the cooling rate may be slow enough to allow the austenite to transform fully into a structure other than martensite or bainite. This results in a work-piece that does not have the same crystal structure throughout its entire depth, with a softer core and harder "shell". The softer core is some combination of ferrite and cementite, such as pearlite. The hardenability of ferrous alloys, i.e. steels, is a function of the carbon content, the other alloying elements, and the grain size of the austenite. The relative importance of the various alloying elements is calculated by finding the equivalent carbon content of the material (one widely used formula is sketched below). The fluid used for quenching the material influences the cooling rate due to varying thermal conductivities and specific heats. Substances like brine and water cool the steel much more quickly than oil or air. If the fluid is agitated, cooling occurs even more quickly. The geometry of the part also affects the cooling rate: of two samples of equal volume, the one with the higher surface area will cool faster. Testing The hardenability of a ferrous alloy is measured by a Jominy test: a round metal bar of standard size is transformed to 100% austenite through heat treatment, and is then quenched on one end with room-temperature water. The cooling rate will be highest at the end being quenched, and will decrease as distance from the end increases. After cooling, a flat surface is ground on the test piece and the hardenability is then found by measuring the hardness along the bar. The farther away from the quenched end that the hardness extends, the higher the hardenability. This information is plotted on a hardenability graph. The Jominy end-quench test was invented by Walter E. Jominy (1893-1976) and A.L. Boegehold, metallurgists in the Research Laboratories Division of General Motors Corp., in 1937. For his pioneering work in heat treating, Jominy was recognized by the American Society for Metals (ASM) with its Albert Sauveur Achievement Award in 1944. Jominy served as president of ASM in 1951. References External links Description of hardenability and testing Hardenability vs. hardness Jominy test simulation More in-depth Jominy test information. Video of an imperfect test (YouTube) Metallurgy
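The equivalent carbon content mentioned above is computed from the alloy composition. One widely used version is the IIW carbon equivalent formula; the sketch below implements it (the composition values in the example are hypothetical, not those of any particular grade).

```python
def carbon_equivalent_iiw(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW carbon equivalent from element weight percentages:
    CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Hypothetical low-alloy steel composition (weight %):
ce = carbon_equivalent_iiw(c=0.20, mn=1.20, cr=0.15, mo=0.05, ni=0.10)
print(f"CE = {ce:.3f}")  # higher CE: deeper hardening, harder to weld
```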
Hardenability
[ "Chemistry", "Materials_science", "Engineering" ]
621
[ "Metallurgy", "Materials science", "nan" ]
1,834,883
https://en.wikipedia.org/wiki/Vadose%20zone
The vadose zone, also termed the unsaturated zone, is the part of Earth between the land surface and the top of the phreatic zone, the position at which the groundwater (the water in the soil's pores) is at atmospheric pressure ("vadose" is from the Latin word for "shallow"). Hence, the vadose zone extends from the top of the ground surface to the water table. Water in the vadose zone has a pressure head less than atmospheric pressure, and is retained by a combination of adhesion (funiculary groundwater) and capillary action (capillary groundwater). If the vadose zone envelops soil, the water contained therein is termed soil moisture. In fine-grained soils, capillary action can cause the pores of the soil to be fully saturated above the water table at a pressure less than atmospheric. The vadose zone does not include the area that is still saturated above the water table, often referred to as the capillary fringe. Movement of water within the vadose zone is studied within soil physics and hydrology, particularly hydrogeology, and is of importance to agriculture, contaminant transport, and flood control. The Richards equation, which is based partially on Darcy's law, is often used to mathematically describe the flow of water. Groundwater recharge, which is an important process that refills aquifers, generally occurs through the vadose zone from precipitation. In hydrology The vadose zone is the unsaturated portion of the subsurface that lies above the groundwater table. The soil and rock in the vadose zone are not fully saturated with water; that is, the pores within them contain air as well as water. The portion of the vadose zone that is inhabited by soil microorganisms, fungi and plant roots may sometimes be called the soil carbon sponge. In some places, the vadose zone is absent, as is common where there are lakes and marshes, and in some places, it is hundreds of meters thick, as is common in arid regions. Unlike the aquifers of the underlying water-saturated phreatic zone, the vadose zone is not a source of readily available water for human consumption. It is of great importance in providing water and nutrients that are vital to the soil carbon sponge and the biosphere. It is intensively used for the cultivation of plants, construction of buildings, and disposal of waste. The vadose zone is often the main factor controlling water movement from the land surface to the aquifer. Thus, it strongly affects the rate of aquifer recharge and is critical for the use and management of groundwater. Flow rates and chemical reactions in the vadose zone also control whether, where, and how fast contaminants enter groundwater supplies. Understanding of vadose-zone processes is therefore crucial in determining the amount and quality of groundwater that is available for human use. In speleology In speleology, cave passages formed in the vadose zone tend to be canyon-like in shape, as the water dissolves bedrock on the floor of the passage. Passages created in completely water-filled conditions are called phreatic passages and tend to be circular in cross-section. See also References Further reading Aquifers Hydrology Hydrogeology Soil mechanics Soil physics
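For reference, a common one-dimensional vertical form of the Richards equation mentioned above (the mixed, head-based form, with the coordinate $z$ taken positive upward) can be written as:

\[
\frac{\partial \theta}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right]
\]

where $\theta$ is the volumetric water content, $h$ is the pressure head (negative in the vadose zone), and $K(h)$ is the unsaturated hydraulic conductivity; the "+1" term is the gravitational contribution inherited from Darcy's law.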
Vadose zone
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
705
[ "Hydrology", "Applied and interdisciplinary physics", "Soil mechanics", "Soil physics", "Aquifers", "Environmental engineering", "Hydrogeology" ]
1,835,001
https://en.wikipedia.org/wiki/Nakayama%27s%20lemma
In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma, also known as the Krull–Azumaya theorem, governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring. The lemma is named after the Japanese mathematician Tadashi Nakayama, who introduced it in its present form, although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals. Statement Let $R$ be a commutative ring with identity 1. The following is Nakayama's lemma: Statement 1: Let $I$ be an ideal in $R$, and $M$ a finitely generated module over $R$. If $IM = M$, then there exists an $r \in R$ with $r \equiv 1 \pmod{I}$ such that $rM = 0$. This is proven below. A useful mnemonic for Nakayama's lemma is the following alternative formulation: Statement 2: Let $I$ be an ideal in $R$, and $M$ a finitely generated module over $R$. If $IM = M$, then there exists an $i \in I$ such that $im = m$ for all $m \in M$. Proof: Take $i = 1 - r$, with $r$ as in Statement 1. The following corollary is also known as Nakayama's lemma, and it is in this form that it most often appears. Statement 3: If $M$ is a finitely generated module over $R$, $J(R)$ is the Jacobson radical of $R$, and $J(R)M = M$, then $M = 0$. Proof: $r - 1$ (with $r$ as in Statement 1) is in the Jacobson radical, so $r$ is invertible, and hence $M = r^{-1}(rM) = 0$. More generally, one has that $J(R)M$ is a superfluous submodule of $M$ when $M$ is finitely generated. Statement 4: If $M$ is a finitely generated module over $R$, $N$ is a submodule of $M$, and $M = N + J(R)M$, then $M = N$. Proof: Apply Statement 3 to $M/N$. The following result manifests Nakayama's lemma in terms of generators. Statement 5: If $M$ is a finitely generated module over $R$ and the images of elements $m_1, \ldots, m_n$ of $M$ in $M/J(R)M$ generate $M/J(R)M$ as an $R$-module, then $m_1, \ldots, m_n$ also generate $M$ as an $R$-module. Proof: Apply Statement 4 to $N = \sum_i R m_i$. If one assumes instead that $R$ is complete and $M$ is separated with respect to the $I$-adic topology for an ideal $I$ in $R$, this last statement holds with $I$ in place of $J(R)$ and without assuming in advance that $M$ is finitely generated. Here separatedness means that the $I$-adic topology satisfies the T1 separation axiom, and is equivalent to $\bigcap_{k=0}^{\infty} I^k M = 0.$ Consequences Local rings In the special case of a finitely generated module $M$ over a local ring $R$ with maximal ideal $\mathfrak{m}$, the quotient $M/\mathfrak{m}M$ is a vector space over the field $R/\mathfrak{m}$. Statement 5 then implies that a basis of $M/\mathfrak{m}M$ lifts to a minimal set of generators of $M$. Conversely, every minimal set of generators of $M$ is obtained in this way, and any two such sets of generators are related by an invertible matrix with entries in the ring. Geometric interpretation In this form, Nakayama's lemma takes on concrete geometrical significance. Local rings arise in geometry as the germs of functions at a point.
Finitely generated modules over local rings arise quite often as germs of sections of vector bundles. Working at the level of germs rather than points, the notion of finite-dimensional vector bundle gives way to that of a coherent sheaf. Informally, Nakayama's lemma says that one can still regard a coherent sheaf as coming from a vector bundle in some sense. More precisely, let $\mathcal{F}$ be a coherent sheaf of $\mathcal{O}_X$-modules over an arbitrary scheme $X$. The stalk of $\mathcal{F}$ at a point $p \in X$, denoted by $\mathcal{F}_p$, is a module over the local ring $\mathcal{O}_{X,p}$, and the fiber of $\mathcal{F}$ at $p$ is the vector space $\mathcal{F}(p) = \mathcal{F}_p / \mathfrak{m}_p \mathcal{F}_p$, where $\mathfrak{m}_p$ is the maximal ideal of $\mathcal{O}_{X,p}$. Nakayama's lemma implies that a basis of the fiber $\mathcal{F}(p)$ lifts to a minimal set of generators of $\mathcal{F}_p$. That is: Any basis of the fiber of a coherent sheaf at a point comes from a minimal basis of local sections. Reformulating this geometrically, if $\mathcal{F}$ is a locally free $\mathcal{O}_X$-module representing a vector bundle $E \to X$, and if we take a basis of the vector bundle at a point in the scheme $X$, this basis can be lifted to a basis of sections of the vector bundle in some neighborhood of the point. We can organize this data diagrammatically: a basis of the fiber $E|_p$, which is an $n$-dimensional vector space (and a basis of sections of the bundle at $p$), can be lifted to a basis of sections of $E|_U \to U$ for some neighborhood $U$ of $p$. Going up and going down The going up theorem is essentially a corollary of Nakayama's lemma. It asserts: Let $R \to S$ be an integral extension of commutative rings, and $\mathfrak{p}$ a prime ideal of $R$. Then there is a prime ideal $\mathfrak{q}$ in $S$ such that $\mathfrak{q} \cap R = \mathfrak{p}$. Moreover, $\mathfrak{q}$ can be chosen to contain any prime $\mathfrak{q}_1$ of $S$ such that $\mathfrak{q}_1 \cap R \subseteq \mathfrak{p}$. Module epimorphisms Nakayama's lemma makes precise one sense in which finitely generated modules over a commutative ring are like vector spaces over a field. The following consequence of Nakayama's lemma gives another way in which this is true: If $M$ is a finitely generated $R$-module and $f \colon M \to M$ is a surjective endomorphism, then $f$ is an isomorphism. Over a local ring, one can say more about module epimorphisms: Suppose that $R$ is a local ring with maximal ideal $\mathfrak{m}$, and $M, N$ are finitely generated $R$-modules. If $\phi \colon M \to N$ is an $R$-linear map such that the induced map on quotients $M/\mathfrak{m}M \to N/\mathfrak{m}N$ is surjective, then $\phi$ is surjective. Homological versions Nakayama's lemma also has several versions in homological algebra. The above statement about epimorphisms can be used to show: Let $M$ be a finitely generated module over a local ring. Then $M$ is projective if and only if it is free. This can be used to compute the Grothendieck group of any local ring $R$ as $K(R) = \mathbb{Z}$. A geometrical and global counterpart to this is the Serre–Swan theorem, relating projective modules and coherent sheaves. More generally, one has: Let $R$ be a local ring and $M$ a finitely generated module over $R$. Then the projective dimension of $M$ over $R$ is equal to the length of every minimal free resolution of $M$. Moreover, the projective dimension is equal to the global dimension of $M$, which is by definition the smallest integer $d \geq 0$ such that $\operatorname{Tor}^R_{d+1}(k, M) = 0.$ Here $k$ is the residue field of $R$ and $\operatorname{Tor}$ is the tor functor. Inverse function theorem Nakayama's lemma is used to prove a version of the inverse function theorem in algebraic geometry: Let $f \colon X \to Y$ be a projective morphism between quasi-projective varieties. Then $f$ is an isomorphism if and only if it is a bijection and the differential $df_x \colon T_x X \to T_{f(x)} Y$ is injective for all $x \in X$. Proof A standard proof of the Nakayama lemma uses the following technique: Let $M$ be an $R$-module generated by $n$ elements, and $\varphi \colon M \to M$ an $R$-linear map. If there is an ideal $I$ of $R$ such that $\varphi(M) \subset IM$, then there is a monic polynomial $p(x) = x^n + p_1 x^{n-1} + \cdots + p_n$ with $p_k \in I^k$, such that $p(\varphi) = 0$ as an endomorphism of $M$.
This assertion is precisely a generalized version of the Cayley–Hamilton theorem, and the proof proceeds along the same lines. On the generators $x_i$ of $M$, one has a relation of the form $\varphi(x_i) = \sum_{j=1}^{n} a_{ij} x_j$ where $a_{ij} \in I$. Thus $\sum_{j=1}^{n} (\varphi \delta_{ij} - a_{ij}) x_j = 0.$ The required result follows by multiplying by the adjugate of the matrix $(\varphi \delta_{ij} - a_{ij})$ and invoking Cramer's rule. One finds then $\det(\varphi \delta_{ij} - a_{ij}) = 0$, so the required polynomial is $p(t) = \det(t \delta_{ij} - a_{ij}).$ To prove Nakayama's lemma from the Cayley–Hamilton theorem, assume that $IM = M$ and take $\varphi$ to be the identity on $M$. Then define a polynomial $p(x)$ as above. Then $r = p(1) = 1 + p_1 + p_2 + \cdots + p_n$ has the required property: $rM = p(\varphi)M = 0$ and $r \equiv 1 \pmod{I}$. Noncommutative case A version of the lemma holds for right modules over non-commutative unital rings $R$. The resulting theorem is sometimes known as the Jacobson–Azumaya theorem. Let $J(R)$ be the Jacobson radical of $R$. If $U$ is a right module over a ring $R$, and $I$ is a right ideal in $R$, then define $U \cdot I$ to be the set of all (finite) sums of elements of the form $u \cdot i$, where $\cdot$ is simply the action of $R$ on $U$. Necessarily, $U \cdot I$ is a submodule of $U$. If $V$ is a maximal submodule of $U$, then $U/V$ is simple. So $U \cdot J(R)$ is necessarily a subset of $V$, by the definition of $J(R)$ and the fact that $U/V$ is simple. Thus, if $U$ contains at least one (proper) maximal submodule, $U \cdot J(R)$ is a proper submodule of $U$. However, this need not hold for arbitrary modules $U$ over $R$, for $U$ need not contain any maximal submodules. Naturally, if $U$ is a Noetherian module, this holds. If $R$ is Noetherian, and $U$ is finitely generated, then $U$ is a Noetherian module over $R$, and the conclusion is satisfied. Somewhat remarkable is that the weaker assumption, namely that $U$ is finitely generated as an $R$-module (and no finiteness assumption on $R$), is sufficient to guarantee the conclusion. This is essentially the statement of Nakayama's lemma. Precisely, one has: Nakayama's lemma: Let $U$ be a finitely generated right module over a (unital) ring $R$. If $U$ is a non-zero module, then $U \cdot J(R)$ is a proper submodule of $U$. Proof Let $X$ be a finite subset of $U$, minimal with respect to the property that it generates $U$. Since $U$ is non-zero, this set $X$ is nonempty. Denote every element of $X$ by $x_i$ for $i \in \{1, \ldots, n\}$. Since $X$ generates $U$, $\sum_{i=1}^{n} x_i R = U$. Suppose $U \cdot J(R) = U$, to obtain a contradiction. Then every element $u \in U$ can be expressed as a finite combination $u = \sum_{s=1}^{m} u_s j_s$ for some $u_s \in U$ and $j_s \in J(R)$. Each $u_s$ can be further decomposed as $u_s = \sum_{i=1}^{n} x_i r_{i,s}$ for some $r_{i,s} \in R$. Therefore, we have $u = \sum_{s=1}^{m} \left( \sum_{i=1}^{n} x_i r_{i,s} \right) j_s = \sum_{i=1}^{n} x_i \left( \sum_{s=1}^{m} r_{i,s} j_s \right)$. Since $J(R)$ is a (two-sided) ideal in $R$, we have $\sum_{s=1}^{m} r_{i,s} j_s \in J(R)$ for every $i$, and thus this becomes $u = \sum_{i=1}^{n} x_i k_i$ for some $k_i \in J(R)$. Putting $u = x_n$ and applying distributivity, we obtain $x_n (1 - k_n) = \sum_{i < n} x_i k_i$. If the right ideal $(1 - k_n) R$ were proper, then it would be contained in a maximal right ideal $\mathfrak{M}$, and both $1 - k_n$ and $k_n$ would belong to $\mathfrak{M}$, leading to the contradiction $1 \in \mathfrak{M}$ (note that $k_n \in J(R) \subseteq \mathfrak{M}$ by the definition of the Jacobson radical). Thus $(1 - k_n) R = R$ and $1 - k_n$ has a right inverse $(1 - k_n)^{-1}$ in $R$. We have $x_n = \sum_{i < n} x_i k_i (1 - k_n)^{-1}$. Therefore, $x_n$ is a linear combination of the elements of $X \setminus \{x_n\}$. This contradicts the minimality of $X$ and establishes the result. Graded version There is also a graded version of Nakayama's lemma. Let $R$ be a ring that is graded by the ordered semigroup of non-negative integers, and let $R_+$ denote the ideal generated by positively graded elements. Then if $M$ is a graded module over $R$ for which $M_i = 0$ for $i$ sufficiently negative (in particular, if $M$ is finitely generated and $R$ does not contain elements of negative degree) such that $R_+ M = M$, then $M = 0$. Of particular importance is the case that $R$ is a polynomial ring with the standard grading, and $M$ is a finitely generated module. The proof is much easier than in the ungraded case: taking $i$ to be the least integer such that $M_i \neq 0$, we see that $M_i$ does not appear in $R_+ M$, so $M \neq R_+ M$; thus either $M = 0$, or such an $i$ does not exist, i.e., $M = 0$.
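As a small worked illustration (not part of the original article, using the notation of the Statement section above), consider first Statement 5 over the local ring $R = k[[t]]$ of formal power series, with maximal ideal $\mathfrak{m} = (t)$: for $M = R^2$ one has $M/\mathfrak{m}M \cong k^2$, and lifting the basis $(1,0), (0,1)$ of $k^2$ gives a minimal generating set of $M$. The finite generation hypothesis is essential, as the following standard counterexample shows:

\[
R = \mathbb{Z}_{(p)}, \qquad \mathfrak{m} = p\,\mathbb{Z}_{(p)}, \qquad M = \mathbb{Q}
\quad\Longrightarrow\quad
\mathfrak{m} M = M \ \text{ but } \ M \neq 0,
\]

since every rational number $q$ can be written as $p \cdot (q/p)$. The conclusion of Statement 3 fails here precisely because $\mathbb{Q}$ is not finitely generated as a $\mathbb{Z}_{(p)}$-module.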
See also Module theory Serre–Swan theorem Notes References External links How to understand Nakayama's Lemma and its Corollaries Theorems in ring theory Algebraic geometry Commutative algebra Lemmas in algebra
Nakayama's lemma
[ "Mathematics" ]
2,542
[ "Theorems in algebra", "Lemmas in algebra", "Fields of abstract algebra", "Algebraic geometry", "Commutative algebra", "Lemmas" ]
1,836,020
https://en.wikipedia.org/wiki/Laser%20ablation
Laser ablation or photoablation (also called laser blasting) is the process of removing material from a solid (or occasionally liquid) surface by irradiating it with a laser beam. At low laser flux, the material is heated by the absorbed laser energy and evaporates or sublimates. At high laser flux, the material is typically converted to a plasma. Usually, laser ablation refers to removing material with a pulsed laser, but it is possible to ablate material with a continuous wave laser beam if the laser intensity is high enough. While relatively long laser pulses (e.g. nanosecond pulses) can heat and thermally alter or damage the processed material, ultrashort laser pulses (e.g. femtosecond pulses) cause only minimal material damage during processing due to the ultrashort light–matter interaction, and are therefore also suitable for micromaterial processing. Excimer lasers emitting deep-ultraviolet light are mainly used in photoablation; the wavelength of the laser used in photoablation is approximately 200 nm. Fundamentals The depth over which the laser energy is absorbed, and thus the amount of material removed by a single laser pulse, depends on the material's optical properties and on the laser wavelength and pulse length. The total mass ablated from the target per laser pulse is usually referred to as the ablation rate. Features of the laser radiation such as the beam scanning velocity and the overlap of scanning lines can significantly influence the ablation process. Laser pulses can vary over a very wide range of duration (milliseconds to femtoseconds) and fluxes, and can be precisely controlled. This makes laser ablation very valuable for both research and industrial applications. Applications The simplest application of laser ablation is to remove material from a solid surface in a controlled fashion. Laser machining and particularly laser drilling are examples; pulsed lasers can drill extremely small, deep holes through very hard materials. Very short laser pulses remove material so quickly that the surrounding material absorbs very little heat, so laser drilling can be done on delicate or heat-sensitive materials, including tooth enamel (laser dentistry). Several workers have employed laser ablation and gas condensation to produce nanoparticles of metals, metal oxides, and metal carbides. Also, laser energy can be selectively absorbed by coatings, particularly on metal, so CO2 or Nd:YAG pulsed lasers can be used to clean surfaces, remove paint or coatings, or prepare surfaces for painting without damaging the underlying surface. High-power lasers clean a large spot with a single pulse. Lower-power lasers use many small pulses which may be scanned across an area. In some industries laser ablation may be referred to as laser cleaning. One of the advantages is that no solvents are used; the process is therefore environmentally friendly, and operators are not exposed to chemicals (assuming nothing harmful is vaporized). It is relatively easy to automate. The running costs are lower than for dry-media or dry-ice blasting, although the capital investment costs are much higher. The process is gentler than abrasive techniques; e.g., carbon fibres within a composite material are not damaged. Heating of the target is minimal. Another class of applications uses laser ablation to process the material removed into new forms either not possible or difficult to produce by other means. A recent example is the production of carbon nanotubes.
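As a concrete illustration of how the optical properties set the removal depth, a commonly used first-order estimate (a standard textbook model, not taken from this article) combines Beer–Lambert absorption with a threshold fluence:

```latex
% Ablated depth per pulse d, for fluence F above the ablation threshold
% F_th, with alpha the material's optical absorption coefficient at the
% laser wavelength:
\[
  d \;\approx\; \frac{1}{\alpha}\,
  \ln\!\left(\frac{F}{F_{\mathrm{th}}}\right),
  \qquad F > F_{\mathrm{th}} .
\]
% The removal depth grows only logarithmically with fluence and is
% inversely proportional to how strongly the wavelength is absorbed,
% which is why wavelength and pulse parameters dominate the ablation rate.
```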
Laser cleaning is also used for efficient rust removal from iron objects; oil or grease removal from various surfaces; and restoration of paintings, sculptures, and frescoes. Laser ablation is one of the preferred techniques for rubber mold cleaning due to the minimal surface damage to the mold. In March 1995 Guo et al. were the first to report the use of a laser to ablate a block of pure graphite, and later graphite mixed with catalytic metal. The catalytic metal can consist of elements such as cobalt, niobium, platinum, nickel, copper, or a binary combination thereof. The composite block is formed by making a paste of graphite powder, carbon cement, and the metal. The paste is next placed in a cylindrical mold and baked for several hours. After solidification, the graphite block is placed inside an oven with a laser pointed at it, and argon gas is pumped along the direction of the laser point. The oven temperature is approximately 1200 °C. As the laser ablates the target, carbon nanotubes form and are carried by the gas flow onto a cool copper collector. Like carbon nanotubes formed using the electric-arc discharge technique, carbon nanotube fibers are deposited in a haphazard and tangled fashion. Single-walled nanotubes are formed from the block of graphite and metal catalyst particles, whereas multi-walled nanotubes form from the pure graphite starting material. A variation of this type of application is to use laser ablation to create coatings by ablating the coating material from a source and letting it deposit on the surface to be coated; this is a special type of physical vapor deposition called pulsed laser deposition (PLD), and can create coatings from materials that cannot readily be evaporated any other way. This process is used to manufacture some types of high-temperature superconductors and laser crystals. Remote laser spectroscopy uses laser ablation to create a plasma from the surface material; the composition of the surface can be determined by analyzing the wavelengths of light emitted by the plasma. Laser ablation is also used to create patterns, by selectively removing coating from dichroic filters. These products are used in stage lighting for high-dimensional projections, or for the calibration of machine vision instruments. Propulsion Finally, laser ablation can be used to transfer momentum to a surface, since the ablated material applies a pulse of high pressure to the surface underneath it as it expands. The effect is similar to hitting the surface with a hammer. This process is used in industry to work-harden metal surfaces, and is one damage mechanism for a laser weapon. It is also the basis of pulsed laser propulsion for spacecraft. Manufacturing Processes are currently being developed to use laser ablation in the removal of thermal barrier coatings (TBCs) on high-pressure gas turbine components. Due to the low heat input, TBC removal can be completed with minimal damage to the underlying metallic coatings and parent material. 2D materials production Laser ablation in the liquid phase is an efficient method to exfoliate bulk materials into their 2-dimensional (2D) forms, such as black phosphorus. By changing the solvent and laser energy, the thickness and lateral size of the 2D materials can be controlled. Chemical analysis Laser ablation is used as a sampling method for elemental and isotopic analysis, and replaces the traditional laborious procedures generally required for digesting solid samples in acid solutions.
Laser ablation sampling is detected by monitoring the photons emitted at the sample surface, a technology referred to as LIBS (laser-induced breakdown spectroscopy) and LAMIS (laser ablation molecular isotopic spectrometry), or by transporting the ablated mass particles to a secondary excitation source, such as an inductively coupled plasma (ICP). Both mass spectrometry (MS) and optical emission spectroscopy (OES) can be coupled with the ICP. The benefits of laser ablation sampling for chemical analysis include no sample preparation, no waste, minimal sample requirements, no vacuum requirements, rapid sample-analysis turnaround time, spatial (depth and lateral) resolution, and chemical mapping. Laser ablation chemical analysis is viable for practically all industries, such as mining, geochemistry, energy, environmental monitoring, industrial processing, food safety, forensics, and biology. Commercial instruments are available for all markets to measure every element and isotope within a sample. Some instruments combine both optical and mass detection to extend the analysis coverage and the dynamic range in sensitivity. Biology Laser ablation is used in science to destroy nerves and other tissues to study their function. For example, the sensory neurons of a species of pond snail, Helisoma trivolvis, can be laser-ablated while the snail is still an embryo to prevent use of those nerves. Another example is the trochophore larva of Platynereis dumerilii, where the larval eye was ablated and the larva was no longer phototactic. However, phototaxis in the nectochaete larva of Platynereis dumerilii is not mediated by the larval eyes, because the larva is still phototactic even if the larval eyes are ablated. But if the adult eyes are ablated, then the nectochaete is no longer phototactic, and thus phototaxis in the nectochaete larva is mediated by the adult eyes. Laser ablation can also be used to destroy individual cells during the embryogenesis of an organism, such as Platynereis dumerilii, to study the effect of missing cells during development. Medicine There are several laser types used in medicine for ablation, including argon, carbon dioxide (CO2), dye, erbium, excimer, Nd:YAG, and others. Laser ablation is used in a variety of medical specialties, including ophthalmology, general surgery, neurosurgery, ENT, dentistry, oral and maxillofacial surgery, and veterinary medicine. Laser scalpels are used for ablation in both hard- and soft-tissue surgeries. Some of the most common procedures where laser ablation is used include LASIK, skin resurfacing, cavity preparation, biopsies, and tumor and lesion removal. In hard-tissue surgeries, short-pulsed lasers, such as Er:YAG or Nd:YAG, ablate tissue under stress or inertial confinement conditions. In soft-tissue surgeries, the CO2 laser beam ablates and cauterizes simultaneously, making it the most practical and most common soft-tissue laser. Laser ablation can be used on benign and malignant lesions in various organs, in a procedure called laser-induced interstitial thermotherapy. The main applications currently involve the reduction of benign thyroid nodules and the destruction of primary and secondary malignant liver lesions. Laser ablation is also used to treat chronic venous insufficiency. See also ablative brain surgery. Mechanism Material dynamics A well-established framework for laser ablation is the two-temperature model of Kaganov and Anisimov.
In it, the energy from the laser pulse is absorbed by the solid material, directly stimulating the motion of the electrons and transferring heat to the lattice, which underlies the crystalline structure of the solid. Thus, the two variables are the electron temperature Te and the lattice temperature Ti. Their evolution in time t and depth z is governed by the coupled differential equations Ce ∂Te/∂t = ∂/∂z(ke ∂Te/∂z) − g(Te − Ti) + S(z, t) and Ci ∂Ti/∂t = g(Te − Ti). Here, Ce and Ci are the specific heats of the electrons and the lattice respectively, ke is the electron thermal conductivity, g is the thermal coupling between the electron and (lattice) phonon systems, and S(z, t) is the laser pulse energy absorbed by the bulk, usually characterized by the fluence. Some approximations can be made depending on the laser parameters and their relation to the time scales of the thermal processes in the target, which vary between metallic and dielectric targets. One of the most important experimental parameters for characterization of a target is the ablation threshold, which is the minimum fluence at which a particular atom or molecule is observed in the ablation plume. This threshold depends on the wavelength of the laser, and can be simulated assuming a Lennard-Jones potential between the atoms in the lattice, and only during a particular phase of the temperature evolution called the hydrodynamic stage. Typically, however, this value is determined experimentally. The two-temperature model can be extended on a case-by-case basis. One notable extension involves the generation of plasma. For ultra-short pulses (which imply a large fluence) it has been proposed that Coulomb explosion also plays a role, because the laser energy is high enough to generate ions in the ablation plume. A threshold electric field Eth has been determined for the Coulomb explosion, given by Eth = (Λn/ε)^(1/2), where Λ is the sublimation energy per atom, n is the atomic lattice density, and ε is the dielectric permittivity. Plume dynamics Some applications of pulsed laser ablation focus on the machining and finish of the ablated material, but other applications are interested in the material ejected from the target. In this case, the characteristics of the ablation plume are more important to model. Anisimov's theory considered an ellipsoidal gas cloud growing in vacuum. In this model, thermal expansion dominates the initial dynamics, with little influence from the kinetic energy, but the mathematical expression is subject to assumptions and conditions in the experimental setup. Parameters such as the surface finish, the preconditioning of a spot on the target, or the angle of the laser beam with respect to the normal of the target surface are factors to take into account when observing the angle of divergence of the plume or its yield. See also Asteroid laser ablation Dental laser Laser induced breakdown spectroscopy LASEK LASIK Laser bonding Laser cutting Laser engraving Laser scalpel Laser surgery Soft-tissue laser surgery List of laser articles Matrix-assisted laser desorption/ionization Parts cleaning Optical breakdown photoionization mode (see Photoionization) Soft retooling References Bibliography Oxford Concise Medical Dictionary, 6th edition, 2002, "Ablation". Laser applications Plasma technology and applications Laser medicine Medical treatments
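To make the two-temperature model concrete, the following is a minimal numerical sketch: an explicit finite-difference integration of the two coupled equations above. All material parameters are illustrative placeholders with roughly metal-like orders of magnitude, not values from the article, and the constant ke and the simple Ce proportional to Te closure are simplifying assumptions.

```python
import numpy as np

# Minimal explicit finite-difference sketch of the two-temperature model.
# All parameters below are illustrative placeholders, not data for any
# specific target material.

nz, dz = 200, 1e-9            # 200 nm of depth resolved at 1 nm spacing
dt, steps = 1e-17, 40000      # explicit scheme: dt kept below stability limit

Ce0 = 100.0     # electron heat capacity coefficient, Ce = Ce0*Te [J m^-3 K^-2]
Ci = 2.5e6      # lattice heat capacity                           [J m^-3 K^-1]
ke = 300.0      # electron thermal conductivity (held constant)   [W m^-1 K^-1]
g = 2e17        # electron-phonon coupling constant               [W m^-3 K^-1]
F, tau, alpha = 1e3, 100e-15, 1e8   # fluence [J m^-2], pulse length [s], 1/m

Te = np.full(nz, 300.0)       # electron temperature profile
Ti = np.full(nz, 300.0)       # lattice (phonon) temperature profile
z = np.arange(nz) * dz

def source(t):
    """Absorbed power density S(z, t): Beer-Lambert in depth, Gaussian in time."""
    envelope = np.exp(-(((t - 2 * tau) / tau) ** 2)) / (tau * np.sqrt(np.pi))
    return F * alpha * np.exp(-alpha * z) * envelope

for n in range(steps):
    lap = np.zeros(nz)
    lap[1:-1] = (Te[2:] - 2 * Te[1:-1] + Te[:-2]) / dz**2  # insulated boundaries
    Ce = Ce0 * Te                    # free-electron-gas heat capacity, Ce ~ Te
    Te = Te + dt * (ke * lap - g * (Te - Ti) + source(n * dt)) / Ce
    Ti = Ti + dt * g * (Te - Ti) / Ci   # lattice heated only via the coupling

print(f"peak electron temperature: {Te.max():.0f} K")
print(f"peak lattice temperature:  {Ti.max():.0f} K")
```

The run reproduces the qualitative behavior the model is meant to capture: for a 100 fs pulse the electrons are driven to tens of thousands of kelvin while the lattice, heated only through the coupling term g(Te − Ti), lags far behind on the pulse timescale.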
Laser ablation
[ "Physics" ]
2,755
[ "Plasma technology and applications", "Plasma physics" ]
1,222,595
https://en.wikipedia.org/wiki/Tame%20group
In mathematical group theory, a tame group is a certain kind of group defined in model theory. Formally, we define a bad field as a structure of the form (K, T), where K is an algebraically closed field and T is an infinite, proper, distinguished subgroup of K, such that (K, T) is of finite Morley rank in its full language. A group G is then called a tame group if no bad field is interpretable in G. References A. V. Borovik, Tame groups of odd and even type, pp. 341–366, in Algebraic Groups and their Representations, R. W. Carter and J. Saxl, eds. (NATO ASI Series C: Mathematical and Physical Sciences, vol. 517), Kluwer Academic Publishers, Dordrecht, 1998. Model theory Infinite group theory Properties of groups
Tame group
[ "Mathematics" ]
180
[ "Mathematical structures", "Mathematical logic", "Properties of groups", "Algebraic structures", "Model theory" ]
1,224,005
https://en.wikipedia.org/wiki/Heck%20reaction
The Heck reaction (also called the Mizoroki–Heck reaction) is the chemical reaction of an unsaturated halide (or triflate) with an alkene in the presence of a base and a palladium catalyst to form a substituted alkene. It is named after Tsutomu Mizoroki and Richard F. Heck. Heck was awarded the 2010 Nobel Prize in Chemistry, which he shared with Ei-ichi Negishi and Akira Suzuki, for the discovery and development of this reaction. This reaction was the first example of a carbon-carbon bond-forming reaction that followed a Pd(0)/Pd(II) catalytic cycle, the same catalytic cycle that is seen in other Pd(0)-catalyzed cross-coupling reactions. The Heck reaction is a method for substituting alkenes. History The original reaction by Tsutomu Mizoroki (1971) describes the coupling between iodobenzene and styrene in methanol to form stilbene at 120 °C (autoclave) with a potassium acetate base and palladium chloride catalysis. This work was an extension of earlier work by Fujiwara (1967) on the Pd(II)-mediated coupling of arenes (Ar–H) and alkenes, and of earlier work by Heck (1969) on the coupling of arylmercuric halides (ArHgCl) with alkenes using a stoichiometric amount of a palladium(II) species. In 1972 Heck acknowledged the Mizoroki publication and detailed independently discovered work. Heck's reaction conditions differ in terms of the catalyst (palladium acetate), catalyst loading (0.01 eq.), base (hindered amine), and absence of solvent. In 1974 Heck showed that phosphine ligands facilitated the reaction. Catalyst and substrates The reaction is catalyzed by palladium complexes. Typical catalysts and precatalysts include tetrakis(triphenylphosphine)palladium(0), palladium chloride, and palladium(II) acetate. Typical supporting ligands are triphenylphosphine, PHOX, and BINAP. Typical bases are triethylamine, potassium carbonate, and sodium acetate. The electrophile can be an aryl, benzyl, or vinyl halide (Br, Cl) or a triflate. The alkene must contain at least one sp2 C–H bond. Electron-withdrawing substituents enhance the reaction, so acrylates are ideal. Reaction mechanism The mechanism of this vinylation involves organopalladium intermediates. The required palladium(0) compound is often generated in situ from a palladium(II) precursor. For instance, palladium(II) acetate is reduced by triphenylphosphine to bis(triphenylphosphine)palladium(0) (1), concomitant with oxidation of triphenylphosphine to triphenylphosphine oxide. Step A is an oxidative addition in which palladium inserts itself into the aryl–bromide bond. The resulting palladium(II) complex then binds the alkene (3). In step B the alkene inserts into the Pd–C bond in a syn addition step. Step C involves a beta-hydride elimination, with the formation of a new palladium–alkene π complex (5). This complex dissociates in the next step, releasing the product alkene. The Pd(0) complex is regenerated by reductive elimination from the palladium(II) compound by potassium carbonate in the final step, D. In the course of the reaction the carbonate is stoichiometrically consumed; palladium is truly a catalyst and is used in catalytic amounts. A similar palladium cycle, though with different reactants and intermediates, is observed in the Wacker process. This cycle is not limited to vinyl compounds: in the Sonogashira coupling one of the reactants is an alkyne, in the Suzuki coupling the alkene is replaced by an aryl boronic acid, and in the Stille reaction by an aryl stannane.
The cycle also extends to the other group 10 element nickel, for example in the Negishi coupling between aryl halides and organozinc compounds. Platinum forms strong bonds with carbon and does not have catalytic activity in this type of reaction. Stereoselectivity This coupling reaction is stereoselective, with a propensity for trans coupling, as the palladium halide group and the bulky organic residue move away from each other in the reaction sequence in a rotation step. The Heck reaction is applied industrially in the production of naproxen and the sunscreen component octyl methoxycinnamate. The naproxen synthesis includes a coupling between a brominated naphthalene compound and ethylene. Variations Ionic liquid Heck reaction In the presence of an ionic liquid, a Heck reaction proceeds in the absence of a phosphine ligand. In one modification, palladium acetate and the ionic liquid (bmim)PF6 are immobilized inside the cavities of reversed-phase silica gel. In this way the reaction proceeds in water and the catalyst is reusable. Heck oxyarylation In the Heck oxyarylation modification, the palladium substituent in the syn-addition intermediate is displaced by a hydroxyl group and the reaction product contains a dihydrofuran ring. Amino-Heck reaction In the amino-Heck reaction a nitrogen-to-carbon bond is formed. In one example, an oxime with a strongly electron-withdrawing group reacts intramolecularly with the end of a diene to form a pyridine compound. The catalyst is tetrakis(triphenylphosphine)palladium(0) and the base is triethylamine. See also Hiyama coupling Stille reaction Suzuki reaction Sonogashira coupling Intramolecular Heck reaction Negishi Coupling References External links The Heck reaction at organic-chemistry.org Carbon-carbon bond forming reactions Substitution reactions Palladium Name reactions
Heck reaction
[ "Chemistry" ]
1,305
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
1,224,067
https://en.wikipedia.org/wiki/Stille%20reaction
The Stille reaction is a chemical reaction widely used in organic synthesis. The reaction involves the coupling of two organic groups, one of which is carried as an organotin compound (also known as an organostannane). A variety of organic electrophiles provide the other coupling partner. The Stille reaction is one of many palladium-catalyzed coupling reactions. R1: allyl, alkenyl, aryl, benzyl, acyl. X: halides (Cl, Br, I), pseudohalides (OTf, OPO(OR)2), OAc. The R1 group attached to the trialkyltin is normally sp2-hybridized, including vinyl and aryl groups. These organostannanes are also stable to both air and moisture, and many of these reagents either are commercially available or can be synthesized from literature precedent. However, these tin reagents tend to be highly toxic. X is typically a halide, such as Cl, Br, or I, yet pseudohalides such as triflates, sulfonates, and phosphates can also be used. Several reviews have been published. History The first example of a palladium-catalyzed coupling of aryl halides with organotin reagents was reported by Colin Eaborn in 1976. This reaction yielded from 7% to 53% of diaryl product. This process was expanded to the coupling of acyl chlorides with alkyl-tin reagents in 1977 by Toshihiko Migita, yielding 53% to 87% ketone product. In 1977, Migita published further work on the coupling of allyl-tin reagents with both aryl (C) and acyl (D) halides. The greater ability of allyl groups to migrate to the palladium catalyst allowed the reactions to be performed at lower temperatures. Yields for aryl halides ranged from 4% to 100%, and for acyl halides from 27% to 86%. Reflecting the early contributions of Migita and Kosugi, the Stille reaction is sometimes called the Migita–Kosugi–Stille coupling. John Kenneth Stille subsequently reported the coupling of a variety of alkyl-tin reagents in 1978 with numerous aryl and acyl halides under mild reaction conditions with much better yields (76%–99%). Stille continued his work in the 1980s on the synthesis of a multitude of ketones using this broad and mild process and elucidated a mechanism for this transformation. By the mid-1980s, over 65 papers on the topic of coupling reactions involving tin had been published, continuing to explore the substrate scope of this reaction. While initial research in the field focused on the coupling of alkyl groups, most later work involved the much more synthetically useful coupling of vinyl, alkenyl, aryl, and allyl organostannanes to halides. Due to these organotin reagents' stability to air and their ease of synthesis, the Stille reaction became common in organic synthesis. Mechanism The mechanism of the Stille reaction has been extensively studied. The catalytic cycle involves an oxidative addition of a halide or pseudohalide (2) to a palladium catalyst (1), transmetalation of 3 with an organotin reagent (4), and reductive elimination of 5 to yield the coupled product (7) and the regenerated palladium catalyst (1). However, the detailed mechanism of the Stille coupling is extremely complex and can occur via numerous reaction pathways. Like other palladium-catalyzed coupling reactions, the active palladium catalyst is believed to be a 14-electron Pd(0) complex, which can be generated in a variety of ways. First, an 18- or 16-electron Pd(0) source can undergo ligand dissociation to form the active species. Second, phosphines can be added to ligandless palladium(0). Finally, as pictured, reduction of a Pd(II) source (8)
by added phosphine ligands or organotin reagents is also common. Oxidative addition Oxidative addition to the 14-electron Pd(0) complex is proposed. This process gives a 16-electron Pd(II) species. It has been suggested that anionic ligands, such as OAc, accelerate this step by the formation of [Pd(OAc)(PR3)n]−, making the palladium species more nucleophilic. In some cases, especially when an sp3-hybridized organohalide is used, an SN2-type mechanism tends to prevail, yet this is not as commonly seen in the literature. However, despite normally forming a cis intermediate after a concerted oxidative addition, this product is in rapid equilibrium with its trans isomer. There are multiple reasons why isomerization is favored here. First, a bulky ligand set is usually used in these processes, such as phosphines, and it is highly unfavorable for them to adopt a cis orientation relative to each other, resulting in isomerization to the more favorable trans product. An alternative explanation for this phenomenon, dubbed antisymbiosis or transphobia, is by invocation of the sdn model. Under this theory, palladium is a hypervalent species. Hence R1 and the ligand trans to it will compete with one palladium orbital for bonding. This 4-electron, 3-center bond is weakest when two strongly donating groups are present, which heavily compete for the palladium orbital. Relative to any ligand normally used, the C-donor R1 ligand has a much higher trans effect. This trans influence is a measure of how strongly ligands trans to each other compete for palladium's orbital. The usual ligand set, phosphines, and C-donors (R1) are both soft ligands, meaning that they form strong bonds to palladium and heavily compete with each other for bonding. Since halides or pseudohalides are significantly more electronegative, their bonding with palladium is highly polarized, with most of the electron density on the X group, making them low trans effect ligands. Hence, it is highly favorable for R1 to be trans to X, since the R1 group is then able to form a stronger bond to the palladium. Transmetalation The transmetalation of the trans intermediate from the oxidative addition step is believed to proceed via a variety of mechanisms depending on the substrates and conditions. The most common type of transmetalation for the Stille coupling involves an associative mechanism. This pathway implies that the organostannane, normally a tin atom bonded to an allyl, alkenyl, or aryl group, can coordinate to the palladium via one of these double bonds. This produces a fleeting pentavalent, 18-electron species, which can then undergo ligand detachment to form a square planar complex again. Despite the organostannane being coordinated to the palladium through the R2 group, R2 must be formally transferred to the palladium (the R2–Sn bond must be broken), and the X group must leave with the tin, completing the transmetalation. This is believed to occur through two mechanisms. First, when the organostannane initially adds to the trans metal complex, the X group can coordinate to the tin, in addition to the palladium, producing a cyclic transition state. Breakdown of this adduct results in the loss of R3Sn–X and a trivalent palladium complex with R1 and R2 present in a cis relationship.
Another commonly seen mechanism involves the same initial addition of the organostannane to the trans palladium complex as seen above; however, in this case, the X group does not coordinate to the tin, producing an open transition state. After the α-carbon relative to tin attacks the palladium, the tin complex leaves with a net positive charge. In the scheme below, note that the double bond coordinating to tin denotes R2, i.e. any alkenyl, allyl, or aryl group. Furthermore, the X group can dissociate at any time during the mechanism and bind to the Sn+ complex at the end. Density functional theory calculations predict that an open mechanism will prevail if the two ligands remain attached to the palladium and the X group leaves, while the cyclic mechanism is more probable if a ligand dissociates prior to the transmetalation. Hence, good leaving groups such as triflates in polar solvents favor the cyclic transition state, while bulky phosphine ligands will favor the open transition state. A less common pathway for transmetalation is through a dissociative or solvent-assisted mechanism. Here, a ligand from the tetravalent palladium species dissociates, and a coordinating solvent can add onto the palladium. When the solvent detaches, forming a 14-electron trivalent intermediate, the organostannane can add to the palladium, undergoing an open- or cyclic-type process as above. Reductive elimination step In order for R1–R2 to reductively eliminate, these groups must occupy mutually cis coordination sites. Any trans adducts must therefore isomerize to the cis intermediate or the coupling will be frustrated. A variety of mechanisms exist for reductive elimination, and these are usually considered to be concerted. First, the 16-electron tetravalent intermediate from the transmetalation step can undergo unassisted reductive elimination from a square planar complex. This reaction occurs in two steps: the reductive elimination is followed by coordination of the newly formed sigma bond between R1 and R2 to the metal, with ultimate dissociation yielding the coupled product. The previous process, however, is sometimes slow and can be greatly accelerated by dissociation of a ligand to yield a 14-electron T-shaped intermediate. This intermediate can then rearrange to form a Y-shaped adduct, which can undergo faster reductive elimination. Finally, an extra ligand can associate to the palladium to form an 18-electron trigonal bipyramidal structure, with R1 and R2 cis to each other in equatorial positions. The geometry of this intermediate makes it similar to the Y-shaped one above. The presence of bulky ligands can also increase the rate of elimination. Ligands such as phosphines with large bite angles cause steric repulsion between L and R1 and R2, causing the angle between L and the R groups to increase and the angle between R1 and R2 to decrease, allowing for quicker reductive elimination. Kinetics The rate at which organostannanes transmetalate with palladium catalysts varies with the organic group being transferred. Sp2-hybridized carbon groups attached to tin are the most commonly used coupling partners; sp3-hybridized carbons require harsher conditions, and terminal alkynes may be coupled via a C–H bond through the Sonogashira reaction. As the organotin compound, a trimethylstannyl or tributylstannyl compound is normally used. Although trimethylstannyl compounds show higher reactivity compared with tributylstannyl compounds and have much simpler 1H-NMR spectra, the toxicity of the former is much greater.
Optimizing which ligands are best at carrying out the reaction with high yield and turnover rate can be difficult. This is because the oxidative addition requires an electron-rich metal, favoring electron-donating ligands. However, an electron-deficient metal is more favorable for the transmetalation and reductive elimination steps, making electron-withdrawing ligands the best for those steps. Therefore, the optimal ligand set heavily depends on the individual substrates and conditions used. These can change the rate-determining step, as well as the mechanism of the transmetalation step. Normally, ligands of intermediate donicity, such as phosphines, are utilized. Rate enhancements can be seen when moderately electron-poor ligands, such as tri-2-furylphosphine or triphenylarsine, are used. Likewise, ligands of high donor number can slow down or inhibit coupling reactions. These observations imply that normally the rate-determining step for the Stille reaction is transmetalation. Additives The most common additive to the Stille reaction is stoichiometric or co-catalytic copper(I), specifically copper iodide, which can enhance rates by more than 10^3-fold. It has been theorized that in polar solvents copper transmetalates with the organostannane. The resulting organocuprate reagent could then transmetalate with the palladium catalyst. Furthermore, in ethereal solvents, the copper could also facilitate the removal of a phosphine ligand, activating the Pd center. Lithium chloride has been found to be a powerful rate accelerant in cases where the X group dissociates from palladium (i.e. the open mechanism). The chloride ion is believed either to displace the X group on the palladium, making the catalyst more active for transmetalation, or to coordinate to the Pd(0) adduct to accelerate the oxidative addition. Also, the LiCl salt enhances the polarity of the solvent, making it easier for this normally anionic ligand (–Cl, –Br, –OTf, etc.) to leave. This additive is necessary when a solvent like THF is used; however, utilization of a more polar solvent, such as NMP, can replace the need for this salt additive. However, when the coupling's transmetalation step proceeds via the cyclic mechanism, addition of lithium chloride can actually decrease the rate, since in the cyclic mechanism a neutral ligand, such as phosphine, must dissociate instead of the anionic X group. Finally, sources of fluoride ions, such as cesium fluoride, also affect the catalytic cycle. First, fluoride can increase the rates of reactions of organotriflates, possibly by the same effect as lithium chloride. Furthermore, fluoride ions can act as scavengers for tin byproducts, making them easier to remove via filtration. Competing side reactions The most common side reactivity associated with the Stille reaction is homocoupling of the stannane reagents to form an R2–R2 dimer. It is believed to proceed through two possible mechanisms. First, reaction of two equivalents of organostannane with the Pd(II) precatalyst will yield the homocoupled product after reductive elimination. Second, the Pd(0) catalyst can undergo a radical process to yield the dimer. The organostannane reagent used is traditionally tetravalent at tin, normally consisting of the sp2-hybridized group to be transferred and three "non-transferable" alkyl groups. As seen above, alkyl groups are normally the slowest at migrating onto the palladium catalyst.
It has also been found that at temperatures as low as 50 °C, aryl groups on both palladium and a coordinated phosphine can exchange. While normally not detected, this exchange can be a source of minor products in many cases. Finally, a rather rare and exotic side reaction is known as cine substitution. Here, after initial oxidative addition of an aryl halide, this Pd–Ar species can insert across a vinyltin double bond. After β-hydride elimination, migratory insertion, and protodestannylation, a 1,2-disubstituted olefin can be synthesized. Numerous other side reactions can occur, and these include E/Z isomerization, which can potentially be a problem when an alkenylstannane is utilized. The mechanism of this transformation is currently unknown. Normally, organostannanes are quite stable to hydrolysis, yet when very electron-rich aryl stannanes are used, this can become a significant side reaction. Scope Electrophile Vinyl halides are common coupling partners in the Stille reaction, and reactions of this type are found in numerous natural product total syntheses. Normally, vinyl iodides and bromides are used. Vinyl chlorides are insufficiently reactive toward oxidative addition to Pd(0). Iodides are normally preferred: they will typically react faster and under milder conditions than will bromides. This difference is demonstrated below by the selective coupling of a vinyl iodide in the presence of a vinyl bromide. Normally, the stereochemistry of the alkene is retained throughout the reaction, except under harsh reaction conditions. A variety of alkenes may be used, and these include both α- and β-halo-α,β-unsaturated ketones, esters, and sulfoxides (which normally need a copper(I) additive to proceed), and more (see example below). Vinyl triflates are also sometimes used. Some reactions require the addition of LiCl and others are slowed down by it, implying that two mechanistic pathways are present. Another class of common electrophiles are aryl and heterocyclic halides. As for the vinyl substrates, bromides and iodides are more common despite their greater expense. A multitude of aryl groups can be chosen, including rings substituted with electron-donating substituents, biaryl rings, and more. Halogen-substituted heterocycles have also been used as coupling partners, including pyridines, furans, thiophenes, thiazoles, indoles, imidazoles, purines, uracil, cytosines, pyrimidines, and more, with halogens possible at a variety of positions on each. Below is an example of the use of Stille coupling to build complexity on heterocycles of nucleosides, such as purines. Aryl triflates and sulfonates also couple to a wide variety of organostannane reagents. Triflates tend to react comparably to bromides in the Stille reaction. Acyl chlorides are also used as coupling partners and can be used with a large range of organostannanes, even alkyl-tin reagents, to produce ketones (see example below). However, it is sometimes difficult to introduce acyl chloride functional groups into large molecules with sensitive functional groups. An alternative developed to this process is the Stille-carbonylative cross-coupling reaction, which introduces the carbonyl group via carbon monoxide insertion. Allylic, benzylic, and propargylic halides can also be coupled.
While commonly employed, allylic halides proceed via an η3 transition state, allowing for coupling with the organostannane at either the α or γ position, occurring predominantly at the least substituted carbon (see example below). Alkenyl epoxides (adjacent epoxides and alkenes) can also undergo this same coupling through an η3 transition state, opening the epoxide to an alcohol. While allylic and benzylic acetates are commonly used, propargylic acetates are unreactive with organostannanes. Stannane Organostannane reagents are common. Several are commercially available. Stannane reagents can be synthesized by the reaction of a Grignard or organolithium reagent with trialkyltin chlorides. For example, vinyltributyltin is prepared by the reaction of vinylmagnesium bromide with tributyltin chloride. Hydrostannylation of alkynes or alkenes provides many derivatives. Organotin reagents are air- and moisture-stable. Some reactions can even take place in water. They can be purified by chromatography. They are tolerant of most functional groups. Some organotin compounds are highly toxic, especially trimethylstannyl derivatives. The use of vinylstannane, or alkenylstannane, reagents is widespread. As for limitations, both very bulky stannane reagents and stannanes with substitution on the α-carbon tend to react sluggishly or require optimization. For example, in the case below, the α-substituted vinylstannane only reacts with a terminal iodide due to steric hindrance. Arylstannane reagents are also common, and both electron-donating and electron-withdrawing groups actually increase the rate of the transmetalation. The main limitation of these reagents is substitution at the ortho position: substituents as small as methyl groups can decrease the rate of reaction. A wide variety of heterocycles (see Electrophile section) can also be used as coupling partners (see example with a thiazole ring below). Alkynylstannanes, the most reactive of stannanes, have also been used in Stille couplings. They are not usually needed, as terminal alkynes can couple directly to palladium catalysts through their C–H bond via Sonogashira coupling. Allylstannanes have been reported to work, yet, as with allylic halides, difficulties arise in controlling the regioselectivity of α and γ addition. Distannane and acylstannane reagents have also been used in Stille couplings. Applications The Stille reaction has been used in the synthesis of a variety of polymers. However, the most widespread use of the Stille reaction is in organic synthesis, and specifically in the synthesis of natural products. Natural product total synthesis Larry Overman's 19-step enantioselective total synthesis of quadrigemine C involves a double Stille cross-coupling reaction. The complex organostannane is coupled onto two aryl iodide groups. After a double Heck cyclization, the product is obtained. Panek's 32-step enantioselective total synthesis of the ansamycin antibiotic (+)-mycotrienol makes use of a late-stage tandem Stille-type macrocycle coupling. Here, the organostannane has two terminal tributyltin groups attached to an alkene. This organostannane "stitches" the two ends of the linear starting material into a macrocycle, adding the missing two methylene units in the process. Oxidation of the aromatic core with ceric ammonium nitrate (CAN) and deprotection with hydrofluoric acid then yield the natural product in 54% yield over the 3 steps. Stephen F.
Martin and coworkers' 21-step enantioselective total synthesis of the manzamine antitumor alkaloid ircinal A makes use of a tandem one-pot Stille/Diels–Alder reaction. An alkene group is added to a vinyl bromide, followed by an in situ Diels–Alder cycloaddition between the added alkene and the alkene in the pyrrolidine ring. Numerous other total syntheses utilize the Stille reaction, including those of oxazolomycin, lankacidin C, onamide A, calyculin A, lepicidin A, ripostatin A, and lucilactaene. The image below displays the final natural product, the organohalide (blue), the organostannane (red), and the bond being formed (green and circled). From these examples, it is clear that the Stille reaction can be used at the early stages of a synthesis (oxazolomycin and calyculin A), at the end of a convergent route (onamide A, lankacidin C, ripostatin A), or in the middle (lepicidin A and lucilactaene). The synthesis of ripostatin A features two concurrent Stille couplings followed by a ring-closing metathesis. The synthesis of lucilactaene features a middle subunit having a borane on one side and a stannane on the other, allowing for a Stille reaction followed by a subsequent Suzuki coupling. Variations In addition to performing the reaction in a variety of organic solvents, conditions have been devised which allow for a broad range of Stille couplings in aqueous solvent. In the presence of Cu(I) salts, palladium-on-carbon has been shown to be an effective catalyst. In the realm of green chemistry, a Stille reaction has been reported taking place in a low-melting and highly polar mixture of a sugar such as mannitol, a urea such as dimethylurea, and a salt such as ammonium chloride. The catalyst system is a palladium source with triphenylarsine. Stille-carbonylative cross-coupling A common alteration of the Stille coupling is the incorporation of a carbonyl group between R1 and R2, serving as an efficient method to form ketones. This process is extremely similar to the initial exploration by Migita and Stille (see History) of coupling organostannanes to acyl chlorides. However, these moieties are not always readily available and can be difficult to form, especially in the presence of sensitive functional groups. Furthermore, controlling their high reactivity can be challenging. The Stille-carbonylative cross-coupling employs the same conditions as the Stille coupling, except that an atmosphere of carbon monoxide (CO) is used. The CO can coordinate to the palladium catalyst (9) after initial oxidative addition, followed by CO insertion into the Pd–R1 bond (10), resulting in subsequent reductive elimination to the ketone (12). The transmetalation step is normally the rate-determining step. Larry Overman and coworkers make use of the Stille-carbonylative cross-coupling in their 20-step enantioselective total synthesis of strychnine. The added carbonyl is later converted to a terminal alkene via a Wittig reaction, allowing for the key tertiary nitrogen and the pentacyclic core to be formed via an aza-Cope–Mannich reaction. Giorgio Ortar et al. explored how the Stille-carbonylative cross-coupling could be used to synthesize benzophenone photophores. These were embedded into 4-benzoyl-L-phenylalanine peptides and used for their photoaffinity-labelling properties to explore various peptide–protein interactions. Louis Hegedus's 16-step racemic total synthesis of jatrophone involved a Stille-carbonylative cross-coupling as its final step to form the 11-membered macrocycle.
Instead of a halide, a vinyl triflate is used there as the coupling partner. Stille–Kelly coupling Building on the seminal 1976 publication by Eaborn, which formed arylstannanes from aryl halides and distannanes, T. Ross Kelly applied this process to the intramolecular coupling of aryl halides. This tandem stannylation/aryl halide coupling was used for the syntheses of a variety of dihydrophenanthrenes. Most of the internal rings formed are limited to 5 or 6 members; however, some cases of macrocyclization have been reported. Unlike a normal Stille coupling, chlorine does not work as the halogen, possibly due to its lower reactivity in the halogen sequence (the shorter bond length and stronger bond dissociation energy of the C–Cl bond make it more difficult to break via oxidative addition). Starting in the middle of the scheme below and going clockwise, the palladium catalyst (1) oxidatively adds to the most reactive C–X bond (13) to form 14, followed by transmetalation with the distannane (15) to yield 16 and reductive elimination to yield an arylstannane (18). The regenerated palladium catalyst (1) can oxidatively add to the second C–X bond of 18 to form 19, followed by intramolecular transmetalation to yield 20, followed by reductive elimination to yield the coupled product (22). Jie Jack Li et al. made use of the Stille–Kelly coupling in their synthesis of a variety of benzo[4,5]furopyridine ring systems. They invoke a three-step process, involving a Buchwald–Hartwig amination and another palladium-catalyzed coupling reaction, followed by an intramolecular Stille–Kelly coupling. Note that the aryl–iodide bond will oxidatively add to the palladium faster than either of the aryl–bromide bonds. See also Organotin chemistry Organostannane addition Palladium-catalyzed coupling reactions Suzuki reaction Negishi coupling Heck reaction Hiyama coupling References External links Stille reaction handout from the Myers group Stille reaction at organic-chemistry.org Carbon-carbon bond forming reactions Palladium Name reactions
Stille reaction
[ "Chemistry" ]
6,178
[ "Name reactions", "Carbon-carbon bond forming reactions", "Coupling reactions", "Organic reactions" ]
1,224,079
https://en.wikipedia.org/wiki/Suzuki%20reaction
The Suzuki reaction or Suzuki coupling is an organic reaction that uses a palladium complex catalyst to cross-couple a boronic acid to an organohalide. It was first published in 1979 by Akira Suzuki, and he shared the 2010 Nobel Prize in Chemistry with Richard F. Heck and Ei-ichi Negishi for their contribution to the discovery and development of palladium-catalyzed cross-couplings in organic synthesis. This reaction is sometimes telescoped with the related Miyaura borylation; the combination is the Suzuki–Miyaura reaction. It is widely used to synthesize polyolefins, styrenes, and substituted biphenyls. The general scheme for the Suzuki reaction is shown below, where a carbon–carbon single bond is formed by coupling a halide (R1–X) with an organoboron species (R2–BY2) using a palladium catalyst and a base. The organoboron species is usually synthesized by hydroboration or carboboration, allowing for rapid generation of molecular complexity. Several reviews have been published describing advancements and the development of the Suzuki reaction. Reaction mechanism The mechanism of the Suzuki reaction is best viewed from the perspective of the palladium catalyst. The catalytic cycle is initiated by the formation of an active Pd0 catalytic species, A. This participates in the oxidative addition of palladium to the halide reagent 1 to form the organopalladium intermediate B. Reaction (metathesis) with base gives intermediate C, which via transmetalation with the boron-ate complex D (produced by reaction of the boronic acid reagent 2 with base) forms the transient organopalladium species E. The reductive elimination step leads to the formation of the desired product 3 and restores the original palladium catalyst A, which completes the catalytic cycle. The Suzuki coupling takes place in the presence of a base, and for a long time the role of the base was not fully understood. The base was first believed to form a trialkyl borate (R3B–OR) in the case of a reaction of a trialkylborane (BR3) with an alkoxide (−OR); this species could be considered more nucleophilic and therefore more reactive towards the palladium complex present in the transmetalation step. Duc and coworkers investigated the role of the base in the reaction mechanism for the Suzuki coupling, and they found that the base has three roles: formation of the palladium complex [ArPd(OR)L2], formation of the trialkyl borate, and acceleration of the reductive elimination step by reaction of the alkoxide with the palladium complex. Oxidative addition In most cases the oxidative addition is the rate-determining step of the catalytic cycle. During this step, the palladium catalyst is oxidized from palladium(0) to palladium(II). The catalytically active palladium species A is coupled with the aryl halide substrate 1 to yield an organopalladium complex B. As seen in the diagram below, the oxidative addition step breaks the carbon–halogen bond, leaving the palladium bound to both the halogen (X) and the R1 group. Oxidative addition proceeds with retention of stereochemistry with vinyl halides, while giving inversion of stereochemistry with allylic and benzylic halides. The oxidative addition initially forms the cis-palladium complex, which rapidly isomerizes to the trans complex. The Suzuki coupling occurs with retention of configuration on the double bonds for both the organoboron reagent and the halide.
However, the configuration of that double bond, cis or trans, is determined by the cis-to-trans isomerization of the palladium complex in the oxidative addition step, where the trans palladium complex is the predominant form. When the organoboron is attached to a double bond and is coupled to an alkenyl halide, the product is a diene, as shown below. Transmetalation Transmetalation is an organometallic reaction where ligands are transferred from one species to another. In the case of the Suzuki coupling, the ligands are transferred from the organoboron species D to the palladium(II) complex C, where the base that was added in the prior step is exchanged with the R2 substituent on the organoboron species to give the new palladium(II) complex E. The exact mechanism of transmetalation for the Suzuki coupling remains to be discovered. The organoboron compounds do not undergo transmetalation in the absence of base, and it is therefore widely believed that the role of the base is to activate the organoboron compound as well as to facilitate the formation of the R1–PdII–OtBu intermediate (C) from the oxidative addition product R1–PdII–X (B). Reductive elimination The final step is the reductive elimination step, where the palladium(II) complex (E) eliminates the product (3) and regenerates the palladium(0) catalyst (A). Using deuterium labelling, Ridgway et al. have shown that the reductive elimination proceeds with retention of stereochemistry. The ligand plays an important role in the Suzuki reaction. Typically, a phosphine ligand is used. The phosphine ligand increases the electron density at the metal center of the complex and therefore helps in the oxidative addition step. In addition, the bulkiness of the phosphine ligand's substituents helps in the reductive elimination step. However, N-heterocyclic carbene ligands have recently been used in this cross-coupling, due to the instability of phosphine ligands under Suzuki reaction conditions. N-Heterocyclic carbenes are more electron-rich and bulkier than phosphine ligands. Therefore, both the steric and electronic factors of the N-heterocyclic carbene ligand help to stabilize the active Pd(0) catalyst. Advantages The advantages of Suzuki coupling over other similar reactions include the availability of common boronic acids, mild reaction conditions, and its less toxic nature. Boronic acids are less toxic and safer for the environment than organotin and organozinc compounds. It is easy to remove the inorganic by-products from the reaction mixture. Further, this reaction is preferable because it uses relatively cheap and easily prepared reagents. Being able to use water as a solvent makes this reaction more economical, eco-friendly, and practical to use with a variety of water-soluble reagents. A wide variety of reagents can be used for the Suzuki coupling, e.g., aryl or vinyl boronic acids and aryl or vinyl halides. Work has also extended the scope of the reaction to incorporate alkyl bromides. In addition to many different types of halides being possible for the Suzuki coupling reaction, the reaction also works with pseudohalides such as triflates (OTf) as replacements for halides. The relative reactivity of the halide or pseudohalide coupling partner is: R1–I > R1–OTf > R1–Br >> R1–Cl. Boronic esters and organotrifluoroborate salts may be used instead of boronic acids. The catalyst can also be a palladium nanomaterial-based catalyst.
With a novel organophosphine ligand (SPhos), catalyst loadings down to 0.001 mol% have been reported. These advances and the overall flexibility of the process have made the Suzuki coupling widely accepted for chemical synthesis. Applications Industrial applications The Suzuki coupling reaction is scalable and cost-effective for use in the synthesis of intermediates for pharmaceuticals or fine chemicals. The Suzuki reaction was once limited by high levels of catalyst and the limited availability of boronic acids. Replacements for halides were also found, increasing the number of possible coupling partners. Scaled-up reactions have been carried out in the synthesis of a number of important biological compounds, such as CI-1034, which used triflate and boronic acid coupling partners and was run on an 80-kilogram scale in 95% yield. Another example is the coupling of 3-pyridylborane and 1-bromo-3-(methylsulfonyl)benzene, which formed an intermediate that was used in the synthesis of a potential central nervous system agent. The coupling reaction to form the intermediate produced 278 kilograms in 92.5% yield. Significant efforts have been put into the development of heterogeneous catalysts for the Suzuki C–C coupling reaction, motivated by the performance gains in the industrial process (eliminating the need to separate the catalyst from the substrate), and recently a Pd single-atom heterogeneous catalyst has been shown to outperform the industry-default homogeneous Pd(PPh3)4 catalyst. Synthetic applications The Suzuki coupling has been frequently used in syntheses of complex compounds. For example, the Suzuki coupling has been used on a citronellal derivative in the synthesis of caparratriene, a natural product that is highly active against leukemia. Variations Metal catalyst Various catalytic uses of metals other than palladium (especially nickel) have been developed. The first nickel-catalyzed cross-coupling reaction was reported by Percec and co-workers in 1995 using aryl mesylates and boronic acids. Even though a higher amount of nickel catalyst was needed for the reaction, around 5 mol%, nickel is not as expensive or as precious a metal as palladium. The nickel-catalyzed Suzuki coupling reaction also accommodated a number of compounds that did not react, or reacted less well, under the palladium-catalyzed system. The use of nickel catalysts has allowed for electrophiles that proved challenging for the original Suzuki coupling using palladium, including substrates such as phenols, aryl ethers, esters, phosphates, and fluorides. Investigation into the nickel-catalyzed cross-coupling continued after these first examples were shown, and both the scope of the reaction and the research interest grew. Miyaura and Inada reported in 2000 that a cheaper nickel catalyst could be utilized for the cross-coupling, using triphenylphosphine (PPh3) instead of the more expensive ligands previously used. However, the nickel-catalyzed cross-coupling still required high catalyst loadings (3-10%), required excess ligand (1-5 equivalents), and remained sensitive to air and moisture. Advancements by Han and co-workers have tried to address that problem by developing a method using low amounts of nickel catalyst (<1 mol%) and no additional equivalents of ligand. It was also reported by Wu and co-workers in 2011 that a highly active nickel catalyst for the cross-coupling of aryl chlorides could be used, requiring only 0.01-0.1 mol% of nickel catalyst.
They also showed that the catalyst could be recycled up to six times with virtually no loss in catalytic activity. The catalyst was recyclable because it was a phosphine nickel nanoparticle catalyst (G3DenP-Ni) made from dendrimers. Both the palladium- and the nickel-catalyzed Suzuki coupling thus come with their own advantages and disadvantages. Apart from Pd- and Ni-based catalyst systems, cheap and non-toxic metals such as iron and copper have been used in the Suzuki coupling reaction. The Bedford and Nakamura research groups have worked extensively on developing the methodology of the iron-catalyzed Suzuki coupling. Ruthenium has also been used as a metal source in the Suzuki coupling reaction. Amide coupling Nickel catalysis can construct C-C bonds from amides. Despite the inherently inert nature of amides as synthons, the following methodology can be used to prepare C-C bonds. The coupling procedure is mild and tolerant of a myriad of functional groups, including amines, ketones, heterocycles, and groups with acidic protons. This technique can also be used to prepare bioactive molecules and to unite heterocycles in controlled ways through judicious sequential cross-couplings. A general review of the reaction scheme is given below. The synthesis of a tubulin-binding compound (an antiproliferative agent) was carried out on a gram scale using a trimethoxybenzamide and an indolyl pinacolatoboron coupling partner. Organoboranes Aryl boronic acids are comparatively cheap relative to other organoboranes, and a wide variety of aryl boronic acids are commercially available. Hence, they have been widely used as the organoborane partner in the Suzuki reaction. Aryltrifluoroborate salts are another class of organoboranes that are frequently used, because they are less prone to protodeboronation than aryl boronic acids. They are easy to synthesize and can be easily purified. Aryltrifluoroborate salts can be formed from boronic acids by treatment with potassium hydrogen fluoride, and can then be used in the Suzuki coupling reaction. Solvent variations The Suzuki coupling reaction differs from other coupling reactions in that it can be run in a biphasic organic-water system, in water alone, or without solvent. This increased the scope of coupling reactions, as a variety of water-soluble bases, catalyst systems, and reagents could be used without concern over their solubility in organic solvent. The use of water as a solvent system is also attractive because of its economic and safety advantages. Frequently used solvent systems for the Suzuki coupling include toluene, THF, dioxane, and DMF. The most frequently used bases are K2CO3, KOtBu, Cs2CO3, K3PO4, NaOH, and NEt3. See also Chan-Lam coupling Heck reaction Hiyama coupling Kumada coupling Negishi coupling Petasis reaction Sonogashira coupling Stille reaction List of organic reactions References External links Suzuki coupling A Bit of Boron, a Pinch of Palladium: One-Stop Shop for the Suzuki Reaction Carbon-carbon bond forming reactions Palladium Substitution reactions Name reactions
Suzuki reaction
[ "Chemistry" ]
3,003
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
1,225,188
https://en.wikipedia.org/wiki/Topoisomer
Topoisomers or topological isomers are molecules with the same chemical formula and stereochemical bond connectivities but different topologies. Examples of molecules for which topoisomers exist include DNA, which can form knots and catenanes. Each topoisomer of a given DNA molecule possesses a different linking number. DNA topoisomers can be interconverted by enzymes called topoisomerases. Using a topoisomerase together with an intercalator, topoisomers with different linking numbers may be separated on an agarose gel via gel electrophoresis. See also Mechanically-interlocked molecular architectures Catenane Rotaxanes Molecular knot Molecular Borromean rings References Theresa Chang, "New Molecular Topologies Beyond Catenanes and Rotaxanes" (essay), American Chemical Society, 2000. Stereochemistry Molecular topology
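Because topoisomers of a circular DNA differ only in linking number, the bookkeeping is easy to illustrate in code. The sketch below uses White's theorem, Lk = Tw + Wr (linking number = twist + writhe); the plasmid size, helical repeat, and writhe values are illustrative assumptions, not data from the article.

```python
# Sketch: comparing DNA topoisomers by linking number via White's theorem.
def linking_number(twist: float, writhe: float) -> float:
    """White's theorem: Lk = Tw + Wr; Lk is a topological invariant."""
    return twist + writhe

# A hypothetical 4,200 bp plasmid, ~10.5 bp per helical turn when relaxed.
lk_relaxed = linking_number(twist=4200 / 10.5, writhe=0.0)        # Lk0 = 400
lk_supercoiled = linking_number(twist=4200 / 10.5, writhe=-24.0)  # negatively supercoiled

delta_lk = lk_supercoiled - lk_relaxed
print(f"Lk0 = {lk_relaxed:.0f}, Lk = {lk_supercoiled:.0f}, dLk = {delta_lk:.0f}")
# Topoisomerases change Lk in integer steps; species with different Lk are
# the topoisomers that separate into distinct bands on an agarose gel.
```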
Topoisomer
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
180
[ "Materials science stubs", "Stereochemistry", "Molecular topology", "Space", "Stereochemistry stubs", "nan", "Nanotechnology stubs", "Topology", "Spacetime", "Nanotechnology" ]
1,225,755
https://en.wikipedia.org/wiki/Limb%20darkening
Limb darkening is an optical effect seen in stars (including the Sun) and planets, where the central part of the disk appears brighter than the edge, or limb. Its understanding offered early solar astronomers an opportunity to construct models with such gradients. This encouraged the development of the theory of radiative transfer. Basic theory Optical depth, a measure of the opacity of an object or part of an object, combines with effective temperature gradients inside the star to produce limb darkening. The light seen is approximately the integral of all emission along the line of sight modulated by the optical depth to the viewer (i.e. 1/e times the emission at 1 optical depth, 1/e^2 times the emission at 2 optical depths, etc.). Near the center of the star, optical depth is effectively infinite, causing approximately constant brightness. However, the effective optical depth decreases with increasing radius due to lower gas density and a shorter line of sight distance through the star, producing a gradual dimming, until it becomes zero at the apparent edge of the star. The effective temperature of the photosphere also decreases with increasing distance from the center of the star. The radiation emitted from a gas is approximately black-body radiation, the intensity of which is proportional to the fourth power of the temperature. Therefore, even in line of sight directions where the optical depth is effectively infinite, the emitted energy comes from cooler parts of the photosphere, resulting in less total energy reaching the viewer. The temperature in the atmosphere of a star does not always decrease with increasing height. For certain spectral lines, the optical depth is greatest in regions of increasing temperature. In this scenario, the phenomenon of "limb brightening" is seen instead. In the Sun, the existence of a temperature minimum region means that limb brightening should start to dominate at far-infrared or radio wavelengths. Above the lower atmosphere, and well above the temperature-minimum region, the Sun is surrounded by the million-kelvin solar corona. For most wavelengths this region is optically thin, i.e. has small optical depth, and must, therefore, be limb-brightened if it is spherically symmetric. Calculation of limb darkening In the figure shown here, as long as the observer at point P is outside the stellar atmosphere, the intensity seen in the direction θ will be a function only of the angle of incidence ψ. This is most conveniently approximated as a polynomial in cos ψ: I(ψ)/I(0) = Σ_{k=0}^{N} a_k cos^k ψ, where I(ψ) is the intensity seen at P along a line of sight forming angle ψ with respect to the stellar radius, and I(0) is the central intensity. In order that the ratio be unity for ψ = 0, we must have Σ_{k=0}^{N} a_k = 1. For example, for a Lambertian radiator (no limb darkening) we will have all a_k = 0 except a_0 = 1. As another example, for the Sun at 550 nm, the limb darkening is well expressed by N = 2 and a_0 = 0.30, a_1 = 0.93, a_2 = -0.23. The equation for limb darkening is sometimes more conveniently written as I(ψ)/I(0) = 1 + Σ_{k=1}^{N} A_k (1 - cos ψ)^k, which now has N independent coefficients rather than N + 1 coefficients that must sum to unity. The A_k constants can be related to the a_k constants. For N = 2, A_1 = -(a_1 + 2a_2) and A_2 = a_2. For the Sun at 550 nm, we then have A_1 = -0.47 and A_2 = -0.23. This model gives an intensity at the edge of the Sun's disk of only 30% of the intensity at the center of the disk. We can convert these formulas to functions of θ by using the substitution cos ψ = √(1 - sin²θ/sin²Ω), where Ω is the angle from the observer to the limb of the star. For small Ω (a distant observer) we have cos ψ ≈ √(1 - (θ/Ω)²). We see that the derivative of cos ψ is infinite at the edge. 
The above approximation can be used to derive an analytic expression for the ratio of the mean intensity to the central intensity. The mean intensity Im is the integral of the intensity over the disk of the star divided by the solid angle subtended by the disk: Im = ∫ I(ψ) dω / ∫ dω, where dω = sin θ dθ dφ is a solid angle element, and the integrals are over the disk: 0 ≤ φ ≤ 2π and 0 ≤ θ ≤ Ω. We may rewrite this as Im = (∫_0^Ω I(ψ) sin θ dθ) / (1 - cos Ω). Although this equation can be solved analytically, it is rather cumbersome. However, for an observer at infinite distance from the star, sin θ dθ can be replaced by sin²Ω sin ψ cos ψ dψ, so we have Im = 2 ∫_0^{π/2} I(ψ) sin ψ cos ψ dψ, which gives Im/I(0) = Σ_{k=0}^{N} 2a_k/(k + 2). For the Sun at 550 nm, this says that the average intensity is 80.5% of the intensity at the center. References Stellar phenomena Solar phenomena
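The quoted solar figures (30% edge intensity, 80.5% mean intensity) follow directly from the quadratic law and can be checked numerically. A minimal sketch, assuming NumPy and the 550 nm coefficients given above:

```python
# Numerical check of the solar limb-darkening numbers quoted above (550 nm).
# Assumes I(psi)/I(0) = a0 + a1*cos(psi) + a2*cos(psi)^2 with the text's
# coefficients; np.trapezoid is np.trapz on NumPy versions before 2.0.
import numpy as np

a0, a1, a2 = 0.30, 0.93, -0.23

def intensity_ratio(mu):
    """I(psi)/I(0) as a function of mu = cos(psi)."""
    return a0 + a1 * mu + a2 * mu**2

# Edge of the disk corresponds to mu -> 0:
print(f"edge/center intensity: {intensity_ratio(0.0):.2f}")   # 0.30

# Distant observer: <I>/I(0) = 2 * integral_0^1 [I(mu)/I(0)] * mu dmu
mu = np.linspace(0.0, 1.0, 10_001)
mean_ratio = 2.0 * np.trapezoid(intensity_ratio(mu) * mu, mu)
print(f"mean/center intensity: {mean_ratio:.3f}")             # ~0.805
```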
Limb darkening
[ "Physics" ]
837
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
1,226,666
https://en.wikipedia.org/wiki/Adams%20operation
In mathematics, an Adams operation, denoted ψ^k for natural numbers k, is a cohomology operation in topological K-theory, or any allied operation in algebraic K-theory or other types of algebraic construction, defined on a pattern introduced by Frank Adams. The basic idea is to implement some fundamental identities in symmetric function theory, at the level of vector bundles or other representing objects in more abstract theories. Adams operations can be defined more generally in any λ-ring. Adams operations in K-theory Adams operations ψ^k on K-theory (algebraic or topological) are characterized by the following properties. ψ^k are ring homomorphisms. ψ^k(l) = l^k if l is the class of a line bundle. ψ^k are functorial. The fundamental idea is that for a vector bundle V on a topological space X, there is an analogy between Adams operators and exterior powers, in which ψ^k(V) is to Λ^k(V) as the power sum Σ α^k is to the k-th elementary symmetric function σ_k of the roots α of a polynomial P(t). (Cf. Newton's identities.) Here Λ^k denotes the k-th exterior power. From classical algebra it is known that the power sums are certain integral polynomials Q_k in the σ_k. The idea is to apply the same polynomials to the Λ^k(V), taking the place of σ_k. This calculation can be defined in a K-group, in which vector bundles may be formally combined by addition, subtraction and multiplication (tensor product). The polynomials here are called Newton polynomials (not, however, the Newton polynomials of interpolation theory). Justification of the expected properties comes from the line bundle case, where V is a Whitney sum of line bundles. In this special case the result of any Adams operation is naturally a vector bundle, not a linear combination of ones in K-theory. Treating the line bundle direct factors formally as roots is something rather standard in algebraic topology (cf. the Leray–Hirsch theorem). In general a mechanism for reducing to that case comes from the splitting principle for vector bundles. Adams operations in group representation theory The Adams operation has a simple expression in group representation theory. Let G be a group and ρ a representation of G with character χ. The representation ψ^k(ρ) has character χ_{ψ^k(ρ)}(g) = χ(g^k). References Algebraic topology Symmetric functions
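The Newton polynomials Q_k mentioned above can be generated mechanically from Newton's identities. A minimal sketch using SymPy (an assumption of this example); substituting the exterior powers Λ^i(V) for the symbols e_i in the output gives ψ^k(V) in the K-group:

```python
# Sketch: the Newton polynomials defining the Adams operations, computed
# from Newton's identities p_k = e1*p_{k-1} - e2*p_{k-2} + ... + (-1)^{k-1}*k*ek.
import sympy as sp

def adams_newton_polynomial(k: int):
    """Power sum p_k written as a polynomial in e_1, ..., e_k."""
    e = sp.symbols(f"e1:{k + 1}")   # e[i-1] stands for the symmetric function e_i
    p = [None] * (k + 1)
    for m in range(1, k + 1):
        pm = (-1) ** (m - 1) * m * e[m - 1]
        for i in range(1, m):       # recursive part of Newton's identities
            pm += (-1) ** (i - 1) * e[i - 1] * p[m - i]
        p[m] = sp.expand(pm)
    return p[k]

for k in (1, 2, 3):
    print(f"psi^{k}:", adams_newton_polynomial(k))
# psi^1: e1
# psi^2: e1**2 - 2*e2
# psi^3: e1**3 - 3*e1*e2 + 3*e3
```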
Adams operation
[ "Physics", "Mathematics" ]
484
[ "Symmetry", "Algebraic topology", "Symmetric functions", "Topology", "Fields of abstract algebra", "Algebra" ]
1,226,719
https://en.wikipedia.org/wiki/Cohomology%20operation
In mathematics, the cohomology operation concept became central to algebraic topology, particularly homotopy theory, from the 1950s onwards, in the shape of the simple definition that if F is a functor defining a cohomology theory, then a cohomology operation should be a natural transformation from F to itself. Throughout there have been two basic points: the operations can be studied by combinatorial means; and the effect of the operations is to yield an interesting bicommutant theory. The origin of these studies was the work of Pontryagin, Postnikov, and Norman Steenrod, who first defined the Pontryagin square, Postnikov square, and Steenrod square operations for singular cohomology, in the case of mod 2 coefficients. The combinatorial aspect there arises as a formulation of the failure of a natural diagonal map, at cochain level. The general theory of the Steenrod algebra of operations has been brought into close relation with that of the symmetric group. In the Adams spectral sequence the bicommutant aspect is implicit in the use of Ext functors, the derived functors of Hom-functors; if there is a bicommutant aspect, taken over the Steenrod algebra acting, it is only at a derived level. The convergence is to groups in stable homotopy theory, about which information is hard to come by. This connection established the deep interest of the cohomology operations for homotopy theory, and has been a research topic ever since. An extraordinary cohomology theory has its own cohomology operations, and these may exhibit a richer set of constraints. Formal definition A cohomology operation θ of type (n, q; π, G) is a natural transformation of functors θ : H^n(−; π) → H^q(−; G) defined on CW complexes. Relation to Eilenberg–MacLane spaces Cohomology of CW complexes is representable by an Eilenberg–MacLane space, so by the Yoneda lemma a cohomology operation of type (n, q; π, G) is given by a homotopy class of maps K(π, n) → K(G, q). Using representability once again, the cohomology operation is given by an element of H^q(K(π, n); G). Symbolically, letting [A, B] denote the set of homotopy classes of maps from A to B, the operations of type (n, q; π, G) are identified with [K(π, n), K(G, q)] ≅ H^q(K(π, n); G). See also Secondary cohomology operation References Algebraic topology
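As a concrete illustration of the definition, the cup square is a standard example of a cohomology operation; the following LaTeX fragment (added here for illustration, not taken from the article above) states it and checks naturality:

```latex
% Illustrative example: for any commutative ring R, the cup square is a
% cohomology operation of type (n, 2n; R, R), natural because f^* is a
% ring homomorphism.
\[
  \theta_X \colon H^{n}(X; R) \longrightarrow H^{2n}(X; R),
  \qquad \theta_X(x) = x \smile x .
\]
% Naturality check: for f : X \to Y and y \in H^{n}(Y; R),
%   f^{*}(y \smile y) = f^{*}(y) \smile f^{*}(y),
% i.e. \theta_X \circ f^{*} = f^{*} \circ \theta_Y.
% For R = \mathbb{Z}/2 this operation is the top Steenrod square Sq^{n}.
```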
Cohomology operation
[ "Mathematics" ]
457
[ "Fields of abstract algebra", "Topology", "Algebraic topology" ]
20,696,848
https://en.wikipedia.org/wiki/Nitrate%20radical
Nitrogen trioxide or nitrate radical is an oxide of nitrogen with formula NO3, consisting of three oxygen atoms covalently bound to a nitrogen atom. This highly unstable blue compound has not been isolated in pure form, but can be generated and observed as a short-lived component of gas, liquid, or solid systems. Like nitrogen dioxide NO2, it is a radical (a molecule with an unpaired valence electron), which makes it paramagnetic. It is the uncharged counterpart of the nitrate anion NO3− and an isomer of the peroxynitrite radical OONO. Nitrogen trioxide is an important intermediate in reactions between atmospheric components, including the destruction of ozone. History The existence of the NO3 radical was postulated in 1881-1882 by Hautefeuille and Chappuis to explain the absorption spectrum of air subjected to a silent electrical discharge. Structure and properties The neutral NO3 molecule appears to be planar, with three-fold rotational symmetry (symmetry group D3h); or possibly a resonance between three Y-shaped molecules. The radical does not react directly with water, and is relatively unreactive towards closed-shell molecules, as opposed to isolated atoms and other radicals. It is decomposed by light of certain wavelengths into nitric oxide NO and molecular oxygen O2. The absorption spectrum of NO3 has a broad band for light with wavelengths from about 500 to 680 nm, with three salient peaks in the visible at 590, 662, and 623 nm. Absorption in the range 640-680 nm does not lead to dissociation but to fluorescence: specifically, from about 605 to 800 nm following excitation at 604.4 nm, and from about 662 to 800 nm following excitation at 661.8 nm. In water solution, another absorption band appears at about 330 nm (ultraviolet). An excited state can be achieved by photons of wavelength less than 595 nm. Preparation Nitrogen trioxide can be prepared in the gas phase by mixing nitrogen dioxide and ozone: NO2 + O3 → NO3 + O2. This reaction can also be performed in the solid phase or in water solutions, by irradiating frozen gas mixtures, by flash photolysis and radiolysis of nitrate salts and nitric acid, and by several other methods. Nitrogen trioxide is also a product of the photolysis of dinitrogen pentoxide N2O5, chlorine nitrate ClONO2, and peroxynitric acid HO2NO2 and its salts. N2O5 → NO2 + NO3 2 ClONO2 → Cl2 + 2 NO3 References Nitrogen oxides Free radicals
Nitrate radical
[ "Chemistry", "Biology" ]
510
[ "Senescence", "Free radicals", "Biomolecules" ]
20,697,507
https://en.wikipedia.org/wiki/Dunford%E2%80%93Schwartz%20theorem
In mathematics, particularly functional analysis, the Dunford–Schwartz theorem, named after Nelson Dunford and Jacob T. Schwartz, states that the averages of powers of certain norm-bounded operators on L1 converge in a suitable sense. Statement Let T be a linear operator from L1 to L1 with ||T||_1 ≤ 1 and ||T||_∞ ≤ 1. Then the limit lim_{n→∞} (1/n) Σ_{k=0}^{n−1} T^k f exists almost everywhere for all f in L1. The statement is no longer true when the boundedness condition is relaxed to even ||T||_∞ ≤ 1 + ε. Notes Theorems in functional analysis
Dunford–Schwartz theorem
[ "Mathematics" ]
71
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in functional analysis", "Mathematical analysis stubs" ]
20,698,214
https://en.wikipedia.org/wiki/Fendiline
Fendiline is a nonselective calcium channel blocker. References Calcium channel blockers Amines
Fendiline
[ "Chemistry" ]
23
[ "Amines", "Bases (chemistry)", "Functional groups" ]
20,700,323
https://en.wikipedia.org/wiki/Wien%20filter
A Wien filter, also known as a velocity selector, is a device consisting of perpendicular electric and magnetic fields that can be used as a velocity filter for charged particles, for example in electron microscopes and spectrometers. It is used in accelerator mass spectrometry to select particles based on their speed. The device is composed of orthogonal electric and magnetic fields, such that particles with the correct speed will be unaffected while other particles will be deflected. It is named for Wilhelm Wien, who developed it in 1898 for the study of anode rays. It can be configured as a charged particle energy analyzer, monochromator, or mass spectrometer. Theory Any charged particle in an electric field will feel a force proportional to the charge and field strength, such that F = qE, where F is force, q is charge, and E is electric field strength. Similarly, any particle moving in a magnetic field will feel a force proportional to the velocity and charge of the particle. The force felt by any particle is then equal to F = qv × B, where F is force, q is the charge on the particle, v is the velocity of the particle, B is the strength of the magnetic field, and × denotes the cross product. In the case of a velocity selector, the magnetic field is always at 90 degrees to the velocity, and the magnitude of the force simplifies to F = qvB, in the direction described by the cross product. Setting the two forces to be equal in magnitude and opposite in direction, it can be shown that v = E/B. This means that for any combination of electric (E) and magnetic (B) field strengths, only charged particles with velocity v = E/B pass through undeflected. See also Neutron-velocity selector References Mass spectrometry Electron microscopy
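The selection condition v = E/B is easy to compute. A minimal sketch in Python; the field strengths and charge are illustrative assumptions:

```python
# Sketch of a Wien filter as a velocity selector: the undeflected speed is
# v = E/B, independent of the particle's charge and mass. Values assumed.
E = 1.0e5   # electric field strength, V/m
B = 0.05    # magnetic flux density, T

v_selected = E / B  # qE = qvB  =>  v = E/B
print(f"selected speed: {v_selected:.3e} m/s")  # 2.000e+06 m/s

# Net transverse force on a charge q moving at speed v (crossed fields,
# v perpendicular to B): F = q(E - v*B); it vanishes only at v = E/B.
q = 1.602e-19  # C, e.g. a proton
for v in (1.5e6, 2.0e6, 2.5e6):
    F = q * (E - v * B)
    print(f"v = {v:.1e} m/s -> net transverse force {F:+.2e} N")
```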
Wien filter
[ "Physics", "Chemistry", "Materials_science" ]
330
[ "Electron", "Materials science stubs", "Electron microscopy", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Microscopy", "Electromagnetism stubs", "Matter" ]
20,700,895
https://en.wikipedia.org/wiki/Resistance%20distance%20%28mechanics%29
Mechanics
Resistance distance (mechanics)
[ "Physics", "Engineering" ]
3
[ "Mechanics", "Mechanical engineering" ]
20,709,007
https://en.wikipedia.org/wiki/Solid%20surface%20material
Solid surface material, also known as solid surface composite, is a man-made material usually composed of a combination of alumina trihydrate (ATH), acrylic, epoxy or polyester resins and pigments. It is most frequently used for seamless countertop installations. A solid surface material was first introduced by DuPont in 1967 under the name of Corian. Since the expiration of their patent other manufacturers have entered the market with their own branded products. These include Hi-Macs by LX Hausys, Hanex Solid Surface by Hyundai L&C, Staron by Lotte Chemical and Velstone. History and characteristics DuPont created and marketed the first solid surface material in 1967, trademarked as Corian. The product was invented by DuPont biochemist Don Slocum that year, and was patented in 1968. Corian consists of a blend of alumina trihydrate, acrylic resin and various pigments. Originally marketed as a countertop material for both residential and commercial applications, it has since been used in many other applications including furniture. Other companies introduced competing products of greater or lesser similarity. Competitors included Gibraltar, Fountainhead, Avonite, Hanex, HiMacs, Staron and Surrell. Some of these brands used different resins such as polyester which have different performance characteristics and some are no longer on the market. Solid surface materials are usually approximately 70% aluminum trihydrate. Solid surface materials are available in a wide variety of colors and patterns. Some of the patterns emulate granite and marble, while other patterns are original. The most common thickness is 1/2" (13 mm) although other thicknesses are available for other applications, such as tub and shower surrounds. These products are non-porous, sanitary, moderately heat resistant and repairable. Versions of solid surface materials based on acrylic resins can be thermoformed into components of various curved shapes. Fabrication and installation Standards for properly working with solid surface materials are well established within the construction industry. These include procedures for creating inconspicuous seams, built-up decorative edges and various styles of backsplashes, and for installing sinks, including integral solid surface sinks. The use of power tools with solid surface materials generates airborne dust, but any effect on health from exposure is poorly understood; one death from pulmonary fibrosis was reported in 2014 in association with long-term exposure. See also Engineered stone Epoxy granite Paper composite panels References External links Materials Kitchen countertops
Solid surface material
[ "Physics" ]
522
[ "Materials", "Matter" ]
15,580,787
https://en.wikipedia.org/wiki/High-g%20training
High-g training is done by aviators and astronauts who are subject to high levels of acceleration ('g'). It is designed to prevent a g-induced loss of consciousness (g-LOC), a situation when the action of g-forces moves the blood away from the brain to the extent that consciousness is lost. Incidents of acceleration-induced loss of consciousness have caused fatal accidents in aircraft capable of sustaining high-g for considerable periods. The value of training has been well established during the decades since the 1970s and has been the subject of much research and literature, and training has contributed to extending pilots' g tolerance in both magnitude and duration. Training includes centrifuge, Anti-g Straining Maneuvers (AGSM), and acceleration physiology. Overview As g-forces increase, visual effects include loss of colour vision ("greyout"), followed by tunnel vision (where peripheral vision is lost, retaining only the centre vision). If g-forces increase further, complete loss of vision will occur, while consciousness remains. These effects are due to a reduction of blood flow to the eyes before blood flow to the brain is lost, because the extra pressure within the eye (intraocular pressure) counters the blood pressure. The reverse effect is experienced in advanced aerobatic maneuvers under negative g-forces, where excess blood moves towards the brain and eyes ("redout"). The human body has different tolerances for g-forces depending on the acceleration direction. Humans can withstand a positive acceleration forward at higher g-forces than they can withstand a positive acceleration upwards. This is because when the body accelerates up at such high rates the blood rushes from the brain which causes loss of vision. A further increase in g-forces will cause g-LOC where consciousness is lost. This is doubly dangerous because, on recovery as g is reduced, a period of several seconds of disorientation occurs, during which the aircraft can dive into the ground. Dreams are reported to follow g-LOC which are brief and vivid. The g thresholds at which these effects occur depend on the training, age and fitness of the individual. An untrained individual not used to the g-straining maneuver can black out between 4 and 6 g, particularly if this is pulled suddenly. Roller coasters typically do not expose the occupants to much more than about 3 g. A hard slap on the face may impose hundreds of g-s locally but may not produce any obvious damage; a constant 15 g-s for a minute, however, may be deadly. A trained, fit individual wearing a g suit and practicing the straining maneuver can, with some difficulty, sustain up to 9 g without loss of consciousness. The human body is considerably more able to survive g-forces that are perpendicular to the spine. This is not true in 0 g when you strafe up; that is an eyeballs-down maneuver, which is the same force as a blackout where blood rushes to the feet, and this force is parallel to the spine. In general, when the g-force pushes the body forwards (colloquially known as 'eyeballs in') a much higher tolerance is shown than when g-force is pushing the body backwards ('eyeballs out') since blood vessels in the retina appear more sensitive to that direction. G-suits A g-suit is worn by aviators and astronauts who are subject to high levels of acceleration and, hence, increasing positive g. It is designed to prevent a blackout and g-LOC, due to the blood pooling in the lower part of the body when under high-g, thus depriving the brain of blood. 
Human centrifuge training Human centrifuges are exceptionally large centrifuges that test the reactions and tolerance of pilots and astronauts to acceleration above those experienced in the Earth's gravity. In the UK High-G training is provided at the High-G Training and Test Facility, RAF Cranwell using an AMST built human centrifuge. The facility trains Royal Navy, Royal Air Force and international students. KBRwyle at Brooks City-Base in San Antonio, Texas, operates a human centrifuge. The centrifuge at Brooks is used to train USAF and USN aircrew for sustained high-g flight. The use of large centrifuges to simulate a feeling of gravity has been proposed for future long-duration space missions. Exposure to this simulated gravity would prevent or reduce the bone decalcification and muscle atrophy that affect individuals exposed to long periods of free fall. An example of this can be seen aboard the Discovery spacecraft in the film 2001: A Space Odyssey. Human-rated centrifuges are made by AMST Systemtechnik in Austria (Austria Metall SystemTechnik), Latécoère in France, Wyle Laboratories and ETC in the US. See also Bárány chair Aerotrim Flight training G-seat Index of aviation articles References Flight training Effects of gravity Acceleration
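Following the centrifuge discussion above: the g-load produced by a centrifuge comes from the centripetal relation a = ω²r, so the spin rate needed for a given load is easy to estimate. A minimal sketch in Python; the arm radius and target g-level are assumed example values, not specifications of any facility named above:

```python
# Sketch: spin rate a human centrifuge needs to produce a target g-load.
# Centripetal acceleration: a = omega^2 * r  =>  omega = sqrt(a / r).
import math

g0 = 9.80665      # standard gravity, m/s^2
radius_m = 7.5    # assumed centrifuge arm length, m
target_g = 9.0    # sustained load a trained pilot can tolerate with a g-suit

omega = math.sqrt(target_g * g0 / radius_m)  # angular rate, rad/s
rpm = omega * 60.0 / (2.0 * math.pi)
print(f"{target_g} g at r = {radius_m} m needs {rpm:.1f} rpm")  # ~32.8 rpm
```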
High-g training
[ "Physics", "Mathematics" ]
1,026
[ "Wikipedia categories named after physical quantities", "Quantity", "Physical quantities", "Acceleration" ]
15,583,411
https://en.wikipedia.org/wiki/Schanuel%27s%20lemma
In mathematics, especially in the area of algebra known as module theory, Schanuel's lemma, named after Stephen Schanuel, allows one to compare how far modules depart from being projective. It is useful in defining the Heller operator in the stable category, and in giving elementary descriptions of dimension shifting. Statement Schanuel's lemma is the following statement: Let R be a ring with identity. If 0 → K → P → M → 0 and 0 → K′ → P′ → M → 0 are short exact sequences of R-modules and P and P′ are projective, then K ⊕ P′ is isomorphic to K′ ⊕ P. Proof Define the following submodule of P ⊕ P′, where φ : P → M and φ′ : P′ → M are the given surjections: X = {(p, q) ∈ P ⊕ P′ : φ(p) = φ′(q)}. The map π : X → P, where π is defined as the projection of the first coordinate of X into P, is surjective. Since φ′ is surjective, for any p ∈ P, one may find a q ∈ P′ such that φ(p) = φ′(q). This gives (p, q) ∈ X with π(p, q) = p. Now examine the kernel of the map π: ker π = {(0, q) : q ∈ ker φ′} ≅ ker φ′ = K′. We may conclude that there is a short exact sequence 0 → K′ → X → P → 0. Since P is projective this sequence splits, so X ≅ K′ ⊕ P. Similarly, we can write another map π′ : X → P′, and the same argument as above shows that there is another short exact sequence 0 → K → X → P′ → 0, and so X ≅ K ⊕ P′. Combining the two equivalences for X gives the desired result. Long exact sequences The above argument may also be generalized to long exact sequences. Origins Stephen Schanuel discovered the argument in Irving Kaplansky's homological algebra course at the University of Chicago in Autumn of 1958. Kaplansky writes: Early in the course I formed a one-step projective resolution of a module, and remarked that if the kernel was projective in one resolution it was projective in all. I added that, although the statement was so simple and straightforward, it would be a while before we proved it. Steve Schanuel spoke up and told me and the class that it was quite easy, and thereupon sketched what has come to be known as "Schanuel's lemma." Notes Homological algebra Module theory
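For readers who prefer the diagrammatic view, the pullback construction in the proof above can be displayed as follows (a LaTeX sketch of the argument just given, with φ and φ′ the surjections of the two sequences):

```latex
% The pullback module used in the proof of Schanuel's lemma:
\[
  X \;=\; \{\, (p, q) \in P \oplus P' \;:\; \phi(p) = \phi'(q) \,\}
\]
% Its two coordinate projections sit in short exact sequences
\[
  0 \to K' \to X \xrightarrow{\;\pi\;} P \to 0, \qquad
  0 \to K \to X \xrightarrow{\;\pi'\;} P' \to 0 .
\]
% Both split because P and P' are projective, giving
%   K' \oplus P \;\cong\; X \;\cong\; K \oplus P',
% which is exactly the conclusion of the lemma.
```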
Schanuel's lemma
[ "Mathematics" ]
405
[ "Mathematical structures", "Fields of abstract algebra", "Category theory", "Module theory", "Homological algebra" ]
15,584,482
https://en.wikipedia.org/wiki/Pregeometry%20%28physics%29
In physics, a pregeometry is a hypothetical structure from which the geometry of the universe develops. Some cosmological models feature a pregeometric universe before the Big Bang. The term was championed by John Archibald Wheeler in the 1960s and 1970s as a possible route to a theory of quantum gravity. Since quantum mechanics allowed a metric to fluctuate, it was argued that the merging of gravity with quantum mechanics required a set of more fundamental rules regarding connectivity that were independent of topology and dimensionality. Where geometry could describe the properties of a known surface, the physics of a hypothetical region with predefined properties, "pregeometry" might allow one to work with deeper underlying rules of physics that were not so strongly dependent on simplified classical assumptions about the properties of space. No single proposal for pregeometry has gained wide consensus support in the physics community. Some notions related to pregeometry predate Wheeler, other notions depart considerably from his outline of pregeometry but are still associated with it. A 2006 paper provided a survey and critique of pregeometry or near-pregeometry proposals up to that time. A summary of these is given below: Discrete spacetime by Hill A proposal anticipating Wheeler's pregeometry, though assuming some geometric notions embedded in quantum mechanics and special relativity. A subgroup of Lorentz transformations with only rational coefficients is deployed. Energy and momentum variables are restricted to a certain set of rational numbers. Quantum wave functions work out to be a special case semi-periodical functions though the nature of wave functions is ambiguous since the energy-momentum space cannot be uniquely interpreted. Discrete-space structure by Dadić and Pisk Spacetime as an unlabeled graph whose topological structure entirely characterizes the graph. Spatial points are related to vertices. Operators define the creation or annihilation of lines which develop into a Fock space framework. This discrete-space structure assumes the metric of spacetime and assumes composite geometric objects so it is not a pregeometric scheme in line with Wheeler's original conception of pregeometry. Pregeometric graph by Wilson Spacetime is described by a generalized graph consisting of a very large or infinite set of vertices paired with a very large or infinite set of edges. From that graph emerge various constructions such as vertices with multiple edges, loops, and directed edges. These in turn support formulations of the metrical foundation of space-time. Number theory pregeometry by Volovich Spacetime as a non-Archimedean geometry over a field of rational numbers and a finite Galois field where rational numbers themselves undergo quantum fluctuations. Causal sets by Bombelli, Lee, Meyer and Sorkin All of spacetime at very small scales is a causal set consisting of locally finite set of elements with a partial order linked to the notion of past and future in macroscopic spacetime and causality between point-events. Derived from the causal order is the differential structure and the conformal metric of a manifold. A probability is assigned to a causal set becoming embedded in a manifold; thus there can be a transition from a discrete Planck scale fundamental unit of volume to a classical large scale continuous space. 
Random graphs by Antonsen Spacetime is described by dynamical graphs with points (associated with vertices) and links (of unit length) that are created or annihilated according to probability calculations. The parameterization of graphs in a metaspace gives rise to time. Bootstrap universe by Cahill and Klinger An iterative map composed of monads and the relations between them becomes a tree-graph of nodes and links. A definition of distance between any two monads is defined and from this and probabilistic mathematical tools emerges a three-dimensional space. Axiomatic pregeometry by Perez-Bergliaffa, Romero and Vucetich An assortment of ontological presuppositions describes spacetime a result of relations between objectively existing entities. From presuppositions emerges the topology and metric of Minkowski spacetime. Cellular networks by Requardt Space is described by a graph with densely entangled sub-clusters of nodes (with differential states) and bonds (either vanishing at 0 or directed at 1). Rules describe the evolution of the graph from a chaotic patternless pre-Big Bang condition to a stable spacetime in the present. Time emerges from a deeper external-parameter "clock-time" and the graphs lead to a natural metrical structure. Simplicial quantum gravity by Lehto, Nielsen and Ninomiya Spacetime is described as having a deeper pregeometric structure based on three dynamical variables, vertices of an abstract simplicial complex, and a real-valued field associated with every pair of vertices; the abstract simplicial complex is set to correspond with a geometric simplicial complex and then geometric simplices are stitched together into a piecewise linear space. Developed further, triangulation, link distance, a piecewise linear manifold, and a spacetime metric arise. Further, a lattice quantization is formulated resulting in a quantum gravity description of spacetime. Quantum automaton universe by Jaroszkiewicz and Eakins Event states (elementary or entangled) are provided topological relationships via tests (Hermitian operators) endowing the event states with evolution, irreversible acquisition of information, and a quantum arrow of time. Information content in various ages of the universe modifies the tests so the universe acts as an automaton, modifying its structure. Causal set theory is then worked out within this quantum automaton framework to describe a spacetime that inherits the assumptions of geometry within standard quantum mechanics. Rational-number spacetime by Horzela, Kapuścik, Kempczyński and Uzes A preliminary investigation into how all events might be mapped with rational number coordinates and how this might help to better understand a discrete spacetime framework. Further reading Some additional or related pregeometry proposals are: Akama, Keiichi. "An Attempt at Pregeometry: Gravity with Composite Metric" Requardt, Mandred; Roy, Sisir. "(Quantum) Space-Time as a Statistical Geometry of Fuzzy Lumps and the Connection with Random Metric Spaces" Sidoni, Lorenzo. "Horizon thermodynamics in pregeometry" References Misner, Thorne, and Wheeler ("MTW"), Gravitation (1971) §44.4 "Not geometry, but pregeometry as the magic building material", §44.5 "Pregeometry as the calculus of prepositions" Mathematical physics Quantum gravity
Pregeometry (physics)
[ "Physics", "Mathematics" ]
1,358
[ "Applied mathematics", "Theoretical physics", "Unsolved problems in physics", "Quantum gravity", "Geometry", "Mathematical physics", "Physics beyond the Standard Model" ]
6,150,170
https://en.wikipedia.org/wiki/Particle%20Data%20Group
The Particle Data Group (PDG) is an international collaboration of particle physicists that compiles and reanalyzes published results related to the properties of particles and fundamental interactions. It also publishes reviews of theoretical results that are phenomenologically relevant, including those in related fields such as cosmology. The PDG currently publishes the Review of Particle Physics and its pocket version, the Particle Physics Booklet, which are printed biennially as books, and updated annually via the World Wide Web. In previous years, the PDG has published the Pocket Diary for Physicists, a calendar with the dates of key international conferences and contact information of major high energy physics institutions, which is now discontinued. PDG also further maintains the standard numbering scheme for particles in event generators, in association with the event generator authors. Review of Particle Physics The Review of Particle Physics (formerly Review of Particle Properties, Data on Particles and Resonant States, and Data on Elementary Particles and Resonant States) is a voluminous, 1,200+ page reference work which summarizes particle properties and reviews the current status of elementary particle physics, general relativity and big-bang cosmology. Usually singled out for citation analysis, it is currently the most cited article in high energy physics, being cited more than 2,000 times annually in the scientific literature (). The Review is currently divided into 3 sections: Particle Physics Summary Tables—Brief tables of particles: gauge and higgs bosons, leptons, quarks, mesons, baryons, constraints for the search for hypothetical particles and violation of physical laws. Reviews, Tables and Plots—Review of fundamental concepts from mathematics and statistics, table of Clebsch-Gordan coefficients, periodic table of elements, table of electronic configuration of the elements, brief table of material properties, review of current status in the fields of Standard Model, Cosmology, and experimental method of particle physics, and with tables of fundamental physical and astronomical constants (many from CODATA and the Astronomical Almanac). Particle Listings—Comprehensive version of the Particle Physics Summary Tables, with all significant measurements fully referenced. A condensed version of the Review, with the Summary Tables, a significantly shortened Reviews, Tables and Plots, and without the Particle Listings, is available as a 300-page, pocket-sized Particle Physics Booklet. The history of Review of Particle Physics can be traced back to the 1957 article Hyperons and Heavy Mesons (Systematics and Decay) by Murray Gell-Mann and Arthur H. Rosenfeld, and the unpublished update tables for its data with the title Data for Elementary Particle Physics (University of California Radiation Laboratory Technical Report UCRL-8030) that were circulated before the actual publication of the original article. In 1963, Matts Roos independently published a compilation Data on Elementary Particles and Resonant States. On his suggestion, the two publications were merged a year later into the 1964 Data on Elementary Particles and Resonant States. The publication underwent three renamings thereafter: 1965 into Data on Particles and Resonant States, 1970 into Review of Particle Properties, and 1996 into the present form Review of Particle Physics. 
Starting with 1972, the Review no longer appear exclusively in Reviews of Modern Physics, but also in Physics Letters B, European Physical Journal C, Journal of Physics G, Physical Review D, and Chinese Physics C (depending on the year). Past editions of Review of Particle Physics See also CODATA References External links Particle Data Group official site and electronic edition of Review of Particle Physics 2018 Photo of the 2004 Review of Particle Physics First edition of the wallet card from the Particle Data Group, 1958 Particle Physics Booklet, current version Particle Physics Booklet, July 2010 Particle Physics Booklet, 2014 Particle Physics Booklet, 2018 Particle physics Physical constants
Particle Data Group
[ "Physics", "Mathematics" ]
768
[ "Physical constants", "Quantity", "Physical quantities", "Particle physics" ]
6,154,036
https://en.wikipedia.org/wiki/Immittance
Immittance is a term used within electrical engineering and acoustics, specifically bioacoustics and the inner ear, to describe the combined measure of electrical or acoustic admittance and electrical or acoustic impedance. Immittance was initially coined by H. W. Bode in 1945, and was first used to describe the electrical admittance or impedance of either a nodal or a mesh network. Bode also suggested the name "adpedence", however the current name was more widely adopted. In bioacoustics, immittance is typically used to help define the characteristics of noise reverberation within the middle ear and assist with differential diagnosis of middle-ear disease. Immittance is typically a complex number which can represent either or both the impedance and the admittance (ratio of voltage to current or vice versa in electrical circuits, or volume velocity to sound pressure or vice versa in acoustical systems) of a system. Immittance does not have an associated unit because it applies to both impedance, which is measured in ohms () or acoustic ohms, and admittance, which is commonly measured in siemens () and historically has also been measured in mhos (), the reciprocal of ohms. Notable usage Bioacoustics In audiology, tympanometry is sometimes referred to as immittance testing. Tympanometry is especially effective when both the impedance and admittance of the inner ear are accounted for. Immittance allows for the analysis of both, and therefore is crucial to multiple-component, multiple-frequency tympanometry. Clinically, few cases require the use of this technique for accurate diagnosis; but for the fewer than 20% of cases which do require it, the technique is a necessity. Multiple-component, multiple-frequency tympanometry is invaluable for the differential diagnosis of fixation of the lateral ossicular chain from fixation of the stapes, profound mixed hearing losses, clinical otosclerosis from disruption of the ossicular chain, hypermobility of the incudostapedial joint, and congenital ossicular fixation in children. Electrical engineering In electronics, an immittance Smith chart can be created by overlaying both the impedance and admittance grids, which is useful for cascading series-connected with parallel-connected electric circuits. This allows for the visualization of changes in impedance or admittance in the system caused by components of either the series or parallel circuit. External links Network Analysis and Feedback Amplifier Design (H. W. Bode, 1945) References Physical quantities Electrical parameters
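Since an immittance covers both the impedance Z and the admittance Y = 1/Z of the same port, conversion between the two forms is a single complex reciprocal. A minimal sketch for a series RC one-port; the component values and frequency are illustrative assumptions:

```python
# Sketch: immittance as the impedance/admittance pair of one port.
# Z in ohms, Y = 1/Z in siemens; example is a series RC at 1 kHz.
import math

f = 1_000.0   # Hz
R = 50.0      # ohm
C = 1.0e-6    # farad

omega = 2.0 * math.pi * f
Z = R + 1.0 / (1j * omega * C)  # series RC impedance (complex ohms)
Y = 1.0 / Z                     # admittance form of the same immittance

print(f"Z = {Z:.2f} ohm")  # impedance form
print(f"Y = {Y:.6f} S")    # admittance form
```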
Immittance
[ "Physics", "Mathematics", "Engineering" ]
538
[ "Physical phenomena", "Physical quantities", "Quantity", "Electrical engineering", "Physical properties", "Electrical parameters" ]
6,155,162
https://en.wikipedia.org/wiki/Digestate
Digestate is the material remaining after the anaerobic digestion (decomposition under low oxygen conditions) of a biodegradable feedstock. Anaerobic digestion produces two main products: digestate and biogas. Digestate is produced both by acidogenesis and methanogenesis and each has different characteristics. These characteristics stem from the original feedstock source as well as the processes themselves. Digestate feedstock sources Anaerobic digestion is a versatile process that can use many different types of feedstocks. Example of feedstocks can be from: Sewage sludges: Liquid sludge, untreated sewage sludge, composted sludge, and lime treated sludge. Animal wastes: Animal fats, animal blood, food remains, stomach contents, rumen contents, animal carcasses, and poultry, fish, and livestock manure. Energy crops: Usually corn, maize, millet, and clover. This can be whole crops used in co-digestion or as waste (stems and stalks) from harvesting of these crops. Municipal wastes: Food waste, coffee/tea filters, organic leftovers, bakery waste, and kitchen waste. Agricultural wastes: Fruits, molasses, stems, plant straw, and bagasse (residue after crushing sugarcane or sorghum stalks). Industrial wastes: Food/beverage processing waste, dairy wastes, starch/sugar industries wastes, slaughterhouse wastes, and brewery wastes. These are just some of the different sources that anaerobic digestate can come from. The chemical make-up of the digestate produced can vary depending on what feedstock is used. Sewage sludge and animal manure generally have the majority of its energy contents consumed due to the original energy source (food) being digested inside the person or animal first. This allows sewage sludge and animal manure to be good candidates for co-digestion together with other feedstocks to produce a better digestate for agricultural purposes as well as increased biogas production. Acidogenic digestate During this stage, the acidifying bacteria convert water-soluble chemical substances, including products of hydrolysis, to short-chain organic acids, such as formic, acetic, propionic, butyric, and pentanoic, alcohols, such as methanol and ethanol, aldehydes, carbon dioxide, and hydrogen. Ammonia and hydrogen sulfide are other products of acidogenesis. This bacteria operate within a pH range from 4.0 to 8.5. This process can also lower pH inside the biodigester over time causing the microbes to not to be able to function. For this reason pH must be carefully monitored. Since acidogenesis is early in the process of anaerobic digestion, most of the organic matter has not been fully degraded leaving a digestate that is fibrous and consists of structural plant matter including lignin and cellulose. Thus, it is often referred to as solid digestate. Acidogenic digestate has high moisture retention properties. The digestate may also contain minerals (primarily phosphorus) and remnants of bacteria. Methanogenic digestate Methanogenesis is the last stage of anaerobic digestion. During this phase methanogenic Archaea produce methane from the substrates generated during acetogenesis. These substrates are mainly acetate and hydrogen. Methanogenesis can also occur using another metabolism based on the cooperation of fermenting bacteria and methanogens archaea, the syntrophic methanogenic pathway. During syntrophic methanogens bacteria belonging mainly to the Clostridia class oxidize acetate into hydrogen and CO2, which are successively exploited by hydrogenotrophic Archaea for the methanogens. 
The methanogenic microbes are fairly sensitive to pH changes and prefer a range from 5.0-8.5 depending on the species. This is why in some biodigesters the chambers for the different anaerobic digestions stages will be separated for optimal biogas production. By this point most of the organic matter has broken down leaving behind the Methanogenic digestate known as a sludge (sometimes called a liquor or liquid digestate). The sludge is high in nutrients such as ammoniums and potassium. The other byproduct of this step is methane, which is often collected and used as a fuel source. Whole digestate This is when the fibrous digestate (solid fraction) of the acidogenic digestate is combined with the liquor digestate (liquid fraction) of the methanogenic digestate to create the whole digestate. This combination of the two digestates consists as a sludge form. The liquid fraction constitutes up to 90% of the digestate by volume, contains 2–6% dry matter, particles <1.2 mm in size, and most of the soluble nitrogen and potassium, while the solid fraction retains most of the digestate phosphorus, and contains dry matter content ˃ 15%. Combining the two into a whole digestate allows for increased availability of a wide array of nutrients that can be useful for agricultural activities. Some anaerobic biodigesters will only have one digestion chamber allowing these two digitates to mix together on their own without further intervention. Digestate characteristics The major parameters to assess digestate quality when being used for agricultural applications include pH, nutrients, total solids (TS), volatile solids (VS), and total carbon (TC). This quality depends on feedstock and type of anaerobic digester system. Generally the ammonia content of the digestate accounts for approximately 60-80% of the total nitrogen content, but for a feedstock like kitchen food waste it can be as high as 99%. Digestate has also been reported to have a higher phosphorus and potassium concentration than that of composts. The average P to K ratio is about 1:3. All this together makes digestate a potentially viable source for agricultural soil amendments of certain crops. Uses The primary use of digestate is as a soil conditioner. Acidogenic digestate provides moisture retention and organic content for soils. This organic material can break down further, aerobically in soil. Methanogenic digestate provides nutrients for plant growth. It can also be used to protect soils against erosion. Acidogenic digestate can also be used as an environmentally friendly filler to give structure to composite plastics. Growth trials on digestate originating from mixed waste have showed healthy growth results for crops. Digestate can also be used in intensive greenhouse cultivation of plants, e.g., in digeponics. Additionally, both solid and liquid digestates have been shown to be of use in hydroponic crop production. Multiple studies have shown that digestate can produce similar or higher yields across multiple crops when compared to standard growing practices used in hydroponics and soilless substrate growing. Application of digestate has been shown to inhibit plant diseases and induction of resistance. Digestate application has a direct effect on soil-borne diseases, and an indirect effect by stimulation of biological activity. Digestate and compost Digestate is technically not compost although it is similar to it in physical and chemical characteristics. Compost is produced by aerobic digestion-decomposition by aerobes. 
This includes fungi and bacteria which are able to break down the lignin and cellulose to a greater extent. Treatment, for example by ultrasonication, has shown to enhance solubilization of digestate as measured by increased levels of soluble chemical oxygen demand (sCOD), soluble total organic carbon (sTOC), and soluble total nitrogen (sTN) released into the solution. Standards for digestate The standard of digestate produced by anaerobic digestion can be assessed on three criteria, chemical, biological and physical aspects. Chemical quality needs to be considered in terms of heavy metals and other inorganic contaminant, persistent organic compounds and the content of macro-elements such as nitrogen, phosphorus and potassium. Depending on their source, biowastes can contain pathogens, which can lead to the spreading of human, animal or plant diseases if not appropriately managed. The physical standards of composts includes mainly appearance and odor factors. Whilst physical contamination does not present a problem with regards to human, plant or animal health, contamination (in the form of plastics, metals and ceramics) can cause a negative public perception. Even if the compost is of high quality and all standards are met, a negative public perception of waste-based composts still exists. The presence of visible contaminants reminds users of this. Quality control of the feedstock is the most important way of ensuring a quality end product. The content and quality of waste arriving on-site should be characterised as thoroughly as possible prior to being supplied. In the UK the Publicly Available Specification (called PAS110) governs the definition of digestate derived from the anaerobic digestion of source-segregated biodegradable materials. The specification ensures all digested materials are of consistent quality and fit for purpose. If a biogas plant meets the standard, its digestate will be regarded as having been fully recovered and to have ceased to be waste, and it can be sold with the name "bio-fertiliser". See also Anaerobic decomposition Anaerobic digester types Anaerobic digestion Biogas powerplant Biosolids Mechanical biological treatment References Peng, Wei & Pivato, Alberto. (2019). Sustainable Management of Digestate from the Organic Fraction of Municipal Solid Waste and Food Waste Under the Concepts of Back to Earth Alternatives and Circular Economy. Waste and Biomass Valorization. 10. 10.1007/s12649-017-0071-2. External links Anaerobic digestion Biodegradable waste management Biogas technology Mechanical biological treatment Soil improvers
Digestate
[ "Chemistry", "Engineering", "Biology" ]
2,021
[ "Biofuels technology", "Biodegradable waste management", "Biodegradation", "Anaerobic digestion", "Environmental engineering", "Water technology", "Biogas technology" ]
40,708
https://en.wikipedia.org/wiki/Allan%20variance
The Allan variance (AVAR), also known as two-sample variance, is a measure of frequency stability in clocks, oscillators and amplifiers. It is named after David W. Allan and expressed mathematically as σ_y²(τ). The Allan deviation (ADEV), also known as sigma-tau, is the square root of the Allan variance, σ_y(τ). The M-sample variance is a measure of frequency stability using M samples, time T between measurements and observation time τ. M-sample variance is expressed as σ_y²(M, T, τ). The Allan variance is intended to estimate stability due to noise processes and not that of systematic errors or imperfections such as frequency drift or temperature effects. The Allan variance and Allan deviation describe frequency stability. See also the section Interpretation of value below. There are also different adaptations or alterations of Allan variance, notably the modified Allan variance MAVAR or MVAR, the total variance, and the Hadamard variance. There also exist time-stability variants such as time deviation (TDEV) or time variance (TVAR). Allan variance and its variants have proven useful outside the scope of timekeeping and are a set of improved statistical tools to use whenever the noise processes are not unconditionally stable, thus a derivative exists. The general M-sample variance remains important, since it allows dead time in measurements, and bias functions allow conversion into Allan variance values. Nevertheless, for most applications the special case of 2-sample, or "Allan variance" with T = τ, is of greatest interest. Background When investigating the stability of crystal oscillators and atomic clocks, it was found that they did not have a phase noise consisting only of white noise, but also of flicker frequency noise. These noise forms become a challenge for traditional statistical tools such as standard deviation, as the estimator will not converge. The noise is thus said to be divergent. Early efforts in analysing the stability included both theoretical analysis and practical measurements. An important side consequence of having these types of noise was that, since the various methods of measurement did not agree with each other, the key aspect of repeatability of a measurement could not be achieved. This limits the possibility to compare sources and make meaningful specifications to require from suppliers. Essentially all forms of scientific and commercial uses were then limited to dedicated measurements, which hopefully would capture the need for that application. To address these problems, David Allan introduced the M-sample variance and (indirectly) the two-sample variance. While the two-sample variance did not completely allow all types of noise to be distinguished, it provided a means to meaningfully separate many noise-forms for time-series of phase or frequency measurements between two or more oscillators. Allan provided a method to convert from any M-sample variance to any N-sample variance via the common 2-sample variance, thus making all M-sample variances comparable. The conversion mechanism also proved that M-sample variance does not converge for large M, thus making them less useful. IEEE later identified the 2-sample variance as the preferred measure. An early concern was related to time- and frequency-measurement instruments that had a dead time between measurements. Such a series of measurements did not form a continuous observation of the signal and thus introduced a systematic bias into the measurement. Great care was spent in estimating these biases. 
The introduction of zero-dead-time counters removed the need, but the bias-analysis tools have proved useful. Another early aspect of concern was related to how the bandwidth of the measurement instrument would influence the measurement, such that it needed to be noted. It was later found that by algorithmically changing the observation time τ, only low τ values would be affected, while higher values would be unaffected. The change of τ is done by letting it be an integer multiple n of the measurement timebase τ0: τ = nτ0. The physics of crystal oscillators was analyzed by D. B. Leeson, and the result is now referred to as Leeson's equation. The feedback in the oscillator will make the white noise and flicker noise of the feedback amplifier and crystal become the power-law noises of white frequency noise and flicker frequency noise respectively. These noise forms have the effect that the standard variance estimator does not converge when processing time-error samples. The mechanics of the feedback oscillators was unknown when the work on oscillator stability started, but was presented by Leeson at the same time as the set of statistical tools was made available by David W. Allan. For a more thorough presentation on the Leeson effect, see modern phase-noise literature. Interpretation of value Allan variance is defined as one half of the time average of the squares of the differences between successive readings of the frequency deviation sampled over the sampling period. The Allan variance depends on the time period used between samples, therefore, it is a function of the sample period, commonly denoted as τ, likewise the distribution being measured, and is displayed as a graph rather than a single number. A low Allan variance is a characteristic of a clock with good stability over the measured period. Allan deviation is widely used for plots (conventionally in log–log format) and presentation of numbers. It is preferred, as it gives the relative amplitude stability, allowing ease of comparison with other sources of errors. An Allan deviation of 1.3×10⁻⁹ at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations 1 second apart with a relative root mean square (RMS) value of 1.3×10⁻⁹. For a 10 MHz clock, this would be equivalent to 13 mHz RMS movement. If the phase stability of an oscillator is needed, then the time deviation variants should be consulted and used. One may convert the Allan variance and other time-domain variances into frequency-domain measures of time (phase) and frequency stability. Formulations M-sample variance Given a time-series x(t), for any positive real numbers T and τ, define the real number sequence ȳ_i(τ) = (x(iT + τ) − x(iT))/τ, for i = 0, 1, ..., M − 1. Then the M-sample variance is defined (here in a modernized notation form) as the Bessel-corrected variance of the sequence ȳ_i: σ_y²(M, T, τ) = (1/(M − 1)) Σ_{i=0}^{M−1} (ȳ_i − m_y)², where m_y = (1/M) Σ_{i=0}^{M−1} ȳ_i is the sample mean. The interpretation of the symbols is as follows: t is the reading on a reference clock (in arbitrary units). x(t) is the reading of a clock we are testing (in arbitrary units), as a function of the reference clock's reading. ȳ_i(τ) is the i-th fractional frequency average over the observation time τ; the sequence can also be interpreted as the average fractional frequency time series. M is the number of clock reading intervals used in computing the M-sample variance. T is the time between each frequency sample. τ is the time length of each frequency estimate, or the observation period. Dead-time can be accounted for by letting the time T be different from that of τ. Allan variance The Allan variance is defined as σ_y²(τ) = (1/2)⟨(ȳ_{n+1} − ȳ_n)²⟩, where ⟨·⟩ denotes the expectation operator. 
The condition T = τ means the samples are taken with no dead-time between them. Allan deviation Just as with standard deviation and variance, the Allan deviation is defined as the square root of the Allan variance: σy(τ) = √σy²(τ). Supporting definitions Oscillator model The oscillator being analysed is assumed to follow the basic model of V(t) = V0 sin(Φ(t)). The oscillator is assumed to have a nominal frequency of ν0, given in cycles per second (SI unit: hertz). The nominal angular frequency (in radians per second) is given by ω0 = 2πν0. The total phase can be separated into a perfectly cyclic component ω0t, along with a fluctuating component φ(t): Φ(t) = ω0t + φ(t). Time error The time-error function x(t) is the difference between expected nominal time and actual normal time: x(t) = φ(t)/(2πν0). For measured values, a time-error series TE(t) is defined as the difference between the clock's time and the reference time function T(t). Frequency function The frequency function ν(t) is the frequency over time, defined as ν(t) = (1/2π) dΦ(t)/dt. Fractional frequency The fractional frequency y(t) is the normalized difference between the frequency ν(t) and the nominal frequency ν0: y(t) = (ν(t) − ν0)/ν0. Average fractional frequency The average fractional frequency is defined as ȳ(t; τ) = (1/τ) ∫[0..τ] y(t + s) ds, where the average is taken over observation time τ, y(t) is the fractional-frequency error at time t, and τ is the observation time. Since y(t) is the derivative of x(t), we can without loss of generality rewrite it as ȳ(t; τ) = (x(t + τ) − x(t))/τ. Estimators This definition is based on the statistical expected value, integrating over infinite time. The real-world situation does not allow for such time-series, in which case a statistical estimator needs to be used in its place. A number of different estimators will be presented and discussed. Conventions In the following, M denotes the number of fractional-frequency samples, N = M + 1 the number of time-error samples, τ0 the basic measurement interval, and n the integer multiplier such that τ = nτ0. Fixed τ estimators A first simple estimator would be to directly translate the definition into σy²(τ0) ≈ (1/(2(M − 1))) Σ[i=1..M−1] (ȳ[i+1] − ȳ[i])², or for the time series: σy²(τ0) ≈ (1/(2(N − 2)τ0²)) Σ[i=1..N−2] (x[i+2] − 2x[i+1] + x[i])². These formulas, however, only provide the calculation for the τ = τ0 case. To calculate for a different value of τ, a new time-series needs to be provided. Non-overlapped variable τ estimators Taking the time-series and skipping past n − 1 samples, a new (shorter) time-series would occur with nτ0 as the time between the adjacent samples, for which the Allan variance could be calculated with the simple estimators. These could be modified to introduce the new variable n such that no new time-series would have to be generated, but rather the original time series could be reused for various values of n. The estimators become σy²(nτ0) ≈ (1/(2(K − 1))) Σ[j=1..K−1] (z[j+1] − z[j])², with z[j] the average of the jth block of n adjacent fractional-frequency samples and K = ⌊M/n⌋, and for the time series: σy²(nτ0) ≈ (1/(2(K − 1)n²τ0²)) Σ[j=1..K−1] (x[(j+1)n+1] − 2x[jn+1] + x[(j−1)n+1])², with K = ⌊(N − 1)/n⌋. These estimators have a significant drawback in that they will drop a significant amount of sample data, as only 1/n of the available samples is being used. Overlapped variable τ estimators A technique presented by J. J. Snyder provided an improved tool, as measurements were overlapped in n overlapped series out of the original series. The overlapping Allan variance estimator was introduced by Howe, Allan and Barnes. This can be shown to be equivalent to averaging the time or normalized frequency samples in blocks of n samples prior to processing. The resulting estimator becomes σy²(nτ0) ≈ (1/(2(M − 2n + 1))) Σ[j=0..M−2n] ((1/n) Σ[i=j+1..j+n] (ȳ[i+n] − ȳ[i]))², or for the time series: σy²(nτ0) ≈ (1/(2(N − 2n)n²τ0²)) Σ[i=1..N−2n] (x[i+2n] − 2x[i+n] + x[i])². The overlapping estimators have far superior performance over the non-overlapping estimators, as n rises and the time-series is of moderate length. The overlapped estimators have been accepted as the preferred Allan variance estimators in IEEE, ITU-T and ETSI standards for comparable measurements such as needed for telecommunication qualification.
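To make the preferred overlapping estimator concrete, the following is a minimal sketch in Python of the time-series form given above, assuming a regularly sampled time-error (phase) series; the function names, the simulated noise level and the use of NumPy are illustrative choices, not part of any standard.

```python
import numpy as np

def overlapping_avar(x, tau0, n):
    """Overlapping Allan variance at tau = n*tau0 from a time-error series x
    sampled every tau0 seconds, using the second-difference form
    sum (x[i+2n] - 2*x[i+n] + x[i])^2 / (2*(N-2n)*(n*tau0)^2)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N < 2 * n + 1:
        raise ValueError("need at least 2n + 1 phase samples")
    d2 = x[2 * n:] - 2.0 * x[n:N - n] + x[:N - 2 * n]  # all overlapping triples
    return np.sum(d2 ** 2) / (2.0 * (N - 2 * n) * (n * tau0) ** 2)

def overlapping_adev(x, tau0, n):
    """Overlapping Allan deviation, the square root of the variance."""
    return np.sqrt(overlapping_avar(x, tau0, n))

# Illustration: white frequency noise at the 1e-9 level integrates to a phase
# random walk; its Allan deviation should fall roughly as tau^(-1/2).
rng = np.random.default_rng(1)
tau0 = 1.0
y = 1e-9 * rng.standard_normal(100_000)   # simulated fractional frequency
x = np.cumsum(y) * tau0                   # integrate frequency to time error
for n in (1, 10, 100):
    print(n * tau0, overlapping_adev(x, tau0, n))
```

For serious work, a vetted implementation such as the AllanTools library listed under External links is the safer choice.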
Modified Allan variance In order to address the inability to separate white phase modulation from flicker phase modulation using traditional Allan variance estimators, an algorithmic filtering reduces the bandwidth by n. This filtering provides a modification to the definition and estimators, and it now identifies a separate class of variance called modified Allan variance. The modified Allan variance measure is a frequency stability measure, just as is the Allan variance. Time stability estimators A time stability (σx) statistical measure, which is often called the time deviation (TDEV), can be calculated from the modified Allan deviation (MDEV). The TDEV is based on the MDEV instead of the original Allan deviation, because the MDEV can discriminate between white and flicker phase modulation (PM). The following is the time variance estimation based on the modified Allan variance: σx²(τ) = (τ²/3)·mod σy²(τ), and similarly for modified Allan deviation to time deviation: σx(τ) = (τ/√3)·mod σy(τ). The TDEV is normalized so that it is equal to the classical deviation for white PM for time constant τ = τ0. To understand the normalization scale factor between the statistical measures, the following are the relevant statistical rules: For independent random variables X and Y, the variance σz² of a sum or difference z = x − y is the sum of their variances, σz² = σx² + σy². The variance of the difference of two independent samples of a random variable, such as y = x(2τ) − x(τ), is thus twice the variance of the random variable, σy² = 2σx². The MDEV is based on the second difference of independent phase measurements x that have a variance σx². Since the calculation is the double difference, which requires three independent phase measurements (x(2τ) − 2x(τ) + x(0)), the modified Allan variance (MVAR) is three times the variance of the phase measurements. Other estimators Further developments have produced improved estimation methods for the same stability measure, the variance/deviation of frequency, but these are known by separate names such as the Hadamard variance, modified Hadamard variance, the total variance, modified total variance and the Theo variance. These distinguish themselves by better use of statistics for improved confidence bounds, or by the ability to handle linear frequency drift. Confidence intervals and equivalent degrees of freedom Statistical estimators will calculate an estimated value on the sample series used. The estimates may deviate from the true value, and the range of values which for some probability will contain the true value is referred to as the confidence interval. The confidence interval depends on the number of observations in the sample series, the dominant noise type, and the estimator being used. Its width also depends on the statistical certainty for which the confidence interval values form a bounded range, thus the statistical certainty that the true value is within that range of values. For variable-τ estimators, the τ0 multiple n is also a variable. Confidence interval The confidence interval can be established using the chi-squared distribution, through the distribution of the sample variance: χ² = df·s²/σ², where s² is the sample variance of our estimate, σ² is the true variance value, df is the degrees of freedom for the estimator, and this quantity follows the chi-squared distribution with df degrees of freedom; χ²(p, df) denotes its value at probability p.
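As a concrete illustration, and under the assumption that the equivalent degrees of freedom have already been obtained from the noise-dependent expressions discussed below, a short sketch using SciPy's chi-squared quantiles gives the bounds that the next paragraph derives for the 90% case; the numeric inputs are placeholders.

```python
from scipy.stats import chi2

def avar_confidence_interval(s2, df, coverage=0.90):
    """Bounds on the true Allan variance sigma^2, given the sample variance s2
    and the estimator's equivalent degrees of freedom df, from the relation
    df * s2 / sigma^2 ~ chi-squared(df)."""
    alpha = (1.0 - coverage) / 2.0
    low = df * s2 / chi2.ppf(1.0 - alpha, df)   # divide by the upper quantile
    high = df * s2 / chi2.ppf(alpha, df)        # divide by the lower quantile
    return low, high

# Placeholder example: sample Allan variance 1e-22 with 17 degrees of freedom.
print(avar_confidence_interval(1e-22, 17))
```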
For a 90% probability, covering the range from the 5% to the 95% range on the probability curve, the upper and lower limits can be found using the inequality χ²(0.05, df) ≤ df·s²/σ² ≤ χ²(0.95, df), which after rearrangement for the true variance becomes df·s²/χ²(0.95, df) ≤ σ² ≤ df·s²/χ²(0.05, df). Effective degrees of freedom The degrees of freedom represent the number of free variables capable of contributing to the estimate. Depending on the estimator and noise type, the effective degrees of freedom varies. Estimator formulas depending on N and n have been found empirically: 
{| class="wikitable" |+ Allan variance degrees of freedom |- !Noise type !degrees of freedom |- |white phase modulation (WPM) | |- |flicker phase modulation (FPM) | |- |white frequency modulation (WFM) | |- |flicker frequency modulation (FFM) | |- |random-walk frequency modulation (RWFM) | |} Power-law noise The Allan variance will treat various power-law noise types differently, conveniently allowing them to be identified and their strength estimated. As a convention, the measurement system width (high corner frequency) is denoted fH. The resulting expressions, as found in the early references and in modern forms, show a characteristic τ dependence for each noise type. The Allan variance is unable to distinguish between WPM and FPM, but is able to resolve the other power-law noise types. In order to distinguish WPM and FPM, the modified Allan variance needs to be employed. The above formulas assume that 2πfHτ ≫ 1, and thus that the bandwidth of the observation time is much lower than the instrument's bandwidth. When this condition is not met, all noise forms depend on the instrument's bandwidth. α–μ mapping The detailed mapping of a phase modulation of the form Sx(f) ∝ f^β, where β = α − 2, or frequency modulation of the form Sy(f) ∝ f^α, into the Allan variance of the form σy²(τ) ∝ τ^μ can be significantly simplified by providing a mapping between α and μ. A mapping between α and Kα is also presented for convenience: 
{| class="wikitable" |+ Allan variance α–μ mapping |- !α !β !μ !Kα |- | −2 | −4 | 1 | |- | −1 | −3 | 0 | |- | 0 | −2 | −1 | |- | 1 | −1 | −2 | |- | 2 | 0 | −2 | |} General conversion from phase noise A signal with spectral phase noise Sφ(f) with units rad²/Hz can be converted to Allan variance by σy²(τ) = 2 ∫[0..fH] (f²/ν0²) Sφ(f) · sin⁴(πfτ)/(πfτ)² df. Linear response While Allan variance is intended to be used to distinguish noise forms, it will depend on some but not all linear responses to time. They are given in the table: 
{| class="wikitable" |+ Allan variance linear response |- ! Linear effect ! time response ! frequency response ! Allan variance ! Allan deviation |- | phase offset | | | | |- | frequency offset | | | | |- | linear drift | | | | |} Thus, linear drift will contribute to the output result. When measuring a real system, the linear drift or other drift mechanism may need to be estimated and removed from the time-series prior to calculating the Allan variance. Time and frequency filter properties In analysing the properties of Allan variance and friends, it has proven useful to consider the filter properties on the normalized frequency. Starting with the definition for Allan variance, σy²(τ) = ½⟨(ȳ[n+1] − ȳ[n])²⟩, where ȳ[n] = (x((n+1)τ) − x(nτ))/τ. Replacing the time series x(t) with its Fourier-transformed variant, the Allan variance can be expressed in the frequency domain as σy²(τ) = 2 ∫[0..fH] Sy(f) · sin⁴(πfτ)/(πfτ)² df. Thus the transfer function for Allan variance is |HA(f)|² = 2 sin⁴(πfτ)/(πfτ)². Bias functions The M-sample variance, and the defined special case Allan variance, will experience systematic bias depending on the number of samples M and on the relationship between T and τ. In order to address these biases, the bias functions B1 and B2 have been defined and allow conversion between different M and T values.
These bias functions are not sufficient for handling the bias resulting from concatenating M samples to the Mτ0 observation time over the MT0, with the dead-time distributed among the M measurement blocks rather than at the end of the measurement. This created the need for the B3 bias function. The bias functions are evaluated for a particular μ value, so the α–μ mapping needs to be done for the dominant noise form as found using noise identification. Alternatively, the μ value of the dominant noise form may be inferred from the measurements using the bias functions. B1 bias function The B1 bias function relates the M-sample variance with the 2-sample variance, keeping the time between measurements T and the time for each measurement τ constant. It is defined as B1(N, r, μ) = ⟨σy²(N, T, τ)⟩/⟨σy²(2, T, τ)⟩, where r = T/τ. A closed form of the bias function follows from analysis. B2 bias function The B2 bias function relates the 2-sample variance for sample time T with the 2-sample variance (Allan variance), keeping the number of samples N = 2 and the observation time τ constant. It is defined as B2(r, μ) = ⟨σy²(2, T, τ)⟩/⟨σy²(2, τ, τ)⟩, where r = T/τ. A closed form of the bias function follows from analysis. B3 bias function The B3 bias function relates the 2-sample variance for sample time MT0 and observation time Mτ0 with the 2-sample variance (Allan variance); it is defined as the ratio between the two, with T = MT0 and τ = Mτ0. The B3 bias function is useful to adjust non-overlapping and overlapping variable-τ estimator values based on dead-time measurements of observation time τ0 and time between observations T0 to normal dead-time estimates. A closed form of the bias function (for the N = 2 case) follows from analysis. τ bias function While not formally formulated, it has been indirectly inferred as a consequence of the α–μ mapping. When comparing two Allan variance measures for different τ, assuming the same dominant noise in the form of the same μ coefficient, a bias can be defined as Bτ = ⟨σy²(τ2)⟩/⟨σy²(τ1)⟩. After analysis, the bias function becomes Bτ = (τ2/τ1)^μ. Conversion between values In order to convert from one set of measurements to another, the B1, B2 and τ bias functions can be assembled. First the B1 function converts the (N1, T1, τ1) value into (2, T1, τ1), from which the B2 function converts into a (2, τ1, τ1) value, thus the Allan variance at τ1. The Allan variance measure can be converted using the τ bias function from τ1 to τ2, from which then the (2, T2, τ2) using B2 and then finally using B1 into the (N2, T2, τ2) variance. The complete conversion becomes σy²(N2, T2, τ2) = [B1(N2, r2, μ)·B2(r2, μ)/(B1(N1, r1, μ)·B2(r1, μ))]·(τ2/τ1)^μ·σy²(N1, T1, τ1), where r1 = T1/τ1 and r2 = T2/τ2. Similarly, for concatenated measurements using M sections, the logical extension includes the B3 bias function. Measurement issues When making measurements to calculate Allan variance or Allan deviation, a number of issues may cause the measurements to degenerate. Covered here are the effects specific to Allan variance, where results would be biased. Measurement bandwidth limits A measurement system is expected to have a bandwidth at or below that of the Nyquist rate, as described by the Nyquist–Shannon sampling theorem. As can be seen in the power-law noise formulas, the white and flicker noise modulations both depend on the upper corner frequency fH (these systems are assumed to be low-pass filtered only). Considering the frequency filter property, it can be clearly seen that low-frequency noise has greater impact on the result. For relatively flat phase-modulation noise types (e.g.
WPM and FPM), the filtering has relevance, whereas for noise types with greater slope the upper frequency limit becomes of less importance, assuming that the measurement system bandwidth is wide relative to the observation time τ, such that 2πfHτ ≫ 1. When this assumption is not met, the effective bandwidth needs to be notated alongside the measurement. The interested reader should consult NBS TN394. If, however, one adjusts the bandwidth of the estimator by using integer multiples n of the sample time τ0, then the system bandwidth impact can be reduced to insignificant levels. For telecommunication needs, such methods have been required in order to ensure comparability of measurements and allow some freedom for vendors to do different implementations. One example is ITU-T Rec. G.813 for the TDEV measurement. It can be recommended that the first multiples be ignored, such that the majority of the detected noise is well within the passband of the measurement system's bandwidth. Further developments on the Allan variance were performed to let the hardware bandwidth be reduced by software means. This development of a software bandwidth allowed addressing the remaining noise, and the method is now referred to as the modified Allan variance. This bandwidth reduction technique should not be confused with the enhanced variant of modified Allan variance, which also changes a smoothing filter bandwidth. Dead time in measurements Many measurement instruments of time and frequency have the stages of arming time, time-base time, processing time and may then re-trigger the arming. The arming time is from the time the arming is triggered to when the start event occurs on the start channel. The time-base then ensures that a minimal amount of time goes by prior to accepting an event on the stop channel as the stop event. The number of events and the time elapsed between the start event and stop event are recorded and presented during the processing time. While the processing occurs (also known as the dwell time), the instrument is usually unable to do another measurement. After the processing has occurred, an instrument in continuous mode triggers the arm circuit again. The time between the stop event and the following start event becomes dead time, during which the signal is not being observed. Such dead time introduces systematic measurement biases, which need to be compensated for in order to get proper results. For such measurement systems, the time T denotes the time between the adjacent start events (and thus measurements), while τ0 denotes the time-base length, i.e. the nominal length between the start and stop event of any measurement. Dead-time effects on measurements have such an impact on the produced result that much study of the field has been done in order to quantify its properties properly. The introduction of zero-dead-time counters removed the need for this analysis. A zero-dead-time counter has the property that the stop event of one measurement is also being used as the start event of the following event. Such counters create a series of event and time timestamp pairs, one for each channel, spaced by the time-base. Such measurements have also proved useful in other forms of time-series analysis. Measurements performed with dead time can be corrected using the bias functions B1, B2 and B3. Thus, dead time as such does not prohibit access to the Allan variance, but it makes it more problematic. The dead time must be known, such that the time between samples T can be established.
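As a sketch of the zero-dead-time case just described, the following shows one way timestamp series from such a counter might be reduced to a time-error series ready for the estimators above; the input format (one timestamp per nominal period) is a simplifying assumption for illustration.

```python
import numpy as np

def time_error_series(timestamps, nominal_period):
    """Convert a zero-dead-time counter's timestamps (seconds) into a
    time-error series x[i] by subtracting an ideal, evenly spaced timeline
    anchored at the first timestamp. With no dead time, T equals the
    nominal period, so the Allan variance estimators apply directly."""
    t = np.asarray(timestamps, dtype=float)
    ideal = t[0] + nominal_period * np.arange(len(t))
    return t - ideal

# Hypothetical 1 PPS timestamps from a slightly fast clock:
ts = [0.0, 1.000000001, 2.000000003, 3.000000004, 4.000000007]
print(time_error_series(ts, 1.0))
```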
Measurement length and effective use of samples Studying the effect that the length N of the sample series and the variable-τ parameter n have on the confidence intervals, the confidence intervals may become very large, since the effective degrees of freedom may become small for some combinations of N and n for the dominant noise form (for that τ). The effect may be that the estimated value is much smaller or much greater than the real value, which may lead to false conclusions from the result. It is recommended that: The confidence interval be plotted along with the data, such that the reader of the plot is aware of the statistical uncertainty of the values. The length of the sample sequence (i.e. the number of samples N) be kept as high as possible to ensure that the confidence interval is small over the τ range of interest. Estimators providing better degrees-of-freedom values be used in replacement of the Allan variance estimators, or to complement them where they outperform the Allan variance estimators; among those, the total variance and Theo variance estimators should be considered. The τ range, as swept by the τ0 multiplier n, be limited in the upper end relative to N, such that the reader of the plot is not confused by highly unstable estimator values. Dominant noise type A large number of conversion constants, bias corrections and confidence intervals depend on the dominant noise type. For proper interpretation, the dominant noise type for the particular τ of interest must be identified through noise identification. Failing to identify the dominant noise type will produce biased values, and some of these biases may be of several orders of magnitude, so the identification may be of great significance. Linear drift Systematic effects on the signal are only partly cancelled. Phase and frequency offsets are cancelled, but linear drift or other higher-degree polynomial phase curves will not be cancelled and thus form a measurement limitation. Curve fitting and removal of the systematic offset could be employed; often, removal of the linear drift is sufficient. Use of linear-drift estimators such as the Hadamard variance could also be employed. A linear drift removal could be employed using a moment-based estimator. Measurement instrument estimator bias Traditional instruments provided only the measurement of single events or event pairs. The introduction of the improved statistical tool of overlapping measurements by J. J. Snyder allowed much improved resolution in frequency readouts, breaking the traditional digits/time-base balance. While such methods are useful for their intended purpose, using such smoothed measurements for Allan variance calculations would give a false impression of high resolution; for longer τ the effect is gradually removed, but the lower-τ region of the measurement has biased values. This bias provides lower values than it should, so it is an overoptimistic bias (assuming that low numbers are what one wishes), reducing the usability of the measurement rather than improving it. Such smart algorithms can usually be disabled or otherwise circumvented by using time-stamp mode, which is much preferred if available. Practical measurements While several approaches to measurement of Allan variance can be devised, a simple example may illustrate how measurements can be performed. Measurement All measurements of Allan variance will in effect be the comparison of two different clocks.
Consider a reference clock and a device under test (DUT), both having a common nominal frequency of 10 MHz. A time-interval counter is used to measure the time between the rising edge of the reference (channel A) and the rising edge of the device under test. In order to provide evenly spaced measurements, the reference clock will be divided down to form the measurement rate, triggering the time-interval counter (ARM input). This rate can be 1 Hz (using the 1 PPS output of a reference clock), but other rates like 10 Hz and 100 Hz can also be used. The speed at which the time-interval counter can complete the measurement, output the result and prepare itself for the next arm will limit the trigger frequency. A computer is then useful to record the series of time differences being observed. Post-processing The recorded time-series require post-processing to unwrap the wrapped phase, such that a continuous phase error is provided. If necessary, logging and measurement mistakes should also be fixed. Drift estimation and drift removal should be performed, and the drift mechanism needs to be identified and understood for the sources. Drift limitations in measurements can be severe, so letting the oscillators become stabilized, by being powered on long enough, is necessary. The Allan variance can then be calculated using the estimators given, and for practical purposes the overlapping estimator should be used due to its superior use of data over the non-overlapping estimator. Other estimators such as total or Theo variance estimators could also be used if bias corrections are applied such that they provide Allan variance-compatible results. To form the classical plots, the Allan deviation (square root of the Allan variance) is plotted in log–log format against the observation interval τ. Equipment and software The time-interval counter is typically an off-the-shelf counter commercially available. Limiting factors involve single-shot resolution, trigger jitter, speed of measurements and stability of the reference clock. The computer collection and post-processing can be done using existing commercial or public-domain software. Highly advanced solutions exist, which will provide measurement and computation in one box. Research history The field of frequency stability has been studied for a long time. However, during the 1960s it was found that coherent definitions were lacking. A NASA-IEEE Symposium on Short-Term Stability in November 1964 resulted in the special February 1966 issue of the IEEE Proceedings on Frequency Stability. The NASA-IEEE Symposium brought together many fields and uses of short- and long-term stability, with papers from many different contributors. The articles and panel discussions concur on the existence of the frequency flicker noise and the wish to achieve a common definition for both short-term and long-term stability. Important papers, including those of David Allan, James A. Barnes, L. S. Cutler and C. L. Searle, and D. B. Leeson, appeared in the IEEE Proceedings on Frequency Stability and helped shape the field. David Allan's article analyses the classical M-sample variance of frequency, tackling the issue of dead-time between measurements along with an initial bias function. Although Allan's initial bias function assumes no dead-time, his formulas do include dead-time calculations. His article analyses the case of M frequency samples (called N in the article) and variance estimators.
It provides the now standard α–μ mapping, clearly building on James Barnes' work in the same issue. The 2-sample variance case is a special case of the M-sample variance, which produces an average of the frequency derivative. Allan implicitly uses the 2-sample variance as a base case, since for arbitrarily chosen M, values may be transferred via the 2-sample variance to the M-sample variance. No preference was clearly stated for the 2-sample variance, even if the tools were provided. However, this article laid the foundation for using the 2-sample variance as a way of comparing other M-sample variances. James Barnes significantly extended the work on bias functions, introducing the modern B1 and B2 bias functions. Curiously enough, this work refers to the M-sample variance as "Allan variance", while referring to Allan's article "Statistics of Atomic Frequency Standards". With these modern bias functions, full conversion among M-sample variance measures of various M, T and τ values could be performed, by conversion through the 2-sample variance. James Barnes and David Allan further extended the bias functions with the B3 function to handle the concatenated-samples estimator bias. This was necessary to handle the new use of concatenated sample observations with dead-time in between. In 1970, the IEEE Technical Committee on Frequency and Time, within the IEEE Group on Instrumentation & Measurements, provided a summary of the field, published as NBS Technical Note 394. This paper was the first in a line of more educational and practical papers helping fellow engineers grasp the field. This paper recommended the 2-sample variance with T = τ, referring to it as Allan variance (now without the quotes). The choice of such parametrisation allows good handling of some noise forms and the acquisition of comparable measurements; it is essentially the least common denominator with the aid of the bias functions B1 and B2. J. J. Snyder proposed an improved method for frequency or variance estimation, using sample statistics for frequency counters. To get more effective degrees of freedom out of the available dataset, the trick is to use overlapping observation periods. This provides a significant improvement, and was incorporated in the overlapping Allan variance estimator. Variable-τ software processing was also incorporated. This development improved the classical Allan variance estimators, likewise providing a direct inspiration for the work on modified Allan variance. Howe, Allan and Barnes presented the analysis of confidence intervals, degrees of freedom, and the established estimators. Educational and practical resources The field of time and frequency and its use of Allan variance, Allan deviation and friends is a field involving many aspects, for which both understanding of concepts and practical measurements and post-processing require care and understanding. Thus, there is a realm of educational material stretching over about 40 years available. Since these reflect the developments in the research of their time, they focus on teaching different aspects over time, in which case a survey of available resources may be a suitable way of finding the right resource. The first meaningful summary is the NBS Technical Note 394 "Characterization of Frequency Stability". This is the product of the Technical Committee on Frequency and Time of the IEEE Group on Instrumentation & Measurement.
It gives the first overview of the field, stating the problems, defining the basic supporting definitions and getting into Allan variance, the bias functions B1 and B2, and the conversion of time-domain measures. This is useful, as it is among the first references to tabulate the Allan variance for the five basic noise types. A classical reference is the NBS Monograph 140 from 1974, which in chapter 8 has "Statistics of Time and Frequency Data Analysis". This is the extended variant of NBS Technical Note 394 and essentially adds measurement techniques and practical processing of values. An important addition is its treatment of the properties of signal sources and measurement methods. It covers the effective use of data, confidence intervals and effective degrees of freedom, and likewise introduces the overlapping Allan variance estimator. It is highly recommended reading for those topics. The IEEE standard 1139, Standard definitions of Physical Quantities for Fundamental Frequency and Time Metrology, is beyond a standard: it is also a comprehensive reference and educational resource. A modern book aimed towards telecommunication is Stefano Bregni's "Synchronisation of Digital Telecommunication Networks". This summarises not only the field, but also much of his research in the field up to that point. It aims to include both classical measures and telecommunication-specific measures such as MTIE. It is a handy companion when looking at measurements related to telecommunication standards. The NIST Special Publication 1065 "Handbook of Frequency Stability Analysis" by W. J. Riley is recommended reading for anyone wanting to pursue the field. It is rich in references and also covers a wide range of measures, biases and related functions that a modern analyst should have available. Further, it describes the overall processing needed for a modern tool. Uses Allan variance is used as a measure of frequency stability in a variety of precision oscillators, such as crystal oscillators, atomic clocks and frequency-stabilized lasers, over a period of a second or more. Short-term stability (under a second) is typically expressed as phase noise. The Allan variance is also used to characterize the bias stability of gyroscopes, including fiber optic gyroscopes, hemispherical resonator gyroscopes and MEMS gyroscopes and accelerometers. 50th anniversary In 2016, the IEEE-UFFC published a "Special Issue to celebrate the 50th anniversary of the Allan Variance (1966–2016)". A guest editor for that issue was David Allan's former colleague at NIST, Judah Levine, a recipient of the I. I. Rabi Award. See also Variance Semivariance Variogram Metrology Network time protocol Precision Time Protocol Synchronization References External links UFFC Frequency Control Teaching Resources NIST Publication search tool David W. Allan's Allan Variance Overview David W. Allan's official web site JPL Publications – Noise Analysis and Statistics William Riley publications Stable32, Software for Frequency Stability Analysis, by William Riley Stefano Bregni publications Enrico Rubiola publications Allanvar: R package for sensor error characterization using the Allan Variance Alavar windows software with reporting tools; Freeware AllanTools open-source python library for Allan variance MATLAB AVAR open-source MATLAB application Clocks Signal processing metrics Measurement
Allan variance
[ "Physics", "Mathematics", "Technology", "Engineering" ]
7,502
[ "Machines", "Physical quantities", "Quantity", "Clocks", "Measurement", "Size", "Physical systems", "Measuring instruments" ]
40,720
https://en.wikipedia.org/wiki/Noise%20temperature%20%28antenna%29
In radio frequency (RF) applications such as radio, radar and telecommunications, noise temperature of an antenna is a measure of the noise power density contributed by the antenna to the overall RF receiver system. It is defined as "the temperature of a resistor having an available thermal noise power per unit bandwidth equal to that at the antenna's output at a specified frequency". In other words, antenna noise temperature is a parameter that describes how much noise an antenna produces in a given environment. This temperature is not the physical temperature of the antenna. Moreover, an antenna does not have an intrinsic "antenna temperature" associated with it; rather the temperature depends on its gain pattern, pointing direction, and the thermal environment that it is placed in. Mathematics In RF applications, noise power is defined using the relationship P = kTB, where k is the Boltzmann constant, T is the noise temperature, and B is the noise bandwidth. Typically the noise bandwidth is determined by the bandwidth of the intermediate frequency (IF) filter of the radio receiver. Thus, we can define the noise temperature as: T = P/(kB). Because k is a constant, we can effectively think of T as noise power spectral density (with unit W/Hz) normalized by k. Antenna noise is only one of the contributors to the overall noise temperature of an RF receiver system, so it is typically subscripted, such as TA. It is added directly to the effective noise temperature of the receiver, TRX, to obtain the overall system noise temperature: Tsys = TA + TRX. Sources of antenna noise Antenna noise temperature has contributions from many sources, including: Cosmic microwave background radiation Galactic radiation Earth heating The Sun The Moon Electrical devices The antenna itself Galactic noise is high below 1000 MHz. At around 150 MHz, it is approximately 1000 K. At 2500 MHz, it has leveled off to around 10 K. Earth has an accepted standard temperature of 288 K. The level of the Sun's contribution depends on the solar flux. It is given by T = F·λ²·10^(G/10)/(8πk), where F is the solar flux, λ is the wavelength, and G is the logarithmic gain of the antenna in decibels. The antenna noise temperature depends on antenna coupling to all noise sources in its environment as well as on noise generated within the antenna. That is, in a directional antenna, the portion of the noise source that the antenna's main and side lobes intersect contributes proportionally. For example, a satellite antenna may not receive noise contribution from the Earth in its main lobe, but its sidelobes will contribute a portion of the 288 K Earth noise to its overall noise temperature. See also Noise Temperature Johnson–Nyquist noise Federal Standard 1037C MIL-STD-188 References Temperature Noise (electronics)
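To illustrate the relations above, a small sketch of the P = kTB bookkeeping and the solar-flux contribution; the 8πk form follows from T = F·Ae/(2k) with effective aperture Ae = Gλ²/4π for a single-polarization antenna, and the input numbers are invented for illustration.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def noise_power(temp_k, bandwidth_hz):
    """Available thermal noise power P = k*T*B, in watts."""
    return K_B * temp_k * bandwidth_hz

def solar_contribution(flux, wavelength_m, gain_db):
    """Antenna temperature due to the Sun: T = F * lambda^2 * g / (8*pi*k),
    with the antenna gain converted from decibels to a linear ratio g."""
    g = 10.0 ** (gain_db / 10.0)
    return flux * wavelength_m ** 2 * g / (8.0 * math.pi * K_B)

# Illustrative system: T_A = 60 K antenna plus T_RX = 50 K receiver,
# observed in a 1 MHz bandwidth.
t_sys = 60.0 + 50.0
print(noise_power(t_sys, 1e6))  # about 1.5e-15 W
```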
Noise temperature (antenna)
[ "Physics", "Chemistry" ]
532
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical quantities", "SI base quantities", "Intensive quantities", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
40,732
https://en.wikipedia.org/wiki/Atmospheric%20duct
In telecommunications, an atmospheric duct is a horizontal layer in the lower atmosphere in which the vertical refractive index gradients are such that radio signals (and light rays) are guided or ducted, tend to follow the curvature of the Earth, and experience less attenuation in the ducts than they would if the ducts were not present. The duct acts as an atmospheric dielectric waveguide and limits the spread of the wavefront to only the horizontal dimension. Atmospheric ducting is a mode of propagation of electromagnetic radiation, usually in the lower layers of Earth’s atmosphere, where the waves are bent by atmospheric refraction. In over-the-horizon radar, ducting causes part of the radiated and target-reflection energy of a radar system to be guided over distances far greater than the normal radar range. It also causes long-distance propagation of radio signals in bands that would normally be limited to line of sight. Normally radio "ground waves" propagate along the surface as creeping waves. That is, they are only diffracted around the curvature of the earth. This is one reason that early long-distance radio communication used long wavelengths. The best known exception is that HF (3–30 MHz) waves are reflected by the ionosphere. The reduced refractive index due to lower densities at the higher altitudes in the Earth's atmosphere bends the signals back toward the Earth. Signals in a higher refractive index layer, i.e., duct, tend to remain in that layer because of the reflection and refraction encountered at the boundary with a lower refractive index material. In some weather conditions, such as inversion layers, density changes so rapidly that waves are guided around the curvature of the earth at constant altitude. Phenomena of atmospheric optics related to atmospheric ducting include the green flash, Fata Morgana, superior mirage, mock mirage of astronomical objects and the Novaya Zemlya effect. See also Anomalous propagation Earth-Ionosphere waveguide Predel-E (Russian OTH radar system based on atmospheric ducting) Sky wave Thermal fade Temperature inversion Tropospheric ducting References Electromagnetic radiation Atmosphere Radio frequency propagation
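A sketch of the refractivity check commonly used to spot ducting conditions; the 77.6 and 4810 refractivity constants and the 0.157 M-unit/m curvature term are the widely used approximations, and the two-level sounding values are invented for illustration.

```python
def refractivity_n(p_hpa, t_kelvin, e_hpa):
    """Radio refractivity in N-units: N = 77.6/T * (P + 4810*e/T),
    with pressure P and water-vapour pressure e in hPa, T in kelvin."""
    return 77.6 / t_kelvin * (p_hpa + 4810.0 * e_hpa / t_kelvin)

def modified_refractivity(n_units, height_m):
    """Modified refractivity M = N + 0.157*h accounts for Earth curvature;
    a layer where M decreases with height acts as a duct."""
    return n_units + 0.157 * height_m

# Invented two-level sounding across a sharp, dry inversion:
m_low = modified_refractivity(refractivity_n(1000.0, 288.0, 12.0), 0.0)
m_high = modified_refractivity(refractivity_n(990.0, 292.0, 4.0), 100.0)
print(m_high < m_low)  # True indicates trapping (ducting) in this layer
```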
Atmospheric duct
[ "Physics" ]
440
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic radiation", "Electromagnetic spectrum", "Waves", "Radiation" ]
40,735
https://en.wikipedia.org/wiki/Attenuation
In physics, attenuation (in some contexts, extinction) is the gradual loss of flux intensity through a medium. For instance, dark glasses attenuate sunlight, lead attenuates X-rays, and water and air attenuate both light and sound at variable attenuation rates. Hearing protectors help reduce acoustic flux from flowing into the ears. This phenomenon is called acoustic attenuation and is measured in decibels (dB). In electrical engineering and telecommunications, attenuation affects the propagation of waves and signals in electrical circuits, in optical fibers, and in air. Electrical attenuators and optical attenuators are commonly manufactured components in this field. Background In many cases, attenuation is an exponential function of the path length through the medium. In optics and in chemical spectroscopy, this is known as the Beer–Lambert law. In engineering, attenuation is usually measured in units of decibels per unit length of medium (dB/cm, dB/km, etc.) and is represented by the attenuation coefficient of the medium in question. Attenuation also occurs in earthquakes; when the seismic waves move farther away from the hypocenter, they grow smaller as they are attenuated by the ground. Ultrasound One area of research in which attenuation plays a prominent role is in ultrasound physics. Attenuation in ultrasound is the reduction in amplitude of the ultrasound beam as a function of distance through the imaging medium. Accounting for attenuation effects in ultrasound is important because a reduced signal amplitude can affect the quality of the image produced. By knowing the attenuation that an ultrasound beam experiences traveling through a medium, one can adjust the input signal amplitude to compensate for any loss of energy at the desired imaging depth. Ultrasound attenuation measurement in heterogeneous systems, like emulsions or colloids, yields information on particle size distribution. There is an ISO standard on this technique. Ultrasound attenuation can be used for extensional rheology measurement. There are acoustic rheometers that employ Stokes' law for measuring extensional viscosity and volume viscosity. Wave equations which take acoustic attenuation into account can be written in a fractional-derivative form. In homogeneous media, the main physical properties contributing to sound attenuation are viscosity and thermal conductivity. Attenuation coefficient Attenuation coefficients are used to quantify different media according to how strongly the transmitted ultrasound amplitude decreases as a function of frequency. The attenuation coefficient α can be used to determine total attenuation in dB in the medium using the following formula: Attenuation (dB) = α (dB/(MHz·cm)) × ℓ (cm) × f (MHz), where ℓ is the path length and f is the frequency of the incident beam. Attenuation is linearly dependent on the medium length and attenuation coefficient, as well as – approximately – the frequency of the incident ultrasound beam for biological tissue (while for simpler media, such as air, the relationship is quadratic). Attenuation coefficients vary widely for different media; in biomedical ultrasound imaging, however, biological materials and water are the most commonly used media, and their attenuation coefficients are typically tabulated at a reference frequency of 1 MHz. There are two general mechanisms of acoustic energy loss: absorption and scattering. Ultrasound propagation through homogeneous media is associated only with absorption and can be characterized with an absorption coefficient only. Propagation through heterogeneous media requires taking into account scattering.
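As a quick sketch of the coefficient formula above, the following computes a round-trip loss that an imaging system would compensate with input gain; the 0.5 dB/(MHz·cm) coefficient is a soft-tissue-like placeholder, not a tabulated reference value.

```python
def ultrasound_attenuation_db(alpha, length_cm, freq_mhz):
    """Total attenuation in dB for a beam of frequency freq_mhz (MHz)
    traversing length_cm (cm) of a medium with coefficient alpha
    in dB/(MHz*cm), per the linear formula above."""
    return alpha * length_cm * freq_mhz

# Placeholder example: 5 MHz imaging to 4 cm depth (8 cm round trip)
# in a medium with alpha = 0.5 dB/(MHz*cm):
loss = ultrasound_attenuation_db(0.5, 8.0, 5.0)
print(loss)  # 20 dB; the input amplitude would be raised accordingly
```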
Light attenuation in water Shortwave radiation emitted from the Sun has wavelengths in the visible spectrum of light that range from 360 nm (violet) to 750 nm (red). When the Sun's radiation reaches the sea surface, the shortwave radiation is attenuated by the water, and the intensity of light decreases exponentially with water depth. The intensity of light at depth can be calculated using the Beer–Lambert law. In clear mid-ocean waters, visible light is absorbed most strongly at the longest wavelengths. Thus, red, orange, and yellow wavelengths are totally absorbed at shallower depths, while blue and violet wavelengths reach deeper in the water column. Because the blue and violet wavelengths are absorbed least compared to the other wavelengths, open-ocean waters appear deep blue to the eye. Near the shore, coastal water contains more phytoplankton than the very clear mid-ocean waters. Chlorophyll-a pigments in the phytoplankton absorb light, and the plants themselves scatter light, making coastal waters less clear than mid-ocean waters. Chlorophyll-a absorbs light most strongly in the shortest wavelengths (blue and violet) of the visible spectrum. In coastal waters where high concentrations of phytoplankton occur, the green wavelength reaches the deepest in the water column and the color of water appears blue-green or green. Seismic The energy with which an earthquake affects a location depends on the running distance. The attenuation in the signal of ground motion intensity plays an important role in the assessment of possible strong groundshaking. A seismic wave loses energy as it propagates through the earth (seismic attenuation). This phenomenon is tied into the dispersion of the seismic energy with the distance. There are two types of dissipated energy: geometric dispersion, caused by distribution of the seismic energy to greater volumes, and dispersion as heat, also called intrinsic attenuation or anelastic attenuation. In porous fluid-saturated sedimentary rocks such as sandstones, intrinsic attenuation of seismic waves is primarily caused by the wave-induced flow of the pore fluid relative to the solid frame. Electromagnetic Attenuation decreases the intensity of electromagnetic radiation due to absorption or scattering of photons. Attenuation does not include the decrease in intensity due to inverse-square law geometric spreading. Therefore, calculation of the total change in intensity involves both the inverse-square law and an estimation of attenuation over the path. The primary causes of attenuation in matter are the photoelectric effect, Compton scattering, and, for photon energies of above 1.022 MeV, pair production. Coaxial and general RF cables The attenuation of RF cables is defined by: α = 10 log₁₀(P₁/P₂) dB per 100 m, where P₁ is the input power into a 100 m long cable terminated with the nominal value of its characteristic impedance, and P₂ is the output power at the far end of this cable. Attenuation in a coaxial cable is a function of the materials and the construction. Radiography An X-ray beam is attenuated when photons are absorbed as the beam passes through the tissue. Interaction with matter varies between high-energy photons and low-energy photons. Photons of higher energy are more capable of travelling through a tissue specimen, as they have less chance of interacting with matter.
This is mainly due to the photoelectric effect: the probability of photoelectric absorption is approximately proportional to (Z/E)³, where Z is the atomic number of the tissue atom and E is the photon energy. In this context, an increase in photon energy E will result in a rapid decrease in the interaction with matter. Optics Attenuation in fiber optics, also known as transmission loss, is the reduction in intensity of the light beam (or signal) with respect to distance travelled through a transmission medium. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the relatively high quality of transparency of modern optical transmission. The medium is typically a fiber of silica glass that confines the incident light beam to the inside. Attenuation is an important factor limiting the transmission of a digital signal across large distances. Thus, much research has gone into both limiting the attenuation and maximizing the amplification of the optical signal. Empirical research has shown that attenuation in optical fiber is caused primarily by both scattering and absorption. Attenuation in fiber optics can be quantified using the following equation: Attenuation (dB) = 10 log₁₀(Pin/Pout), where Pin is the optical power launched into the fiber and Pout is the power received at the far end. Light scattering The propagation of light through the core of an optical fiber is based on total internal reflection of the lightwave. Rough and irregular surfaces, even at the molecular level of the glass, can cause light rays to be reflected in many random directions. This type of reflection is referred to as "diffuse reflection", and it is typically characterized by a wide variety of reflection angles. Most objects that can be seen with the naked eye are visible due to diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation. Light scattering from many common surfaces can be modelled by reflectance. Light scattering depends on the wavelength of the light being scattered. Thus, limits to spatial scales of visibility arise, depending on the frequency of the incident lightwave and the physical dimension (or spatial scale) of the scattering center, which is typically in the form of some specific microstructural feature. For example, since visible light has a wavelength scale on the order of one micrometer, scattering centers will have dimensions on a similar spatial scale. Thus, attenuation results from the incoherent scattering of light at internal surfaces and interfaces. In (poly)crystalline materials such as metals and ceramics, in addition to pores, most of the internal surfaces or interfaces are in the form of grain boundaries that separate tiny regions of crystalline order. It has recently been shown that, when the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent. This phenomenon has given rise to the production of transparent ceramic materials. Likewise, the scattering of light in optical-quality glass fiber is caused by molecular-level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that a glass is simply the limiting case of a polycrystalline solid. Within this framework, "domains" exhibiting various degrees of short-range order become the building-blocks of both metals and alloys, as well as glasses and ceramics.
Distributed both between and within these domains are microstructural defects that provide the most ideal locations for the occurrence of light scattering. This same phenomenon is seen as one of the limiting factors in the transparency of IR missile domes. UV-Vis-IR absorption In addition to light scattering, attenuation or signal loss can also occur due to selective absorption of specific wavelengths, in a manner similar to that responsible for the appearance of color. Primary material considerations include both electrons and molecules as follows: At the electronic level, it depends on whether the electron orbitals are spaced (or "quantized") such that they can absorb a quantum of light (or photon) of a specific wavelength or frequency in the ultraviolet (UV) or visible ranges. This is what gives rise to color. At the atomic or molecular level, it depends on the frequencies of atomic or molecular vibrations or chemical bonds, how close-packed its atoms or molecules are, and whether or not the atoms or molecules exhibit long-range order. These factors will determine the capacity of the material to transmit longer wavelengths in the infrared (IR), far IR, radio and microwave ranges. The selective absorption of infrared (IR) light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integral multiple of the frequency) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared (IR) light. Applications In optical fibers, attenuation is the rate at which the signal light decreases in intensity. For this reason, glass fiber (which has a low attenuation) is used for long-distance fiber optic cables; plastic fiber has a higher attenuation and, hence, shorter range. There also exist optical attenuators that decrease the signal in a fiber optic cable intentionally. Attenuation of light is also important in physical oceanography. This same effect is an important consideration in weather radar, as raindrops absorb a part of the emitted beam that is more or less significant, depending on the wavelength used. Due to the damaging effects of high-energy photons, it is necessary to know how much energy is deposited in tissue during diagnostic treatments involving such radiation. In addition, gamma radiation is used in cancer treatments where it is important to know how much energy will be deposited in healthy and in tumorous tissue. In computer graphics, attenuation defines the local or global influence of light sources and force fields. In CT imaging, attenuation describes the density or darkness of the image. Radio Attenuation is an important consideration in the modern world of wireless telecommunications. Attenuation limits the range of radio signals and is affected by the materials a signal must travel through (e.g., air, wood, concrete, rain). See the article on path loss for more information on signal loss in wireless communication.
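Much of this article reduces to two small calculations: the Beer–Lambert exponential decay and the decibel bookkeeping used for cables, fibers and tissue. A minimal sketch, with purely illustrative coefficient values:

```python
import math

def transmitted_fraction(mu, path_m):
    """Beer-Lambert law: I/I0 = exp(-mu * L), for attenuation coefficient
    mu (1/m) over path length L (m)."""
    return math.exp(-mu * path_m)

def loss_db(p_in, p_out):
    """Attenuation in decibels given input and output power."""
    return 10.0 * math.log10(p_in / p_out)

# Illustrative only: mu = 0.04 1/m over 50 m of water depth.
frac = transmitted_fraction(0.04, 50.0)
print(frac, loss_db(1.0, frac))  # ~0.135 transmitted, ~8.7 dB of loss
```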
See also Air mass (astronomy) Astronomical filter Astronomical seeing Atmospheric refraction Attenuation length Attenuator (genetics) Cross section (physics) Electrical impedance Environmental remediation for natural attenuation Extinction (astronomy) ITU-R P.525 Mean free path Path loss Radar horizon Radiation length Rain fade Sunset#Colors Twinkling Wave propagation References External links NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables Underwater Radio Communication Physical phenomena Acoustics Telecommunications engineering
Attenuation
[ "Physics", "Engineering" ]
2,767
[ "Physical phenomena", "Telecommunications engineering", "Classical mechanics", "Acoustics", "Electrical engineering" ]
40,815
https://en.wikipedia.org/wiki/Brewster%27s%20angle
Brewster's angle (also known as the polarization angle) is an angle of incidence at which light with a particular polarization is perfectly transmitted through a transparent dielectric surface, with no reflection. When unpolarized light is incident at this angle, the light that is reflected from the surface is therefore perfectly polarized. The angle is named after the Scottish physicist Sir David Brewster (1781–1868). Explanation When light encounters a boundary between two media with different refractive indices, some of it is usually reflected as shown in the figure above. The fraction that is reflected is described by the Fresnel equations, and depends on the incoming light's polarization and angle of incidence. The Fresnel equations predict that light with the p polarization (electric field polarized in the same plane as the incident ray and the surface normal at the point of incidence) will not be reflected if the angle of incidence is θB = arctan(n2/n1), where n1 is the refractive index of the initial medium through which the light propagates (the "incident medium"), and n2 is the index of the other medium. This equation is known as Brewster's law, and the angle defined by it is Brewster's angle. The physical mechanism for this can be qualitatively understood from the manner in which electric dipoles in the media respond to p-polarized light. One can imagine that light incident on the surface is absorbed, and then re-radiated by oscillating electric dipoles at the interface between the two media. The polarization of freely propagating light is always perpendicular to the direction in which the light is travelling. The dipoles that produce the transmitted (refracted) light oscillate in the polarization direction of that light. These same oscillating dipoles also generate the reflected light. However, dipoles do not radiate any energy in the direction of the dipole moment. If the refracted light is p-polarized and propagates exactly perpendicular to the direction in which the light is predicted to be specularly reflected, the dipoles point along the specular reflection direction and therefore no light can be reflected. (See diagram, above) With simple geometry this condition can be expressed as θ1 + θ2 = 90°, where θ1 is the angle of reflection (or incidence) and θ2 is the angle of refraction. Using Snell's law, n1 sin θ1 = n2 sin θ2, one can calculate the incident angle at which no light is reflected: n1 sin θB = n2 sin(90° − θB) = n2 cos θB. Solving for θB gives θB = arctan(n2/n1). The physical explanation of why the transmitted ray should be at 90° to the reflected ray can be difficult to grasp, but the Brewster angle result also follows simply from the Fresnel equations for reflectivity, which state that for p-polarized light the amplitude reflection coefficient is rp = (n2 cos θ1 − n1 cos θ2)/(n2 cos θ1 + n1 cos θ2). The reflection goes to zero when n2 cos θ1 = n1 cos θ2. We can now use Snell's law to eliminate θ2 as follows: we multiply Snell's law by n1 and square both sides, giving n1⁴ sin² θ1 = n1² n2² sin² θ2; multiply the zero-reflection condition just obtained by n2 and square both sides, giving n2⁴ cos² θ1 = n1² n2² cos² θ2; and add the equations. This produces n1⁴ sin² θ1 + n2⁴ cos² θ1 = n1² n2². We finally divide both sides by cos² θ1, collect terms and rearrange to produce tan² θ1 = n2²/n1², from which the desired result θB = arctan(n2/n1) follows (which then allows reverse proof that θ1 + θ2 = 90°). For a glass medium (n2 ≈ 1.5) in air (n1 ≈ 1), Brewster's angle for visible light is approximately 56°, while for an air-water interface (n2 ≈ 1.33), it is approximately 53°. Since the refractive index for a given medium changes depending on the wavelength of light, Brewster's angle will also vary with wavelength. The phenomenon of light being polarized by reflection from a surface at a particular angle was first observed by Étienne-Louis Malus in 1808.
He attempted to relate the polarizing angle to the refractive index of the material, but was frustrated by the inconsistent quality of glasses available at that time. In 1815, Brewster experimented with higher-quality materials and showed that this angle was a function of the refractive index, defining Brewster's law. Brewster's angle is often referred to as the "polarizing angle", because light that reflects from a surface at this angle is entirely polarized perpendicular to the plane of incidence ("s-polarized"). A glass plate or a stack of plates placed at Brewster's angle in a light beam can, thus, be used as a polarizer. The concept of a polarizing angle can be extended to the concept of a Brewster wavenumber to cover planar interfaces between two linear bianisotropic materials. In the case of reflection at Brewster's angle, the reflected and refracted rays are mutually perpendicular. For magnetic materials, Brewster's angle can exist for only one of the incident wave polarizations, as determined by the relative strengths of the dielectric permittivity and magnetic permeability. This has implications for the existence of generalized Brewster angles for dielectric metasurfaces. Applications While at the Brewster angle there is no reflection of the p polarization, at yet greater angles the reflection coefficient of the p polarization is always less than that of the s polarization, almost up to 90° incidence, where the reflectivity of each rises towards unity. Thus reflected light from horizontal surfaces (such as the surface of a road) at a distance much greater than one's height (so that the incidence angle of specularly reflected light is near, or usually well beyond, the Brewster angle) is strongly s-polarized. Polarized sunglasses use a sheet of polarizing material to block horizontally-polarized light and thus reduce glare in such situations. These are most effective with smooth surfaces, where specular reflection (in which the angle of incidence equals the angle of reflection toward the observer) is dominant, but even diffuse reflections from roads, for instance, are also significantly reduced. Photographers also use polarizing filters to remove reflections from water so that they can photograph objects beneath the surface. Using a polarizing camera attachment which can be rotated, such a filter can be adjusted to reduce reflections from objects other than horizontal surfaces, such as seen in the accompanying photograph (right) where the s polarization (approximately vertical) has been eliminated using such a filter. When recording a classical hologram, the bright reference beam is typically arranged to strike the film in the p polarization at Brewster's angle. By thus eliminating reflection of the reference beam at the transparent back surface of the holographic film, unwanted interference effects in the resulting hologram are avoided. Entrance windows or prisms with their surfaces at the Brewster angle are commonly used in optics and laser physics in particular. The polarized laser light enters the prism at Brewster's angle without any reflective losses. In surface science, Brewster angle microscopes are used to image layers of particles or molecules at air-liquid interfaces. Using illumination by a laser at Brewster's angle to the interface and observation at the angle of reflection, the uniform liquid does not reflect, appearing black in the image.
However, any molecular layers or artifacts at the surface, whose refractive index or physical structure contrasts with the liquid, allow for some reflection against that black background, which is captured by a camera. Brewster windows Gas lasers using an external cavity (reflection by one or both mirrors outside the gain medium) generally seal the tube using windows tilted at Brewster's angle. This prevents light in the intended (p) polarization from being lost through reflection (which would reduce the round-trip gain of the laser), critical in lasers having a low round-trip gain. On the other hand, it does remove s-polarized light, increasing the round-trip loss for that polarization, and ensuring the laser only oscillates in one linear polarization, as is usually desired. And many sealed-tube lasers (which do not even need windows) have a glass plate inserted within the tube at the Brewster angle, simply for the purpose of allowing lasing in only one polarization. Pseudo-Brewster's angle When the reflecting surface is absorbing, reflectivity at parallel polarization (p) goes through a non-zero minimum at the so-called pseudo-Brewster's angle. See also Brewster angle microscope Critical angle, the angle of total internal reflection. References Further reading External links Brewster's Angle Extraction from Wolfram Research Brewster window at RP-photonics.com TE, TM Reflection Coefficients – interactive phase and magnitude plots showing Brewster's angle Geometrical optics Physical optics Angle Polarization (waves) Optical quantities
Brewster's angle
[ "Physics", "Mathematics" ]
1,705
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Quantity", "Astrophysics", "Polarization (waves)", "Optical quantities", "Angle", "Wikipedia categories named after physical quantities" ]
40,843
https://en.wikipedia.org/wiki/Capacitive%20coupling
Capacitive coupling is the transfer of energy within an electrical network, or between distant networks, by means of displacement current between circuit nodes, induced by the electric field. This coupling can be intentional or accidental. In its simplest implementation, capacitive coupling is achieved by placing a capacitor between two nodes. Where analysis of many points in a circuit is carried out, the capacitance at each point and between points can be described in a matrix form. Use in analog circuits In analog circuits, a coupling capacitor is used to connect two circuits such that only the AC signal from the first circuit can pass through to the next while DC is blocked. This technique helps to isolate the DC bias settings of the two coupled circuits. Capacitive coupling is also known as AC coupling, and the capacitor used for the purpose is also known as a DC-blocking capacitor. A coupling capacitor's ability to prevent a DC load from interfering with an AC source is particularly useful in class-A amplifier circuits, where it prevents a 0 volt input from being passed to a transistor with additional resistor biasing, creating continuous amplification. Capacitive coupling decreases the low-frequency gain of a system containing capacitively coupled units. Each coupling capacitor, along with the input electrical impedance of the next stage, forms a high-pass filter, and the sequence of filters results in a cumulative filter with a cutoff frequency that may be higher than that of each individual filter. Coupling capacitors can also introduce nonlinear distortion at low frequencies. This is not an issue at high frequencies because the voltage across the capacitor stays very close to zero. However, if a signal passing through the coupling capacitance has a frequency that is low relative to the RC cutoff frequency, voltages can develop across the capacitor, which for some capacitor types results in changes of capacitance, leading to distortion. This is avoided by choosing capacitor types that have a low voltage coefficient, and by using large values that put the cutoff frequency far lower than the frequencies of the signal. Use in digital circuits AC coupling is also widely used in digital circuits to transmit digital signals with a zero DC component, known as DC-balanced signals. DC-balanced waveforms are useful in communications systems, since they can be used over AC-coupled electrical connections to avoid voltage imbalance problems and charge accumulation between connected systems or components. For this reason, most modern line codes are designed to produce DC-balanced waveforms. The most common classes of DC-balanced line codes are constant-weight codes and paired-disparity codes. Gimmick loop A gimmick loop is a simple type of capacitive coupler: two closely spaced strands of wire. It provides capacitive coupling of a few picofarads between two nodes. Usually the wires are twisted together. Parasitic capacitive coupling Capacitive coupling is often unintended, such as the capacitance between two wires or PCB traces that are next to each other. One signal may capacitively couple with another and cause what appears to be noise. To reduce coupling, wires or traces are often separated as much as possible, or ground lines or ground planes are run in between signals that might affect each other, so that the lines capacitively couple to ground rather than to each other.
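The high-pass behaviour described in the analog-coupling section above follows the usual RC relation: each coupling capacitor C and the input resistance R of the following stage give a cutoff frequency $f_c = 1/(2\pi RC)$. A minimal Python sketch; the 10 µF and 10 kΩ values are illustrative assumptions, not from the text:

```python
import math

def highpass_cutoff(r_ohms, c_farads):
    """-3 dB cutoff of the high-pass filter formed by a coupling capacitor
    and the input resistance of the next stage: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative values: a 10 uF coupling capacitor into a 10 kOhm input.
fc = highpass_cutoff(10e3, 10e-6)
print(f"cutoff ~ {fc:.2f} Hz")   # ~1.6 Hz: audio AC passes, DC is blocked
```

Cascading several such stages raises the overall cutoff above that of any single stage, which is the cumulative-filter effect mentioned above.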
Prototypes of high-frequency (tens of megahertz) or high-gain analog circuits often use circuits that are built over a ground plane to control unwanted coupling. If a high-gain amplifier's output capacitively couples to its input it may become an electronic oscillator. See also Coupling (electronics) DC block Decoupling (electronics) Decoupling capacitor Direct coupling Differential capacitance RC coupling Crosstalk References External links Howard Johnson: When to use AC coupling, DC Blocking Capacitor Value Texas Instruments: AC-Coupling Between Differential LVPECL, LVDS, HSTL, and CML (PDF) Capacitors Electromagnetic compatibility
Capacitive coupling
[ "Physics", "Engineering" ]
835
[ "Electromagnetic compatibility", "Radio electronics", "Physical quantities", "Capacitors", "Capacitance", "Electrical engineering" ]
40,866
https://en.wikipedia.org/wiki/Characteristic%20impedance
The characteristic impedance or surge impedance (usually written Z0) of a uniform transmission line is the ratio of the amplitudes of voltage and current of a wave travelling in one direction along the line in the absence of reflections in the other direction. Equivalently, it can be defined as the input impedance of a transmission line when its length is infinite. Characteristic impedance is determined by the geometry and materials of the transmission line and, for a uniform line, is not dependent on its length. The SI unit of characteristic impedance is the ohm. The characteristic impedance of a lossless transmission line is purely real, with no reactive component (see below). Energy supplied by a source at one end of such a line is transmitted through the line without being dissipated in the line itself. A transmission line of finite length (lossless or lossy) that is terminated at one end with an impedance equal to the characteristic impedance appears to the source like an infinitely long transmission line and produces no reflections.

Transmission line model The characteristic impedance of an infinite transmission line at a given angular frequency $\omega$ is the ratio of the voltage and current of a pure sinusoidal wave of the same frequency travelling along the line. This relation is also the case for finite transmission lines until the wave reaches the end of the line. Generally, a wave is reflected back along the line in the opposite direction. When the reflected wave reaches the source, it is reflected yet again, adding to the transmitted wave and changing the ratio of the voltage and current at the input, causing the voltage-current ratio to no longer equal the characteristic impedance. This new ratio including the reflected energy is called the input impedance of that particular transmission line and load. The input impedance of an infinite line is equal to the characteristic impedance since the transmitted wave is never reflected back from the end. Equivalently: The characteristic impedance of a line is that impedance which, when terminating an arbitrary length of line at its output, produces an input impedance of equal value. This is so because there is no reflection on a line terminated in its own characteristic impedance.

Applying the transmission line model based on the telegrapher's equations as derived below, the general expression for the characteristic impedance of a transmission line is:

$$Z_0 = \sqrt{\frac{R + j\omega L}{G + j\omega C}},$$

where $R$ is the resistance per unit length, $L$ is the inductance per unit length, $G$ is the conductance of the dielectric per unit length, $C$ is the capacitance per unit length, $j$ is the imaginary unit, and $\omega$ is the angular frequency. This expression extends to DC by letting $\omega$ tend to 0. A surge of energy on a finite transmission line will see an impedance of $Z_0$ prior to any reflections returning; hence surge impedance is an alternative name for characteristic impedance. Although an infinite line is assumed, since all quantities are per unit length, the "per length" parts of all the units cancel, and the characteristic impedance is independent of the length of the transmission line.

The voltage and current phasors on the line are related by the characteristic impedance as:

$$\frac{V_{(+)}}{I_{(+)}} = Z_0 = -\frac{V_{(-)}}{I_{(-)}},$$

where the subscripts (+) and (−) mark the separate constants for the waves traveling forward (+) and backward (−). The rightmost expression has a negative sign because the current in the backward wave has the opposite direction to current in the forward wave.
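The general expression can be evaluated directly to see how $Z_0$ varies with frequency and settles near $\sqrt{L/C}$ once $\omega L \gg R$ and $\omega C \gg G$. A minimal Python sketch; the per-metre primary constants are illustrative assumptions for a coax-like line, not values from the text:

```python
import cmath, math

def z0(R, L, G, C, f):
    """Characteristic impedance Z0 = sqrt((R + jwL) / (G + jwC))."""
    w = 2 * math.pi * f
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

# Assumed per-metre primary constants (coax-like): ohm/m, H/m, S/m, F/m.
R, L, G, C = 0.1, 250e-9, 1e-6, 100e-12

for f in (1e3, 1e6, 1e9):
    print(f"f = {f:10.0f} Hz: Z0 = {z0(R, L, G, C, f):.2f} ohm")
print(f"lossless limit sqrt(L/C) = {math.sqrt(L / C):.2f} ohm")
```

At low frequency the series resistance dominates and $Z_0$ is complex; at high frequency the printed value approaches the real 50 ohm lossless limit.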
Derivation Using the telegrapher's equation The differential equations describing the dependence of the voltage and current on time and space are linear, so that a linear combination of solutions is again a solution. This means that we can consider solutions with a time dependence $e^{j\omega t}$; doing so is functionally equivalent to solving for the Fourier coefficients for voltage and current amplitudes at some fixed angular frequency $\omega$. Doing so causes the time dependence to factor out, leaving an ordinary differential equation for the coefficients, which will be phasors, dependent on position (space) only. Moreover, the parameters can be generalized to be frequency-dependent.

Let

$$V(x,t) = V(x)\, e^{j\omega t} \qquad\text{and}\qquad I(x,t) = I(x)\, e^{j\omega t}.$$

Take the positive direction for $V$ and $I$ in the loop to be clockwise. We find that

$$\frac{dV}{dx} = -(R + j\omega L)\, I \qquad\text{and}\qquad \frac{dI}{dx} = -(G + j\omega C)\, V,$$

or

$$\frac{dV}{dx} = -Z I \qquad\text{and}\qquad \frac{dI}{dx} = -Y V,$$

where

$$Z = R + j\omega L \qquad\text{and}\qquad Y = G + j\omega C.$$

These two first-order equations are easily uncoupled by a second differentiation, with the results:

$$\frac{d^2V}{dx^2} = Z Y\, V \qquad\text{and}\qquad \frac{d^2I}{dx^2} = Z Y\, I.$$

Notice that both $V$ and $I$ satisfy the same equation. Since $ZY$ is independent of $x$ and $t$, it can be represented by a single constant $-k^2$. (The minus sign is included for later convenience.) That is:

$$ZY = -k^2,$$

so

$$k = \pm j\sqrt{ZY} = \pm j\sqrt{(R + j\omega L)(G + j\omega C)}.$$

We can write the above equation as

$$k = \pm \omega\sqrt{LC}\;\sqrt{\left(1 - j\frac{R}{\omega L}\right)\left(1 - j\frac{G}{\omega C}\right)},$$

which is correct for any transmission line in general. And for typical transmission lines, that are carefully built from wire with low loss resistance $R$ and small insulation leakage conductance $G$, and used at high frequencies, the inductive reactance $\omega L$ and the capacitive admittance $\omega C$ will both be large, so the constant $k$ is very close to being a real number:

$$k \approx \pm \omega\sqrt{LC}.$$

With this definition of $k$, the position- or $x$-dependent part will appear as $\pm jkx$ in the exponential solutions of the equation, similar to the time-dependent part $+j\omega t$, so the solution reads

$$V(x) = V_{(+)}\, e^{-jkx} + V_{(-)}\, e^{+jkx},$$

where $V_{(+)}$ and $V_{(-)}$ are the constants of integration for the forward moving (+) and backward moving (−) waves, as in the prior section. When we recombine the time-dependent part we obtain the full solution:

$$V(x,t) = \left( V_{(+)}\, e^{-jkx} + V_{(-)}\, e^{+jkx} \right) e^{j\omega t}.$$

Since the equation for $I$ is the same form, it has a solution of the same form:

$$I(x) = I_{(+)}\, e^{-jkx} + I_{(-)}\, e^{+jkx},$$

where $I_{(+)}$ and $I_{(-)}$ are again constants of integration. The above equations are the wave solution for $V$ and $I$. In order to be compatible, they must still satisfy the original differential equations, one of which is

$$\frac{dV}{dx} = -Z I.$$

Substituting the solutions for $V$ and $I$ into the above equation, we get

$$-jk\, V_{(+)}\, e^{-jkx} + jk\, V_{(-)}\, e^{+jkx} = -Z \left( I_{(+)}\, e^{-jkx} + I_{(-)}\, e^{+jkx} \right).$$

Isolating distinct powers of $e$ and combining identical powers, we see that in order for the above equation to hold for all possible values of $x$ we must have: for the coefficients of $e^{-jkx}$, $-jk\, V_{(+)} = -Z I_{(+)}$; for the coefficients of $e^{+jkx}$, $+jk\, V_{(-)} = -Z I_{(-)}$. Since $(jk)^2 = ZY$, taking the branch $jk = \sqrt{ZY}$, valid solutions require

$$\frac{V_{(+)}}{I_{(+)}} = -\frac{V_{(-)}}{I_{(-)}} = \frac{Z}{jk} = \sqrt{\frac{Z}{Y}}.$$

It can be seen that the constant $\sqrt{Z/Y}$, defined in the above equations, has the dimensions of impedance (ratio of voltage to current) and is a function of primary constants of the line and operating frequency. It is called the "characteristic impedance" of the transmission line, and conventionally denoted by $Z_0$:

$$Z_0 = \sqrt{\frac{Z}{Y}} = \sqrt{\frac{R + j\omega L}{G + j\omega C}},$$

which holds generally, for any transmission line. For well-functioning transmission lines, with either $R$ and $G$ both very small, or with $\omega$ very high, or all of the above, we get

$$Z_0 \approx \sqrt{\frac{L}{C}},$$

hence the characteristic impedance is typically very close to being a real number. Manufacturers make commercial cables to approximate this condition very closely over a wide range of frequencies.

As a limiting case of infinite ladder networks Intuition Consider an infinite ladder network consisting of a series impedance $Z$ and a shunt admittance $Y$. Let its input impedance be $Z_{\mathrm{IT}}$. If a new pair of impedance $Z$ and admittance $Y$ is added in front of the network, its input impedance remains unchanged since the network is infinite. Thus, it can be reduced to a finite network with one series impedance $Z$ and two parallel impedances $1/Y$ and $Z_{\mathrm{IT}}$. Its input impedance is given by the expression

$$Z_{\mathrm{IT}} = Z + \frac{1}{Y + \dfrac{1}{Z_{\mathrm{IT}}}},$$

which is also known as its iterative impedance. Its solution is:

$$Z_{\mathrm{IT}} = \frac{Z}{2} \pm \sqrt{\frac{Z^2}{4} + \frac{Z}{Y}}.$$

For a transmission line, it can be seen as a limiting case of an infinite ladder network with infinitesimal impedance and admittance at a constant ratio. Taking the positive root, in this limit the equation simplifies to:

$$Z_{\mathrm{IT}} = \sqrt{\frac{Z}{Y}}.$$
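This fixed-point property can be checked numerically: terminate a chain of short lumped sections in $\sqrt{Z/Y}$ and the input impedance reproduces itself as sections are added. A minimal Python sketch; the per-metre constants are the same illustrative, coax-like assumptions used earlier, not values from the text:

```python
import cmath, math

f = 1e6
w = 2 * math.pi * f
Z = 0.1 + 1j * w * 250e-9      # series impedance per metre: R + jwL (assumed)
Y = 1e-6 + 1j * w * 100e-12    # shunt admittance per metre: G + jwC (assumed)

z0 = cmath.sqrt(Z / Y)         # closed-form characteristic impedance
dx = 1e-3                      # 1 mm lumped sections
zin = z0                       # terminate the far end in z0 ...
for _ in range(1000):          # ... then stack 1000 short sections in front
    zin = Z * dx + 1.0 / (Y * dx + 1.0 / zin)   # series element + shunt || rest

print(f"sqrt(Z/Y)                 = {z0:.4f} ohm")
print(f"input after 1000 sections = {zin:.4f} ohm")   # essentially unchanged
```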
Derivation Using this insight, many similar derivations exist in several books and are applicable to both lossless and lossy lines. Here, we follow an approach posted by Tim Healy. The line is modeled by a series of differential segments with differential series elements $R\,dx$ and $L\,dx$ and shunt elements $G\,dx$ and $C\,dx$ (as shown in the figure at the beginning of the article). The characteristic impedance is defined as the ratio of the input voltage to the input current of a semi-infinite length of line. We call this impedance $Z_0$. That is, the impedance looking into the line on the left is $Z_0$. But, of course, if we go down the line one differential length $dx$, the impedance into the line is still $Z_0$. Hence we can say that the impedance looking into the line on the far left is equal to $Z_0$ in parallel with the shunt elements $G\,dx$ and $C\,dx$, all of which is in series with the series elements $R\,dx$ and $L\,dx$. Hence:

$$Z_0 = (R + j\omega L)\,dx + \frac{Z_0}{1 + Z_0 (G + j\omega C)\,dx}.$$

Multiplying both sides by $1 + Z_0(G + j\omega C)\,dx$ and expanding gives

$$Z_0 + Z_0^2 (G + j\omega C)\,dx = (R + j\omega L)\,dx + (R + j\omega L)(G + j\omega C) Z_0\,dx^2 + Z_0.$$

The added terms cancel, leaving

$$Z_0^2 (G + j\omega C)\,dx = (R + j\omega L)\,dx + (R + j\omega L)(G + j\omega C) Z_0\,dx^2.$$

The first-power terms are the highest remaining order. Dividing out the common factor of $dx$, and dividing through by the factor $(G + j\omega C)$, we get

$$Z_0^2 = \frac{R + j\omega L}{G + j\omega C} + (R + j\omega L)\, Z_0\,dx.$$

In comparison to the factors whose $dx$ divided out, the last term, which still carries a remaining factor $dx$, is infinitesimal relative to the other, now finite, terms, so we can drop it. That leads to

$$Z_0 = \pm\sqrt{\frac{R + j\omega L}{G + j\omega C}}.$$

Reversing the sign applied to the square root has the effect of reversing the direction of the flow of current.

Lossless line The analysis of lossless lines provides an accurate approximation for real transmission lines that simplifies the mathematics considered in modeling transmission lines. A lossless line is defined as a transmission line that has no line resistance and no dielectric loss. This would imply that the conductors act like perfect conductors and the dielectric acts like a perfect dielectric. For a lossless line, $R$ and $G$ are both zero, so the equation for characteristic impedance derived above reduces to:

$$Z_0 = \sqrt{\frac{L}{C}}.$$

In particular, $Z_0$ does not depend any more upon the frequency. The above expression is wholly real, since the imaginary term has canceled out, implying that $Z_0$ is purely resistive. For a lossless line terminated in $Z_0$, there is no loss of current across the line, and so the voltage remains the same along the line. The lossless line model is a useful approximation for many practical cases, such as low-loss transmission lines and transmission lines with high frequency. For both of these cases, $R$ and $G$ are much smaller than $\omega L$ and $\omega C$, respectively, and can thus be ignored. The solutions to the long line transmission equations include incident and reflected portions of the voltage and current, which in the notation of the earlier derivation can be written

$$V(x) = V_{(+)}\, e^{-jkx} + V_{(-)}\, e^{+jkx}, \qquad I(x) = \frac{V_{(+)}}{Z_0}\, e^{-jkx} - \frac{V_{(-)}}{Z_0}\, e^{+jkx}.$$

When the line is terminated with its characteristic impedance, the reflected portions of these equations are reduced to 0 and the solutions to the voltage and current along the transmission line are wholly incident. Without a reflection of the wave, the load that is being supplied by the line effectively blends into the line, making it appear to be an infinite line. In a lossless line this implies that the voltage and current remain the same everywhere along the transmission line. Their magnitudes remain constant along the length of the line and are only rotated by a phase angle.

Surge impedance loading In electric power transmission, the characteristic impedance of a transmission line is expressed in terms of the surge impedance loading (SIL), or natural loading, being the power loading at which reactive power is neither produced nor absorbed:

$$\mathit{SIL} = \frac{V_{\mathrm{LL}}^2}{Z_0},$$

in which $V_{\mathrm{LL}}$ is the RMS line-to-line voltage in volts.
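As a worked example of the SIL relation, consider the following minimal Python sketch; the figures (a 400 kV line with a surge impedance of about 400 ohm) are illustrative assumptions, not values from the text:

```python
def surge_impedance_loading(v_ll_volts, z0_ohms):
    """SIL = V_LL^2 / Z0, with V_LL the RMS line-to-line voltage."""
    return v_ll_volts ** 2 / z0_ohms

# Assumed: a 400 kV overhead line with Z0 ~ 400 ohm.
sil_watts = surge_impedance_loading(400e3, 400.0)
print(f"SIL ~ {sil_watts / 1e6:.0f} MW")   # ~400 MW
```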
Loaded below its SIL, the voltage at the load will be greater than the system voltage; above it, the load voltage is depressed. The Ferranti effect describes the voltage gain towards the remote end of a very lightly loaded (or open-ended) transmission line. Underground cables normally have a very low characteristic impedance, resulting in an SIL that is typically in excess of the thermal limit of the cable. Practical examples The characteristic impedance of coaxial cables (coax) is commonly chosen to be 50 Ω for RF and microwave applications. Coax for video applications is usually 75 Ω for its lower loss. See also Iterative impedance, of which characteristic impedance is a limiting case References Sources External links Electricity Physical quantities Distributed element circuits Transmission lines de:Leitungstheorie#Die allgemeine Lösung der Leitungsgleichungen
Characteristic impedance
[ "Physics", "Mathematics", "Engineering" ]
2,228
[ "Physical phenomena", "Physical quantities", "Quantity", "Electronic engineering", "Distributed element circuits", "Physical properties" ]
40,893
https://en.wikipedia.org/wiki/Coherence%20length
In physics, coherence length is the propagation distance over which a coherent wave (e.g. an electromagnetic wave) maintains a specified degree of coherence. Wave interference is strong when the paths taken by all of the interfering waves differ by less than the coherence length. A wave with a longer coherence length is closer to a perfect sinusoidal wave. Coherence length is important in holography and telecommunications engineering. This article focuses on the coherence of classical electromagnetic fields. In quantum mechanics, there is a mathematically analogous concept of the quantum coherence length of a wave function.

Formulas In radio-band systems, the coherence length is approximated by

$$L = \frac{c}{n\,\Delta f} \approx \frac{\lambda^2}{n\,\Delta\lambda},$$

where $c$ is the speed of light in vacuum, $n$ is the refractive index of the medium, $\Delta f$ is the bandwidth of the source, $\lambda$ is the signal wavelength, and $\Delta\lambda$ is the width of the range of wavelengths in the signal.

In optical communications and optical coherence tomography (OCT), assuming that the source has a Gaussian emission spectrum, the roundtrip coherence length $L$ is given by

$$L = \frac{2 \ln 2}{\pi}\,\frac{\lambda^2}{n\,\Delta\lambda},$$

where $\lambda$ is the central wavelength of the source, $n$ is the group refractive index of the medium, and $\Delta\lambda$ is the (FWHM) spectral width of the source. If the source has a Gaussian spectrum with FWHM spectral width $\Delta\lambda$, then a path offset of $\pm L$ will reduce the fringe visibility to 50%. It is important to note that this is a roundtrip coherence length: this definition is applied in applications like OCT, where the light traverses the measured displacement twice (as in a Michelson interferometer). In transmissive applications, such as with a Mach–Zehnder interferometer, the light traverses the displacement only once, and the coherence length is effectively doubled.

The coherence length can also be measured using a Michelson interferometer and is the optical path length difference of a self-interfering laser beam which corresponds to $1/e \approx 37\%$ fringe visibility, where the fringe visibility is defined as

$$V = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},$$

where $I$ is the fringe intensity. In long-distance transmission systems, the coherence length may be reduced by propagation factors such as dispersion, scattering, and diffraction.

Lasers Multimode helium–neon lasers have a typical coherence length on the order of centimeters, while the coherence length of longitudinally single-mode lasers can exceed 1 km. Semiconductor lasers can reach some 100 m, but small, inexpensive semiconductor lasers have shorter lengths, with one source claiming 20 cm. Single-mode fiber lasers with linewidths of a few kHz can have coherence lengths exceeding 100 km. Similar coherence lengths can be reached with optical frequency combs due to the narrow linewidth of each tooth. Non-zero visibility is present only for short intervals of pulses repeated after cavity length distances, up to this long coherence length.

Other light sources Tolansky's An Introduction to Interferometry has a chapter on sources which quotes a line width of around 0.052 angstroms for each of the sodium D lines in an uncooled low-pressure sodium lamp, corresponding to a coherence length of around 67 mm for each line by itself. Cooling the low-pressure sodium discharge to liquid nitrogen temperatures increases the individual D line coherence length by a factor of 6. A very narrow-band interference filter would be required to isolate an individual D line.

See also Coherence time Superconducting coherence length Spatial coherence References Electromagnetic radiation Physical optics Waves Optical quantities
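These formulas can be checked against the sodium-lamp figures quoted above. A minimal Python sketch (the 589.0 nm line centre is the nominal sodium D value; the 0.052 Å width is taken from the text): the simple estimate reproduces the roughly 67 mm coherence length, while the Gaussian-spectrum form carries the extra $2\ln 2/\pi \approx 0.44$ prefactor.

```python
import math

def coherence_length_simple(lam, dlam, n=1.0):
    """Rough estimate: L ~ lam^2 / (n * dlam)."""
    return lam ** 2 / (n * dlam)

def coherence_length_gaussian(lam, dlam, n=1.0):
    """Gaussian-spectrum roundtrip form: L = (2 ln 2 / pi) * lam^2 / (n * dlam)."""
    return (2.0 * math.log(2.0) / math.pi) * lam ** 2 / (n * dlam)

lam, dlam = 589.0e-9, 0.0052e-9   # sodium D line, 0.052 angstrom width
print(f"simple estimate : {coherence_length_simple(lam, dlam) * 1e3:.0f} mm")   # ~67 mm
print(f"Gaussian formula: {coherence_length_gaussian(lam, dlam) * 1e3:.0f} mm") # ~29 mm
```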
Coherence length
[ "Physics", "Mathematics" ]
725
[ "Physical phenomena", "Physical quantities", "Electromagnetic radiation", "Quantity", "Waves", "Motion (physics)", "Radiation", "Optical quantities" ]
40,894
https://en.wikipedia.org/wiki/Coherence%20time
For an electromagnetic wave, the coherence time is the time over which a propagating wave (especially a laser or maser beam) may be considered coherent, meaning that its phase is, on average, predictable. In long-distance transmission systems, the coherence time may be reduced by propagation factors such as dispersion, scattering, and diffraction. The coherence time, usually designated $\tau$, is calculated by dividing the coherence length by the phase velocity of light in a medium; it is approximately given by

$$\tau \approx \frac{\lambda^2}{c\,\Delta\lambda} = \frac{1}{\Delta\nu},$$

where $\lambda$ is the central wavelength of the source, $\Delta\nu$ and $\Delta\lambda$ are the spectral width of the source in units of frequency and wavelength respectively, and $c$ is the speed of light in vacuum. A single-mode fiber laser has a linewidth of a few kHz, corresponding to a coherence time of a few hundred microseconds. Hydrogen masers have a linewidth around 1 Hz, corresponding to a coherence time of about one second. Their coherence length approximately corresponds to the distance from the Earth to the Moon. As of 2022, research groups worldwide have demonstrated superconducting qubits with coherence times up to several hundred microseconds. See also Atomic coherence Temporal coherence Degree of coherence References Electromagnetic radiation Physical optics Radio frequency propagation Optical quantities
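The two examples in the text can be reproduced with the $\tau \approx 1/\Delta\nu$ relation. A minimal Python sketch; the 2 kHz fiber-laser linewidth is an assumed value consistent with "a few kHz":

```python
c = 2.998e8  # speed of light in vacuum, m/s

def coherence_time(dnu_hz):
    """tau ~ 1 / dnu for a source of spectral width dnu (in Hz)."""
    return 1.0 / dnu_hz

for name, dnu in (("fiber laser, ~2 kHz linewidth (assumed)", 2e3),
                  ("hydrogen maser, ~1 Hz linewidth", 1.0)):
    tau = coherence_time(dnu)
    print(f"{name}: tau ~ {tau:.1e} s, "
          f"coherence length ~ {c * tau / 1e3:,.0f} km")
```

It prints a coherence time of a few hundred microseconds (about 150 km of coherence length) for the fiber laser, and about one second (roughly 300,000 km, on the order of the Earth-Moon distance) for the maser.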
Coherence time
[ "Physics", "Mathematics" ]
270
[ "Physical phenomena", "Spectrum (physical sciences)", "Physical quantities", "Radio frequency propagation", "Electromagnetic radiation", "Quantity", "Electromagnetic spectrum", "Waves", "Radiation", "Optical quantities" ]
40,922
https://en.wikipedia.org/wiki/Communications%20security
Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications in an intelligible form, while still delivering content to the intended recipients. In the North Atlantic Treaty Organization culture, including United States Department of Defense culture, it is often referred to by the abbreviation COMSEC. The field includes cryptographic security, transmission security, emissions security and physical security of COMSEC equipment and associated keying material. COMSEC is used to protect both classified and unclassified traffic on military communications networks, including voice, video, and data. It is used for both analog and digital applications, and both wired and wireless links. Voice over secure internet protocol VOSIP has become the de facto standard for securing voice communication, replacing the need for Secure Terminal Equipment (STE) in much of NATO, including the U.S.A. USCENTCOM moved entirely to VOSIP in 2008. Specialties Cryptographic security: The component of communications security that results from the provision of technically sound cryptosystems and their proper use. This includes ensuring message confidentiality and authenticity. Emission security (EMSEC): The protection resulting from all measures taken to deny unauthorized persons information of value that might be derived from communications systems and cryptographic equipment intercepts and the interception and analysis of compromising emanations from cryptographic equipment, information systems, and telecommunications systems. Transmission security (TRANSEC): The component of communications security that results from the application of measures designed to protect transmissions from interception and exploitation by means other than cryptanalysis (e.g. frequency hopping and spread spectrum). Physical security: The component of communications security that results from all physical measures necessary to safeguard classified equipment, material, and documents from access thereto or observation thereof by unauthorized persons. Related terms AKMS – the Army Key Management System AEK – Algorithmic Encryption Key CT3 – Common Tier 3 CCI – Controlled Cryptographic Item - equipment which contains COMSEC embedded devices ACES – Automated Communications Engineering Software DTD – Data Transfer Device ICOM – Integrated COMSEC, e.g. a radio with built-in encryption TEK – Traffic Encryption Key TED – Trunk Encryption Device such as the WALBURN/KG family KEK – Key Encryption Key KPK – Key production key OWK – Over the Wire Key OTAR – Over the Air Rekeying LCMS – Local COMSEC Management Software KYK-13 – Electronic Transfer Device KOI-18 – Tape Reader General Purpose KYX-15 – Electronic Transfer Device KG-30 – family of COMSEC equipment TSEC – Telecommunications Security (sometimes referred to in error as transmission security or TRANSEC) SOI – Signal operating instructions SKL – Simple Key Loader TPI – Two person integrity STU-III – (obsolete secure phone, replaced by STE) STE – Secure Terminal Equipment (secure phone) Types of COMSEC equipment: Crypto equipment: Any equipment that embodies cryptographic logic or performs one or more cryptographic functions (key generation, encryption, and authentication). Crypto-ancillary equipment: Equipment designed specifically to facilitate efficient or reliable operation of crypto-equipment, without performing cryptographic functions itself.
Crypto-production equipment: Equipment used to produce or load keying material. Authentication equipment. DoD Electronic Key Management System The Electronic Key Management System (EKMS) is a United States Department of Defense (DoD) key management, COMSEC material distribution, and logistics support system. The National Security Agency (NSA) established the EKMS program to supply electronic key to COMSEC devices in a secure and timely manner, and to provide COMSEC managers with an automated system capable of ordering, generation, production, distribution, storage, security accounting, and access control. The Army's platform in the four-tiered EKMS, AKMS, automates frequency management and COMSEC management operations. It eliminates paper keying material and hardcopy Signal operating instructions (SOI), and saves the time and resources required for courier distribution. It has four components: LCMS provides automation for the detailed accounting required for every COMSEC account, and electronic key generation and distribution capability. ACES is the frequency management portion of AKMS. ACES has been designated by the Military Communications Electronics Board as the joint standard for use by all services in development of frequency management and crypto-net planning. CT3 with DTD software is in a fielded, ruggedized hand-held device that handles, views, stores, and loads SOI, Key, and electronic protection data. DTD provides an improved net-control device to automate crypto-net control operations for communications networks employing electronically keyed COMSEC equipment. SKL is a hand-held PDA that handles, views, stores, and loads SOI, Key, and electronic protection data. Key Management Infrastructure (KMI) Program KMI is intended to replace the legacy Electronic Key Management System to provide a means for securely ordering, generating, producing, distributing, managing, and auditing cryptographic products (e.g., asymmetric keys, symmetric keys, manual cryptographic systems, and cryptographic applications). This system is currently being fielded by Major Commands, and variants will be required for non-DoD agencies with a COMSEC mission. See also Dynamic secrets Electronics technician (United States Navy) Information security Information warfare List of telecommunications encryption terms NSA encryption systems NSA product types Operations security Secure communication Signals intelligence Traffic analysis References External links National Information Systems Security Glossary https://web.archive.org/web/20121002192433/http://www.dtic.mil/whs/directives/corres/pdf/466002p.pdf Cryptography machines Cryptography Military communications Military radio systems Encryption devices
Communications security
[ "Mathematics", "Engineering" ]
1,177
[ "Cybersecurity engineering", "Telecommunications engineering", "Cryptography", "Applied mathematics", "Military communications" ]
40,948
https://en.wikipedia.org/wiki/Configuration%20management
Configuration management (CM) is a management process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life. The CM process is widely used by military engineering organizations to manage changes throughout the system lifecycle of complex systems, such as weapon systems, military vehicles, and information systems. Outside the military, the CM process is also used with IT service management as defined by ITIL, and with other domain models in the civil engineering and other industrial engineering segments such as roads, bridges, canals, dams, and buildings. Introduction CM applied over the life cycle of a system provides visibility and control of its performance, functional, and physical attributes. CM verifies that a system performs as intended, and is identified and documented in sufficient detail to support its projected life cycle. The CM process facilitates orderly management of system information and system changes for such beneficial purposes as to revise capability; improve performance, reliability, or maintainability; extend life; reduce cost; reduce risk and liability; or correct defects. The relatively minimal cost of implementing CM is returned manyfold in cost avoidance. The lack of CM, or its ineffectual implementation, can be very expensive and sometimes can have catastrophic consequences, such as failure of equipment or loss of life. CM emphasizes the functional relation between parts, subsystems, and systems for effectively controlling system change. It helps to verify that proposed changes are systematically considered to minimize adverse effects. Changes to the system are proposed, evaluated, and implemented using a standardized, systematic approach that ensures consistency, and proposed changes are evaluated in terms of their anticipated impact on the entire system. CM verifies that changes are carried out as prescribed and that documentation of items and systems reflects their true configuration. A complete CM program includes provisions for the storing, tracking, and updating of all system information on a component, subsystem, and system basis. A structured CM program ensures that documentation (e.g., requirements, design, test, and acceptance documentation) for items is accurate and consistent with the actual physical design of the item. In many cases, without CM, the documentation exists but is not consistent with the item itself. For this reason, engineers, contractors, and management are frequently forced to develop documentation reflecting the actual status of the item before they can proceed with a change. This reverse engineering process is wasteful in terms of human and other resources and can be minimized or eliminated using CM. History Configuration management originated in the United States Department of Defense in the 1950s as a technical management discipline for hardware material items, and it is now a standard practice in virtually every industry. The CM process became its own technical discipline sometime in the late 1960s when the DoD developed a series of military standards called the "480 series" (i.e., MIL-STD-480, MIL-STD-481 and MIL-STD-483) that were subsequently issued in the 1970s.
In 1991, the "480 series" was consolidated into a single standard known as the MIL–STD–973, which was then replaced by MIL–HDBK–61 pursuant to a general DoD goal that reduced the number of military standards in favor of industry technical standards supported by standards developing organizations (SDO). This marked the beginning of what has now evolved into the most widely distributed and accepted standard on CM, ANSI–EIA–649–1998. Now widely adopted by numerous organizations and agencies, the CM discipline's concepts include systems engineering (SE), Integrated Logistics Support (ILS), Capability Maturity Model Integration (CMMI), ISO 9000, the Prince2 project management method, COBIT, ITIL, product lifecycle management, and Application Lifecycle Management. Many of these functions and models have redefined CM from its traditional holistic approach to technical management. Some treat CM as being similar to a librarian activity, and break out change control or change management as a separate or stand-alone discipline. Overview CM is the practice of handling changes systematically so that a system maintains its integrity over time. CM implements the policies, procedures, techniques, and tools that manage and evaluate proposed changes, track the status of changes, and maintain an inventory of system and support documents as the system changes. CM programs and plans provide technical and administrative direction to the development and implementation of the procedures, functions, services, tools, processes, and resources required to successfully develop and support a complex system. During system development, CM allows program management to track requirements throughout the life-cycle through acceptance and operations and maintenance. As changes inevitably occur in the requirements and design, they must be approved and documented, creating an accurate record of the system status. Ideally the CM process is applied throughout the system lifecycle. CM is often confused with asset management (AM; see also ISO/IEC 19770), which inventories the assets on hand. The key difference between CM and AM is that the former does not manage the financial accounting aspect of an asset but rather the service that the system supports; in other words, AM seeks to realize value from an IT asset. The CM process for both hardware- and software-configuration items comprises five distinct disciplines as established in the MIL–HDBK–61A and in ANSI/EIA-649. Members of an organization interested in applying a standard change-management process will employ these disciplines as policies and procedures for establishing baselines, managing and controlling change, and monitoring and assessing the effectiveness and correctness of progress. The IEEE 12207 process IEEE 12207.2 also has these activities and adds "Release management and delivery". The five disciplines are: CM Planning and Management: a formal document and plan to guide the CM program that includes items such as: Personnel Responsibilities and resources Training requirements Administrative meeting guidelines, including a definition of procedures and tools Baselining processes Configuration control and configuration-status accounting Naming conventions Audits and reviews Subcontractor/vendor CM requirements Configuration Identification (CI): consists of setting and maintaining baselines, which define the system or subsystem architecture, components, and any developments at any point in time.
It is the basis by which changes to any part of a system are identified, documented, and later tracked through design, development, testing, and final delivery. CI incrementally establishes and maintains the definitive current basis for Configuration Status Accounting (CSA) of a system and its configuration items (CIs) throughout their lifecycle (development, production, deployment, and operational support) until disposal. Configuration Control: includes the evaluation of all change-requests and change-proposals, and their subsequent approval or disapproval. It covers the process of controlling modifications to the system's design, hardware, firmware, software, and documentation. Configuration Status Accounting: includes the process of recording and reporting configuration item descriptions (e.g., hardware, software, firmware, etc.) and all departures from the baseline during design and production. In the event of suspected problems, the baseline configuration and approved modifications can be quickly verified. Configuration Verification and Audit: an independent review of hardware and software for the purpose of assessing compliance with established performance requirements, commercial and appropriate military standards, and functional, allocated, and product baselines. Configuration audits verify that the system and subsystem configuration documentation complies with the functional and physical performance characteristics before acceptance into an architectural baseline. Software The software configuration management (SCM) process is regarded by practitioners as the best solution to handling changes in software projects. It identifies the functional and physical attributes of software at various points in time, and performs systematic control of changes to the identified attributes for the purpose of maintaining software integrity and traceability throughout the software development life cycle. The SCM process further defines the need to trace changes, and the ability to verify that the final delivered software has all of the planned enhancements that are supposed to be included in the release. It identifies four procedures that must be defined for each software project to ensure that a sound SCM process is implemented. They are: Configuration identification Configuration control Configuration status accounting Configuration audits These terms and definitions change from standard to standard, but are essentially the same. Configuration identification is the process of identifying the attributes that define every aspect of a configuration item. A configuration item is a product (hardware and/or software) that has an end-user purpose. These attributes are recorded in configuration documentation and baselined. Baselining an attribute forces formal configuration change control processes to be effected in the event that these attributes are changed. Configuration change control is a set of processes and approval stages required to change a configuration item's attributes and to re-baseline them. Configuration status accounting is the ability to record and report on the configuration baselines associated with each configuration item at any moment of time. Configuration audits are broken into functional and physical configuration audits. They occur either at delivery or at the moment of effecting the change.
A functional configuration audit ensures that functional and performance attributes of a configuration item are achieved, while a physical configuration audit ensures that a configuration item is installed in accordance with the requirements of its detailed design documentation. Configuration management database ITIL specifies the use of a configuration management system (CMS) or configuration management database (CMDB) as a means of achieving industry best practices for Configuration Management. CMDBs are used to track Configuration Items (CIs) and the dependencies between them, where CIs represent the things in an enterprise that are worth tracking and managing, such as but not limited to computers, software, software licenses, racks, network devices, storage, and even the components within such items. A CMS helps manage a federated collection of CMDBs. The benefits of a CMS/CMDB include being able to perform functions like root cause analysis, impact analysis, change management, and current state assessment for future state strategy development. Configuration Management (CM) is an ITIL-specific ITSM process that tracks all of the individual CIs in an IT system, which may be as simple as a single server, or as complex as the entire IT department. In large organizations a configuration manager may be appointed to oversee and manage the CM process. In ITIL version 3, this process has been renamed Service Asset and Configuration Management. Information assurance For information assurance, CM can be defined as the management of security features and assurances through control of changes made to hardware, software, firmware, documentation, test, test fixtures, and test documentation throughout the life cycle of an information system. CM for information assurance, sometimes referred to as secure configuration management (SCM), relies upon performance, functional, and physical attributes of IT platforms and products and their environments to determine the appropriate security features and assurances that are used to measure a system configuration state. For example, configuration requirements may be different for a network firewall that functions as part of an organization's Internet boundary versus one that functions as an internal local network firewall. Maintenance systems Configuration management is used to maintain an understanding of the status of complex assets with a view to maintaining the highest level of serviceability for the lowest cost. Specifically, it aims to ensure that operations are not disrupted due to the asset (or parts of the asset) overrunning limits of planned lifespan or below quality levels. In the military, this type of activity is often classed as "mission readiness", and seeks to define which assets are available and for which type of mission; a classic example is whether aircraft on board an aircraft carrier are equipped with bombs for ground support or missiles for defense. Operating system configuration management Configuration management can be used to maintain OS configuration files. Many of these systems utilize Infrastructure as Code to define and maintain configuration. The Promise theory of configuration maintenance was developed by Mark Burgess, with a practical implementation on present-day computer systems in the software CFEngine, which is able to perform real-time repair as well as preventive maintenance.
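The convergent, desired-state style of configuration maintenance mentioned above (promise theory, CFEngine) can be illustrated with a toy Python sketch. Everything here, the service names, states, and the converge helper, is a hypothetical illustration, not any real tool's API:

```python
# Toy sketch of convergent configuration maintenance: each "promise" compares
# actual state with desired state and repairs only on drift, so repeated
# runs are idempotent. All names and states are hypothetical.

desired_state = {"ntp": "running", "telnet": "stopped"}
actual_state = {"ntp": "stopped", "telnet": "running"}   # a drifted system

def converge(actual, desired):
    """Bring actual state to desired state, reporting each repair."""
    for service, want in desired.items():
        have = actual.get(service)
        if have != want:
            print(f"repair: {service}: {have} -> {want}")
            actual[service] = want      # stand-in for a real service action
        else:
            print(f"ok: {service} already {want}")

converge(actual_state, desired_state)   # first run repairs the drift
converge(actual_state, desired_state)   # second run is a no-op
```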
Preventive maintenance Understanding the "as is" state of an asset and its major components is an essential element in preventive maintenance as used in maintenance, repair, and overhaul and enterprise asset management systems. Complex assets such as aircraft, ships, industrial machinery etc. depend on many different components being serviceable. This serviceability is often defined in terms of the amount of usage the component has had since it was new, since fitted, since repaired, the amount of use it has had over its life and several other limiting factors. Understanding how near the end of their life each of these components is has been a major undertaking involving labor-intensive record keeping until recent developments in software. Predictive maintenance Many types of component use electronic sensors to capture data which provides live condition monitoring. This data is analyzed on board or at a remote location by computer to evaluate its current serviceability and increasingly its likely future state using algorithms which predict potential future failures based on previous examples of failure through field experience and modeling. This is the basis for "predictive maintenance". Availability of accurate and timely data is essential in order for CM to provide operational value and a lack of this can often be a limiting factor. Capturing and disseminating the operating data to the various support organizations is becoming an industry in itself. The consumers of this data have grown more numerous and complex with the growth of programs offered by original equipment manufacturers (OEMs). These are designed to offer operators guaranteed availability and make the picture more complex with the operator managing the asset but the OEM taking on the liability to ensure its serviceability. Standards A number of standards support or include configuration management, including: ANSI/EIA-649-1998 National Consensus Standard for Configuration Management EIA-649-A 2004 National Consensus Standard for Configuration Management SAE EIA-649-C 2019 Global Consensus Configuration Management Standard ISO 10007 Quality management systems – Guidelines for configuration management Federal Standard 1037C GEIA Standard 836–2002 Configuration Management Data Exchange and Interoperability IEEE 829 Standard for Software Test Documentation MIL-STD-973 Configuration Management (cancelled on 20 September 2000) NATO STANAG 4427 Configuration Management in Systems Life Cycle Management including NATO ACMP 2000 Policy on Configuration Management NATO ACMP 2009 Guidance on Configuration Management NATO ACMP 2100 Configuration Management Contractual Requirements CMMI CMMI for Development, Version 1.2 Configuration Management CMII-100E CMII Standard for Enterprise Configuration Management Extended List of Configuration Management & Related Standards ITIL Service Asset and Configuration Management ISO 20000-1:2011 & 2018 Service Management System.
ECSS-M-ST-40C Rev.1 Configuration and information management Guidelines IEEE 828-2012 Standard for Configuration Management in Systems and Software Engineering, published date: 2012-03-16 ISO 10007:2017 Quality management – Guidelines for configuration management NATO ACMP-2009 – Guidance on configuration management ANSI/EIA-632-1998 Processes for Engineering a System ANSI/EIA-649-1998 National Consensus Standard for Configuration Management GEIA-HB-649 – Implementation Guide for Configuration Management EIA-836 Consensus Standard for Configuration Management Data Exchange and Interoperability MIL-HDBK-61B Configuration Management Guidance, 7 April 2020 MIL-STD-3046 Configuration Management, 6 March 2013, canceled on June 1, 2015 Defense Acquisition Guidebook, elements of CM at 4.3.7 SE Processes, attributes of CM at 5.1.7 Lifecycle support Systems Engineering Fundamentals, Chapter 10 Configuration Management Configuration Management Plan United States Dept. of Defense Acquisition document Construction More recently, configuration management has been applied to large construction projects, which can often be very complex and have a huge number of details and changes that need to be documented. Construction agencies such as the Federal Highway Administration have used configuration management for their infrastructure projects. There are construction-based configuration management tools that aim to document change orders and RFIs in order to ensure a project stays on schedule and on budget. These programs can also store information to aid in the maintenance and modification of the infrastructure when it is completed. One such application, CCSNet, was tested in a case study funded by the Federal Transit Administration (FTA) in which the efficacy of configuration management was measured by comparing the approximately 80% complete construction of the first and second segments of the Los Angeles County Metropolitan Transportation Authority (LACMTA) Red Line, a $5.3 billion rail construction project. This study yielded results indicating a benefit to using configuration management on projects of this nature. See also Change detection Configuration lifecycle management Granular Configuration Automation Comparison of open source configuration management software Dependency List of software engineering topics Interchangeable parts Continuous configuration automation System configuration Systems management References Configuration management Method engineering Technical communication Version control systems Computer occupations Systems engineering Software engineering
Configuration management
[ "Technology", "Engineering" ]
3,395
[ "Systems engineering", "Computer engineering", "Configuration management", "Computer occupations", "Software engineering", "Information technology" ]
41,026
https://en.wikipedia.org/wiki/Dielectric
In electromagnetism, a dielectric (or dielectric medium) is an electrical insulator that can be polarised by an applied electric field. When a dielectric material is placed in an electric field, electric charges do not flow through the material as they do in an electrical conductor, because they have no loosely bound, or free, electrons that may drift through the material, but instead they shift, only slightly, from their average equilibrium positions, causing dielectric polarisation. Because of dielectric polarisation, positive charges are displaced in the direction of the field and negative charges shift in the direction opposite to the field. This creates an internal electric field that reduces the overall field within the dielectric itself. If a dielectric is composed of weakly bonded molecules, those molecules not only become polarised, but also reorient so that their symmetry axes align to the field. The study of dielectric properties concerns storage and dissipation of electric and magnetic energy in materials. Dielectrics are important for explaining various phenomena in electronics, optics, solid-state physics and cell biophysics.

Terminology Although the term insulator implies low electrical conduction, dielectric typically means materials with a high polarisability. The latter is expressed by a number called the relative permittivity. Insulator is generally used to indicate electrical obstruction while dielectric is used to indicate the energy storing capacity of the material (by means of polarisation). A common example of a dielectric is the electrically insulating material between the metallic plates of a capacitor. The polarisation of the dielectric by the applied electric field increases the capacitor's surface charge for the given electric field strength. The term dielectric was coined by William Whewell (from dia + electric) in response to a request from Michael Faraday. A perfect dielectric is a material with zero electrical conductivity (cf. a perfect conductor, which has infinite electrical conductivity), thus exhibiting only a displacement current; therefore it stores and returns electrical energy as if it were an ideal capacitor.

Electric susceptibility The electric susceptibility $\chi_e$ of a dielectric material is a measure of how easily it polarises in response to an electric field. This, in turn, determines the electric permittivity of the material and thus influences many other phenomena in that medium, from the capacitance of capacitors to the speed of light. It is defined as the constant of proportionality (which may be a tensor) relating an electric field $\mathbf{E}$ to the induced dielectric polarisation density $\mathbf{P}$ such that

$$\mathbf{P} = \varepsilon_0 \chi_e \mathbf{E},$$

where $\varepsilon_0$ is the electric permittivity of free space. The susceptibility of a medium is related to its relative permittivity $\varepsilon_r$ by

$$\chi_e = \varepsilon_r - 1,$$

so in the case of a classical vacuum, $\chi_e = 0$. The electric displacement $\mathbf{D}$ is related to the polarisation density $\mathbf{P}$ by

$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 (1 + \chi_e)\,\mathbf{E} = \varepsilon_0 \varepsilon_r \mathbf{E}.$$

Dispersion and causality In general, a material cannot polarise instantaneously in response to an applied field. The more general formulation as a function of time is

$$\mathbf{P}(t) = \varepsilon_0 \int_{-\infty}^{t} \chi_e(t - t')\, \mathbf{E}(t')\, dt'.$$

That is, the polarisation is a convolution of the electric field at previous times with the time-dependent susceptibility $\chi_e(\Delta t)$. The upper limit of this integral can be extended to infinity as well if one defines $\chi_e(\Delta t) = 0$ for $\Delta t < 0$. An instantaneous response corresponds to a Dirac delta function susceptibility $\chi_e(\Delta t) = \chi_e\, \delta(\Delta t)$.
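The time-domain convolution can be illustrated numerically. The Python sketch below assumes, purely for illustration, an exponentially decaying causal susceptibility $\chi_e(t) = (\chi_0/\tau)\,e^{-t/\tau}$ for $t \ge 0$ and a step electric field; the polarisation then relaxes toward $\varepsilon_0 \chi_0 E$ instead of following the field instantaneously. None of the parameter values comes from the text:

```python
import math

eps0 = 8.854e-12        # vacuum permittivity, F/m
chi0, tau = 4.0, 1e-9   # illustrative static susceptibility, relaxation time
dt = 1e-11              # integration step, s
E = lambda t: 1e5 if t >= 0 else 0.0   # step field of 1e5 V/m switched on at t=0

# P(t) = eps0 * integral chi(t - t') E(t') dt', with chi(u) = (chi0/tau) e^(-u/tau)
for i in (50, 100, 300, 600):
    t = i * dt
    P = eps0 * sum((chi0 / tau) * math.exp(-(t - j * dt) / tau) * E(j * dt) * dt
                   for j in range(i + 1))
    print(f"t = {t * 1e9:4.1f} ns: P = {P:.3e} C/m^2 "
          f"(static limit {eps0 * chi0 * 1e5:.3e})")
```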
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a simple product,

$$\mathbf{P}(\omega) = \varepsilon_0 \chi_e(\omega)\, \mathbf{E}(\omega).$$

The susceptibility (or equivalently the permittivity) is frequency dependent. The change of susceptibility with respect to frequency characterises the dispersion properties of the material. Moreover, the fact that the polarisation can only depend on the electric field at previous times (i.e., $\chi_e(\Delta t) = 0$ for $\Delta t < 0$), a consequence of causality, imposes Kramers–Kronig constraints on the real and imaginary parts of the susceptibility $\chi_e(\omega)$.

Dielectric polarisation Basic atomic model In the classical approach to the dielectric, the material is made up of atoms. Each atom consists of a cloud of negative charge (electrons) bound to and surrounding a positive point charge at its center. In the presence of an electric field, the charge cloud is distorted, as shown in the top right of the figure. This can be reduced to a simple dipole using the superposition principle. A dipole is characterised by its dipole moment, a vector quantity shown in the figure as the blue arrow labeled M. It is the relationship between the electric field and the dipole moment that gives rise to the behaviour of the dielectric. (Note that the dipole moment points in the same direction as the electric field in the figure. This is not always the case, and is a major simplification, but is true for many materials.) When the electric field is removed, the atom returns to its original state. The time required to do so is called the relaxation time; an exponential decay. This is the essence of the model in physics. The behaviour of the dielectric now depends on the situation. The more complicated the situation, the richer the model must be to accurately describe the behaviour. Important questions are: Is the electric field constant, or does it vary with time? At what rate? Does the response depend on the direction of the applied field (isotropy of the material)? Is the response the same everywhere (homogeneity of the material)? Do any boundaries or interfaces have to be taken into account? Is the response linear with respect to the field, or are there nonlinearities? The relationship between the electric field $\mathbf{E}$ and the dipole moment $\mathbf{M}$ gives rise to the behaviour of the dielectric, which, for a given material, can be characterised by the function $\mathbf{F}$ defined by the equation:

$$\mathbf{M} = \mathbf{F}(\mathbf{E}).$$

When both the type of electric field and the type of material have been defined, one then chooses the simplest function $\mathbf{F}$ that correctly predicts the phenomena of interest. Examples of phenomena that can be so modelled include: Refractive index Group velocity dispersion Birefringence Self-focusing Harmonic generation

Dipolar polarisation Dipolar polarisation is a polarisation that is either inherent to polar molecules (orientation polarisation), or can be induced in any molecule in which the asymmetric distortion of the nuclei is possible (distortion polarisation). Orientation polarisation results from a permanent dipole, e.g., that arises from the 104.45° angle between the asymmetric bonds between oxygen and hydrogen atoms in the water molecule, which retains polarisation in the absence of an external electric field. The assembly of these dipoles forms a macroscopic polarisation. When an external electric field is applied, the distance between charges within each permanent dipole, which is related to chemical bonding, remains constant in orientation polarisation; however, the direction of polarisation itself rotates.
This rotation occurs on a timescale that depends on the torque and surrounding local viscosity of the molecules. Because the rotation is not instantaneous, dipolar polarisations lose the response to electric fields at the highest frequencies. A molecule rotates about 1 radian per picosecond in a fluid, thus this loss occurs at about $10^{11}$ Hz (in the microwave region). The delay of the response to the change of the electric field causes friction and heat. When an external electric field is applied at infrared frequencies or less, the molecules are bent and stretched by the field and the molecular dipole moment changes. The molecular vibration frequency is roughly the inverse of the time it takes for the molecules to bend, and this distortion polarisation disappears above the infrared. Ionic polarisation Ionic polarisation is polarisation caused by relative displacements between positive and negative ions in ionic crystals (for example, NaCl). If a crystal or molecule consists of atoms of more than one kind, the distribution of charges around an atom in the crystal or molecule leans to positive or negative. As a result, when lattice vibrations or molecular vibrations induce relative displacements of the atoms, the centers of positive and negative charges are also displaced. The locations of these centers are affected by the symmetry of the displacements. When the centers do not correspond, polarisation arises in molecules or crystals. This polarisation is called ionic polarisation. Ionic polarisation causes the ferroelectric effect as well as dipolar polarisation. The ferroelectric transition, which is caused by the lining up of the orientations of permanent dipoles along a particular direction, is called an order-disorder phase transition. The transition caused by ionic polarisations in crystals is called a displacive phase transition. In biological cells Ionic polarisation enables the production of energy-rich compounds in cells (the proton pump in mitochondria) and, at the plasma membrane, the establishment of the resting potential, energetically unfavourable transport of ions, and cell-to-cell communication (the Na+/K+-ATPase). All cells in animal body tissues are electrically polarised – in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical polarisation results from a complex interplay between ion transporters and ion channels. In neurons, the types of ion channels in the membrane usually vary across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties. As a result, some parts of the membrane of a neuron may be excitable (capable of generating action potentials), whereas others are not. Dielectric dispersion In physics, dielectric dispersion is the dependence of the permittivity of a dielectric material on the frequency of an applied electric field. Because there is a lag between changes in polarisation and changes in the electric field, the permittivity of the dielectric is a complex function of the frequency of the electric field. Dielectric dispersion is very important for the applications of dielectric materials and the analysis of polarisation systems. This is one instance of a general phenomenon known as material dispersion: a frequency-dependent response of a medium for wave propagation.
When the frequency becomes higher: The dipolar polarisation can no longer follow the oscillations of the electric field in the microwave region around 10^10 Hz, The ionic polarisation and molecular distortion polarisation can no longer track the electric field past the infrared or far-infrared region around 10^13 Hz, The electronic polarisation loses its response in the ultraviolet region around 10^15 Hz. In the frequency region above ultraviolet, permittivity approaches the constant ε0 in every substance, where ε0 is the permittivity of free space. Because permittivity indicates the strength of the relation between an electric field and polarisation, if a polarisation process loses its response, permittivity decreases. Dielectric relaxation Dielectric relaxation is the momentary delay (or lag) in the dielectric constant of a material. This is usually caused by the delay in molecular polarisation with respect to a changing electric field in a dielectric medium (e.g., inside capacitors or between two large conducting surfaces). Dielectric relaxation in changing electric fields could be considered analogous to hysteresis in changing magnetic fields (e.g., in inductor or transformer cores). Relaxation in general is a delay or lag in the response of a linear system, and therefore dielectric relaxation is measured relative to the expected linear steady state (equilibrium) dielectric values. The time lag between electrical field and polarisation implies an irreversible degradation of Gibbs free energy. In physics, dielectric relaxation refers to the relaxation response of a dielectric medium to an external, oscillating electric field. This relaxation is often described in terms of permittivity as a function of frequency, which can, for ideal systems, be described by the Debye equation. On the other hand, the distortion related to ionic and electronic polarisation shows behaviour of the resonance or oscillator type. The character of the distortion process depends on the structure, composition, and surroundings of the sample. Debye relaxation Debye relaxation is the dielectric relaxation response of an ideal, noninteracting population of dipoles to an alternating external electric field. It is usually expressed in the complex permittivity ε̂ of a medium as a function of the field's angular frequency ω: ε̂(ω) = ε∞ + (εs − ε∞)/(1 + iωτ), where ε∞ is the permittivity at the high frequency limit, εs is the static, low frequency permittivity, and τ is the characteristic relaxation time of the medium. Separating into the real part ε′ and the imaginary part ε″ of the complex dielectric permittivity yields: ε′(ω) = ε∞ + (εs − ε∞)/(1 + ω²τ²), ε″(ω) = (εs − ε∞)ωτ/(1 + ω²τ²). Note that the above equation for ε̂(ω) is sometimes written with (1 − iωτ) in the denominator due to an ongoing sign convention ambiguity whereby many sources represent the time dependence of the complex electric field with e^(−iωt) whereas others use e^(+iωt). In the former convention, the functions ε′ and ε″ representing real and imaginary parts are given by ε̂ = ε′ + iε″ whereas in the latter convention ε̂ = ε′ − iε″. The above equation uses the latter convention. The dielectric loss is also represented by the loss tangent: tan δ = ε″/ε′. This relaxation model was introduced by and named after the physicist Peter Debye (1913). It is characteristic for dynamic polarisation with only one relaxation time. 
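As a concrete check on the Debye expressions above, the short sketch below evaluates ε′, ε″ and the loss tangent over frequency. The parameter values (εs, ε∞, τ) are illustrative, roughly water-like assumptions, not measured data.

```python
import numpy as np

# Illustrative Debye parameters (roughly water-like; assumed for demonstration only)
eps_s = 80.1      # static (low-frequency) relative permittivity
eps_inf = 5.6     # high-frequency-limit relative permittivity
tau = 8.3e-12     # characteristic relaxation time, seconds

def debye(omega):
    """Complex relative permittivity; e^(+i*omega*t) convention, eps_hat = eps' - i*eps''."""
    eps_hat = eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)
    return eps_hat.real, -eps_hat.imag   # (eps', eps'')

for f in (1e8, 1e9, 1e10, 1e11):         # frequency in Hz
    ep, epp = debye(2 * np.pi * f)
    print(f"{f:8.0e} Hz: eps' = {ep:6.2f}, eps'' = {epp:6.2f}, tan d = {epp/ep:.3f}")
```

The loss peak falls at ωτ = 1, i.e., near 19 GHz for the assumed τ, which is consistent with dipolar losses mattering in the microwave region.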
Variants of the Debye equation Cole–Cole equation This equation is used when the dielectric loss peak shows symmetric broadening. Cole–Davidson equation This equation is used when the dielectric loss peak shows asymmetric broadening. Havriliak–Negami relaxation This equation considers both symmetric and asymmetric broadening. Kohlrausch–Williams–Watts function Fourier transform of stretched exponential function. Curie–von Schweidler law This shows the response of dielectrics to an applied DC field to behave according to a power law, which can be expressed as an integral over weighted exponential functions. Djordjevic–Sarkar approximation This is used when the dielectric loss is approximately constant for a wide range of frequencies. Paraelectricity Paraelectricity is the nominal behaviour of dielectrics when the dielectric permittivity tensor is proportional to the unit matrix, i.e., an applied electric field causes polarisation and/or alignment of dipoles only parallel to the applied electric field. Contrary to the analogy with a paramagnetic material, no permanent electric dipole needs to exist in a paraelectric material. Removal of the fields results in the dipolar polarisation returning to zero. The mechanisms that cause paraelectric behaviour are the distortion of individual ions (displacement of the electron cloud from the nucleus) and the polarisation of molecules or combinations of ions or defects. Paraelectricity can occur in crystal phases where electric dipoles are unaligned and thus have the potential to align in an external electric field and weaken it. Most dielectric materials are paraelectrics. A specific example of a paraelectric material of high dielectric constant is strontium titanate. The LiNbO3 crystal is ferroelectric below 1430 K, and above this temperature it transforms into a disordered paraelectric phase. Similarly, other perovskites also exhibit paraelectricity at high temperatures. Paraelectricity has been explored as a possible refrigeration mechanism; polarising a paraelectric by applying an electric field under adiabatic process conditions raises the temperature, while removing the field lowers the temperature. A heat pump that operates by polarising the paraelectric, allowing it to return to ambient temperature (by dissipating the extra heat), bringing it into contact with the object to be cooled, and finally depolarising it, would result in refrigeration. Tunability Tunable dielectrics are insulators whose ability to store electrical charge changes when a voltage is applied. Generally, strontium titanate (SrTiO3) is used for devices operating at low temperatures, while barium strontium titanate (Ba1−xSrxTiO3) substitutes for room temperature devices. Other potential materials include microwave dielectrics and carbon nanotube (CNT) composites. In 2013, multi-sheet layers of strontium titanate interleaved with single layers of strontium oxide produced a dielectric capable of operating at up to 125 GHz. The material was created via molecular beam epitaxy. The two have mismatched crystal spacing that produces strain within the strontium titanate layer that makes it less stable and tunable. Systems such as barium strontium titanate have a paraelectric–ferroelectric transition just below ambient temperature, providing high tunability. Films suffer significant losses arising from defects. Applications Capacitors Commercially manufactured capacitors typically use a solid dielectric material with high permittivity as the intervening medium between the stored positive and negative charges. This material is often referred to in technical contexts as the capacitor dielectric. The most obvious advantage to using such a dielectric material is that it prevents the conducting plates, on which the charges are stored, from coming into direct electrical contact. More significantly, however, a high permittivity allows a greater stored charge at a given voltage. This can be seen by treating the case of a linear dielectric with permittivity ε and thickness d between two conducting plates with uniform charge density σε. In this case the charge density is given by σε = εV/d and the capacitance per unit area by C = σε/V = ε/d. From this, it can easily be seen that a larger ε leads to greater charge stored and thus greater capacitance. Dielectric materials used for capacitors are also chosen such that they are resistant to ionisation. This allows the capacitor to operate at higher voltages before the insulating dielectric ionises and begins to allow undesirable current. 
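To put rough numbers on the relations σε = εV/d and C = ε/d above, the sketch below compares the capacitance per unit area for a fixed dielectric thickness at a few assumed relative permittivities; the material values are illustrative placeholders, not datasheet figures.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def capacitance_per_area(eps_r, d):
    """Capacitance per unit area, C/A = eps/d, for a linear dielectric of thickness d (m)."""
    return eps_r * EPS0 / d

d = 10e-6                  # assumed 10 micrometre dielectric layer
for name, eps_r in [("vacuum", 1.0),
                    ("polymer film (assumed eps_r = 3)", 3.0),
                    ("ceramic (assumed eps_r = 1000)", 1000.0)]:
    print(f"{name:34s} C/A = {capacitance_per_area(eps_r, d):.3e} F/m^2")
```

The output scales linearly with ε, which is the point of the argument in the text: a higher-permittivity dielectric stores proportionally more charge at the same voltage and thickness.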
Dielectric resonator A dielectric resonator oscillator (DRO) is an electronic component that exhibits resonance of the polarisation response for a narrow range of frequencies, generally in the microwave band. It consists of a "puck" of ceramic that has a large dielectric constant and a low dissipation factor. Such resonators are often used to provide a frequency reference in an oscillator circuit. An unshielded dielectric resonator can be used as a dielectric resonator antenna (DRA). BST thin films From 2002 to 2004, the United States Army Research Laboratory (ARL) conducted research on thin film technology. Barium strontium titanate (BST), a ferroelectric thin film, was studied for the fabrication of radio frequency and microwave components, such as voltage-controlled oscillators, tunable filters and phase shifters. The research was part of an effort to provide the Army with highly-tunable, microwave-compatible materials for broadband electric-field tunable devices, which operate consistently in extreme temperatures. This work improved tunability of bulk barium strontium titanate, which is a thin film enabler for electronics components. In a 2004 research paper, U.S. ARL researchers explored how small concentrations of acceptor dopants can dramatically modify the properties of ferroelectric materials such as BST. Researchers "doped" BST thin films with magnesium, analyzing the "structure, microstructure, surface morphology and film/substrate compositional quality" of the result. The Mg doped BST films showed "improved dielectric properties, low leakage current, and good tunability", meriting potential for use in microwave tunable devices. Some practical dielectrics Dielectric materials can be solids, liquids, or gases. (A high vacuum can also be a useful, nearly lossless dielectric even though its relative dielectric constant is only unity.) Solid dielectrics are perhaps the most commonly used dielectrics in electrical engineering, and many solids are very good insulators. Some examples include porcelain, glass, and most plastics. Air, nitrogen and sulfur hexafluoride are the three most commonly used gaseous dielectrics. Industrial coatings such as Parylene provide a dielectric barrier between the substrate and its environment. Mineral oil is used extensively inside electrical transformers as a fluid dielectric and to assist in cooling. Dielectric fluids with higher dielectric constants, such as electrical grade castor oil, are often used in high voltage capacitors to help prevent corona discharge and increase capacitance. 
Because dielectrics resist the flow of electricity, the surface of a dielectric may retain stranded excess electrical charges. This may occur accidentally when the dielectric is rubbed (the triboelectric effect). This can be useful, as in a Van de Graaff generator or electrophorus, or it can be potentially destructive as in the case of electrostatic discharge. Specially processed dielectrics, called electrets (which should not be confused with ferroelectrics), may retain excess internal charge or "frozen in" polarisation. Electrets have a semi-permanent electric field, and are the electrostatic equivalent of magnets. Electrets have numerous practical applications in the home and industry. Some dielectrics can generate a potential difference when subjected to mechanical stress, or (equivalently) change physical shape if an external voltage is applied across the material. This property is called piezoelectricity. Piezoelectric materials are another class of very useful dielectrics. Some ionic crystals and polymer dielectrics exhibit a spontaneous dipole moment, which can be reversed by an externally applied electric field. This behaviour is called the ferroelectric effect. The behaviour of these materials is analogous to the way ferromagnetic materials behave within an externally applied magnetic field. Ferroelectric materials often have very high dielectric constants, making them quite useful for capacitors. See also Classification of materials based on permittivity Paramagnetism Clausius–Mossotti relation Dielectric absorption Dielectric losses Dielectric strength Dielectric spectroscopy EIA Class 1 dielectric EIA Class 2 dielectric High-κ dielectric Low-κ dielectric Leakage Linear response function Metamaterial RC delay Rotational Brownian motion Paschen's law – variation of dielectric strength of gas related to pressure Separator (electricity) References Further reading External links Feynman's lecture on dielectrics Dielectric Sphere in an Electric Field Dissemination of IT for the Promotion of Materials Science (DoITPoMS) Teaching and Learning Package "Dielectric Materials" from the University of Cambridge Electric and magnetic fields in matter
Dielectric
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
4,689
[ "Electric and magnetic fields in matter", "Materials science", "Materials", "Condensed matter physics", "Dielectrics", "Matter" ]
41,031
https://en.wikipedia.org/wiki/Diffraction%20grating
In optics, a diffraction grating is an optical grating with a periodic structure that diffracts light, or another type of electromagnetic radiation, into several beams traveling in different directions (i.e., different diffraction angles). The emerging coloration is a form of structural coloration. The directions or diffraction angles of these beams depend on the wave (light) incident angle to the diffraction grating, the spacing or periodic distance between adjacent diffracting elements (e.g., parallel slits for a transmission grating) on the grating, and the wavelength of the incident light. The grating acts as a dispersive element. Because of this, diffraction gratings are commonly used in monochromators and spectrometers, but other applications are also possible, such as optical encoders for high-precision motion control and wavefront measurement. For typical applications, a reflective grating has ridges or rulings on its surface while a transmissive grating has transmissive or hollow slits on its surface. Such a grating modulates the amplitude of an incident wave to create a diffraction pattern. Some gratings modulate the phases of incident waves rather than the amplitude, and these types of gratings are frequently produced by holography. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating (in a natural form) to be discovered, about a year after Isaac Newton's prism experiments. The first human-made diffraction grating was made around 1785 by Philadelphia inventor David Rittenhouse, who strung hairs between two finely threaded screws. This was similar to notable German physicist Joseph von Fraunhofer's wire diffraction grating in 1821. The principles of diffraction were discovered by Thomas Young and Augustin-Jean Fresnel. Using these principles, Fraunhofer was the first to use a diffraction grating to obtain line spectra and the first to measure the wavelengths of spectral lines with a diffraction grating. In the 1860s, state-of-the-art diffraction gratings with small groove period (d) were manufactured by Friedrich Adolph Nobert (1806–1881) in Greifswald; then the two Americans Lewis Morris Rutherfurd (1816–1892) and William B. Rogers (1804–1882) took over the lead. By the end of the 19th century, the concave gratings of Henry Augustus Rowland (1848–1901) were the best available. A diffraction grating can create "rainbow" colors when it is illuminated by a wide-spectrum (e.g., continuous) light source. Rainbow-like colors from closely spaced narrow tracks on optical data storage disks such as CDs or DVDs are an example of light diffraction caused by diffraction gratings. A typical diffraction grating has parallel lines (true for one-dimensional gratings; two- and three-dimensional gratings are also possible and have their own applications, such as wavefront measurement), while a CD has a spiral of finely spaced data tracks. Diffraction colors also appear when one looks at a bright point source through a translucent fine-pitch umbrella fabric covering. Decorative patterned plastic films based on reflective grating patches are inexpensive and commonplace. A similar color separation seen from thin layers of oil (or gasoline, etc.) on water, known as iridescence, is not caused by diffraction from a grating but rather by thin film interference from the closely stacked transmissive layers. 
Theory of operation For a diffraction grating, the relationship between the grating spacing (i.e., the distance between adjacent grating grooves or slits), the angle of the wave (light) incidence to the grating, and the diffracted wave from the grating is known as the grating equation. Like many other optical formulas, the grating equation can be derived by using the Huygens–Fresnel principle, stating that each point on a wavefront of a propagating wave can be considered to act as a point wave source, and a wavefront at any subsequent point can be found by adding together the contributions from each of these individual point wave sources on the previous wavefront. Gratings may be of the 'reflective' or 'transmissive' type, analogous to a mirror or lens, respectively. A grating has a 'zero-order mode' (where the integer order of diffraction m is set to zero), in which a ray of light behaves according to the laws of reflection (like a mirror) and refraction (like a lens), respectively. An idealized diffraction grating is made up of a set of slits of spacing d, which must be wider than the wavelength of interest to cause diffraction. Assuming a plane wave of monochromatic light of wavelength λ at normal incidence on a grating (i.e., wavefronts of the incident wave are parallel to the grating main plane), each slit in the grating acts as a quasi point wave source from which light propagates in all directions (although this is typically limited to the forward hemisphere from the point source). Of course, every point on every slit that the incident wave reaches acts as a point wave source for the diffraction wave, and all these contributions to the diffraction wave determine the detailed diffraction wave light property distribution, but the diffraction angles (at the grating) at which the diffraction wave intensity is highest are determined only by these quasi point sources corresponding to the slits in the grating. After the incident light (wave) interacts with the grating, the resulting diffracted light from the grating is composed of the sum of interfering wave components emanating from each slit in the grating; at any given point in space through which the diffracted light may pass, typically called the observation point, the path length from each slit in the grating to the given point varies, so the phase of the wave emanating from each of the slits at that point also varies. As a result, the sum of the diffracted waves from the grating slits at the given observation point creates a peak, valley, or some degree between them in light intensity through constructive and destructive interference. When the difference between the light paths from adjacent slits to the observation point is equal to an odd integer multiple of half the wavelength, lλ/2 with l an odd integer, the waves are out of phase at that point, and thus cancel each other to create the (locally) minimum light intensity. Similarly, when the path difference is a multiple of λ, the waves are in phase and the (locally) maximum intensity occurs. For light at normal incidence to the grating, the intensity maxima occur at diffraction angles θm, which satisfy the relationship d sin θm = mλ, where θm is the angle between the diffracted ray and the grating's normal vector, d is the distance from the center of one slit to the center of the adjacent slit, and m is an integer representing the propagation-mode of interest, called the diffraction order. 
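The sketch below evaluates the normal-incidence relation d sin θm = mλ numerically, listing the propagating orders; the 600 grooves/mm grating and 532 nm wavelength are illustrative assumptions, not values from the text.

```python
import numpy as np

def diffraction_orders(groove_density_per_mm, wavelength_nm):
    """Diffraction angles (degrees) at normal incidence from d*sin(theta_m) = m*lambda."""
    d = 1e-3 / groove_density_per_mm          # groove period in metres
    lam = wavelength_nm * 1e-9
    angles = {}
    m = 0
    while m * lam / d <= 1:                   # an order propagates only while |sin| <= 1
        angles[m] = round(float(np.degrees(np.arcsin(m * lam / d))), 1)
        m += 1
    return angles                             # negative orders are symmetric about m = 0

# Assumed: 600 grooves/mm grating, 532 nm (green) light at normal incidence
print(diffraction_orders(600, 532))   # {0: 0.0, 1: 18.6, 2: 39.7, 3: 73.3}
```

Note how the higher orders emerge at steeply increasing angles until mλ/d exceeds 1, beyond which no further orders propagate.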
When a plane light wave is normally incident on a grating of uniform period d, the diffracted light has maxima at diffraction angles θm given by a special case of the grating equation as d sin θm = mλ. It can be shown that if the plane wave is incident at angle θi relative to the grating normal, in the plane orthogonal to the grating periodicity, the grating equation becomes d (sin θi + sin θm) = mλ, which describes in-plane diffraction as a special case of the more general scenario of conical, or off-plane, diffraction described by the generalized grating equation: d sin γ (sin θi + sin θm) = mλ, where γ is the angle between the direction of the plane wave and the direction of the grating grooves, which is orthogonal to both the directions of grating periodicity and grating normal. Various sign conventions for θi, θm and m are used; any choice is fine as long as the choice is kept through diffraction-related calculations. When solved for the diffracted angle at which the diffracted wave intensity is maximized, the equation becomes θm = arcsin(mλ/d − sin θi). The diffracted light that corresponds to direct transmission for a transmissive diffraction grating or specular reflection for a reflective grating is called the zero order, and is denoted m = 0. The other diffracted light intensity maxima occur at angles θm represented by non-zero integer diffraction orders m. Note that m can be positive or negative, corresponding to diffracted orders on both sides of the zero-order diffracted beam. Even if the grating equation is derived from a specific grating such as the grating in the right diagram (this grating is called a blazed grating), the equation can apply to any regular structure of the same spacing, because the phase relationship between light scattered from adjacent diffracting elements of the grating remains the same. The detailed diffracted light property distribution (e.g., intensity) depends on the detailed structure of the grating elements as well as on the number of elements in the grating, but it always gives maxima in the directions given by the grating equation. Depending on how a grating modulates incident light on it to cause the diffracted light, there are the following grating types: Transmission amplitude diffraction grating, which spatially and periodically modulates the intensity of an incident wave that transmits through the grating (and the diffracted wave is the consequence of this modulation). Reflection amplitude diffraction gratings, which spatially and periodically modulate the intensity of an incident wave that is reflected from the grating. Transmission phase diffraction grating, that spatially and periodically modulates the phase of an incident wave passing through the grating. Reflection phase diffraction grating, that spatially and periodically modulates the phase of an incident wave reflected from the grating. An optical axis diffraction grating, in which the optical axis is spatially and periodically modulated, is also considered either a reflection or transmission phase diffraction grating. The grating equation applies to all these gratings due to the same phase relationship between the diffracted waves from adjacent diffracting elements of the gratings, even if the detailed distribution of the diffracted wave property depends on the detailed structure of each grating. Quantum electrodynamics Quantum electrodynamics (QED) offers another derivation of the properties of a diffraction grating in terms of photons as particles (at some level). QED can be described intuitively with the path integral formulation of quantum mechanics. 
As such it can model photons as potentially following all paths from a source to a final point, each path with a certain probability amplitude. These probability amplitudes can be represented as a complex number or equivalent vector—or, as Richard Feynman simply calls them in his book on QED, "arrows". For the probability that a certain event will happen, one sums the probability amplitudes for all of the possible ways in which the event can occur, and then takes the square of the length of the result. The probability amplitude for a photon from a monochromatic source to arrive at a certain final point at a given time, in this case, can be modeled as an arrow that spins rapidly until it is evaluated when the photon reaches its final point. For example, for the probability that a photon will reflect off of a mirror and be observed at a given point a given amount of time later, one sets the photon's probability amplitude spinning as it leaves the source, follows it to the mirror, and then to its final point, even for paths that do not involve bouncing off of the mirror at equal angles. One can then evaluate the probability amplitude at the photon's final point; next, one can integrate over all of these arrows (see vector sum), and square the length of the result to obtain the probability that this photon will reflect off of the mirror in the pertinent fashion. The times these paths take are what determines the angle of the probability amplitude arrow, as they can be said to "spin" at a constant rate (which is related to the frequency of the photon). The times of the paths near the classical reflection site of the mirror are nearly the same, so the probability amplitudes point in nearly the same direction—thus, they have a sizable sum. Examining the paths towards the edges of the mirror reveals that the times of nearby paths are quite different from each other, and thus we wind up summing vectors that cancel out quickly. So, there is a higher probability that light will follow a near-classical reflection path than a path further out. However, a diffraction grating can be made out of this mirror, by scraping away areas near the edge of the mirror that usually cancel nearby amplitudes out—but now, since the photons don't reflect from the scraped-off portions, the probability amplitudes that would all point, for instance, at forty-five degrees, can have a sizable sum. Thus, this lets light of the right frequency sum to a larger probability amplitude, and as such possess a larger probability of reaching the appropriate final point. This particular description involves many simplifications: a point source, a "surface" that light can reflect off of (thus neglecting the interactions with electrons) and so forth. The biggest simplification is perhaps in the fact that the "spinning" of the probability amplitude arrows is actually more accurately explained as a "spinning" of the source, as the probability amplitudes of photons do not "spin" while they are in transit. We obtain the same variation in probability amplitudes by letting the time at which the photon left the source be indeterminate—and the time of the path now tells us when the photon would have left the source, and thus what the angle of its "arrow" would be. However, this model and approximation is a reasonable one to illustrate a diffraction grating conceptually. Light of a different frequency may also reflect off of the same diffraction grating, but with a different final point. 
Gratings as dispersive elements The wavelength dependence in the grating equation shows that the grating separates an incident polychromatic beam into its constituent wavelength components at different angles, i.e., it is angular dispersive. Each wavelength of the input beam spectrum is sent into a different direction, producing a rainbow of colors under white light illumination. This is visually similar to the operation of a prism, although the mechanism is very different. A prism refracts waves of different wavelengths at different angles due to their different refractive indices, while a grating diffracts different wavelengths at different angles due to interference at each wavelength. The diffracted beams corresponding to consecutive orders may overlap, depending on the spectral content of the incident beam and the grating density. The higher the spectral order, the greater the overlap into the next order. The grating equation shows that the angles of the diffracted orders only depend on the grooves' period, and not on their shape. By controlling the cross-sectional profile of the grooves, it is possible to concentrate most of the diffracted optical energy in a particular order for a given wavelength. A triangular profile is commonly used. This technique is called blazing. The incident angle and wavelength for which the diffraction is most efficient (the ratio of the diffracted optical energy to the incident energy is the highest) are often called the blazing angle and blazing wavelength. The efficiency of a grating may also depend on the polarization of the incident light. Gratings are usually designated by their groove density, the number of grooves per unit length, usually expressed in grooves per millimeter (g/mm), also equal to the inverse of the groove period. The groove period must be on the order of the wavelength of interest; the spectral range covered by a grating is dependent on groove spacing and is the same for ruled and holographic gratings with the same grating constant (meaning groove density or the groove period). The maximum wavelength that a grating can diffract is equal to twice the grating period, in which case the incident and diffracted light are at ninety degrees (90°) to the grating normal. To obtain frequency dispersion over a wider frequency range, one must use a prism. The optical regime, in which the use of gratings is most common, corresponds to wavelengths between 100 nm and 10 µm. In that case, the groove density can vary from a few tens of grooves per millimeter, as in echelle gratings, to a few thousands of grooves per millimeter. When the groove spacing is less than half the wavelength of light, the only present order is the m = 0 order. Gratings with such small periodicity (with respect to the incident light wavelength) are called subwavelength gratings and exhibit special optical properties. Made on an isotropic material, subwavelength gratings give rise to form birefringence, in which the material behaves as if it were birefringent. 
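The relations above — period as the inverse of groove density, and a maximum diffractable wavelength of twice the period — are easy to tabulate. The groove densities below are illustrative values spanning coarse echelle-like to fine holographic gratings:

```python
# Longest wavelength a grating can diffract is lambda_max = 2d (grazing geometry);
# for lambda > 2d the grating is "subwavelength" and only the m = 0 order remains.
for g_per_mm in (50, 600, 2400):
    d_nm = 1e6 / g_per_mm                   # groove period in nanometres
    print(f"{g_per_mm:5d} g/mm -> d = {d_nm:7.1f} nm, lambda_max = {2 * d_nm:7.1f} nm")
```

For instance, a 2400 g/mm grating (d ≈ 417 nm) cannot diffract wavelengths beyond roughly 833 nm, which is why fine gratings are limited to visible and ultraviolet work.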
Fabrication SR (Surface Relief) gratings SR gratings are named for their surface structure of depressions (low relief) and elevations (high relief). Originally, high-resolution gratings were ruled by high-quality ruling engines whose construction was a large undertaking. Henry Joseph Grayson designed a machine to make diffraction gratings, succeeding with one of 120,000 lines to the inch (approx. 4,724 lines per mm) in 1899. Later, photolithographic techniques created gratings via holographic interference patterns. A holographic grating has sinusoidal grooves as the result of an optical sinusoidal interference pattern on the grating material during its fabrication, and may not be as efficient as a ruled grating, but is often preferred in monochromators because it produces less stray light. A copying technique can make high quality replicas from master gratings of either type, thereby lowering fabrication costs. Semiconductor technology today is also used to etch holographically patterned gratings into robust materials such as fused silica. In this way, low stray-light holography is combined with the high efficiency of deep, etched transmission gratings, and can be incorporated into high-volume, low-cost semiconductor manufacturing technology. VPH (Volume Phase Holography) gratings Another method for manufacturing diffraction gratings uses a photosensitive gel sandwiched between two substrates. A holographic interference pattern exposes the gel, which is later developed. These gratings, called volume phase holography diffraction gratings (or VPH diffraction gratings), have no physical grooves, but instead a periodic modulation of the refractive index within the gel. This removes much of the surface scattering effects typically seen in other types of gratings. These gratings also tend to have higher efficiencies, and allow for the inclusion of complicated patterns into a single grating. A VPH diffraction grating is typically a transmission grating, through which incident light passes and is diffracted, but a VPH reflection grating can also be made by tilting the direction of the refractive index modulation with respect to the grating surface. In older versions of such gratings, environmental susceptibility was a trade-off, as the gel had to be contained at low temperature and humidity. Typically, the photosensitive substances are sealed between two substrates that make them resistant to humidity, and thermal and mechanical stresses. VPH diffraction gratings are not destroyed by accidental touches and are more scratch resistant than typical relief gratings. Blazed gratings A blazed grating is manufactured with grooves that have a sawtooth-shaped cross section, unlike the symmetrical grooves of other gratings. This allows the grating to achieve maximum diffraction efficiency, but in only one diffraction order, which is dependent on the angle of the sawtooth grooves, known as the blaze angle. Common uses include specific wavelength selection for tunable lasers, among others. Other gratings A new technology for grating insertion into integrated photonic lightwave circuits is digital planar holography (DPH). DPH gratings are generated in a computer and fabricated on one or several interfaces of a planar optical waveguide using standard micro-lithography or nano-imprinting methods, compatible with mass production. Light propagates inside the DPH gratings, confined by the refractive index gradient, which provides a longer interaction path and greater flexibility in light steering. Examples Diffraction gratings are often used in monochromators, spectrometers, lasers, wavelength division multiplexing devices, optical pulse compressing devices, interferometers, and many other optical instruments. Ordinary pressed CD and DVD media are every-day examples of diffraction gratings and can be used to demonstrate the effect by reflecting sunlight off them onto a white wall. 
This is a side effect of their manufacture, as one surface of a CD has many small pits in the plastic, arranged in a spiral; that surface has a thin layer of metal applied to make the pits more visible. The structure of a DVD is optically similar, although it may have more than one pitted surface, and all pitted surfaces are inside the disc. Due to their sensitivity to the refractive index of the media, diffraction gratings can be used as sensors of fluid properties. When a standard pressed vinyl record is viewed from a low angle perpendicular to the grooves, a similar but less defined effect to that in a CD/DVD is seen. This is due to the viewing angle (less than the critical angle of reflection of the black vinyl) and the path of the light being reflected due to this being changed by the grooves, leaving a rainbow relief pattern behind. Diffraction gratings are also used to evenly distribute the frontlight of e-readers such as the Nook Simple Touch with GlowLight. Gratings from electronic components Some everyday electronic components contain fine and regular patterns, and as a result readily serve as diffraction gratings. For example, CCD sensors from discarded mobile phones and cameras can be removed from the device. With a laser pointer, diffraction can reveal the spatial structure of the CCD sensors. This can be done for LCD or LED displays of smart phones as well. Because such displays are usually protected just by transparent casing, experiments can be done without damaging the phones. If accurate measurements are not intended, a spotlight can reveal the diffraction patterns. Natural gratings Striated muscle is the most commonly found natural diffraction grating, and this has helped physiologists in determining the structure of such muscle. Aside from this, the chemical structure of crystals can be thought of as diffraction gratings for types of electromagnetic radiation other than visible light; this is the basis for techniques such as X-ray crystallography. Most commonly confused with diffraction gratings are the iridescent colors of peacock feathers, mother-of-pearl, and butterfly wings. Iridescence in birds, fish and insects is often caused by thin-film interference rather than a diffraction grating. Diffraction produces the entire spectrum of colors as the viewing angle changes, whereas thin-film interference usually produces a much narrower range. The surfaces of flowers can also create diffraction, but the cell structures in plants are usually too irregular to produce the fine slit geometry necessary for a diffraction grating. The iridescence signal of flowers is thus only appreciable very locally and hence not visible to humans and flower-visiting insects. However, natural gratings do occur in some invertebrate animals, like the peacock spiders and the antennae of seed shrimp, and have even been discovered in Burgess Shale fossils. Diffraction grating effects are sometimes seen in meteorology. Diffraction coronas are colorful rings surrounding a source of light, such as the sun. These are usually observed much closer to the light source than halos, and are caused by very fine particles, like water droplets, ice crystals, or smoke particles in a hazy sky. When the particles are all nearly the same size they diffract the incoming light at very specific angles. The exact angle depends on the size of the particles. Diffraction coronas are commonly observed around light sources, like candle flames or street lights, in the fog. Cloud iridescence is caused by diffraction, occurring along coronal rings when the particles in the clouds are all uniform in size. 
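The dependence of corona size on particle size can be estimated from single-particle diffraction: for droplets of diameter D, the first dark ring of the pattern falls near θ ≈ 1.22 λ/D (the circular-aperture result). The sketch below applies this approximation with an assumed droplet size; it is a rough estimate, not a full corona model.

```python
import math

def corona_ring_deg(wavelength_nm, droplet_diameter_um):
    """Approximate angular radius (degrees) of the first dark ring: theta ~ 1.22*lambda/D."""
    theta = 1.22 * (wavelength_nm * 1e-9) / (droplet_diameter_um * 1e-6)
    return math.degrees(theta)

# Assumed ~10 um fog droplets in 550 nm (mid-visible) light
print(f"{corona_ring_deg(550, 10):.1f} deg")   # ~3.8 deg: a ring a few degrees across
```

Smaller droplets give larger rings, consistent with the text's point that the corona angle is set by particle size.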
See also Angle-sensitive pixel Blazed grating Diffraction efficiency Diffraction from slits Diffraction spike Diffractive solar sail Echelle grating Fraunhofer diffraction Fraunhofer diffraction (mathematics) Fresnel diffraction Grism Henry Augustus Rowland Kapitza–Dirac effect Kirchhoff's diffraction formula N-slit interferometric equation Ultrasonic grating Virtually imaged phased array Zone plate Notes References External links Diffraction Gratings Lecture 9, Youtube Diffraction Gratings — The Crucial Dispersive Element Optics Tutorial — Diffraction Gratings Ruled & Holographic Ray-Tracing program handling general reflective concave gratings for Windows XP and above Interference in Diffraction Grating Beams - Wolfram demonstration Diffraction Optical components Photonics
Diffraction grating
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
5,323
[ "Glass engineering and science", "Optical components", "Spectrum (physical sciences)", "Crystallography", "Diffraction", "Spectroscopy", "Components" ]
41,049
https://en.wikipedia.org/wiki/Direct-sequence%20spread%20spectrum
In telecommunications, direct-sequence spread spectrum (DSSS) is a spread-spectrum modulation technique primarily used to reduce overall signal interference. The direct-sequence modulation makes the transmitted signal wider in bandwidth than the information bandwidth. After the despreading or removal of the direct-sequence modulation in the receiver, the information bandwidth is restored, while the unintentional and intentional interference is substantially reduced. Swiss inventor Gustav Guanella proposed a "means for and method of secret signals". With DSSS, the message symbols are modulated by a sequence of complex values known as a spreading sequence. Each element of the spreading sequence, a so-called chip, has a shorter duration than the original message symbols. The modulation of the message symbols scrambles and spreads the signal in the spectrum, and thereby results in a bandwidth close to that of the spreading sequence. The smaller the chip duration, the larger the bandwidth of the resulting DSSS signal; more bandwidth multiplexed to the message signal results in better resistance against narrowband interference. Some practical and effective uses of DSSS include the code-division multiple access (CDMA) method, the IEEE 802.11b specification used in Wi-Fi networks, and the Global Positioning System. Transmission method Direct-sequence spread-spectrum transmissions multiply the symbol sequence being transmitted with a spreading sequence that has a higher rate than the original message rate. Usually, sequences are chosen such that the resulting spectrum is spectrally white. Knowledge of the same sequence is used to reconstruct the original data at the receiving end. This is commonly implemented by element-wise multiplication with the spreading sequence, followed by summation over a message symbol period. This process, despreading, is mathematically a correlation of the received signal with the spreading sequence used at the transmitter. In an AWGN channel, the despread signal's signal-to-noise ratio is increased by the spreading factor, which is the ratio of the spreading-sequence rate to the data rate. While a transmitted DSSS signal occupies a wider bandwidth than the direct modulation of the original signal would require, its spectrum can be restricted by conventional pulse-shape filtering. If an undesired transmitter transmits on the same channel but with a different spreading sequence, the despreading process reduces the power of that signal. This effect is the basis for the code-division multiple access (CDMA) method of multi-user medium access, which allows multiple transmitters to share the same channel within the limits of the cross-correlation properties of their spreading sequences. Benefits Resistance to unintended or intended jamming Sharing of a single channel among multiple users Reduced signal/background-noise level hampers interception Determination of relative timing between transmitter and receiver 
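As a toy illustration of the spreading and despreading process described above — not any standardized code or air interface; the spreading factor, sequence, and noise level are arbitrary assumptions — the sketch below spreads ±1 symbols with a random ±1 chip sequence, adds Gaussian noise well above the per-chip signal level, and recovers the symbols by correlating each symbol period with the sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
sf = 16                                    # spreading factor (chips per symbol), assumed
chips = rng.choice([-1.0, 1.0], size=sf)   # toy spreading sequence (not a standard code)
symbols = rng.choice([-1.0, 1.0], size=100)

# Spread: each symbol is repeated at the chip rate and multiplied by the sequence
tx = np.repeat(symbols, sf) * np.tile(chips, symbols.size)
rx = tx + 2.0 * rng.standard_normal(tx.size)   # AWGN channel, noisy at the chip level

# Despread: correlate each symbol period with the spreading sequence, then decide
corr = (rx.reshape(-1, sf) * chips).sum(axis=1) / sf
decisions = np.sign(corr)
print("symbol errors:", int(np.sum(decisions != symbols)))
```

With these numbers the per-chip SNR is below 0 dB, but correlating over 16 chips raises the decision SNR by the spreading factor (about 12 dB), so most symbols decode correctly — a direct demonstration of the despreading gain in an AWGN channel.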
Uses The United States GPS, European Galileo and Russian GLONASS satellite navigation systems; earlier GLONASS used DSSS with a single spreading sequence in conjunction with FDMA, while later GLONASS used DSSS to achieve CDMA with multiple spreading sequences. DS-CDMA (Direct-Sequence Code Division Multiple Access) is a multiple access scheme based on DSSS, by spreading the signals from/to different users with different codes. It is the most widely used type of CDMA. Cordless phones operating in the 900 MHz, 2.4 GHz and 5.8 GHz bands IEEE 802.11b 2.4 GHz Wi-Fi, and its predecessor 802.11-1999. (Their successor 802.11g uses both OFDM and DSSS) Automatic meter reading IEEE 802.15.4 (used, e.g., as PHY and MAC layer for Zigbee, or, as the physical layer for WirelessHART) Radio-controlled model automotive, aeronautical and marine vehicles Spread spectrum radar for covertness and resistance to jamming and spoofing See also Complementary code keying Frequency-hopping spread spectrum Linear-feedback shift register Orthogonal frequency-division multiplexing References The Origins of Spread-Spectrum Communications NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management External links Civil Spread Spectrum History Computer network technology Quantized radio modulation modes Wireless networking IEEE 802.11
Direct-sequence spread spectrum
[ "Technology", "Engineering" ]
844
[ "Wireless networking", "Computer networks engineering" ]
41,057
https://en.wikipedia.org/wiki/Disturbance%20voltage
In telecommunications, a disturbance voltage is an unwanted voltage induced in a system by natural or man-made sources. In telecommunications systems, the disturbance voltage creates currents that limit or interfere with the interchange of information. An example of a disturbance voltage is a voltage that produces (a) false signals in a telephone, (b) noise in a radio receiver, or (c) distortion in a received signal. References Electrical parameters Telecommunications engineering Noise (electronics)
Disturbance voltage
[ "Engineering" ]
94
[ "Electrical engineering", "Telecommunications engineering", "Electrical parameters" ]
41,068
https://en.wikipedia.org/wiki/Drop%20%28liquid%29
A drop or droplet is a small column of liquid, bounded completely or almost completely by free surfaces. A drop may form when liquid accumulates at the end of a tube or other surface boundary, producing a hanging drop called a pendant drop. Drops may also be formed by the condensation of a vapor or by atomization of a larger mass of liquid. Water vapor will condense into droplets depending on the temperature. The temperature at which droplets form is called the dew point. Surface tension Liquid forms drops because it exhibits surface tension. A simple way to form a drop is to allow liquid to flow slowly from the lower end of a vertical tube of small diameter. The surface tension of the liquid causes the liquid to hang from the tube, forming a pendant. When the drop exceeds a certain size it is no longer stable and detaches itself. The falling liquid is also a drop held together by surface tension. Viscosity and pitch drop experiments Some substances that appear to be solid can be shown to instead be extremely viscous liquids, because they form drops and display droplet behavior. In the famous pitch drop experiments, pitch – a substance somewhat like solid bitumen – is shown to be a liquid in this way. Pitch in a funnel slowly forms droplets, each droplet taking about 10 years to form and break off. Pendant drop test In the pendant drop test, a drop of liquid is suspended from the end of a tube or by any surface by surface tension. The force due to surface tension is proportional to the length of the boundary between the liquid and the tube, with the proportionality constant usually denoted γ. Since the length of this boundary is the circumference of the tube, the force due to surface tension is given by Fγ = πdγ, where d is the tube diameter. The mass m of the drop hanging from the end of the tube can be found by equating the force due to gravity (Fg = mg) with the component of the surface tension in the vertical direction (Fγ sin α), giving the formula mg = πdγ sin α, where α is the angle of contact with the tube's front surface, and g is the acceleration due to gravity. The limit of this formula, as α goes to 90°, gives the maximum weight of a pendant drop for a liquid with a given surface tension, γ: mg = πdγ. This relationship is the basis of a convenient method of measuring surface tension, commonly used in the petroleum industry. More sophisticated methods are available to take account of the developing shape of the pendant as the drop grows. These methods are used if the surface tension is unknown. 
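Evaluating the detachment condition mg = πdγ gives a feel for the numbers involved. The sketch below does this for water; the tube diameter is an illustrative assumption and the surface tension is the usual room-temperature textbook value.

```python
import math

def max_pendant_drop_mass(tube_diameter_m, surface_tension_n_per_m, g=9.81):
    """Maximum hanging-drop mass from m*g = pi*d*gamma (contact angle alpha -> 90 deg)."""
    return math.pi * tube_diameter_m * surface_tension_n_per_m / g

# Water (gamma ~ 0.072 N/m) dripping from an assumed 4 mm outer-diameter tube
m = max_pendant_drop_mass(4e-3, 0.072)
print(f"max drop mass ~ {m * 1e6:.0f} mg, i.e. ~ {m / 1000 * 1e9:.0f} uL of water")
```

The result (roughly 90 mg, or 90 μL) is an upper bound; real drops pinch off below this because the neck narrows as the drop grows, which is why the more sophisticated shape-tracking methods mentioned above exist.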
Drop adhesion to a solid The drop adhesion to a solid can be divided into two categories: lateral adhesion and normal adhesion. Lateral adhesion resembles friction (though tribologically lateral adhesion is a more accurate term) and refers to the force required to slide a drop on the surface, namely the force to detach the drop from its position on the surface only to translate it to another position on the surface. Normal adhesion is the adhesion required to detach a drop from the surface in the normal direction, namely the force to cause the drop to fly off from the surface. The measurement of both adhesion forms can be done with the Centrifugal Adhesion Balance (CAB). The CAB uses a combination of centrifugal and gravitational forces to obtain any ratio of lateral and normal forces. For example, it can apply a normal force at zero lateral force for the drop to fly off away from the surface in the normal direction, or it can induce a lateral force at zero normal force (simulating zero gravity). Droplet The term droplet is a diminutive form of 'drop' – and as a guide is typically used for liquid particles of less than 500 μm diameter. In spray application, droplets are usually described by their perceived size (i.e., diameter) whereas the dose (or number of infective particles in the case of biopesticides) is a function of their volume. Volume increases as the cube of the diameter; thus, a 50 μm droplet represents a dose in 65 pl and a 500 μm drop represents a dose in 65 nanolitres. Speed A droplet with a diameter of 3 mm has a terminal velocity of approximately 8 m/s. Drops smaller than in diameter will attain 95% of their terminal velocity within . But above this size the distance to get to terminal velocity increases sharply. An example is a drop with a diameter of that may achieve this at . Optics Due to the different refractive index of water and air, refraction and reflection occur on the surfaces of raindrops, leading to rainbow formation. Sound The major source of sound when a droplet hits a liquid surface is the resonance of excited bubbles trapped underwater. These oscillating bubbles are responsible for most liquid sounds, such as running water or splashes, as they actually consist of many drop-liquid collisions. "Dripping tap" noise prevention Reducing the surface tension of a body of liquid makes it possible to reduce or prevent noise due to droplets falling into it. This would involve adding soap, detergent or a similar substance to water. The reduced surface tension reduces the noise from dripping. Shape The classic shape associated with a drop (with a pointy end in its upper side) comes from the observation of a droplet clinging to a surface. The shape of a drop falling through a gas is actually more or less spherical for drops less than 2 mm in diameter. Larger drops tend to be flatter on the bottom part due to the pressure of the gas they move through. As a result, as drops get larger, a concave depression forms, which leads to the eventual breakup of the drop. Capillary length The capillary length is a length scaling factor that relates gravity, density, and surface tension, and is directly responsible for the shape a droplet for a specific fluid will take. The capillary length stems from the Laplace pressure, using the radius of the droplet. Using the capillary length we can define microdrops and macrodrops. Microdrops are droplets with radius smaller than the capillary length, where the shape of the droplet is governed by surface tension and they form a more or less spherical cap shape. If a droplet has a radius larger than the capillary length, they are known as macrodrops and the gravitational forces will dominate. Macrodrops will be 'flattened' by gravity and the height of the droplet will be reduced. 
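The article gives the concept of the capillary length but not its formula; the usual textbook definition is λc = √(γ/(ρg)), and the sketch below evaluates it for water under that assumption. The result, a few millimetres, matches the micro/macrodrop split described above.

```python
import math

def capillary_length_mm(surface_tension, density, g=9.81):
    """Capillary length lambda_c = sqrt(gamma / (rho * g)), returned in millimetres."""
    return math.sqrt(surface_tension / (density * g)) * 1e3

# Water at room temperature: gamma ~ 0.072 N/m, rho ~ 1000 kg/m^3
print(f"water: {capillary_length_mm(0.072, 1000):.1f} mm")   # ~2.7 mm
```

Drops with radii well below ~2.7 mm are surface-tension dominated (near-spherical caps); larger ones are flattened by gravity, which also foreshadows why raindrops rarely exceed a few millimetres.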
Size Raindrop sizes typically range from 0.5 mm to 4 mm, with size distributions quickly decreasing past diameters larger than 2–2.5 mm. Scientists traditionally thought that the variation in the size of raindrops was due to collisions on the way down to the ground. In 2009, French researchers succeeded in showing that the distribution of sizes is due to the drops' interaction with air, which deforms larger drops and causes them to fragment into smaller drops, effectively limiting the largest raindrops to about 6 mm diameter. However, drops up to 10 mm (equivalent in volume to a sphere of radius 4.5 mm) are theoretically stable and could be levitated in a wind tunnel. The largest recorded raindrop was 8.8 mm in diameter, located at the base of a cumulus congestus cloud in the vicinity of Kwajalein Atoll in July 1999. A raindrop of identical size was detected over northern Brazil in September 1995. Standardized droplet sizes in medicine In medicine, this property is used to create droppers and IV infusion sets which have a standardized diameter, in such a way that 1 millilitre is equivalent to 20 drops. When smaller amounts are necessary (such as paediatrics), microdroppers or paediatric infusion sets are used, in which 1 millilitre = 60 microdrops. Gallery See also Pitch drop experiment Rain Splash (fluid dynamics) Water droplet erosion Dribbling (teapot) References External links Liquid Sculpture – pictures of drops Liquid Art – Galleries of fine art droplet photography (archived 19 March 2008) (Greatly varying) calculation of water waste from dripping tap Liquids Fluid dynamics Articles containing video clips Alcohol measurement
Drop (liquid)
[ "Physics", "Chemistry", "Engineering" ]
1,669
[ "Liquids", "Chemical engineering", "Phases of matter", "Piping", "Matter", "Fluid dynamics" ]
41,078
https://en.wikipedia.org/wiki/Duty%20cycle
A duty cycle or power cycle is the fraction of one period in which a signal or system is active. Duty cycle is commonly expressed as a percentage or a ratio. A period is the time it takes for a signal to complete an on-and-off cycle. As a formula, a duty cycle (%) may be expressed as: D = (PW / T) × 100%. Equally, a duty cycle (ratio) may be expressed as: D = PW / T, where D is the duty cycle, PW is the pulse width (pulse active time), and T is the total period of the signal. Thus, a 60% duty cycle means the signal is on 60% of the time but off 40% of the time. The "on time" for a 60% duty cycle could be a fraction of a second, a day, or even a week, depending on the length of the period. Duty cycles can be used to describe the percent time of an active signal in an electrical device such as the power switch in a switching power supply or the firing of action potentials by a living system such as a neuron. Some publications use a different symbol for the duty cycle. As a ratio, duty cycle is unitless and may be given as decimal fraction and percentage alike. An alternative term in use is duty factor. Applications Electrical and electronics In electronics, duty cycle is the percentage of the ratio of pulse duration, or pulse width (PW), to the total period (T) of the waveform. It is generally used to represent the time duration of a pulse when it is high (1). In digital electronics, signals are rectangular waveforms represented by logic 1 and logic 0. Logic 1 stands for the presence of an electric pulse and 0 for the absence of an electric pulse. For example, a signal (10101010) has a 50% duty cycle, because the pulse remains high for 1/2 of the period and low for the other 1/2 of the period. Similarly, for the pulse (10001000) the duty cycle will be 25% because the pulse remains high only for 1/4 of the period and remains low for 3/4 of the period. Electrical motors typically use less than a 100% duty cycle. For example, if a motor runs for one out of 100 seconds, or 1/100 of the time, then its duty cycle is 1/100, or 1 percent. Pulse-width modulation (PWM) is used in a variety of electronic situations, such as power delivery and voltage regulation. In electronic music, music synthesizers vary the duty cycle of their audio-frequency oscillators to obtain a subtle effect on the tone colors. This technique is known as pulse-width modulation. In the printer / copier industry, the duty cycle specification refers to the rated throughput (that is, printed pages) of a device per month. In a welding power supply, the maximum duty cycle is defined as the percentage of time in a 10-minute period that it can be operated continuously before overheating. Biological systems The concept of duty cycles is also used to describe the activity of neurons and muscle fibers. In neural circuits for example, a duty cycle specifically refers to the proportion of a cycle period in which a neuron remains active. Generation One way to generate fairly accurate square wave signals with 1/n duty factor, where n is an integer, is to vary the duty cycle until the nth harmonic is significantly suppressed. For audio-band signals, this can even be done "by ear"; for example, a -40 dB reduction in the 3rd harmonic corresponds to setting the duty factor to 1/3 with a precision of 1% and a -60 dB reduction corresponds to a precision of 0.1%. 
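The formula D = PW/T is trivial to compute; the sketch below applies it to the two 8-bit patterns used as examples in the text:

```python
def duty_cycle(pulse_width, period, as_percent=True):
    """Duty cycle D = PW / T, optionally expressed as a percentage."""
    d = pulse_width / period
    return 100.0 * d if as_percent else d

# The 8-bit patterns from the text: 10101010 -> 50 %, 10001000 -> 25 %
print(duty_cycle(4, 8))   # 50.0  (four high bits out of eight)
print(duty_cycle(2, 8))   # 25.0  (two high bits out of eight)
```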
Mark-space ratio Mark-space ratio, or mark-to-space ratio, is another term for the same concept, to describe the temporal relationship between two alternating periods of a waveform. However, whereas the duty cycle relates the duration of one period to the duration of the entire cycle, the mark-space ratio relates the durations of the two individual periods: MSR = t1 / t2, where t1 and t2 are the durations of the two alternating periods. References Mechanical engineering Timing in electronic circuits Articles containing video clips
Duty cycle
[ "Physics", "Engineering" ]
846
[ "Electrical engineering", "Applied and interdisciplinary physics", "Mechanical engineering" ]
41,092
https://en.wikipedia.org/wiki/Electric%20field
An electric field (sometimes called E-field) is a physical field that surrounds electrically charged particles. In classical electromagnetism, the electric field of a single charge (or group of charges) describes their capacity to exert attractive or repulsive forces on another charged object. Charged particles exert attractive forces on each other when their charges are opposite, and repel each other when their charges are the same. Because these forces are exerted mutually, two charges must be present for the forces to take place. These forces are described by Coulomb's law, which says that the greater the magnitude of the charges, the greater the force, and the greater the distance between them, the weaker the force. Informally, the greater the charge of an object, the stronger its electric field. Similarly, an electric field is stronger nearer charged objects and weaker further away. Electric fields originate from electric charges and time-varying electric currents. Electric fields and magnetic fields are both manifestations of the electromagnetic field. Electromagnetism is one of the four fundamental interactions of nature. Electric fields are important in many areas of physics, and are exploited in electrical technology. For example, in atomic physics and chemistry, the interaction in the electric field between the atomic nucleus and electrons is the force that holds these particles together in atoms. Similarly, the interaction in the electric field between atoms is the force responsible for chemical bonding that results in molecules. The electric field is defined as a vector field that associates to each point in space the force per unit of charge exerted on an infinitesimal test charge at rest at that point. The SI unit for the electric field is the volt per meter (V/m), which is equal to the newton per coulomb (N/C). Description The electric field is defined at each point in space as the force that would be experienced by an infinitesimally small stationary test charge at that point divided by the charge. The electric field is defined in terms of force, and force is a vector (i.e. having both magnitude and direction), so it follows that an electric field may be described by a vector field. The electric field acts between two charges similarly to the way that the gravitational field acts between two masses, as they both obey an inverse-square law with distance. This is the basis for Coulomb's law, which states that, for stationary charges, the electric field varies with the source charge and varies inversely with the square of the distance from the source. This means that if the source charge were doubled, the electric field would double, and if you move twice as far away from the source, the field at that point would be only one-quarter its original strength. The electric field can be visualized with a set of lines whose direction at each point is the same as those of the field, a concept introduced by Michael Faraday, whose term 'lines of force' is still sometimes used. This illustration has the useful property that, when drawn so that each line represents the same amount of flux, the strength of the field is proportional to the density of the lines. Field lines due to stationary charges have several important properties, including that they always originate from positive charges and terminate at negative charges, they enter all good conductors at right angles, and they never cross or close in on themselves. 
The field lines are a representative concept; the field actually permeates all the intervening space between the lines. More or fewer lines may be drawn depending on the precision to which it is desired to represent the field. The study of electric fields created by stationary charges is called electrostatics. Faraday's law describes the relationship between a time-varying magnetic field and the electric field. One way of stating Faraday's law is that the curl of the electric field is equal to the negative time derivative of the magnetic field. In the absence of a time-varying magnetic field, the electric field is therefore called conservative (i.e. curl-free). This implies there are two kinds of electric fields: electrostatic fields and fields arising from time-varying magnetic fields. While the curl-free nature of the static electric field allows for a simpler treatment using electrostatics, time-varying magnetic fields are generally treated as a component of a unified electromagnetic field. The study of magnetic and electric fields that change over time is called electrodynamics. Mathematical formulation Electric fields are caused by electric charges, described by Gauss's law, and time-varying magnetic fields, described by Faraday's law of induction. Together, these laws are enough to define the behavior of the electric field. However, since the magnetic field is described as a function of electric field, the equations of both fields are coupled and together form Maxwell's equations that describe both fields as a function of charges and currents. Electrostatics In the special case of a steady state (stationary charges and currents), the Maxwell-Faraday inductive effect disappears. The resulting two equations (Gauss's law and Faraday's law with no induction term, \(\nabla \times \mathbf{E} = 0\)), taken together, are equivalent to Coulomb's law, which states that a particle with electric charge \(q_1\) at position \(\mathbf{x}_1\) exerts a force on a particle with charge \(q_0\) at position \(\mathbf{x}_0\) of
\[ \mathbf{F}_{01} = \frac{q_1 q_0}{4\pi\varepsilon_0} \, \frac{\hat{\mathbf{r}}_{01}}{|\mathbf{r}_{01}|^2}, \]
where \(\mathbf{F}_{01}\) is the force on charged particle \(q_0\) caused by charged particle \(q_1\), \(\varepsilon_0\) is the permittivity of free space, \(\hat{\mathbf{r}}_{01}\) is a unit vector directed from \(\mathbf{x}_1\) to \(\mathbf{x}_0\), and \(\mathbf{r}_{01} = \mathbf{x}_0 - \mathbf{x}_1\) is the displacement vector from \(\mathbf{x}_1\) to \(\mathbf{x}_0\). Note that \(\varepsilon_0\) must be replaced with \(\varepsilon\), permittivity, when charges are in non-empty media. When the charges \(q_0\) and \(q_1\) have the same sign this force is positive, directed away from the other charge, indicating the particles repel each other. When the charges have unlike signs the force is negative, indicating the particles attract. To make it easy to calculate the Coulomb force on any charge at position \(\mathbf{x}_0\), this expression can be divided by \(q_0\), leaving an expression that only depends on the other charge (the source charge):
\[ \mathbf{E}_1(\mathbf{x}_0) = \frac{\mathbf{F}_{01}}{q_0} = \frac{q_1}{4\pi\varepsilon_0} \, \frac{\hat{\mathbf{r}}_{01}}{|\mathbf{r}_{01}|^2}, \]
where \(\mathbf{E}_1(\mathbf{x}_0)\) is the component of the electric field at \(\mathbf{x}_0\) due to \(q_1\). This is the electric field at point \(\mathbf{x}_0\) due to the point charge \(q_1\); it is a vector-valued function equal to the Coulomb force per unit charge that a positive point charge would experience at the position \(\mathbf{x}_0\). Since this formula gives the electric field magnitude and direction at any point \(\mathbf{x}_0\) in space (except at the location of the charge itself, \(\mathbf{x}_1\), where it becomes infinite) it defines a vector field. From the above formula it can be seen that the electric field due to a point charge is everywhere directed away from the charge if it is positive, and toward the charge if it is negative, and its magnitude decreases with the inverse square of the distance from the charge. 
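The reconstructed field expression above translates directly into a small numerical routine. A Python sketch with NumPy; the helper name is an illustrative addition:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def e_field_point_charge(q1, x1, x0):
    """Electric field vector (V/m) at point x0 due to charge q1 (C) at x1.
    Implements E = q1/(4*pi*eps0) * r_hat / |r|^2 with r = x0 - x1."""
    r = np.asarray(x0, float) - np.asarray(x1, float)
    dist = np.linalg.norm(r)
    return q1 / (4 * np.pi * EPS0) * r / dist**3  # r/|r|^3 equals r_hat/|r|^2

# Field of a +1 nC charge at the origin, evaluated 10 cm along x:
print(e_field_point_charge(1e-9, [0, 0, 0], [0.1, 0, 0]))  # ~[899, 0, 0] V/m
```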
The Coulomb force on a charge of magnitude \(q\) at any point in space is equal to the product of the charge and the electric field at that point:
\[ \mathbf{F} = q\mathbf{E}. \]
The SI unit of the electric field is the newton per coulomb (N/C), or volt per meter (V/m); in terms of the SI base units it is kg⋅m⋅s⁻³⋅A⁻¹. Superposition principle Due to the linearity of Maxwell's equations, electric fields satisfy the superposition principle, which states that the total electric field, at a point, due to a collection of charges is equal to the vector sum of the electric fields at that point due to the individual charges. This principle is useful in calculating the field created by multiple point charges. If charges \(q_1, q_2, \dots, q_n\) are stationary in space at points \(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\), in the absence of currents, the superposition principle says that the resulting field is the sum of fields generated by each particle as described by Coulomb's law:
\[ \mathbf{E}(\mathbf{x}) = \sum_{k=1}^{n} \frac{q_k}{4\pi\varepsilon_0} \, \frac{\hat{\mathbf{r}}_k}{|\mathbf{r}_k|^2}, \]
where \(\hat{\mathbf{r}}_k\) is the unit vector in the direction from point \(\mathbf{x}_k\) to point \(\mathbf{x}\), and \(\mathbf{r}_k = \mathbf{x} - \mathbf{x}_k\) is the displacement vector from point \(\mathbf{x}_k\) to point \(\mathbf{x}\). Continuous charge distributions The superposition principle allows for the calculation of the electric field due to a distribution of charge density \(\rho(\mathbf{x})\). By considering the charge \(\rho(\mathbf{x}')\,dV\) in each small volume of space \(dV\) at point \(\mathbf{x}'\) as a point charge, the resulting electric field, \(d\mathbf{E}(\mathbf{x})\), at point \(\mathbf{x}\) can be calculated as
\[ d\mathbf{E}(\mathbf{x}) = \frac{\rho(\mathbf{x}')\,dV}{4\pi\varepsilon_0} \, \frac{\hat{\mathbf{r}}'}{|\mathbf{r}'|^2}, \]
where \(\hat{\mathbf{r}}'\) is the unit vector pointing from \(\mathbf{x}'\) to \(\mathbf{x}\), and \(\mathbf{r}' = \mathbf{x} - \mathbf{x}'\) is the displacement vector from \(\mathbf{x}'\) to \(\mathbf{x}\). The total field is found by summing the contributions from all the increments of volume by integrating the charge density over the volume \(V\):
\[ \mathbf{E}(\mathbf{x}) = \frac{1}{4\pi\varepsilon_0} \int_V \rho(\mathbf{x}') \, \frac{\hat{\mathbf{r}}'}{|\mathbf{r}'|^2} \, dV. \]
Similar equations follow for a surface charge with surface charge density \(\sigma(\mathbf{x}')\) on surface \(S\), and for line charges with linear charge density \(\lambda(\mathbf{x}')\) on line \(L\). Electric potential If a system is static, such that magnetic fields are not time-varying, then by Faraday's law, the electric field is curl-free. In this case, one can define an electric potential, that is, a function \(\Phi\) such that \(\mathbf{E} = -\nabla \Phi\). This is analogous to the gravitational potential. The difference between the electric potential at two points in space is called the potential difference (or voltage) between the two points. In general, however, the electric field cannot be described independently of the magnetic field. Given the magnetic vector potential, \(\mathbf{A}\), defined so that \(\mathbf{B} = \nabla \times \mathbf{A}\), one can still define an electric potential \(\Phi\) such that:
\[ \mathbf{E} = -\nabla \Phi - \frac{\partial \mathbf{A}}{\partial t}, \]
where \(\nabla \Phi\) is the gradient of the electric potential and \(\partial \mathbf{A}/\partial t\) is the partial derivative of \(\mathbf{A}\) with respect to time. Faraday's law of induction can be recovered by taking the curl of that equation,
\[ \nabla \times \mathbf{E} = -\frac{\partial (\nabla \times \mathbf{A})}{\partial t} = -\frac{\partial \mathbf{B}}{\partial t}, \]
which justifies, a posteriori, the previous form for \(\mathbf{E}\). Continuous vs. discrete charge representation The equations of electromagnetism are best described in a continuous description. However, charges are sometimes best described as discrete points; for example, some models may describe electrons as point sources where charge density is infinite on an infinitesimal section of space. A charge \(q\) located at \(\mathbf{x}_0\) can be described mathematically as a charge density \(\rho(\mathbf{x}) = q\,\delta(\mathbf{x} - \mathbf{x}_0)\), where the Dirac delta function (in three dimensions) is used. Conversely, a charge distribution can be approximated by many small point charges. Electrostatic fields Electrostatic fields are electric fields that do not change with time. Such fields are present when systems of charged matter are stationary, or when electric currents are unchanging. In that case, Coulomb's law fully describes the field. Parallels between electrostatic and gravitational fields Coulomb's law, which describes the interaction of electric charges:
\[ \mathbf{F} = q\mathbf{E} = \frac{qQ}{4\pi\varepsilon_0} \, \frac{\hat{\mathbf{r}}}{|\mathbf{r}|^2} \]
is similar to Newton's law of universal gravitation:
\[ \mathbf{F} = m\mathbf{g} = -G \, \frac{mM}{|\mathbf{r}|^2} \, \hat{\mathbf{r}} \]
(where \(\hat{\mathbf{r}} = \mathbf{r}/|\mathbf{r}|\)). 
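The superposition sum reconstructed above is a direct vector sum in code. A short Python sketch (the function name and the dipole test case are illustrative):

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def total_field(charges, positions, x):
    """Vector sum of Coulomb fields from point charges (superposition).
    charges: charges in C; positions: corresponding 3-vectors in m."""
    E = np.zeros(3)
    for q, xk in zip(charges, np.asarray(positions, float)):
        r = np.asarray(x, float) - xk
        E += q / (4 * np.pi * EPS0) * r / np.linalg.norm(r)**3
    return E

# Field at the midpoint between a +1 nC and a -1 nC charge 20 cm apart;
# both contributions point the same way, so the magnitudes add:
print(total_field([1e-9, -1e-9], [[-0.1, 0, 0], [0.1, 0, 0]], [0, 0, 0]))
```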
This suggests similarities between the electric field \(\mathbf{E}\) and the gravitational field \(\mathbf{g}\), or their associated potentials. Mass is sometimes called "gravitational charge". Electrostatic and gravitational forces both are central, conservative and obey an inverse-square law. Uniform fields A uniform field is one in which the electric field is constant at every point. It can be approximated by placing two conducting plates parallel to each other and maintaining a voltage (potential difference) between them; it is only an approximation because of boundary effects (near the edge of the planes, the electric field is distorted because the plane does not continue). Assuming infinite planes, the magnitude of the electric field is:
\[ E = -\frac{\Delta V}{d}, \]
where \(\Delta V\) is the potential difference between the plates and \(d\) is the distance separating the plates. The negative sign arises as positive charges repel, so a positive charge will experience a force away from the positively charged plate, in the opposite direction to that in which the voltage increases. In micro- and nano-applications, for instance in relation to semiconductors, a typical magnitude of an electric field is on the order of \(10^6\ \mathrm{V\,m^{-1}}\), achieved by applying a voltage of the order of 1 volt between conductors spaced 1 μm apart. Electromagnetic fields Electromagnetic fields are electric and magnetic fields, which may change with time, for instance when charges are in motion. Moving charges produce a magnetic field in accordance with Ampère's circuital law (with Maxwell's addition), which, along with Maxwell's other equations, defines the magnetic field, \(\mathbf{B}\), in terms of its curl:
\[ \nabla \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right), \]
where \(\mathbf{J}\) is the current density, \(\mu_0\) is the vacuum permeability, and \(\varepsilon_0\) is the vacuum permittivity. Both the electric current density and the partial derivative of the electric field with respect to time contribute to the curl of the magnetic field. In addition, the Maxwell–Faraday equation states
\[ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}. \]
These represent two of Maxwell's four equations and they intricately link the electric and magnetic fields together, resulting in the electromagnetic field. The equations represent a set of four coupled multi-dimensional partial differential equations which, when solved for a system, describe the combined behavior of the electromagnetic fields. In general, the force experienced by a test charge in an electromagnetic field is given by the Lorentz force law:
\[ \mathbf{F} = q\mathbf{E} + q\mathbf{v} \times \mathbf{B}. \]
Energy in the electric field The total energy per unit volume stored by the electromagnetic field is
\[ u_{EM} = \frac{\varepsilon}{2} |\mathbf{E}|^2 + \frac{1}{2\mu} |\mathbf{B}|^2, \]
where \(\varepsilon\) is the permittivity of the medium in which the field exists, \(\mu\) its magnetic permeability, and \(\mathbf{E}\) and \(\mathbf{B}\) are the electric and magnetic field vectors. As \(\mathbf{E}\) and \(\mathbf{B}\) fields are coupled, it would be misleading to split this expression into "electric" and "magnetic" contributions. In particular, an electrostatic field in any given frame of reference in general transforms into a field with a magnetic component in a relatively moving frame. Accordingly, decomposing the electromagnetic field into an electric and magnetic component is frame-specific, and similarly for the associated energy. The total energy \(U_{EM}\) stored in the electromagnetic field in a given volume \(V\) is
\[ U_{EM} = \frac{1}{2} \int_V \left( \varepsilon |\mathbf{E}|^2 + \frac{|\mathbf{B}|^2}{\mu} \right) dV. \]
Electric displacement field Definitive equation of vector fields In the presence of matter, it is helpful to extend the notion of the electric field into three vector fields:
\[ \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}, \]
where \(\mathbf{P}\) is the electric polarization – the volume density of electric dipole moments, and \(\mathbf{D}\) is the electric displacement field. Since \(\mathbf{E}\) and \(\mathbf{P}\) are defined separately, this equation can be used to define \(\mathbf{D}\). 
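As a quick numeric check of the parallel-plate relation above, a minimal Python sketch using the values quoted for semiconductor devices:

```python
# Magnitude of the (approximately) uniform field between parallel plates:
# |E| = V / d. With 1 V across a 1 micrometer gap:
V = 1.0   # potential difference, volts
d = 1e-6  # plate separation, meters
E = V / d
print(E)  # 1e6 V/m, the order of magnitude quoted in the text
```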
The physical interpretation of \(\mathbf{D}\) is not as clear as \(\mathbf{E}\) (effectively the field applied to the material) or \(\mathbf{P}\) (induced field due to the dipoles in the material), but it still serves as a convenient mathematical simplification, since Maxwell's equations can be simplified in terms of free charges and currents. Constitutive relation The \(\mathbf{E}\) and \(\mathbf{D}\) fields are related by the permittivity of the material, \(\varepsilon\). For linear, homogeneous, isotropic materials \(\mathbf{E}\) and \(\mathbf{D}\) are proportional and constant throughout the region, there is no position dependence:
\[ \mathbf{D} = \varepsilon \mathbf{E}. \]
For inhomogeneous materials, there is a position dependence throughout the material:
\[ \mathbf{D}(\mathbf{r}) = \varepsilon(\mathbf{r}) \mathbf{E}(\mathbf{r}). \]
For anisotropic materials the \(\mathbf{E}\) and \(\mathbf{D}\) fields are not parallel, and so \(\mathbf{E}\) and \(\mathbf{D}\) are related by the permittivity tensor (a 2nd order tensor field), in component form:
\[ D_i = \varepsilon_{ij} E_j. \]
For non-linear media, \(\mathbf{E}\) and \(\mathbf{D}\) are not proportional. Materials can have varying extents of linearity, homogeneity and isotropy. Relativistic effects on electric field Point charge in uniform motion The invariance of the form of Maxwell's equations under Lorentz transformation can be used to derive the electric field of a uniformly moving point charge. The charge of a particle is considered frame invariant, as supported by experimental evidence. Alternatively, the electric field of uniformly moving point charges can be derived from the Lorentz transformation of the four-force experienced by test charges in the source's rest frame given by Coulomb's law, and assigning electric field and magnetic field by their definition given by the form of the Lorentz force. However, the following equation is only applicable when no acceleration is involved in the particle's history, where Coulomb's law can be considered or symmetry arguments can be used for solving Maxwell's equations in a simple manner. The electric field of such a uniformly moving point charge is hence given by:
\[ \mathbf{E} = \frac{q}{4\pi\varepsilon_0} \, \frac{1 - \beta^2}{(1 - \beta^2 \sin^2\theta)^{3/2}} \, \frac{\hat{\mathbf{r}}}{r^2}, \]
where \(q\) is the charge of the point source, \(\mathbf{r}\) is the position vector from the point source to the point in space, \(\beta\) is the ratio of the observed speed of the charged particle to the speed of light, and \(\theta\) is the angle between \(\mathbf{r}\) and the observed velocity of the charged particle. The above equation reduces to that given by Coulomb's law for non-relativistic speeds of the point charge. Spherical symmetry is not satisfied due to breaking of symmetry in the problem by specification of direction of velocity for calculation of field. To illustrate this, field lines of moving charges are sometimes represented as unequally spaced radial lines which would appear equally spaced in a co-moving reference frame. Propagation of disturbances in electric fields The special theory of relativity imposes the principle of locality, which requires cause and effect to be time-like separated events where the causal efficacy does not travel faster than the speed of light. Maxwell's laws are found to conform to this view, since the general solutions of fields are given in terms of retarded time, which indicates that electromagnetic disturbances travel at the speed of light. Advanced time, which also provides a solution for Maxwell's laws, is ignored as an unphysical solution. For the motion of a charged particle, considering for example the case of a moving particle with the above described electric field coming to an abrupt stop, the electric fields at points far from it do not immediately revert to that classically given for a stationary charge. 
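The reconstructed uniformly-moving-charge expression can be evaluated numerically to show the field concentrating toward the plane perpendicular to the motion as \(\beta\) grows. A Python sketch with illustrative values:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def moving_charge_field(q, r, beta, theta):
    """|E| of a uniformly moving point charge at distance r (m), where theta
    is the angle between r and the velocity: Coulomb field times the
    relativistic factor (1 - beta^2) / (1 - beta^2 sin^2 theta)^(3/2)."""
    coulomb = q / (4 * math.pi * EPS0 * r**2)
    factor = (1 - beta**2) / (1 - beta**2 * math.sin(theta)**2) ** 1.5
    return coulomb * factor

q, r = 1e-9, 0.1
for beta in (0.0, 0.5, 0.9):
    ahead = moving_charge_field(q, r, beta, 0.0)         # along the motion
    side = moving_charge_field(q, r, beta, math.pi / 2)  # perpendicular to it
    print(f"beta={beta}: ahead={ahead:.1f} V/m, side={side:.1f} V/m")
```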
On stopping, the field around the stationary points begins to revert to the expected state, and this effect propagates outwards at the speed of light, while the electric field lines far away from this will continue to point radially towards an assumed moving charge. This virtual particle will never be outside the range of propagation of the disturbance in the electromagnetic field, since charged particles are restricted to have speeds slower than that of light, which makes it impossible to construct a Gaussian surface in this region that violates Gauss's law. Another technical difficulty that supports this is that charged particles travelling faster than or equal to the speed of light no longer have a unique retarded time. Since electric field lines are continuous, an electromagnetic pulse of radiation is generated that connects at the boundary of this disturbance travelling outwards at the speed of light. In general, any accelerating point charge radiates electromagnetic waves; however, non-radiating acceleration is possible in a system of charges. Arbitrarily moving point charge For arbitrarily moving point charges, propagation of potential fields such as Lorenz gauge fields at the speed of light needs to be accounted for by using the Liénard–Wiechert potential. Since the potentials satisfy Maxwell's equations, the fields derived for a point charge also satisfy Maxwell's equations. The electric field is expressed as:
\[ \mathbf{E}(\mathbf{r}, t) = \frac{q}{4\pi\varepsilon_0} \left[ \frac{\hat{\mathbf{n}} - \boldsymbol{\beta}}{\gamma^2 (1 - \hat{\mathbf{n}} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|^2} + \frac{\hat{\mathbf{n}} \times \big( (\hat{\mathbf{n}} - \boldsymbol{\beta}) \times \dot{\boldsymbol{\beta}} \big)}{c (1 - \hat{\mathbf{n}} \cdot \boldsymbol{\beta})^3 |\mathbf{r} - \mathbf{r}_s|} \right]_{t = t_r}, \]
where \(q\) is the charge of the point source, \(t_r\) is retarded time or the time at which the source's contribution of the electric field originated, \(\mathbf{r}_s(t)\) is the position vector of the particle, \(\hat{\mathbf{n}}\) is a unit vector pointing from the charged particle to the point in space, \(\boldsymbol{\beta}\) is the velocity of the particle divided by the speed of light, and \(\gamma = (1 - \beta^2)^{-1/2}\) is the corresponding Lorentz factor. The retarded time is given as the solution of:
\[ t_r = t - \frac{|\mathbf{r} - \mathbf{r}_s(t_r)|}{c}. \]
The uniqueness of the solution for \(t_r\) for given \(t\), \(\mathbf{r}\) and \(\mathbf{r}_s(t)\) is valid for charged particles moving slower than the speed of light. Electromagnetic radiation of accelerating charges is known to be caused by the acceleration-dependent term in the electric field, from which the relativistic correction to the Larmor formula is obtained. There exists yet another set of solutions for Maxwell's equations of the same form but for advanced time \(t_a\) instead of retarded time, given as a solution of:
\[ t_a = t + \frac{|\mathbf{r} - \mathbf{r}_s(t_a)|}{c}. \]
Since the physical interpretation of this indicates that the electric field at a point is governed by the particle's state at a point of time in the future, it is considered as an unphysical solution and hence neglected. However, there have been theories exploring the advanced time solutions of Maxwell's equations, such as the Feynman–Wheeler absorber theory. The above equation, although consistent with that of uniformly moving point charges as well as its non-relativistic limit, is not corrected for quantum-mechanical effects. Common formulæ The electric field infinitely close to a conducting surface in electrostatic equilibrium having charge density \(\sigma\) at that point is \(\sigma/\varepsilon_0\), since charges are only formed on the surface and the surface at the infinitesimal scale resembles an infinite 2D plane. In the absence of external fields, spherical conductors exhibit a uniform charge distribution on the surface and hence have the same electric field as that of a uniform spherical surface distribution. 
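The retarded-time condition above is implicit in \(t_r\); for subluminal motion the mapping is a contraction, so simple fixed-point iteration converges. A Python sketch with an assumed, purely illustrative trajectory:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def retarded_time(r_obs, trajectory, t, iterations=100):
    """Solve t_r = t - |r_obs - r_s(t_r)| / c by fixed-point iteration.
    trajectory: function mapping time -> particle position (3-vector).
    Converges because the particle speed is below c (Lipschitz constant < 1)."""
    t_r = t  # initial guess
    for _ in range(iterations):
        t_r = t - np.linalg.norm(np.asarray(r_obs, float) - trajectory(t_r)) / C
    return t_r

# Illustrative trajectory: charge moving along x at half the speed of light.
traj = lambda tt: np.array([0.5 * C * tt, 0.0, 0.0])
print(retarded_time([10.0, 0.0, 0.0], traj, t=1e-7))
```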
See also Classical electromagnetism Relativistic electromagnetism Electricity History of electromagnetic theory Electromagnetic field Magnetism Teltron tube Teledeltos, a conductive paper that may be used as a simple analog computer for modelling fields References External links Electric field in "Electricity and Magnetism", R Nave – Hyperphysics, Georgia State University Frank Wolfs's lectures at University of Rochester, chapters 23 and 24 Fields – a chapter from an online textbook Electrostatics Electromagnetic quantities Electromagnetism
Electric field
[ "Physics", "Mathematics" ]
4,078
[ "Electromagnetism", "Physical phenomena", "Electromagnetic quantities", "Physical quantities", "Quantity", "Fundamental interactions" ]
41,094
https://en.wikipedia.org/wiki/Electromagnetic%20environment
In telecommunications, the term electromagnetic environment (EME) has the following meanings: For a telecommunications system, the spatial distribution of electromagnetic fields surrounding a given site. The electromagnetic environment may be expressed in terms of the spatial and temporal distribution of electric field strength (volts per metre), irradiance (watts per square metre), or energy density (joules per cubic metre). The resulting product of the power and time distribution, in various frequency ranges, of the radiated or conducted electromagnetic emission levels that may be encountered by a military force, system, or platform when performing its assigned mission in its intended operational environment. It is the sum of electromagnetic interference; electromagnetic pulse; hazards of electromagnetic radiation to personnel, ordnance, and volatile materials; and natural phenomena effects of lightning and p-static. All electromagnetic phenomena observable in a given location. References Electromagnetic radiation Telecommunications engineering Electromagnetic compatibility Reliability engineering
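The three quantities listed above are related, for a plane wave in free space, by the impedance of free space (S = E²/Z₀ with Z₀ ≈ 377 Ω, and energy density u = S/c); this relation is a standard physics addition, not stated in the source. A small Python sketch:

```python
Z0 = 376.730  # impedance of free space, ohms
C = 299_792_458.0  # speed of light, m/s

def irradiance_from_field(e_rms):
    """Plane-wave irradiance (W/m^2) from RMS electric field strength (V/m)."""
    return e_rms**2 / Z0

def energy_density_from_field(e_rms):
    """Energy density (J/m^3) = irradiance / c for a travelling plane wave."""
    return irradiance_from_field(e_rms) / C

print(irradiance_from_field(61.4))  # ~10 W/m^2 for a 61.4 V/m field
```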
Electromagnetic environment
[ "Physics", "Engineering" ]
182
[ "Radio electronics", "Physical phenomena", "Telecommunications engineering", "Electromagnetic compatibility", "Systems engineering", "Reliability engineering", "Electromagnetic radiation", "Radiation", "Electrical engineering" ]
41,098
https://en.wikipedia.org/wiki/Electromagnetic%20radiation%20and%20health
Electromagnetic radiation can be classified into two types: ionizing radiation and non-ionizing radiation, based on the capability of a single photon with more than 10 eV energy to ionize atoms or break chemical bonds. Extreme ultraviolet and higher frequencies, such as X-rays or gamma rays, are ionizing, and these pose their own special hazards: see radiation poisoning. The field strength of electromagnetic radiation is measured in volts per meter (V/m). The most common health hazard of radiation is sunburn, which causes between approximately 100,000 and 1 million new skin cancers annually in the United States. In 2011, the World Health Organization (WHO) and the International Agency for Research on Cancer (IARC) classified radiofrequency electromagnetic fields as possibly carcinogenic to humans (Group 2B). Hazards Dielectric heating from electromagnetic radiation can create a biological hazard. For example, touching or standing around an antenna while a high-power transmitter is in operation can cause burns. The mechanism is the same as that used in a microwave oven. The heating effect varies with the power and the frequency of the electromagnetic energy, as well as the inverse square of distance to the source. The eyes and testes are particularly susceptible to radio frequency heating due to the paucity of blood flow in these areas that could otherwise dissipate the heat buildup. Workplace exposure Radio frequency (RF) energy at power density levels of 1–10 mW/cm² or higher can cause measurable heating of tissues. Typical RF energy levels encountered by the general public are well below the level needed to cause significant heating, but certain workplace environments near high-power RF sources may exceed safe exposure limits. A measure of the heating effect is the specific absorption rate (SAR), which has units of watts per kilogram (W/kg). The IEEE and many national governments have established safety limits for exposure to various frequencies of electromagnetic energy based on SAR, mainly based on ICNIRP Guidelines, which guard against thermal damage. Industrial installations for induction hardening and melting, as well as welding equipment, may produce considerably higher field strengths and require further examination. If the exposure cannot be determined from manufacturers' information, comparisons with similar systems, or analytical calculations, measurements have to be performed. The results of the evaluation help to assess possible hazards to the safety and health of workers and to define protective measures. Since electromagnetic fields may influence passive or active implants of workers, it is essential to consider the exposure at their workplaces separately in the risk assessment. Low-level exposure The World Health Organization (WHO) began a research effort in 1996 to study the health effects from the ever-increasing exposure of people to a diverse range of EMR sources. In 2011, the WHO/International Agency for Research on Cancer (IARC) classified radio frequency electromagnetic fields as possibly carcinogenic to humans (Group 2B), based on an increased risk for glioma and acoustic neuroma associated with wireless phone use. The group responsible for the classification did not quantify the risk. Furthermore, Group 2B only indicates a credible association between disease and exposure but does not rule out confounding effects with reasonable confidence. A causal relationship has yet to be established. 
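The inverse-square falloff noted in the hazards discussion above follows from the standard far-field relation S = P·G/(4πd²); the formula is a textbook addition, and the transmitter values below are illustrative only. A Python sketch:

```python
import math

def power_density(p_tx_watts, gain_linear, distance_m):
    """Far-field power density (W/m^2) from a point-like radiator:
    S = P * G / (4 * pi * d^2), i.e. inverse-square falloff."""
    return p_tx_watts * gain_linear / (4 * math.pi * distance_m**2)

# Illustrative: 100 W transmitter with antenna gain 10 (10 dBi).
for d in (1, 2, 10):  # doubling distance quarters the density
    print(d, "m:", round(power_density(100, 10, d), 2), "W/m^2")
```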
Epidemiological studies look for statistical correlations between EM exposure in the field and specific health effects. As of 2019, much of the current work is focused on the study of EM fields in relation to cancer. There are publications which support the existence of complex biological and neurological effects of weaker non-thermal electromagnetic fields (see Bioelectromagnetics), including weak ELF electromagnetic fields and modulated RF and microwave fields. Effects by frequency While the most acute exposures to harmful levels of electromagnetic radiation are immediately realized as burns, the health effects due to chronic or occupational exposure may not manifest effects for months or years. Extremely low frequency Extremely low frequency EM waves can span from 0 Hz to 3 kHz, though definitions vary across disciplines. The maximum recommended exposure for the general public is 5 kV/m. ELF waves around 50 Hz to 60 Hz are emitted by power generators, transmission lines and distribution lines, power cables, and electric appliances. Typical household exposure to ELF waves ranges in intensity from 5 V/m for a light bulb to 180 V/m for a stereo, measured using 240 V power. (120 V power systems would be unable to reach this intensity unless an appliance has an internal voltage transformer.) Overhead power lines range from 1 kV for local distribution to 1,150 kV for ultra-high-voltage lines. These can produce electric fields up to 10 kV/m on the ground directly underneath, but 50 m to 100 m away these levels return to approximately ambient. Metal equipment must be maintained at a safe distance from energized high-voltage lines. Exposure to ELF waves can induce an electric current. Because the human body is conductive, electric currents and the resulting voltage differences typically accumulate on the skin but do not reach interior tissues. People can start to perceive high-voltage charges as tingling when hair or clothing in contact with the skin stands up or vibrates. In scientific tests, only about 10% of people could detect a field intensity in the range of 2–5 kV/m. Such voltage differences can also create electric sparks, similar to a discharge of static electricity when nearly touching a grounded object. Such a shock was reported as painful by only 7% of test participants at 5 kV/m and by 50% at 10 kV/m. Associations between exposure to extremely low frequency magnetic fields (ELF MF) and various health outcomes have been investigated through a variety of epidemiological studies. A pooled analysis found consistent evidence of an effect of ELF MF on childhood leukaemia. An assessment of the burden of disease potentially resulting from ELF MF exposure in Europe found that 1.5–2% of childhood leukaemia cases might be attributable to ELF MF, but uncertainties around causal mechanisms and models of dose-response were found to be considerable. The International Agency for Research on Cancer (IARC) finds "inadequate evidence" for human carcinogenicity. Shortwave Shortwave (1.6 to 30 MHz) diathermy (where EM waves are used to produce heat) can be used as a therapeutic technique for its analgesic effect and deep muscle relaxation, but has largely been replaced by ultrasound. Temperatures in muscles can increase by 4–6 °C, and in subcutaneous fat by 15 °C. The FCC has restricted the frequencies allowed for medical treatment, and most machines in the US use 27.12 MHz. Shortwave diathermy can be applied in either continuous or pulsed mode. 
The latter came to prominence because the continuous mode produced too much heating too rapidly, making patients uncomfortable. The technique only heats tissues that are good electrical conductors, such as blood vessels and muscle. Adipose tissue (fat) receives little heating by induction fields because an electrical current is not actually going through the tissues. Studies have been performed on the use of shortwave radiation for cancer therapy and promoting wound healing, with some success. However, at a sufficiently high energy level, shortwave energy can be harmful to human health, potentially causing damage to biological tissues, for example by overheating or inducing electrical currents. The FCC limit for maximum permissible workplace exposure to shortwave radio frequency energy in the range of 3–30 MHz is a plane-wave equivalent power density of \(900/f^2\) mW/cm², where \(f\) is the frequency in MHz, and 100 mW/cm² from 0.3 to 3.0 MHz. For uncontrolled exposure to the general public, the limit is \(180/f^2\) mW/cm² between 1.34 and 30 MHz. Radio and microwave frequencies The classification of mobile phone signals as "possibly carcinogenic to humans" by the World Health Organization (WHO) ("a positive association has been observed between exposure to the agent and cancer for which a causal interpretation is considered by the Working Group to be credible, but chance, bias or confounding could not be ruled out with reasonable confidence") has been interpreted to mean that there is very little scientific evidence as to phone signal carcinogenesis. In 2011, the International Agency for Research on Cancer (IARC) classified mobile phone radiation as group 2B, "possibly carcinogenic," rather than group 2A ("probably carcinogenic") or group 1 ("is carcinogenic"). That means that there "could be some risk" of carcinogenicity, so additional research into the long-term, heavy use of mobile phones needs to be conducted. The WHO concluded in 2014 that "A large number of studies have been performed over the last two decades to assess whether mobile phones pose a potential health risk. To date, no adverse health effects have been established as being caused by mobile phone use." Since 1962, the microwave auditory effect or tinnitus has been demonstrated from radio frequency exposure at levels below those causing significant heating. Studies during the 1960s in Europe and Russia claimed to show effects on humans, especially the nervous system, from low-energy RF radiation; the studies were disputed at the time. In 2019, reporters from the Chicago Tribune tested the level of radiation from smartphones and found that certain models emitted more than reported by the manufacturers and in some cases more than the U.S. Federal Communications Commission exposure limit. It is unclear if this resulted in any harm to consumers. Some problems apparently involved the phone's ability to detect proximity to a human body and lower the radio power. In response, the FCC began testing some phones itself rather than relying solely on manufacturer certifications. Microwave and other radio frequencies cause heating, and this can cause burns or eye damage if delivered in high intensity, or hyperthermia as with any powerful heat source. Microwave ovens use this form of radiation, and have shielding to prevent it from leaking out and unintentionally heating nearby objects or people. 
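The occupational limit formula reconstructed above can be evaluated in a few lines; the specific constants follow the FCC OET Bulletin 65 limits as best understood here and should be verified against the current rules. A Python sketch:

```python
def fcc_occupational_mpe(f_mhz):
    """Plane-wave equivalent power density limit (mW/cm^2) for
    controlled/occupational exposure in the 0.3-30 MHz band."""
    if 0.3 <= f_mhz < 3.0:
        return 100.0
    if 3.0 <= f_mhz <= 30.0:
        return 900.0 / f_mhz**2
    raise ValueError("outside the 0.3-30 MHz range handled here")

# At 27.12 MHz, the common diathermy frequency mentioned in the text:
for f in (1.0, 10.0, 27.12):
    print(f, "MHz:", round(fcc_occupational_mpe(f), 2), "mW/cm^2")
```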
Millimeter waves In 2009, the US TSA introduced full-body scanners as a primary screening modality in airport security, first as backscatter X-ray scanners, which use ionizing radiation and which the European Union banned in 2011 due to health and safety concerns. These were followed by non-ionizing millimeter wave scanners. Likewise, WiGig for personal area networks has opened the 60 GHz and above microwave band to SAR exposure regulations. Previously, microwave applications in these bands were for point-to-point satellite communication with minimal human exposure. Infrared Infrared wavelengths longer than 750 nm can produce changes in the lens of the eye. Glassblower's cataract is an example of a heat injury that damages the anterior lens capsule among unprotected glass and iron workers. Cataract-like changes can occur in workers who observe glowing masses of glass or iron without protective eyewear for prolonged periods over many years. Exposing skin to infrared radiation near visible light (IR-A) leads to increased production of free radicals. Short-term exposure can be beneficial (activating protective responses), while prolonged exposure can lead to photoaging. Another important factor is the distance between the worker and the source of radiation. In the case of arc welding, infrared radiation decreases rapidly as a function of distance, so that farther than three feet away from where welding takes place, it no longer poses an ocular hazard, but ultraviolet radiation still does. This is why welders wear tinted glasses and surrounding workers only have to wear clear ones that filter UV. Visible light Photic retinopathy is damage to the macular area of the eye's retina that results from prolonged exposure to sunlight, particularly with dilated pupils. This can happen, for example, while observing a solar eclipse without suitable eye protection. The Sun's radiation creates a photochemical reaction that can result in visual dazzling and a scotoma. The initial lesions and edema will disappear after several weeks, but may leave behind a permanent reduction in visual acuity. Moderate and high-power lasers are potentially hazardous because they can burn the retina of the eye, or even the skin. To control the risk of injury, various specifications – for example ANSI Z136 in the US, EN 60825-1/A2 in Europe, and IEC 60825 internationally – define "classes" of lasers depending on their power and wavelength. Regulations prescribe required safety measures, such as labeling lasers with specific warnings, and wearing laser safety goggles during operation (see laser safety). As with its infrared and ultraviolet radiation dangers, welding creates an intense brightness in the visible light spectrum, which may cause temporary flash blindness. Some sources state that there is no minimum safe distance for exposure to these radiation emissions without adequate eye protection. Ultraviolet Sunlight includes sufficient ultraviolet power to cause sunburn within hours of exposure, and the burn severity increases with the duration of exposure. This effect is a response of the skin called erythema, which is caused by a sufficiently strong dose of UV-B. The Sun's UV output is divided into UV-A and UV-B: solar UV-A flux is 100 times that of UV-B, but the erythema response is 1,000 times higher for UV-B. This exposure can increase at higher altitudes and when reflected by snow, ice, or sand. 
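The UV-A/UV-B comparison above implies that, despite the 100-fold larger UV-A flux, UV-B dominates the erythemal response by roughly a factor of ten. A two-line check in Python, using relative units only:

```python
uva_flux, uvb_flux = 100.0, 1.0        # solar UV-A flux is ~100x that of UV-B
uva_weight, uvb_weight = 1.0, 1000.0   # erythema response is ~1000x higher for UV-B
print((uvb_flux * uvb_weight) / (uva_flux * uva_weight))  # ~10: UV-B dominates
```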
The UV-B flux is 2–4 times greater during the middle 4–6 hours of the day, and is not significantly absorbed by cloud cover or up to a meter of water. Ultraviolet light, specifically UV-B, has been shown to cause cataracts, and there is some evidence that sunglasses worn at an early age can slow their development in later life. Most UV light from the sun is filtered out by the atmosphere, and consequently airline pilots often have high rates of cataracts because of the increased levels of UV radiation in the upper atmosphere. It is hypothesized that depletion of the ozone layer and a consequent increase in levels of UV light on the ground may increase future rates of cataracts. Note that the lens filters UV light, so if it is removed via surgery, one may be able to see UV light. Prolonged exposure to ultraviolet radiation from the sun can lead to melanoma and other skin malignancies. Clear evidence establishes ultraviolet radiation, especially the non-ionizing medium wave UVB, as the cause of most non-melanoma skin cancers, which are the most common forms of cancer in the world. UV rays can also cause wrinkles, liver spots, moles, and freckles. In addition to sunlight, other sources include tanning beds and bright desk lights. Damage is cumulative over one's lifetime, so that permanent effects may not be evident for some time after exposure. Ultraviolet radiation of wavelengths shorter than 300 nm (actinic rays) can damage the corneal epithelium. This is most commonly the result of exposure to the sun at high altitude, and in areas where shorter wavelengths are readily reflected from bright surfaces, such as snow, water, and sand. UV generated by a welding arc can similarly cause damage to the cornea, known as "arc eye" or welding flash burn, a form of photokeratitis. Fluorescent light bulbs and tubes internally produce ultraviolet light. Normally this is converted to visible light by the phosphor film inside a protective coating. When the film is cracked by mishandling or faulty manufacturing, UV may escape at levels that could cause sunburn or even skin cancer. Regulation In the United States, non-ionizing radiation is regulated in the Radiation Control for Health and Safety Act of 1968 and the Occupational Safety and Health Act of 1970. In Canada, various federal acts govern non-ionizing radiation by originating source, such as the Radiation Emitting Devices Act, the Canada Consumer Product Safety Act, and the Radiocommunication Act. For situations not under federal jurisdiction, Canadian provinces individually set regulations around use of non-ionizing radiation. 
See also Background radiation Bioinitiative Report Biological effects of radiation on the epigenome Central nervous system effects from radiation exposure during spaceflight Cosmic ray COSMOS cohort study Directed energy weapon Electromagnetic hypersensitivity Electromagnetism EMF measurement Health threat from cosmic rays Light ergonomics Magnetobiology Microwave Wireless device radiation and health Personal RF safety monitor Specific absorption rate References Further reading (over 100 pages) External links Information page on electromagnetic fields at the World Health Organization web site CDC – Electric and Magnetic Fields – NIOSH Workplace Safety and Health Topic Biological Effects of Power Frequency Electric and Magnetic Fields (May 1989) (110 pages) prepared for US Congress Office of Technology Assessment by Indira Nair, M. Granger Morgan, Keith Florig, Department of Engineering and Public Policy Carnegie Mellon University Environmental controversies Medical physics Radiation health effects Radiobiology Health effects by subject
Electromagnetic radiation and health
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
3,417
[ "Radiation health effects", "Applied and interdisciplinary physics", "Radiobiology", "Medical physics", "Radiation effects", "Radioactivity" ]
41,130
https://en.wikipedia.org/wiki/Extinction%20ratio
In telecommunications, extinction ratio (re) is the ratio of two optical power levels of a digital signal generated by an optical source, e.g., a laser diode. The extinction ratio may be expressed as a fraction, in dB, or as a percentage. It may be given by
\[ r_e = \frac{P_1}{P_0}, \]
where P1 is the optical power level generated when the light source is on, and P0 is the power level generated when the light source is off. The polarization extinction ratio (PER) is the ratio of optical powers of perpendicular polarizations, usually called TE (transverse electric) and TM (transverse magnetic). In telecommunications, the PER is used to characterize the degree of polarization in a polarization-maintaining device or fiber. For coherent transmitters and receivers, the PER is a key parameter, since the X polarization and Y polarization are coded with different signals. References Material also incorporated from MIL-STD-2196. Data transmission Telecommunication theory Engineering ratios Laser science
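The dB form of the ratio defined above is a one-line conversion. A Python sketch with illustrative on/off power levels:

```python
import math

def extinction_ratio_db(p1, p0):
    """Extinction ratio in dB from on-level P1 and off-level P0 (same units):
    10 * log10(P1 / P0), per the definition r_e = P1 / P0."""
    return 10 * math.log10(p1 / p0)

print(extinction_ratio_db(1.0, 0.1))  # 10.0 dB for a 10:1 on/off power ratio
```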
Extinction ratio
[ "Mathematics", "Engineering" ]
199
[ "Quantity", "Metrics", "Engineering ratios" ]
41,136
https://en.wikipedia.org/wiki/Fail-safe
In engineering, a fail-safe is a design feature or practice that, in the event of a failure of the design feature, inherently responds in a way that will cause minimal or no harm to other equipment, to the environment or to people. Unlike inherent safety to a particular hazard, a system being "fail-safe" does not mean that failure is naturally inconsequential, but rather that the system's design prevents or mitigates unsafe consequences of the system's failure. If and when a "fail-safe" system fails, it remains at least as safe as it was before the failure. Since many types of failure are possible, failure mode and effects analysis is used to examine failure situations and recommend safety design and procedures. Some systems can never be made fail-safe, as continuous availability is needed. Redundancy, fault tolerance, or contingency plans are used for these situations (e.g. multiple independently controlled and fuel-fed engines). Examples Mechanical or physical Examples include: Roller-shutter fire doors that are activated by building alarm systems or local smoke detectors must close automatically when signaled regardless of power. In case of power outage the coiling fire door does not need to close, but must be capable of automatic closing when given a signal from the building alarm systems or smoke detectors. A temperature-sensitive fusible link may be employed to hold the fire doors open against gravity or a closing spring. In case of fire, the link melts and releases the doors, and they close. Some airport baggage carts require that the person hold down a given cart's handbrake switch at all times; if the handbrake switch is released, the brake will activate, and assuming that all other portions of the braking system are working properly, the cart will stop. The handbrake-holding requirement thus both operates according to the principles of "fail-safety" and contributes to (but does not necessarily ensure) the fail-security of the system. This is an example of a dead man's switch. Lawnmowers and snow blowers have a hand-closed lever that must be held down at all times. If it is released, it stops the blade's or rotor's rotation. This also functions as a dead man's switch. Air brakes on railway trains and air brakes on trucks. The brakes are held in the "off" position by air pressure created in the brake system. Should a brake line split, or a carriage become uncoupled, the air pressure will be lost and the brakes applied, by springs in the case of trucks, or by a local air reservoir in trains. It is impossible to drive a truck with a serious leak in the air brake system. (Trucks may also employ wig wags to indicate low air pressure.) Motorized gates – In case of power outage the gate can be pushed open by hand with no crank or key required. However, as this would allow virtually anyone to go through the gate, a fail-secure design is used: In a power outage, the gate can only be opened by a hand crank that is usually kept in a safe area or under lock and key. When such a gate provides vehicle access to homes, a fail-safe design is used, where the door opens to allow fire department access. Safety valves – Various devices that operate with fluids use fuses or safety valves as fail-safe mechanisms. A railway semaphore signal is specially designed so that, should the cable controlling the signal break, the arm returns to the "danger" position, preventing any trains passing the inoperative signal. 
Isolation valves, and control valves, that are used for example in systems containing hazardous substances, can be designed to close upon loss of power, for example by spring force. This is known as fail-closed upon loss of power. An elevator has brakes that are held off brake pads by the tension of the elevator cable. If the cable breaks, tension is lost and the brakes latch on the rails in the shaft, so that the elevator cabin does not fall. Vehicle air conditioning – Defrost controls require vacuum for diverter damper operation for all functions except defrost. If vacuum fails, defrost is still available. Electrical or electronic Examples include: Many devices are protected from short circuit by fuses, circuit breakers, or current limiting circuits. The electrical interruption under overload conditions will prevent damage or destruction of wiring or circuit devices due to overheating. Avionics using redundant systems to perform the same computation using three different systems. Different results indicate a fault in the system. Drive-by-wire and fly-by-wire controls such as an Accelerator Position Sensor typically have two potentiometers which read in opposite directions, such that moving the control will result in one reading becoming higher, and the other generally equally lower. Mismatches between the two readings indicates a fault in the system, and the ECU can often deduce which of the two readings is faulty. Traffic light controllers use a Conflict Monitor Unit to detect faults or conflicting signals and switch an intersection to an all flashing error signal, rather than displaying potentially dangerous conflicting signals, e.g. showing green in all directions. The automatic protection of programs and/or processing systems when a computer hardware or software failure is detected in a computer system. A classic example is a watchdog timer. See Fail-safe (computer). A control operation or function that prevents improper system functioning or catastrophic degradation in the event of circuit malfunction or operator error; for example, the failsafe track circuit used to control railway block signals. The fact that a flashing amber is more permissive than a solid amber on many railway lines is a sign of a failsafe, as the relay, if not working, will revert to a more restrictive setting. The iron pellet ballast on the Bathyscaphe is dropped to allow the submarine to ascend. The ballast is held in place by electromagnets. If electrical power fails, the ballast is released, and the submarine then ascends to safety. Many nuclear reactor designs have neutron-absorbing control rods suspended by electromagnets. If the power fails, they drop under gravity into the core and shut down the chain reaction in seconds by absorbing the neutrons needed for fission to continue. In industrial automation, alarm circuits are usually "normally closed". This ensures that in case of a wire break the alarm will be triggered. If the circuit were normally open, a wire failure would go undetected, while blocking actual alarm signals. Analog sensors and modulating actuators can usually be installed and wired such that the circuit failure results in an out-of-bound reading – see current loop. For example, a potentiometer indicating pedal position might only travel from 20% to 80% of its full range, such that a cable break or short results in a 0% or 100% reading. In control systems, critically important signals can be carried by a complementary pair of wires (<signal> and <not_signal>). 
Only states where the two signals are opposite (one is high, the other low) are valid. If both are high or both are low the control system knows that something is wrong with the sensor or connecting wiring. Simple failure modes (dead sensor, cut or unplugged wires) are thereby detected. An example would be a control system reading both the normally open (NO) and normally closed (NC) poles of a SPDT selector switch against common, and checking them for coherency before reacting to the input. In HVAC control systems, actuators that control dampers and valves may be fail-safe, for example, to prevent coils from freezing or rooms from overheating. Older pneumatic actuators were inherently fail-safe because if the air pressure against the internal diaphragm failed, the built-in spring would push the actuator to its home position – of course the home position needed to be the "safe" position. Newer electrical and electronic actuators need additional components (springs or capacitors) to automatically drive the actuator to home position upon loss of electrical power. Programmable logic controllers (PLCs). To make a PLC fail-safe the system does not require energization to stop the drives associated. For example, usually, an emergency stop is a normally closed contact. In the event of a power failure this would remove the power directly from the coil and also the PLC input. Hence, a fail-safe system. If a voltage regulator fails, it can destroy connected equipment. A crowbar (circuit) prevents damage by short-circuiting the power supply as soon as it detects overvoltage. Procedural safety As well as physical devices and systems fail-safe procedures can be created so that if a procedure is not carried out or carried out incorrectly no dangerous action results. For example: Spacecraft trajectory - During early Apollo program missions to the Moon, the spacecraft was put on a free return trajectory — if the engines had failed at lunar orbit insertion, the craft would have safely coasted back to Earth. The pilot of an aircraft landing on an aircraft carrier increases the throttle to full power at touchdown. If the arresting wires fail to capture the aircraft, it is able to take off again; this is an example of fail-safe practice. In railway signalling signals which are not in active use for a train are required to be kept in the 'danger' position. The default position of every controlled absolute signal is therefore "danger", and therefore a positive action — setting signals to "clear" — is required before a train may pass. This practice also ensures that, in case of a fault in the signalling system, an incapacitated signalman, or the unexpected entry of a train, that a train will never be shown an erroneous "clear" signal. Railroad engineers are instructed that a railway signal showing a confusing, contradictory or unfamiliar aspect (for example a colour light signal that has suffered an electrical failure and is showing no light at all) must be treated as showing "danger". In this way, the driver contributes to the fail-safety of the system. Other terminology Fail-safe (foolproof) devices are also known as poka-yoke devices. Poka-yoke, a Japanese term, was coined by Shigeo Shingo, a quality expert. "Safe to fail" refers to civil engineering designs such as the Room for the River project in Netherlands and the Thames Estuary 2100 Plan which incorporate flexible adaptation strategies or climate change adaptation which provide for, and limit, damage, should severe events such as 500-year floods occur. 
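The complementary-pair validation described in the electrical examples above (reading both the NO and NC poles of a selector and checking coherence before acting) can be sketched in a few lines. The following Python illustration uses invented names and is not any particular PLC's API:

```python
def read_selector(no_contact: bool, nc_contact: bool) -> bool:
    """Validate a SPDT selector read via both poles: exactly one of the
    normally-open / normally-closed contacts should be asserted. Equal
    readings indicate a dead sensor, cut wire, or short, so fail safe."""
    if no_contact == nc_contact:
        raise RuntimeError("incoherent input: enter safe state")
    return no_contact  # True = selected, False = not selected

print(read_selector(True, False))    # valid reading: selected
# read_selector(False, False) raises -> the controller enters its safe state
```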
Fail safe and fail secure Fail-safe and fail-secure are distinct concepts. Fail-safe means that a device will not endanger lives or property when it fails. Fail-secure, also called fail-closed, means that access or data will not fall into the wrong hands in a security failure. Sometimes the approaches suggest opposite solutions. For example, if a building catches fire, fail-safe systems would unlock doors to ensure quick escape and allow firefighters inside, while fail-secure would lock doors to prevent unauthorized access to the building. The opposite of fail-closed is called fail-open. Fail active operational Fail active operational can be installed on systems that have a high degree of redundancy so that a single failure of any part of the system can be tolerated (fail active operational) and a second failure can be detected – at which point the system will turn itself off (uncouple, fail passive). One way of accomplishing this is to have three identical systems installed, and a control logic which detects discrepancies. An example for this are many aircraft systems, among them inertial navigation systems and pitot tubes. Failsafe point During the Cold War, "failsafe point" was the term used for the point of no return for American Strategic Air Command nuclear bombers, just outside Soviet airspace. In the event of receiving an attack order, the bombers were required to linger at the failsafe point and wait for a second confirming order; until one was received, they would not arm their bombs or proceed further. The design was to prevent any single failure of the American command system causing nuclear war. This sense of the term entered the American popular lexicon with the publishing of the 1962 novel Fail-Safe. (Other nuclear war command control systems have used the opposite scheme, fail-deadly, which requires continuous or regular proof that an enemy first-strike attack has not occurred to prevent the launching of a nuclear strike.) See also Fail-fast system Control theory Dead man's switch EIA-485 Elegant degradation Failing badly Fail-deadly Fault tolerance IEC 61508 Interlock Safe-life design Safety engineering References Safety Fault-tolerant computer systems
Fail-safe
[ "Technology", "Engineering" ]
2,628
[ "Systems engineering", "Fault-tolerant computer systems", "Reliability engineering", "Computer systems" ]
41,148
https://en.wikipedia.org/wiki/Optical%20amplifier
An optical amplifier is a device that amplifies an optical signal directly, without the need to first convert it to an electrical signal. An optical amplifier may be thought of as a laser without an optical cavity, or one in which feedback from the cavity is suppressed. Optical amplifiers are important in optical communication and laser physics. They are used as optical repeaters in the long distance fiber-optic cables which carry much of the world's telecommunication links. There are several different physical mechanisms that can be used to amplify a light signal, which correspond to the major types of optical amplifiers. In doped fiber amplifiers and bulk lasers, stimulated emission in the amplifier's gain medium causes amplification of incoming light. In semiconductor optical amplifiers (SOAs), electron–hole recombination occurs. In Raman amplifiers, Raman scattering of incoming light with phonons in the lattice of the gain medium produces photons coherent with the incoming photons. Parametric amplifiers use parametric amplification. History The principle of optical amplification was invented by Gordon Gould on November 13, 1957. He filed US Patent US80453959A on April 6, 1959, titled "Light Amplifiers Employing Collisions to Produce Population Inversions" (subsequently amended as a continuation in part and finally issued on May 4, 1988). The patent covered "the amplification of light by the stimulated emission of photons from ions, atoms or molecules in gaseous, liquid or solid state." In total, Gould obtained 48 patents related to the optical amplifier that covered 80% of the lasers on the market at the time of issuance. Gould co-founded an optical telecommunications equipment firm, Optelecom Inc., that helped start Ciena Corp with his former head of Light Optics Research, David Huber, and Kevin Kimberlin. Huber and Steve Alexander of Ciena invented the dual-stage optical amplifier that was a key to the first dense wave division multiplexing (DWDM) system, which they released in June 1996. This marked the start of optical networking. Its significance was recognized at the time by optical authority Shoichi Sudo and technology analyst George Gilder in 1997, when Sudo wrote that optical amplifiers "will usher in a worldwide revolution called the Information Age" and Gilder compared the optical amplifier to the integrated circuit in importance, predicting that it would make possible the Age of Information. Optically amplified WDM systems are the common basis of all local, metro, national, intercontinental and subsea telecommunications networks and the technology of choice for the fiber optic backbones of the Internet (e.g. fiber-optic cables form a basis of modern-day computer networking). Laser amplifiers Almost any laser active gain medium can be pumped to produce gain for light at the wavelength of a laser made with the same material as its gain medium. Such amplifiers are commonly used to produce high power laser systems. Special types such as regenerative amplifiers and chirped-pulse amplifiers are used to amplify ultrashort pulses. Solid-state amplifiers Solid-state amplifiers are optical amplifiers that use a wide range of doped solid-state materials (Nd:YAG, Yb:YAG, Ti:Sa) and different geometries (disk, slab, rod) to amplify optical signals. The variety of materials allows the amplification of different wavelengths, while the shape of the medium can distinguish between those more suitable for energy or average power scaling. 
Beside their use in fundamental research from gravitational wave detection to high energy physics at the National Ignition Facility, they can also be found in many of today's ultra-short pulsed lasers. Doped-fiber amplifiers Doped-fiber amplifiers (DFAs) are optical amplifiers that use a doped optical fiber as a gain medium to amplify an optical signal. They are related to fiber lasers. The signal to be amplified and a pump laser are multiplexed into the doped fiber, and the signal is amplified through interaction with the doping ions. Amplification is achieved by stimulated emission of photons from dopant ions in the doped fiber. The pump laser excites ions into a higher energy state, from which they can decay via stimulated emission of a photon at the signal wavelength back to a lower energy level. The excited ions can also decay spontaneously (spontaneous emission) or even through nonradiative processes involving interactions with phonons of the glass matrix. These last two decay mechanisms compete with stimulated emission, reducing the efficiency of light amplification. The amplification window of an optical amplifier is the range of optical wavelengths for which the amplifier yields a usable gain. The amplification window is determined by the spectroscopic properties of the dopant ions, the glass structure of the optical fiber, and the wavelength and power of the pump laser. Although the electronic transitions of an isolated ion are very well defined, broadening of the energy levels occurs when the ions are incorporated into the glass of the optical fiber, and thus the amplification window is also broadened. This broadening is both homogeneous (all ions exhibit the same broadened spectrum) and inhomogeneous (different ions in different glass locations exhibit different spectra). Homogeneous broadening arises from the interactions with phonons of the glass, while inhomogeneous broadening is caused by differences in the glass sites where different ions are hosted. Different sites expose ions to different local electric fields, which shifts the energy levels via the Stark effect. In addition, the Stark effect also removes the degeneracy of energy states having the same total angular momentum (specified by the quantum number J). Thus, for example, the trivalent erbium ion (Er3+) has a ground state with J = 15/2, and in the presence of an electric field splits into J + 1/2 = 8 sublevels with slightly different energies. The first excited state has J = 13/2 and therefore a Stark manifold with 7 sublevels. Transitions from the J = 13/2 excited state to the J = 15/2 ground state are responsible for the gain at 1500 nm wavelength. The gain spectrum of the EDFA has several peaks that are smeared by the above broadening mechanisms. The net result is a very broad spectrum (30 nm in silica, typically). The broad gain bandwidth of fiber amplifiers makes them particularly useful in wavelength-division multiplexed communications systems, as a single amplifier can be utilized to amplify all signals carried on a fiber whose wavelengths fall within the gain window. An erbium-doped waveguide amplifier (EDWA) is an optical amplifier that uses a waveguide to boost an optical signal. Basic principle of EDFA A relatively high-powered beam of light is mixed with the input signal using a wavelength selective coupler (WSC). The input signal and the excitation light must be at significantly different wavelengths. The mixed light is guided into a section of fiber with erbium ions included in the core. 
This high-powered light beam excites the erbium ions to their higher-energy state. When the photons belonging to the signal, at a different wavelength from the pump light, meet the excited erbium ions, the erbium ions give up some of their energy to the signal and return to their lower-energy state. A significant point is that the erbium gives up its energy in the form of additional photons which are exactly in the same phase and direction as the signal being amplified, so the signal is amplified along its direction of travel only. This is not unusual – when an atom "lases" it always gives up its energy in the same direction and phase as the incoming light. Thus all of the additional signal power is guided in the same fiber mode as the incoming signal. An optical isolator is usually placed at the output to prevent reflections returning from the attached fiber. Such reflections disrupt amplifier operation and in the extreme case can cause the amplifier to become a laser. The erbium-doped amplifier is a high-gain amplifier. Noise The principal source of noise in DFAs is amplified spontaneous emission (ASE), which has a spectrum approximately the same as the gain spectrum of the amplifier. The noise figure of an ideal DFA is 3 dB, while practical amplifiers can have noise figures as large as 6–8 dB. As well as decaying via stimulated emission, electrons in the upper energy level can also decay by spontaneous emission, which occurs at random, depending upon the glass structure and inversion level. Photons are emitted spontaneously in all directions, but a proportion of them will be emitted in a direction that falls within the numerical aperture of the fiber and are thus captured and guided by the fiber. Those captured photons may then interact with other dopant ions and are thus amplified by stimulated emission. The initial spontaneous emission is therefore amplified in the same manner as the signals, hence the term amplified spontaneous emission. ASE is emitted by the amplifier in both the forward and reverse directions, but only the forward ASE is a direct concern to system performance, since that noise co-propagates with the signal to the receiver, where it degrades system performance. Counter-propagating ASE can, however, lead to degradation of the amplifier's performance, since the ASE can deplete the inversion level and thereby reduce the gain of the amplifier and increase the noise produced relative to the desired signal gain. Noise figure can be analyzed in both the optical domain and the electrical domain. In the optical domain, measurement of the ASE, the optical signal gain, and the signal wavelength using an optical spectrum analyzer permits calculation of the noise figure. For the electrical measurement method, the detected photocurrent noise is evaluated with a low-noise electrical spectrum analyzer, which along with measurement of the amplifier gain permits a noise figure measurement. Generally, the optical technique provides a simpler method, though it does not capture excess noise effects picked up by the electrical method, such as multi-path interference (MPI) noise generation. In both methods, attention to effects such as the spontaneous emission accompanying the input signal is critical to accurate measurement of noise figure. Gain saturation Gain is achieved in a DFA due to population inversion of the dopant ions. The inversion level of a DFA is set, primarily, by the power of the pump wavelength and the power at the amplified wavelengths.
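The interplay between pump power, signal power and gain described in this section can be illustrated numerically. The sketch below assumes an idealized, homogeneously broadened amplifier obeying the standard implicit relation G = G0·exp(−(G − 1)·P_in/P_sat); the 30 dB small-signal gain and 10 mW saturation power are hypothetical round numbers rather than the parameters of any particular device.

import math

def saturated_gain(g0_db, p_in_mw, p_sat_mw):
    """Gain of an idealized homogeneously broadened amplifier, from the
    implicit relation G = G0 * exp(-(G - 1) * P_in / P_sat), solved by
    bisection (the root always lies between G = 1 and G = G0)."""
    g0 = 10 ** (g0_db / 10)            # small-signal gain, linear units
    lo, hi = 1.0, g0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * math.exp((mid - 1) * p_in_mw / p_sat_mw) > g0:
            hi = mid                   # overshoots: saturation is stronger
        else:
            lo = mid
    return 10 * math.log10(0.5 * (lo + hi))

# Hypothetical example: 30 dB small-signal gain, 10 mW saturation power.
for p_in in (0.001, 0.01, 0.1, 1.0):
    print(f"P_in = {p_in:6.3f} mW -> G = {saturated_gain(30.0, p_in, 10.0):5.2f} dB")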
As the signal power increases, or the pump power decreases, the inversion level will reduce and thereby the gain of the amplifier will be reduced. This effect is known as gain saturation – as the signal level increases, the amplifier saturates and cannot produce any more output power, and therefore the gain reduces. Saturation is also commonly known as gain compression. To achieve optimum noise performance, DFAs are operated under a significant amount of gain compression (10 dB typically), since that reduces the rate of spontaneous emission, thereby reducing ASE. Another advantage of operating the DFA in the gain saturation region is that small fluctuations in the input signal power are reduced in the output amplified signal: smaller input signal powers experience larger (less saturated) gain, while larger input powers see less gain. The leading edge of a pulse is amplified until the saturation energy of the gain medium is reached, and under some conditions the width (FWHM) of the pulse is reduced. Inhomogeneous broadening effects Due to the inhomogeneous portion of the linewidth broadening of the dopant ions, the gain spectrum has an inhomogeneous component and gain saturation occurs, to a small extent, in an inhomogeneous manner. This effect is known as spectral hole burning because a high power signal at one wavelength can 'burn' a hole in the gain for wavelengths close to that signal by saturation of the inhomogeneously broadened ions. Spectral holes vary in width depending on the characteristics of the optical fiber in question and the power of the burning signal, but are typically less than 1 nm at the short wavelength end of the C-band, and a few nm at the long wavelength end of the C-band. The depth of the holes is very small, though, making them difficult to observe in practice. Polarization effects Although the DFA is essentially a polarization independent amplifier, a small proportion of the dopant ions interact preferentially with certain polarizations and a small dependence on the polarization of the input signal may occur (typically < 0.5 dB). This is called polarization dependent gain (PDG). The absorption and emission cross sections of the ions can be modeled as ellipsoids with the major axes aligned at random in all directions in different glass sites. The random distribution of the orientation of the ellipsoids in a glass produces a macroscopically isotropic medium, but a strong pump laser induces an anisotropic distribution by selectively exciting those ions that are more aligned with the optical field vector of the pump. Also, those excited ions aligned with the signal field produce more stimulated emission. The change in gain is thus dependent on the alignment of the polarizations of the pump and signal lasers – i.e. whether the two lasers are interacting with the same sub-set of dopant ions or not. In an ideal doped fiber without birefringence, the PDG would be inconveniently large. Fortunately, in optical fibers small amounts of birefringence are always present and, furthermore, the fast and slow axes vary randomly along the fiber length. A typical DFA is several tens of meters long, enough to already show this randomness of the birefringence axes. These two combined effects (which in transmission fibers give rise to polarization mode dispersion) produce a misalignment of the relative polarizations of the signal and pump lasers along the fiber, thus tending to average out the PDG.
The result is that PDG is very difficult to observe in a single amplifier (but is noticeable in links with several cascaded amplifiers). Erbium-doped optical fiber amplifiers The erbium-doped fiber amplifier (EDFA) is the most deployed fiber amplifier, as its amplification window coincides with the third transmission window of silica-based optical fiber. The core of a silica fiber is doped with trivalent erbium ions (Er3+) and can be efficiently pumped with a laser at or near wavelengths of 980 nm and 1480 nm, and gain is exhibited in the 1550 nm region. The EDFA amplification region varies from application to application and can be anywhere from a few nm up to ~80 nm. Typical use of EDFA in telecommunications calls for Conventional, or C-band, amplifiers (from ~1525 nm to ~1565 nm) or Long, or L-band, amplifiers (from ~1565 nm to ~1610 nm). Both of these bands can be amplified by EDFAs, but it is normal to use two different amplifiers, each optimized for one of the bands. The principal difference between C- and L-band amplifiers is that a longer length of doped fiber is used in L-band amplifiers. The longer length of fiber allows a lower inversion level to be used, thereby giving emission at longer wavelengths (due to the band structure of erbium in silica) while still providing a useful amount of gain. EDFAs have two commonly used pumping bands – 980 nm and 1480 nm. The 980 nm band has a higher absorption cross-section and is generally used where low-noise performance is required. The absorption band is relatively narrow and so wavelength stabilised laser sources are typically needed. The 1480 nm band has a lower, but broader, absorption cross-section and is generally used for higher power amplifiers. A combination of 980 nm and 1480 nm pumping is generally utilised in amplifiers. Gain and lasing in erbium-doped fibers were first demonstrated in 1986–87 by two groups: one including David N. Payne, R. Mears, I. M. Jauncey and L. Reekie, from the University of Southampton, and one from AT&T Bell Laboratories, consisting of E. Desurvire, P. Becker, and J. Simpson. The dual-stage optical amplifier which enabled dense wavelength-division multiplexing (DWDM) was invented by Stephen B. Alexander at Ciena Corporation. Doped fiber amplifiers for other wavelength ranges Thulium-doped fiber amplifiers have been used in the S-band (1450–1490 nm) and praseodymium-doped amplifiers in the 1300 nm region. However, those regions have not seen any significant commercial use so far and so those amplifiers have not been the subject of as much development as the EDFA. However, ytterbium-doped fiber lasers and amplifiers, operating near 1 micrometre wavelength, have many applications in industrial processing of materials, as these devices can be made with extremely high output power (tens of kilowatts). Semiconductor optical amplifier Semiconductor optical amplifiers (SOAs) are amplifiers which use a semiconductor to provide the gain medium. These amplifiers have a similar structure to Fabry–Pérot laser diodes but with anti-reflection design elements at the end faces. Recent designs include anti-reflective coatings and tilted waveguide and window regions, which can reduce end face reflection to less than 0.001%. Since this creates a loss of power from the cavity which is greater than the gain, it prevents the amplifier from acting as a laser. Another type of SOA consists of two regions.
One part has the structure of a Fabry–Pérot laser diode and the other has a tapered geometry in order to reduce the power density on the output facet. Semiconductor optical amplifiers are typically made from group III-V compound semiconductors such as GaAs/AlGaAs, InP/InGaAs, InP/InGaAsP and InP/InAlGaAs, though any direct band gap semiconductors, such as the II-VI compounds, could conceivably be used. Such amplifiers are often used in telecommunication systems in the form of fiber-pigtailed components, operating at signal wavelengths between 850 nm and 1600 nm and generating gains of up to 30 dB. The semiconductor optical amplifier is of small size and electrically pumped. It can be potentially less expensive than the EDFA and can be integrated with semiconductor lasers, modulators, etc. However, the performance is still not comparable with the EDFA. The SOA has higher noise, lower gain, moderate polarization dependence and high nonlinearity with fast transient time. The main advantage of the SOA is that all four types of nonlinear operations (cross gain modulation, cross phase modulation, wavelength conversion and four wave mixing) can be conducted. Furthermore, an SOA can be run with a low power laser. These nonlinearities originate from the short upper-state lifetime (a nanosecond or less), so that the gain reacts rapidly to changes of pump or signal power; the changes of gain also cause phase changes which can distort the signals. This nonlinearity presents the most severe problem for optical communication applications. However, it provides the possibility for gain in different wavelength regions from the EDFA. "Linear optical amplifiers" using gain-clamping techniques have been developed. High optical nonlinearity makes semiconductor amplifiers attractive for all-optical signal processing such as all-optical switching and wavelength conversion. There has been much research on semiconductor optical amplifiers as elements for optical signal processing, wavelength conversion, clock recovery, signal demultiplexing, and pattern recognition. Vertical-cavity SOA A recent addition to the SOA family is the vertical-cavity SOA (VCSOA). These devices are similar in structure to, and share many features with, vertical-cavity surface-emitting lasers (VCSELs). The major difference when comparing VCSOAs and VCSELs is the reduced mirror reflectivity used in the amplifier cavity. With VCSOAs, reduced feedback is necessary to prevent the device from reaching lasing threshold. Due to the extremely short cavity length, and correspondingly thin gain medium, these devices exhibit very low single-pass gain (typically on the order of a few percent) and also a very large free spectral range (FSR). The small single-pass gain requires relatively high mirror reflectivity to boost the total signal gain. In addition to boosting the total signal gain, the use of the resonant cavity structure results in a very narrow gain bandwidth; coupled with the large FSR of the optical cavity, this effectively limits operation of the VCSOA to single-channel amplification. Thus, VCSOAs can be seen as amplifying filters. Given their vertical-cavity geometry, VCSOAs are resonant cavity optical amplifiers that operate with the input/output signal entering/exiting normal to the wafer surface. In addition to their small size, the surface normal operation of VCSOAs leads to a number of advantages, including low power consumption, low noise figure, polarization insensitive gain, and the ability to fabricate high fill factor two-dimensional arrays on a single semiconductor chip.
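To see why the free spectral range of such a short cavity is so large, consider a rough sketch with representative (assumed, not device-specific) numbers — a roughly 2 μm effective cavity length and a semiconductor refractive index of about 3.5:

# Free spectral range of a short vertical cavity: delta_lambda = lambda^2 / (2*n*L).
# With a micrometre-scale cavity the FSR spans well over a hundred nanometres,
# which is why a VCSOA effectively amplifies only a single channel.
wavelength = 1550e-9   # signal wavelength, m
n = 3.5                # effective refractive index (assumed)
L = 2e-6               # effective cavity length, m (assumed)

fsr_nm = wavelength**2 / (2 * n * L) * 1e9
print(f"FSR = {fsr_nm:.0f} nm")   # about 172 nm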
These devices are still in the early stages of research, though promising preamplifier results have been demonstrated. A further extension of VCSOA technology is the demonstration of wavelength-tunable devices. These MEMS-tunable vertical-cavity SOAs utilize a microelectromechanical systems (MEMS) based tuning mechanism for wide and continuous tuning of the peak gain wavelength of the amplifier. SOAs have a more rapid gain response than DFAs, on the order of 1 to 100 ps. Tapered amplifiers For high output power and a broader wavelength range, tapered amplifiers are used. These amplifiers consist of a lateral single-mode section and a section with a tapered structure, where the laser light is amplified. The tapered structure leads to a reduction of the power density at the output facet. Typical parameters: wavelength range 633 to 1480 nm; input power 10 to 50 mW; output power up to 3 W. Raman amplifier In a Raman amplifier, the signal is intensified by Raman amplification. Unlike the EDFA and SOA, the amplification effect is achieved by a nonlinear interaction between the signal and a pump laser within an optical fiber. There are two types of Raman amplifier: distributed and lumped. A distributed Raman amplifier is one in which the transmission fiber is utilised as the gain medium by multiplexing a pump wavelength with the signal wavelength, while a lumped Raman amplifier utilises a dedicated, shorter length of fiber to provide amplification. In the case of a lumped Raman amplifier, a highly nonlinear fiber with a small core is utilised to increase the interaction between signal and pump wavelengths, and thereby reduce the length of fiber required. The pump light may be coupled into the transmission fiber in the same direction as the signal (co-directional pumping), in the opposite direction (contra-directional pumping) or both. Contra-directional pumping is more common, as the transfer of noise from the pump to the signal is reduced. The pump power required for Raman amplification is higher than that required by the EDFA, with in excess of 500 mW being required to achieve useful levels of gain in a distributed amplifier. Lumped amplifiers, where the pump light can be safely contained to avoid the safety implications of high optical powers, may use over 1 W of optical power. The principal advantage of Raman amplification is its ability to provide distributed amplification within the transmission fiber, thereby increasing the length of spans between amplifier and regeneration sites. The amplification bandwidth of Raman amplifiers is defined by the pump wavelengths utilised, and so amplification can be provided over wider, and different, regions than may be possible with other amplifier types, which rely on dopants and device design to define the amplification 'window'. Raman amplifiers have some fundamental advantages. First, Raman gain exists in every fiber, which provides a cost-effective means of upgrading from the terminal ends. Second, the gain is nonresonant, which means that gain is available over the entire transparency region of the fiber, ranging from approximately 0.3 to 2 μm. A third advantage of Raman amplifiers is that the gain spectrum can be tailored by adjusting the pump wavelengths. For instance, multiple pump lines can be used to increase the optical bandwidth, and the pump distribution determines the gain flatness.
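The magnitude of distributed Raman gain can be estimated from the standard on-off gain expression G = exp(C_R·P_pump·L_eff), where C_R is the Raman gain efficiency and L_eff the effective interaction length. The sketch below uses order-of-magnitude values representative of standard single-mode fiber; the efficiency and loss figures are assumptions, not measurements of any specific fiber:

import math

# On-off gain of a distributed Raman amplifier:
#   G = exp(C_R * P_pump * L_eff),  with  L_eff = (1 - exp(-alpha * L)) / alpha
C_R = 0.4                 # Raman gain efficiency, 1/(W*km) (assumed)
alpha = 0.25 / 4.343      # fiber loss at the pump wavelength, dB/km -> 1/km
P_pump = 0.5              # pump power, W (cf. the ~500 mW quoted above)
L = 100.0                 # span length, km

L_eff = (1 - math.exp(-alpha * L)) / alpha
gain_db = 10 * math.log10(math.exp(C_R * P_pump * L_eff))
print(f"L_eff = {L_eff:.1f} km, on-off gain = {gain_db:.1f} dB")  # ~17 km, ~15 dB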
Another advantage of Raman amplification is that it is a relatively broad-band amplifier with a bandwidth > 5 THz, and the gain is reasonably flat over a wide wavelength range. However, a number of challenges for Raman amplifiers prevented their earlier adoption. First, compared to EDFAs, Raman amplifiers have relatively poor pumping efficiency at lower signal powers. Although a disadvantage, this lack of pump efficiency also makes gain clamping easier in Raman amplifiers. Second, Raman amplifiers require a longer gain fiber. However, this disadvantage can be mitigated by combining gain and dispersion compensation in a single fiber. A third disadvantage of Raman amplifiers is a fast response time, which gives rise to new sources of noise. Finally, there are concerns of nonlinear penalty in the amplifier for the WDM signal channels. Note: The text of an earlier version of this article was taken from the public domain Federal Standard 1037C. Optical parametric amplifier An optical parametric amplifier allows the amplification of a weak signal impulse in a nonlinear medium such as a noncentrosymmetric nonlinear medium (e.g. beta barium borate (BBO)) or even a standard fused silica optical fiber via the Kerr effect. In contrast to the previously mentioned amplifiers, which are mostly used in telecommunication environments, this type finds its main application in expanding the frequency tunability of ultrafast solid-state lasers (e.g. Ti:sapphire). By using a noncollinear interaction geometry, optical parametric amplifiers are capable of extremely broad amplification bandwidths. 21st century In the 21st century, high power fiber lasers were adopted as an industrial material processing tool, and were expanding into other markets including the medical and scientific markets. One key enhancement enabling penetration into the scientific market was improvement in high finesse fiber amplifiers, which became able to deliver single frequency linewidths (<5 kHz) together with excellent beam quality and stable linearly polarized output. Systems meeting these specifications steadily progressed from a few watts of output power initially, to tens of watts and later hundreds of watts. This power increase was achieved with developments in fiber technology, such as the adoption of stimulated Brillouin scattering (SBS) suppression/mitigation techniques within the fiber, and improvements in overall amplifier design, including large mode area (LMA) fibers with a low-aperture core, micro-structured rod-type fibers, helical-core or chirally-coupled core fibers, and tapered double-clad fibers (T-DCF). High finesse, high power and pulsed fiber amplifiers delivered power levels exceeding those available from commercial solid-state single-frequency sources, and stable optimized performance, opening up new scientific applications. Implementations There are several simulation tools that can be used to design optical amplifiers. Popular commercial tools have been developed by Optiwave Systems and VPI Systems. See also Nonlinear theory of semiconductor lasers Regenerative amplification References External links Overview of commercially available semiconductor tapered amplifiers Overview of commercially available solid-state amplifiers RP Photonics Encyclopedia on fiber amplifiers and Raman amplifiers Current Trends in Unrepeatered Systems including ROPA Remote Optically-Pumped Amplifier Optical devices Amplifiers Laser science Fiber-optic communications
Optical amplifier
[ "Materials_science", "Technology", "Engineering" ]
5,681
[ "Amplifiers", "Glass engineering and science", "Optical devices" ]
41,210
https://en.wikipedia.org/wiki/Geostationary%20orbit
A geostationary orbit, also referred to as a geosynchronous equatorial orbit (GEO), is a circular geosynchronous orbit approximately 35,786 km in altitude above Earth's equator, 42,164 km in radius from Earth's center, and following the direction of Earth's rotation. An object in such an orbit has an orbital period equal to Earth's rotational period, one sidereal day, and so to ground observers it appears motionless, in a fixed position in the sky. The concept of a geostationary orbit was popularised by the science fiction writer Arthur C. Clarke in the 1940s as a way to revolutionise telecommunications, and the first satellite to be placed in this kind of orbit was launched in 1963. Communications satellites are often placed in a geostationary orbit so that Earth-based satellite antennas do not have to rotate to track them but can be pointed permanently at the position in the sky where the satellites are located. Weather satellites are also placed in this orbit for real-time monitoring and data collection, and navigation satellites to provide a known calibration point and enhance GPS accuracy. Geostationary satellites are launched via a temporary orbit, and then placed in a "slot" above a particular point on the Earth's surface. The satellite requires periodic station-keeping to maintain its position. Modern retired geostationary satellites are placed in a higher graveyard orbit to avoid collisions. History In 1929, Herman Potočnik described both geosynchronous orbits in general and the special case of the geostationary Earth orbit in particular as useful orbits for space stations. The first appearance of a geostationary orbit in popular literature was in October 1942, in the first Venus Equilateral story by George O. Smith, but Smith did not go into details. British science fiction author Arthur C. Clarke popularised and expanded the concept in a 1945 paper entitled Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?, published in Wireless World magazine. Clarke acknowledged the connection in his introduction to The Complete Venus Equilateral. The orbit, which Clarke first described as useful for broadcast and relay communications satellites, is sometimes called the Clarke orbit. Similarly, the collection of artificial satellites in this orbit is known as the Clarke Belt. In technical terminology the orbit is referred to as either a geostationary or geosynchronous equatorial orbit, with the terms used somewhat interchangeably. The first geostationary satellite was designed by Harold Rosen while he was working at Hughes Aircraft in 1959. Inspired by Sputnik 1, he wanted to use a geostationary satellite to globalise communications. Telecommunication between the US and Europe was then possible for just 136 people at a time, relying on high frequency radios and an undersea cable. Conventional wisdom at the time was that it would require too much rocket power to place a satellite in a geostationary orbit and it would not survive long enough to justify the expense, so early efforts were put towards constellations of satellites in low or medium Earth orbit. The first of these were the passive Echo balloon satellites in 1960, followed by Telstar 1 in 1962. Although these projects had difficulties with signal strength and tracking, issues that could be solved using geostationary orbits, the concept was seen as impractical, so Hughes often withheld funds and support.
By 1961, Rosen and his team had produced a cylindrical prototype with a diameter of , height of , weighing , light and small enough to be placed into orbit. It was spin-stabilised, with a dipole antenna producing a pancake-shaped beam. In August 1961, they were contracted to begin building the real satellite. They lost Syncom 1 to electronics failure, but Syncom 2 was successfully placed into a geosynchronous orbit in 1963. Although its inclined orbit still required moving antennas, it was able to relay TV transmissions, and allowed US President John F. Kennedy, in Washington, D.C., to phone Nigerian prime minister Abubakar Tafawa Balewa aboard the USNS Kingsport docked in Lagos on August 23, 1963. The first satellite placed in a geostationary orbit was Syncom 3, which was launched by a Delta D rocket in 1964. With its increased bandwidth, this satellite was able to transmit live coverage of the Summer Olympics from Japan to America. Geostationary orbits have been in common use ever since, in particular for satellite television. Today there are hundreds of geostationary satellites providing remote sensing and communications. Although most populated land locations on the planet now have terrestrial communications facilities (microwave, fiber-optic), with telephone access covering 96% of the population and internet access 90%, some rural and remote areas in developed countries are still reliant on satellite communications. Uses Most commercial communications satellites, broadcast satellites and SBAS satellites operate in geostationary orbits. Communications Geostationary communication satellites are useful because they are visible from a large area of the earth's surface, extending 81° away in latitude and 77° in longitude. They appear stationary in the sky, which eliminates the need for ground stations to have movable antennas. This means that Earth-based observers can erect small, cheap and stationary antennas that are always directed at the desired satellite. However, latency becomes significant, as it takes about 240 ms for a signal to pass from a ground-based transmitter on the equator to the satellite and back again. This delay presents problems for latency-sensitive applications such as voice communication, so geostationary communication satellites are primarily used for unidirectional entertainment and applications where low latency alternatives are not available. Geostationary satellites are directly overhead at the equator and appear lower in the sky to an observer nearer the poles. As the observer's latitude increases, communication becomes more difficult due to factors such as atmospheric refraction, Earth's thermal emission, line-of-sight obstructions, and signal reflections from the ground or nearby structures. At latitudes above about 81°, geostationary satellites are below the horizon and cannot be seen at all. Because of this, some Russian communication satellites have used elliptical Molniya and Tundra orbits, which have excellent visibility at high latitudes. Meteorology A worldwide network of operational geostationary meteorological satellites is used to provide visible and infrared images of Earth's surface and atmosphere for weather observation, oceanography, and atmospheric tracking. As of 2019 there are 19 satellites in either operation or stand-by.
These satellite systems include: the United States' GOES series, operated by NOAA; the Meteosat series, launched by the European Space Agency and operated by the European Weather Satellite Organization, EUMETSAT; the Republic of Korea's COMS-1 and GK-2A multi-mission satellites; the Russian Elektro-L satellites; the Japanese Himawari series; the Chinese Fengyun series; and India's INSAT series. These satellites typically capture images in the visual and infrared spectrum with a spatial resolution between 0.5 and 4 square kilometres. The coverage is typically 70°, and in some cases less. Geostationary satellite imagery has been used for tracking volcanic ash, measuring cloud top temperatures and water vapour, oceanography, measuring land temperature and vegetation coverage, facilitating cyclone path prediction, and providing real time cloud coverage and other tracking data. Some information has been incorporated into meteorological prediction models, but due to their wide field of view, full-time monitoring and lower resolution, geostationary weather satellite images are primarily used for short-term and real-time forecasting. Navigation Geostationary satellites can be used to augment GNSS systems by relaying clock, ephemeris and ionospheric error corrections (calculated from ground stations of a known position) and providing an additional reference signal. This improves position accuracy from approximately 5 m to 1 m or less. Past and current navigation systems that use geostationary satellites include: the Wide Area Augmentation System (WAAS), operated by the United States Federal Aviation Administration (FAA); the European Geostationary Navigation Overlay Service (EGNOS), operated by the ESSP (on behalf of the EU's GSA); the Multi-functional Satellite Augmentation System (MSAS), operated by the Japan Civil Aviation Bureau (JCAB) of Japan's Ministry of Land, Infrastructure and Transport; the GPS Aided Geo Augmented Navigation (GAGAN) system, operated by India; the commercial StarFire navigation system, operated by John Deere and C-Nav Positioning Solutions (Oceaneering); and the commercial Starfix DGPS System and OmniSTAR system, operated by Fugro. Implementation Launch Geostationary satellites are launched to the east into a prograde orbit that matches the rotation rate of the equator. The smallest inclination that a satellite can be launched into is that of the launch site's latitude, so launching the satellite from close to the equator limits the amount of inclination change needed later. Additionally, launching from close to the equator allows the speed of the Earth's rotation to give the satellite a boost. A launch site should have water or deserts to the east, so any failed rockets do not fall on a populated area. Most launch vehicles place geostationary satellites directly into a geostationary transfer orbit (GTO), an elliptical orbit with an apogee at GEO height and a low perigee. On-board satellite propulsion is then used to raise the perigee, circularise and reach GEO. Orbit allocation Satellites in geostationary orbit must all occupy a single ring above the equator. The requirement to space these satellites apart, to avoid harmful radio-frequency interference during operations, means that there are a limited number of orbital slots available, and thus only a limited number of satellites can be operated in geostationary orbit. This has led to conflict between different countries wishing access to the same orbital slots (countries near the same longitude but differing latitudes) and radio frequencies.
These disputes are addressed through the International Telecommunication Union's allocation mechanism under the Radio Regulations. In the 1976 Bogota Declaration, eight countries located on the Earth's equator claimed sovereignty over the geostationary orbits above their territory, but the claims gained no international recognition. Statite proposal A statite is a hypothetical satellite that uses radiation pressure from the sun against a solar sail to modify its orbit. It would hold its location over the dark side of the Earth at a latitude of approximately 30 degrees. A statite is stationary relative to the Earth and Sun system rather than to the surface of the Earth, and could ease congestion in the geostationary ring. Retired satellites Geostationary satellites require some station keeping to keep their position, and once they run out of thruster fuel they are generally retired. The transponders and other onboard systems often outlive the thruster fuel, and by allowing the satellite to move naturally into an inclined geosynchronous orbit some satellites can remain in use, or else be elevated to a graveyard orbit. This process is becoming increasingly regulated and satellites must have a 90% chance of moving over 200 km above the geostationary belt at end of life. Space debris Space debris at geostationary orbits typically has a lower collision speed than at low Earth orbit (LEO) since all GEO satellites orbit in the same plane, altitude and speed; however, the presence of satellites in eccentric orbits allows for collisions at up to 4 km/s. Although a collision is comparatively unlikely, GEO satellites have a limited ability to avoid any debris. At geosynchronous altitude, objects less than 10 cm in diameter cannot be seen from the Earth, making it difficult to assess their prevalence. Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on August 11, 1993, and eventually moved to a graveyard orbit, and in 2006 the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable, although its engineers had enough contact time with the satellite to send it into a graveyard orbit. In 2017, both AMC-9 and Telkom-1 broke apart from an unknown cause. Properties A typical geostationary orbit has the following properties: inclination 0°; period 1436 minutes (one sidereal day); eccentricity 0; argument of perigee undefined; semi-major axis 42,164 km. Inclination An inclination of zero ensures that the orbit remains over the equator at all times, making it stationary with respect to latitude from the point of view of a ground observer (and in the Earth-centered Earth-fixed reference frame). Period The orbital period is equal to exactly one sidereal day. This means that the satellite will return to the same point above the Earth's surface every (sidereal) day, regardless of other orbital properties. For a geostationary orbit in particular, it ensures that it holds the same longitude over time. This orbital period, T, is directly related to the semi-major axis of the orbit through the formula T = 2π√(a³/μ), where a is the length of the orbit's semi-major axis and μ is the standard gravitational parameter of the central body. Eccentricity The eccentricity is zero, which produces a circular orbit. This ensures that the satellite does not move closer or further away from the Earth, which would cause it to track backwards and forwards across the sky.
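As a quick numerical check of the period formula given above: using the 42,164 km semi-major axis from the properties list and the geocentric gravitational constant μ quoted in the derivation below, one sidereal day is recovered.

import math

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)
mu = 3.986004418e14   # geocentric gravitational constant, m^3/s^2
a = 42_164_000.0      # geostationary semi-major axis, m

T = 2 * math.pi * math.sqrt(a**3 / mu)
print(f"T = {T:.0f} s = {T / 60:.1f} min")   # about 86164 s, i.e. 1436 min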
Stability A geostationary orbit can be achieved only at an altitude very close to 35,786 km and directly above the equator. This equates to an orbital speed of 3.07 km/s and an orbital period of 1,436 minutes, one sidereal day. This ensures that the satellite matches the Earth's rotational period and has a stationary footprint on the ground. All geostationary satellites have to be located on this ring. A combination of lunar gravity, solar gravity, and the flattening of the Earth at its poles causes a precession motion of the orbital plane of any geostationary object, with an orbital period of about 53 years and an initial inclination gradient of about 0.85° per year, achieving a maximal inclination of 15° after 26.5 years. To correct for this perturbation, regular orbital stationkeeping maneuvers are necessary, amounting to a delta-v of approximately 50 m/s per year. A second effect to be taken into account is the longitudinal drift, caused by the asymmetry of the Earth – the equator is slightly elliptical (equatorial eccentricity). There are two stable equilibrium points sometimes called "gravitational wells" (at 75.3°E and 108°W) and two corresponding unstable points (at 165.3°E and 14.7°W). Any geostationary object placed between the equilibrium points would (without any action) be slowly accelerated towards the stable equilibrium position, causing a periodic longitude variation. The correction of this effect requires station-keeping maneuvers with a maximal delta-v of about 2 m/s per year, depending on the desired longitude. Solar wind and radiation pressure also exert small forces on satellites: over time, these cause them to slowly drift away from their prescribed orbits. In the absence of servicing missions from the Earth or a renewable propulsion method, the consumption of thruster propellant for station-keeping places a limitation on the lifetime of the satellite. Hall-effect thrusters, which are currently in use, have the potential to prolong the service life of a satellite by providing high-efficiency electric propulsion. Derivation For circular orbits around a body, the centripetal force required to maintain the orbit (Fc) is equal to the gravitational force acting on the satellite (Fg): Fc = Fg. From Isaac Newton's universal law of gravitation, Fg = G·ME·ms/r², where Fg is the gravitational force acting between two objects, ME is the mass of the Earth (about 5.974 × 10^24 kg), ms is the mass of the satellite, r is the distance between the centers of their masses, and G is the gravitational constant (about 6.674 × 10^−11 m³/(kg·s²)). The magnitude of the acceleration, a, of a body moving in a circle is given by a = v²/r, where v is the magnitude of the velocity (i.e. the speed) of the satellite. From Newton's second law of motion, the centripetal force Fc is given by Fc = ms·v²/r. As Fc = Fg, ms·v²/r = G·ME·ms/r², so that v² = G·ME/r. Replacing v with the equation for the speed of an object moving around a circle, v = 2πr/T, produces (2πr/T)² = G·ME/r, where T is the orbital period (i.e. one sidereal day), equal to 86,164 seconds. This gives an equation for r: r = ∛(G·ME·T²/(4π²)). The product G·ME is known with much greater precision than either factor alone; it is known as the geocentric gravitational constant μ = 398,600.4418 km³/s². Hence r = ∛(μT²/(4π²)). The resulting orbital radius is 42,164 km. Subtracting the Earth's equatorial radius, 6,378 km, gives the altitude of 35,786 km. The orbital speed is calculated by multiplying the angular speed by the orbital radius: v = (2π/T)·r ≈ 3.07 km/s. In other planets By the same method, we can determine the orbital altitude for any similar pair of bodies, including the areostationary orbit of an object in relation to Mars, if it is assumed that it is spherical (which it is not entirely).
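A short sketch of this method, applied to both Earth and Mars (the Mars constants are the approximate published values repeated in the next paragraph), is:

import math

# Synchronous-orbit radius from r = (mu * T^2 / (4*pi^2))^(1/3)
def synchronous_radius_km(mu_km3_s2, rotation_period_s):
    return (mu_km3_s2 * rotation_period_s**2 / (4 * math.pi**2)) ** (1 / 3)

bodies = {
    # name: (GM in km^3/s^2, sidereal rotation period in s, equatorial radius in km)
    "Earth": (398600.4418, 86164.0905, 6378.0),
    "Mars":  (42828.0,     88643.0,    3396.0),   # approximate values
}
for name, (mu, T, r_eq) in bodies.items():
    r = synchronous_radius_km(mu, T)
    print(f"{name}: r = {r:,.0f} km, altitude = {r - r_eq:,.0f} km")
# Earth: r = 42,164 km, altitude = 35,786 km
# Mars:  r = 20,428 km, altitude = 17,032 km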
The gravitational constant GM (μ) for Mars has the value of 42,828 km³/s², its equatorial radius is 3,396 km and the known rotational period (T) of the planet is 88,643 seconds (1.025957 days). Using these values, Mars' orbital altitude is equal to approximately 17,032 km. See also List of orbits List of satellites in geosynchronous orbit Orbital station-keeping Space elevator, which ultimately reaches to and beyond a geostationary orbit Explanatory notes References External links How to get a satellite to geostationary orbit Orbital Mechanics (Rocket and Space Technology) List of satellites in geostationary orbit Clarke Belt Snapshot Calculator 3D Real Time Satellite Tracking Geostationary satellite orbit overview Daily animation of the Earth, made by geostationary satellite 'Electro L' photos Satellite shoots 48 images of the planet every day. Orbital Mechanics for Engineering Students Astrodynamics Earth orbits
Geostationary orbit
[ "Engineering" ]
3,624
[ "Astrodynamics", "Aerospace engineering" ]
22,180,498
https://en.wikipedia.org/wiki/Birman%E2%80%93Wenzl%20algebra
In mathematics, the Birman–Murakami–Wenzl (BMW) algebra, introduced by Joan Birman and Hans Wenzl (1989) and Jun Murakami (1987), is a two-parameter family of algebras C_n(l, m) of dimension 1·3·5···(2n − 1) having the Hecke algebra of the symmetric group as a quotient. It is related to the Kauffman polynomial of a link. It is a deformation of the Brauer algebra in much the same way that Hecke algebras are deformations of the group algebra of the symmetric group. Definition For each natural number n, the BMW algebra C_n(l, m) is generated by G_1, G_2, ..., G_{n−1}, E_1, E_2, ..., E_{n−1} and the relations: G_i G_j = G_j G_i if |i − j| ≥ 2; G_i G_{i+1} G_i = G_{i+1} G_i G_{i+1}; E_i E_{i±1} E_i = E_i; G_i + G_i^{−1} = m(1 + E_i); G_i E_i = E_i G_i = l^{−1} E_i; E_i G_{i±1} E_i = l E_i. These relations imply the further relations: E_i E_j = E_j E_i if |i − j| ≥ 2; (E_i)² = (m^{−1}(l + l^{−1}) − 1) E_i. This is the original definition given by Birman and Wenzl. However a slight change by the introduction of some minus signs is sometimes made, in accordance with Kauffman's 'Dubrovnik' version of his link invariant. In that way, the fourth relation in Birman & Wenzl's original version is changed to (Kauffman skein relation) G_i − G_i^{−1} = m(1 − E_i). Given invertibility of m, the rest of the relations in Birman & Wenzl's original version can be reduced to (Idempotent relation) (E_i)² = (m^{−1}(l − l^{−1}) + 1) E_i; (Braid relations) G_i G_j = G_j G_i if |i − j| ≥ 2, and G_i G_{i+1} G_i = G_{i+1} G_i G_{i+1}; (Tangle relations) E_i E_{i±1} E_i = E_i and G_i G_{i±1} E_i = E_{i±1} E_i; (Delooping relations) G_i E_i = E_i G_i = l^{−1} E_i and E_i G_{i±1} E_i = l E_i. Properties The dimension of C_n(l, m) is 1·3·5···(2n − 1). The Iwahori–Hecke algebra associated with the symmetric group is a quotient of the Birman–Murakami–Wenzl algebra C_n. The Artin braid group B_n embeds in the BMW algebra via σ_i ↦ G_i. Isomorphism between the BMW algebras and Kauffman's tangle algebras It is proved that the BMW algebra C_n(l, m) is isomorphic to Kauffman's tangle algebra KT_n; the isomorphism sends the generators G_i and E_i to the tangle diagrams given, respectively, by a positive crossing and by a cup–cap pairing of the i-th and (i + 1)-th strands. Baxterisation of Birman–Murakami–Wenzl algebra The face operator is defined as a linear combination of the identity and the generators G_i and E_i, with coefficients depending on a spectral parameter; the face operator then satisfies the Yang–Baxter equation, and in suitable limits of the spectral parameter the braids can be recovered up to a scale factor. History In 1984, Vaughan Jones introduced a new polynomial invariant of link isotopy types which is called the Jones polynomial. The invariants are related to the traces of irreducible representations of Hecke algebras associated with the symmetric groups. Murakami (1987) showed that the Kauffman polynomial can also be interpreted as a function on a certain associative algebra. In 1989, Birman and Wenzl constructed a two-parameter family of algebras with the Kauffman polynomial as trace after appropriate renormalization. References Representation theory Knot theory Diagram algebras
Birman–Wenzl algebra
[ "Mathematics" ]
510
[ "Representation theory", "Fields of abstract algebra" ]
22,182,482
https://en.wikipedia.org/wiki/Pressurizer%20%28nuclear%20power%29
A pressurizer is a component of a pressurized water reactor. The basic design of the pressurized water reactor includes a requirement that the coolant (water) in the reactor coolant system must not boil. Put another way, the coolant must remain in the liquid state at all times, especially in the reactor vessel. To achieve this, the coolant in the reactor coolant system is maintained at a pressure sufficiently high that boiling does not occur at the coolant temperatures experienced while the plant is operating or in any analyzed possible transient state. To pressurize the coolant system to a higher pressure than the vapor pressure of the coolant at operating temperatures, a separate pressurizing system is required. This is the role of the pressurizer. Design In a pressurized water reactor plant, the pressurizer is basically a cylindrical pressure vessel with hemispherical ends, mounted with the long axis vertical and directly connected by a single run of piping to the reactor coolant system. It is located inside the reactor containment building. Although the water in the pressurizer is the same reactor coolant as in the rest of the reactor coolant system, it is basically stagnant, i.e. reactor coolant does not flow through the pressurizer continuously as it does in the other parts of the reactor coolant system. Because water is nearly incompressible, a pressure change in one part of a connected, liquid-filled piping system is transmitted throughout the system. The water in the system may not be at the same pressure at all points, due to differences in elevation, but the pressure at all points responds equally to a pressure change in any one part of the system. From this phenomenon, it was recognized early on that the pressure in the entire reactor coolant system, including the reactor itself, could be controlled by controlling the pressure in a small interconnected area of the system, and this led to the design of the pressurizer. The pressurizer is a small vessel compared to the other two major vessels of the reactor coolant system, the reactor vessel itself and the steam generator(s). Pressure control Pressure in the pressurizer is controlled by varying the temperature of the coolant in the pressurizer. Water pressure in a closed system tracks water temperature directly; as the temperature goes up, pressure goes up, and vice versa. To increase the pressure in the reactor coolant system, large electric heaters in the pressurizer are turned on, raising the coolant temperature in the pressurizer and thereby raising the pressure. To decrease pressure in the reactor coolant system, sprays of relatively cool water are turned on inside the pressurizer, lowering the coolant temperature in the pressurizer and thereby lowering the pressure. Secondary functions The pressurizer has two secondary functions. Water backup and pressure change moderation One is providing a place to monitor water level in the reactor coolant system. Since the reactor coolant system is completely flooded during normal operations, there is no point in monitoring coolant level in any of the other vessels. But early awareness of a reduction of coolant level (or a loss of coolant) is important to the safety of the reactor core. The pressurizer is deliberately located high in the reactor containment building such that, if the pressurizer has sufficient coolant in it, one can be reasonably certain that all the other vessels of the reactor coolant system (which are below it) are fully flooded with coolant.
There is, therefore, a coolant level monitoring system on the pressurizer, and it is the one reactor coolant system vessel that is normally not full of coolant. The other secondary function is to provide a "cushion" for sudden pressure changes in the reactor coolant system. The upper portion of the pressurizer is specifically designed to NOT contain liquid coolant; even a reading of full on the level instrumentation leaves that upper portion free of liquid coolant. Because the coolant in the pressurizer is quite hot during normal operations, the space above the liquid coolant is vaporized coolant (steam). This steam bubble provides a cushion for pressure changes in the reactor coolant system, and the operators ensure that the pressurizer maintains this steam bubble at all times during operations. Allowing liquid coolant to completely fill the pressurizer eliminates this steam bubble, and is referred to in industry as letting the pressurizer "go hard". This would mean that a sudden pressure change could produce a hammer effect throughout the entire reactor coolant system. Some facilities also call this letting the pressurizer "go solid," although solid simply refers to being completely full of liquid and without a "steam bubble." Over-pressure relief system Part of the pressurizer system is an over-pressure relief system. In the event that pressurizer pressure exceeds a certain maximum, a relief valve called the pilot-operated relief valve (PORV) on top of the pressurizer opens to allow steam from the steam bubble to leave the pressurizer in order to reduce the pressure in the pressurizer. This steam is routed to a large tank (or tanks) in the reactor containment building where it is cooled back into liquid (condensed) and stored for later disposition. There is a finite volume to these tanks, and if events deteriorate to the point where the tanks fill up, a secondary pressure relief device on the tank(s), often a rupture disc, allows the condensed reactor coolant to spill out onto the floor of the reactor containment building, where it pools in sumps for later disposition. References Pressurized water reactors Nuclear power plant components Pressure vessels
Pressurizer (nuclear power)
[ "Physics", "Chemistry", "Engineering" ]
1,154
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
22,183,423
https://en.wikipedia.org/wiki/Bioceramic
Bioceramics and bioglasses are ceramic materials that are biocompatible. Bioceramics are an important subset of biomaterials. Bioceramics range in biocompatibility from the ceramic oxides, which are inert in the body, to the other extreme of resorbable materials, which are eventually replaced by the body after they have assisted repair. Bioceramics are used in many types of medical procedures. Bioceramics are typically used as rigid materials in surgical implants, though some bioceramics are flexible. The ceramic materials used are not the same as porcelain-type ceramic materials. Rather, bioceramics are closely related to either the body's own materials or are extremely durable metal oxides. History Prior to 1925, the materials used in implant surgery were primarily relatively pure metals. The success of these materials was surprising considering the relatively primitive surgical techniques of the time. The 1930s marked the beginning of the era of better surgical techniques as well as the first use of alloys such as Vitallium. In 1969, L. L. Hench and others discovered that various kinds of glasses and ceramics could bond to living bone. Hench had been inspired by the idea on his way to a conference on materials: he was seated next to a colonel who had just returned from the Vietnam War, and who shared that after an injury the bodies of soldiers would often reject the implant. Hench was intrigued and began to investigate materials that would be biocompatible. The final product was a new material which he called Bioglass. This work inspired a new field called bioceramics. With the discovery of Bioglass, interest in bioceramics grew rapidly. On April 26, 1988, the first international symposium on bioceramics was held in Kyoto, Japan. Applications Ceramics are now commonly used as dental and bone implants. Surgical cermets are used regularly. Joint replacements are commonly coated with bioceramic materials to reduce wear and inflammatory response. Other examples of medical uses for bioceramics are in pacemakers, kidney dialysis machines, and respirators. Mechanical properties and composition Bioceramics are meant to be used in extracorporeal circulation systems (kidney dialysis, for example) or engineered bioreactors; however, they are most common as implants. Ceramics show numerous applications as biomaterials due to their physico-chemical properties. They have the advantage of being inert in the human body, and their hardness and resistance to abrasion makes them useful for bone and tooth replacement. Some ceramics also have excellent resistance to friction, making them useful as replacement materials for malfunctioning joints. Properties such as appearance and electrical insulation are also a concern for specific biomedical applications. Some bioceramics incorporate alumina (Al2O3), as its lifespan is longer than that of the patient. The material can be used in middle ear ossicles, ocular prostheses, electrical insulation for pacemakers, catheter orifices and in numerous prototypes of implantable systems such as cardiac pumps. Aluminosilicates are commonly used in dental prostheses, pure or in ceramic-polymer composites. The ceramic-polymer composites are a potential way to fill cavities, replacing amalgams suspected to have toxic effects. The aluminosilicates also have a glassy structure. Unlike artificial teeth made of resin, the colour of tooth ceramic remains stable. Zirconia doped with yttrium oxide has been proposed as a substitute for alumina for osteoarticular prostheses.
The main advantages are a greater failure strength and a good resistance to fatigue. Vitreous carbon is also used, as it is light, resistant to wear, and compatible with blood. It is mostly used in cardiac valve replacement. Diamond can be used for the same application, but in coating form. Calcium phosphate-based ceramics constitute, at present, the preferred bone substitute material in orthopaedic and maxillofacial applications, as they are similar to the main mineral phase of bone in structure and chemical composition. Such synthetic bone substitute or scaffold materials are typically porous, which provides an increased surface area that encourages osseointegration, involving cell colonisation and revascularisation. However, such porous materials generally exhibit lower mechanical strength compared to bone, making highly porous implants very delicate. Since the elastic modulus values of ceramic materials are generally higher than those of the surrounding bone tissue, the implant can cause mechanical stresses at the bone interface. Calcium phosphates usually found in bioceramics include hydroxyapatite (HAP), Ca10(PO4)6(OH)2; β-tricalcium phosphate (β-TCP), Ca3(PO4)2; and mixtures of HAP and β-TCP. Multipurpose A number of implanted ceramics have not actually been designed for specific biomedical applications. However, they manage to find their way into different implantable systems because of their properties and their good biocompatibility. Among these ceramics are silicon carbide, titanium nitrides and carbides, and boron nitride. TiN has been suggested as the friction surface in hip prostheses. While cell culture tests show a good biocompatibility, the analysis of implants shows significant wear, related to delamination of the TiN layer. Silicon carbide is another modern-day ceramic which seems to provide good biocompatibility and can be used in bone implants. Specific use In addition to being used for their traditional properties, bioactive ceramics have seen specific use due to their biological activity. Calcium phosphates, oxides, and hydroxides are common examples. Other natural materials — generally of animal origin — such as bioglass and other composites feature a combination of mineral-organic composite materials such as HAP, alumina, or titanium dioxide with biocompatible polymers (polymethylmethacrylate, PMMA; poly(L-lactic) acid, PLLA; poly(ethylene), PE). Composites can be differentiated as bioresorbable or non-bioresorbable, with the latter being the result of the combination of a bioresorbable calcium phosphate (HAP) with a non-bioresorbable polymer (PMMA, PE). These materials may become more widespread in the future, on account of the many combination possibilities and their aptitude at combining a biological activity with mechanical properties similar to those of bone. Biocompatibility Bioceramics' properties of being anticorrosive, biocompatible, and aesthetic make them quite suitable for medical usage. Zirconia ceramic has bioinertness and noncytotoxicity. Carbon is another alternative, with mechanical properties similar to bone, and it also features blood compatibility, no tissue reaction, and non-toxicity to cells. Bioinert ceramics do not exhibit bonding with the bone, known as osseointegration. However, bioactivity of bioinert ceramics can be achieved by forming composites with bioactive ceramics. Bioactive ceramics, including bioglasses, must be non-toxic and form a bond with bone. In bone repair applications, i.e.
scaffolds for bone regeneration, the solubility of bioceramics is an important parameter, and the slow dissolution rate of most bioceramics relative to bone growth rates remains a challenge in their remedial usage. Unsurprisingly, much focus is placed on improving dissolution characteristics of bioceramics while maintaining or improving their mechanical properties. Glass ceramics exhibit osteoinductive properties, with higher dissolution rates relative to crystalline materials, while crystalline calcium phosphate ceramics also exhibit non-toxicity to tissues and bioresorption. Ceramic particulate reinforcement has broadened the choice of materials for implant applications to include ceramic/ceramic, ceramic/polymer, and ceramic/metal composites. Among these composites, ceramic/polymer composites have been found to release toxic elements into the surrounding tissues. Metals face corrosion-related problems, and ceramic coatings on metallic implants degrade over time during lengthy applications. Ceramic/ceramic composites enjoy superiority due to their similarity to bone minerals, exhibiting biocompatibility and a readiness to be shaped. The biological activity of bioceramics has to be assessed through various in vitro and in vivo studies, and performance needs must be considered in accordance with the particular site of implantation. Processing Technically, ceramics are composed of raw materials such as powders and natural or synthetic chemical additives, favouring either compaction (hot, cold or isostatic), setting (hydraulic or chemical), or accelerating sintering processes. According to the formulation and shaping process used, bioceramics can vary in density and porosity, as cements, ceramic depositions, or ceramic composites. Porosity is often desired in bioceramics, including bioglasses. Towards improving the performance of transplanted porous bioceramics, numerous processing techniques are available for the control of porosity, pore size distribution and pore alignment. For crystalline materials, grain size and crystalline defects provide further pathways to enhance biodegradation and osseointegration, which are key for effective bone graft and bone transplant materials. This can be achieved by the inclusion of grain-refining dopants and by imparting defects in the crystalline structure through various physical means. A developing material processing technique based on biomimetic processes aims to imitate natural and biological processes and offers the possibility of making bioceramics at ambient temperature rather than through conventional or hydrothermal processes [GRO 96]. The prospect of using these relatively low processing temperatures opens up possibilities for mineral-organic combinations with improved biological properties through the addition of proteins and biologically active molecules (growth factors, antibiotics, anti-tumor agents, etc.). However, these materials have poor mechanical properties, which can be improved, partially, by combining them with bonding proteins. Commercial usage Common bioactive materials available commercially for clinical use include 45S5 bioactive glass, A/W bioactive glass ceramic, dense synthetic HA, and bioactive composites such as a polyethylene–HA mixture. All these materials form an interfacial bond with adjacent tissue. High-purity alumina bioceramics are currently commercially available from various producers. U.K.
manufacturer Morgan Advanced Ceramics (MAC) began manufacturing orthopaedic devices in 1985 and quickly became a recognised supplier of ceramic femoral heads for hip replacements. MAC Bioceramics has the longest clinical history for alumina ceramic materials, having manufactured HIP Vitox® alumina since 1985. Some calcium-deficient phosphates with an apatite structure have been commercialised as "tricalcium phosphate" even though they do not exhibit the expected crystalline structure of tricalcium phosphate. Currently, numerous commercial products described as HA are available in various physical forms (e.g. granules, specially designed blocks for specific applications). HA/polymer composite (HA/polyethylene, HAPEX™) is also commercially available for ear implants, abrasives, and plasma-sprayed coatings for orthopedic and dental implants. Bioceramics have also been used in cannabis or delta-8 devices as wicks for the vaporization of such extracts. Future trends Bioceramics have been proposed as a possible treatment for cancer. Two methods of treatment have been proposed: hyperthermia and radiotherapy. Hyperthermia treatment involves implanting a bioceramic material that contains a ferrite or other magnetic material. The area is then exposed to an alternating magnetic field, which causes the implant and the surrounding area to heat up. Alternatively, the bioceramic materials can be doped with β-emitting materials and implanted into the cancerous area. Other trends include engineering bioceramics for specific tasks. Ongoing research involves the chemistry, composition, and micro- and nanostructures of the materials to improve their biocompatibility. See also Ceramic-impregnated fabrics References Biomaterials Biomedical engineering Ceramic engineering Materials science Implants (medicine) Inorganic chemistry Oral and maxillofacial surgery Oral surgery Physical chemistry Prosthetics Restorative dentistry
Bioceramic
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,523
[ "Biomaterials", "Biological engineering", "Applied and interdisciplinary physics", "Biomedical engineering", "Materials science", "Materials", "nan", "Ceramic engineering", "Physical chemistry", "Matter", "Medical technology" ]
2,541,207
https://en.wikipedia.org/wiki/Alpha%20Magnetic%20Spectrometer
The Alpha Magnetic Spectrometer (AMS-02) is a particle physics experiment module that is mounted on the International Space Station (ISS). The experiment is a recognized CERN experiment (RE1). The module is a detector that measures antimatter in cosmic rays; this information is needed to understand the formation of the universe and search for evidence of dark matter. The principal investigator is Nobel laureate particle physicist Samuel Ting. The launch of flight STS-134 carrying AMS-02 took place on May 16, 2011, and the spectrometer was installed on May 19, 2011. By April 15, 2015, AMS-02 had recorded over 60 billion cosmic ray events, rising to 90 billion after five years of operation following its installation in May 2011. In March 2013, Professor Ting reported initial results, saying that AMS had observed over 400,000 positrons, with the positron-to-electron fraction increasing over the energy range from 10 GeV to 250 GeV. (Later results have shown a decrease in positron fraction at energies over about 275 GeV.) There was "no significant variation over time, or any preferred incoming direction. These results are consistent with the positrons originating from the annihilation of dark matter particles in space, but not yet sufficiently conclusive to rule out other explanations." The results have been published in Physical Review Letters. Additional data are still being collected. History The alpha magnetic spectrometer was proposed in 1995 by the Antimatter Study Group, led by MIT particle physicist Samuel Ting, not long after the cancellation of the Superconducting Super Collider. The original name for the instrument was Antimatter Spectrometer, with the stated objective to search for primordial antimatter, with a target resolution of antimatter/matter ≈ 10^−9. The proposal was accepted and Ting became the principal investigator. AMS-01 An AMS prototype designated AMS-01, a simplified version of the detector, was built by the international consortium under Ting's direction and flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium, the AMS-01 established an upper limit of 1.1×10^−6 for the antihelium-to-helium flux ratio and proved that the detector concept worked in space. This shuttle mission was the last shuttle flight to the Mir space station. AMS-02 After the flight of the prototype, the group, now labelled the AMS Collaboration, began the development of a full research system designated AMS-02. This development effort involved the work of 500 scientists from 56 institutions and 16 countries organized under United States Department of Energy (DOE) sponsorship. The instrument which eventually resulted from a long evolutionary process has been called "the most sophisticated particle detector ever sent into space", rivaling very large detectors used at major particle accelerators, and has cost four times as much as any of its ground-based counterparts. Its goals have also evolved and been refined over time. As built, it is a more comprehensive detector which has a better chance of discovering evidence of dark matter along with other goals. The power requirements for AMS-02 were thought to be too great for a practical independent spacecraft, so AMS-02 was designed to be installed as an external module on the International Space Station and use power from the ISS. The plan at the time was to deliver AMS-02 to the ISS by space shuttle in 2005 on station assembly mission UF4.1, but technical difficulties and shuttle scheduling issues added more delays.
AMS-02 completed final integration and operational testing at CERN in Geneva, Switzerland, which included exposure to energetic proton beams generated by the CERN SPS particle accelerator. AMS-02 was then shipped by specialist hauler to ESA's European Space Research and Technology Centre (ESTEC) facility in the Netherlands, arriving February 16, 2010. There it underwent thermal vacuum, electromagnetic compatibility, and electromagnetic interference testing. AMS-02 was scheduled for delivery to the Kennedy Space Center in Florida, United States, in late May 2010. This was, however, postponed to August 26, as AMS-02 underwent final alignment beam testing at CERN. A cryogenic, superconducting magnet system was initially installed on the AMS-02. When the Obama administration extended International Space Station operations beyond 2015, the decision was made by AMS management to exchange the AMS-02 superconducting magnet for the non-superconducting magnet previously flown on AMS-01. Although the non-superconducting magnet has a weaker field strength, its on-orbit operational time at the ISS is expected to be 10 to 18 years, versus only three years for the superconducting version. In December 2018, it was announced that funding for the ISS had been extended to 2030. In 1999, after the successful flight of AMS-01, the total cost of the AMS program was estimated to be $33 million, with AMS-02 planned for flight to the ISS in 2003. After the Space Shuttle Columbia disaster in 2003, and after a number of technical difficulties with the construction of AMS-02, the cost of the program ballooned to an estimated $2 billion. Installation on the International Space Station For several years it was uncertain if AMS-02 would ever be launched because it was not manifested to fly on any of the remaining Space Shuttle flights. After the 2003 Columbia disaster, NASA decided to reduce shuttle flights and retire the remaining shuttles by 2010. A number of flights were removed from the remaining manifest, including the flight for AMS-02. In 2006, NASA studied alternative ways of delivering AMS-02 to the space station, but they all proved to be too expensive. In May 2008, a bill was proposed to launch AMS-02 to the ISS on an additional shuttle flight in 2010 or 2011. The bill was passed by the full U.S. House of Representatives on June 11, 2008. The bill then went before the Senate Commerce, Science and Transportation Committee where it also passed. It was then amended and passed by the full Senate on September 25, 2008, and was passed again by the House on September 27, 2008. It was signed into law by President George W. Bush on October 15, 2008. The bill authorized NASA to add another space shuttle flight to the schedule before the space shuttle program was discontinued. In January 2009, NASA restored AMS-02 to the shuttle manifest. On August 26, 2010, AMS-02 was delivered from CERN to the Kennedy Space Center by a Lockheed C-5 Galaxy jet. It was carried to the International Space Station on May 19, 2011, as part of station assembly flight ULF6 on shuttle flight STS-134, commanded by Mark Kelly. It was removed from the shuttle cargo bay using the shuttle's robotic arm and handed off to the station's robotic arm for installation. AMS-02 is mounted on top of the Integrated Truss Structure, on USS-02, the zenith side of the S3-element of the truss.
Operations, condition and repairs By April 2017, only one of the 4 redundant coolant pumps for the silicon trackers was fully working, and repairs were being planned, despite AMS-02 not being designed to be serviced in space. By 2019, the last pump was being operated intermittently. In November 2019, after four years of planning, special tools and equipment were sent to the ISS for in-situ repairs requiring four EVAs. Liquid carbon dioxide coolant was also replenished. The repairs were conducted by the ISS crew of Expedition 61. The spacewalkers were the expedition commander and ESA astronaut Luca Parmitano, and NASA astronaut Andrew Morgan. Both of them were assisted by NASA astronauts Christina Koch and Jessica Meir who operated the Canadarm2 robotic arm from inside the Station. The spacewalks were described as the "most challenging since [the last] Hubble repairs". The entire spacewalk campaign was a central feature of the Disney+ docuseries Among The Stars. First spacewalk The first spacewalk was conducted on November 15, 2019. The spacewalk began with the removal of the debris shield covering AMS, which was jettisoned to burn up in the atmosphere. The next task was to install three handrails in the vicinity of AMS to prepare for the next spacewalks and remove zip ties on the AMS' vertical support strut. This was followed by the "get ahead" tasks: Parmitano removed the screws from a carbon-fibre cover under the insulation and passed the cover to Morgan to jettison. The spacewalkers also removed the vertical support beam cover. The duration of the spacewalk was 6 hours and 39 minutes. Second spacewalk The second spacewalk was conducted on November 22, 2019. Parmitano and Morgan cut a total of eight stainless steel tubes, including one that vented the remaining carbon dioxide from the old cooling pump. The crew members also prepared a power cable and installed a mechanical attachment device in advance of installing the new cooling system. The duration of the spacewalk was 6 hours and 33 minutes. Third spacewalk The third spacewalk was conducted on December 2, 2019. The crew completed the primary task of installing the upgraded cooling system, called the upgraded tracker thermal pump system (UTTPS), completed the power and data cable connections for the system, and connected all eight cooling lines from the AMS to the new system. The intricate connection work required making a clean cut for each existing stainless steel tube connected to the AMS, then connecting it to the new system through swaging. The astronauts also completed an additional task to install an insulating blanket on the nadir side of the AMS to replace the heat shield and blanket they removed during the first spacewalk to begin the repair work. The flight control team on Earth initiated power-up of the system and confirmed its reception of power and data. The duration of the spacewalk was 6 hours and 2 minutes. Fourth spacewalk The fourth spacewalk was conducted on January 25, 2020. The astronauts conducted leak checks for the cooling system on the AMS and opened a valve to pressurize the system. Parmitano found a leak in one of the AMS's cooling lines. The leak was fixed during the spacewalk. Preliminary testing showed the AMS was responding as expected. Ground teams worked to fill the new AMS thermal control system with carbon dioxide, allowed the system to stabilize, and powered on the pumps to verify and optimize their performance. 
The tracker, one of several detectors on the AMS, began collecting science data again before the end of the week after the spacewalk. The astronauts also completed an additional task to remove degraded lens filters on two high-definition video cameras. The duration of the spacewalk was 6 hours and 16 minutes. Specifications Mass: Structural material: Stainless steel Power: 2,500 W Internal data rate: 7 Gbit/s Data rate to ground: 2 Mbit/s (typical, average) Primary mission duration: 10 to 18 years Design life: 3 years Magnetic field intensity: 0.15 tesla produced by a permanent neodymium magnet Original superconducting magnet: 2 coils of niobium-titanium at 1.8 K producing a central field of 0.87 tesla (not used in the actual device); the AMS-02 flight magnet was changed to the non-superconducting AMS-01 version to extend experiment life and to solve reliability problems in the operation of the superconducting system. About 1,000 cosmic rays are recorded by the instrument per second, generating about one GB/s of data. This data is filtered and compressed to about 300 kbit/s for download to the operation center POCC at CERN. A mockup of the machine is present inside the operations center at CERN. Design The detector module consists of a series of detectors that are used to determine various characteristics of the radiation and particles as they pass through. Characteristics are determined only for particles that pass through from top to bottom. Particles that enter the detector at any other angles are rejected. From top to bottom the subsystems are identified as: Transition radiation detector measures the velocities of the highest energy particles; Upper time of flight counter, along with the lower time of flight counter, measures the velocities of lower energy particles; Star tracker determines the orientation of the module in space; Silicon tracker (9 disks among 6 locations) measures the coordinates of charged particles in the magnetic field, and has 4 redundant coolant pumps; Permanent magnet bends the path of charged particles so they can be identified; Anti-coincidence counter rejects stray particles that enter through the sides; Ring imaging Cherenkov detector measures velocity of fast particles with extreme accuracy; Electromagnetic calorimeter measures the total energy of the particles. Scientific goals The AMS-02 uses the unique environment of space to advance knowledge of the Universe and lead to the understanding of its origin by searching for antimatter, dark matter and measuring cosmic rays. Antimatter Experimental evidence indicates that our galaxy is made of matter; however, scientists believe there are about 100–200 billion galaxies in the observable Universe and some versions of the Big Bang theory of the origin of the Universe require equal amounts of matter and antimatter. Theories that explain this apparent asymmetry violate other measurements. Whether or not there is significant antimatter is one of the fundamental questions of the origin and nature of the Universe. Any observations of an antihelium nucleus would provide evidence for the existence of antimatter in space. In 1999, AMS-01 established a new upper limit of 10^−6 for the antihelium/helium flux ratio in the Universe. AMS-02 was designed to search with a sensitivity of 10^−9, an improvement of three orders of magnitude over AMS-01, sufficient to reach the edge of the expanding Universe and resolve the issue definitively.
Dark matter The visible matter in the Universe, such as stars, adds up to less than 5 percent of the total mass that is known to exist from many other observations. The other 95 percent is dark, either dark matter, which is estimated at 20 percent of the Universe by weight, or dark energy, which makes up the balance. The exact nature of both still is unknown. One of the leading candidates for dark matter is the neutralino. If neutralinos exist, they should be colliding with each other and giving off an excess of charged particles that can be detected by AMS-02. Any peaks in the background positron, antiproton, or gamma ray flux could signal the presence of neutralinos or other dark matter candidates, but would need to be distinguished from poorly known confounding astrophysical signals. Strangelets Six types of quarks (up, down, strange, charm, bottom and top) have been found experimentally; however, the majority of matter on Earth is made up of only up and down quarks. It is a fundamental question whether there exists stable matter made up of strange quarks in combination with up and down quarks. Particles of such matter are known as strangelets. Strangelets might have extremely large mass and very small charge-to-mass ratios. It would be a totally new form of matter. AMS-02 may determine whether this extraordinary matter exists in our local environment. Space radiation environment Cosmic radiation during transit is a significant obstacle to sending humans to Mars. Accurate measurements of the cosmic ray environment are needed to plan appropriate countermeasures. Most cosmic ray studies are done by balloon-borne instruments with flight times that are measured in days; these studies have shown significant variations. AMS-02 operates on the ISS, gathering a large amount of accurate data and allowing measurements of the long term variation of the cosmic ray flux over a wide energy range, for nuclei from protons to iron. In addition to understanding the radiation protection required for astronauts during interplanetary flight, this data will allow the interstellar propagation and origins of cosmic rays to be identified. Results By late 2016, it was reported that AMS-02 had observed over 90 billion cosmic rays. In February 2013, Samuel Ting reported that in its first 18 months of operation AMS had recorded 25 billion particle events including nearly eight billion fast electrons and positrons. The AMS paper reported the positron-electron ratio in the energy range of 0.5 to 350 GeV, providing evidence about the weakly interacting massive particle (WIMP) model of dark matter. On March 30, 2013, the first results from the AMS experiment were announced by the CERN press office. The first physics results were published in Physical Review Letters on April 3, 2013. A total of 6.8×10^6 positron and electron events were collected in the energy range from 0.5 to 350 GeV. The positron fraction (of the total electron plus positron events) steadily increased from energies of 10 to 250 GeV, but the slope decreased by an order of magnitude above 20 GeV, even though the fraction of positrons still increased. There was no fine structure in the positron fraction spectrum, and no anisotropies were observed. The accompanying Physics Viewpoint said that "The first results from the space-borne Alpha Magnetic Spectrometer confirm an unexplained excess of high-energy positrons in Earth-bound cosmic rays."
These results are consistent with the positrons originating from the annihilation of dark matter particles in space, but not yet sufficiently conclusive to rule out other explanations. Ting said "Over the coming months, AMS will be able to tell us conclusively whether these positrons are a signal for dark matter, or whether they have some other origin." On September 18, 2014, new results with almost twice as much data were presented in a talk at CERN and published in Physical Review Letters. A new measurement of positron fraction up to 500 GeV was reported, showing that the positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The AMS team presented results over three days at CERN in April 2015, covering new data on 300 million proton events and helium flux. It revealed in December 2016 that it had discovered a few signals consistent with antihelium nuclei amidst several billion helium nuclei. The result remains to be verified, and the team is currently trying to rule out possible contamination. A study from 2019, using data from NASA's Fermi Gamma-ray Space Telescope, discovered a halo around the nearby pulsar Geminga. The accelerated electrons and positrons collide with nearby starlight. The collision boosts the light to much higher energies. Geminga alone could be responsible for as much as 20% of the high-energy positrons seen by the AMS-02 experiment. The AMS-02 on the ISS has, as of 2021, recorded eight events that seem to indicate the detection of antihelium-3. Over a twelve-year period aboard the ISS, the AMS has accumulated a dataset of more than 230 billion cosmic rays, spanning energies reaching multi-TeV levels. The precise measurements obtained by the magnetic spectrometer enable data presentation with an accuracy approaching ~1%. Particularly significant is the high-energy data regarding elementary particles such as electrons, positrons, protons, and antiprotons, which presents challenges to theoretical frameworks. Additionally, observations of nuclei and isotopes reveal energy dependencies that deviate from theoretical predictions. The extensive dataset collected by AMS necessitates a reevaluation of existing models of the cosmos, as discussed at the APS April meeting in 2024. See also List of space telescopes (Astronomical Space Observatories) Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) – an Italian-international cosmic ray mission launched in 2006 with similar goals Scientific research on the ISS References Further reading External links AMS Collaboration Homepage AMS Homepage at CERN, incl. construction diagrams AMS Homepage at the Johnson Space Center NASA AMS-02 Project Fact Sheet NASA AMS-02 Project Home Page with real-time cosmic ray count An animated movie of the STS-134 mission showing the installation of AMS-02 (72 MB) Alpha Magnetic Spectrometer – image collection – AMS-02 on Facebook A Costly Quest for the Dark Heart of the Cosmos (New York Times, 16 November 2010) Route To Space Alliance – European Transport for The Space and Aeronautic Industries Record for AMS-02 experiment on INSPIRE-HEP Record for AMS-01 experiment on INSPIRE-HEP Particle experiments Cosmic-ray experiments Components of the International Space Station International Space Station experiments Experiments for dark matter search Spacecraft launched in 2011 Space science experiments Piggyback mission CERN experiments
Alpha Magnetic Spectrometer
[ "Physics" ]
4,304
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
2,541,664
https://en.wikipedia.org/wiki/Equidistribution%20theorem
In mathematics, the equidistribution theorem is the statement that the sequence a, 2a, 3a, ... mod 1 is uniformly distributed on the circle $\mathbb{R}/\mathbb{Z}$, when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure $\mu = \frac{d\theta}{2\pi}$. History While this theorem was proved in 1909 and 1910 separately by Hermann Weyl, Wacław Sierpiński and Piers Bohl, variants of this theorem continue to be studied to this day. In 1916, Weyl proved that the sequence a, 2^2 a, 3^2 a, ... mod 1 is uniformly distributed on the unit interval. In 1937, Ivan Vinogradov proved that the sequence p_n a mod 1 is uniformly distributed, where p_n is the nth prime. Vinogradov's proof was a byproduct of the odd Goldbach conjecture, that every sufficiently large odd number is the sum of three primes. George Birkhoff, in 1931, and Aleksandr Khinchin, in 1933, proved that the generalization x + na, for almost all x, is equidistributed on any Lebesgue measurable subset of the unit interval. The corresponding generalizations for the Weyl and Vinogradov results were proven by Jean Bourgain in 1988. Specifically, Khinchin showed that the identity

$\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} f((x + ka) \bmod 1) = \int_0^1 f(y)\,dy$

holds for almost all x and any Lebesgue integrable function ƒ. In modern formulations, it is asked under what conditions the identity

$\lim_{n\to\infty} \frac{1}{n} \sum_{k=1}^{n} f((x + b_k a) \bmod 1) = \int_0^1 f(y)\,dy$

might hold, given some general sequence b_k. One noteworthy result is that the sequence 2^k a mod 1 is uniformly distributed for almost all, but not all, irrational a. Similarly, for the sequence b_k = 2^k, for every irrational a, and almost all x, there exists a function ƒ for which the sum diverges. In this sense, this sequence is considered to be a universally bad averaging sequence, as opposed to b_k = k, which is termed a universally good averaging sequence, because it does not have the latter shortcoming. A powerful general result is Weyl's criterion, which shows that equidistribution is equivalent to having a non-trivial estimate for the exponential sums formed with the sequence as exponents. For the case of multiples of a, Weyl's criterion reduces the problem to summing finite geometric series. See also Diophantine approximation Low-discrepancy sequence Dirichlet's approximation theorem Three-gap theorem References Historical references P. Bohl, (1909) Über ein in der Theorie der säkularen Störungen vorkommendes Problem, J. reine angew. Math. 135, pp. 189–283. W. Sierpinski, (1910) Sur la valeur asymptotique d'une certaine somme, Bull Intl. Acad. Polonaise des Sci. et des Lettres (Cracovie) series A, pp. 9–11. Modern references Joseph M. Rosenblatt and Máté Weirdl, Pointwise ergodic theorems via harmonic analysis, (1993) appearing in Ergodic Theory and its Connections with Harmonic Analysis, Proceedings of the 1993 Alexandria Conference, (1995) Karl E. Petersen and Ibrahim A. Salama, eds., Cambridge University Press, Cambridge. (An extensive survey of the ergodic properties of generalizations of the equidistribution theorem of shift maps on the unit interval. Focuses on methods developed by Bourgain.) Elias M. Stein and Rami Shakarchi, Fourier Analysis. An Introduction, (2003) Princeton University Press, pp. 105–113 (Proof of Weyl's theorem based on Fourier analysis) Ergodic theory Diophantine approximation Theorems in number theory
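Weyl's criterion lends itself to a quick numerical illustration. The sketch below, a minimal example of my own assuming only NumPy (the function name is illustrative, not standard), estimates the exponential averages (1/N) Σ_{k≤N} e^{2πi m k a} for the multiples of a; by the criterion these should tend to zero for every non-zero integer m when a is irrational, while for a rational a a suitable m keeps them near 1.

```python
import numpy as np

def weyl_sum(a: float, m: int, n_terms: int) -> complex:
    """Average of exp(2*pi*i*m*k*a) over k = 1..n_terms.

    By Weyl's criterion, (k*a mod 1) is equidistributed iff this
    average tends to 0 as n_terms grows, for every integer m != 0.
    """
    k = np.arange(1, n_terms + 1)
    return np.exp(2j * np.pi * m * k * a).mean()

if __name__ == "__main__":
    golden = (np.sqrt(5.0) - 1.0) / 2.0   # irrational: equidistributed
    rational = 3.0 / 7.0                  # rational: not equidistributed
    for n in (10**3, 10**4, 10**5):
        print(n,
              abs(weyl_sum(golden, 1, n)),    # shrinks toward 0
              abs(weyl_sum(rational, 7, n)))  # stays near 1 (m*k*a is an integer)
```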
Equidistribution theorem
[ "Mathematics" ]
780
[ "Mathematical theorems", "Ergodic theory", "Theorems in number theory", "Mathematical relations", "Mathematical problems", "Diophantine approximation", "Approximations", "Number theory", "Dynamical systems" ]
2,544,024
https://en.wikipedia.org/wiki/SH2%20domain
The SH2 (Src Homology 2) domain is a structurally conserved protein domain contained within the Src oncoprotein and in many other intracellular signal-transducing proteins. SH2 domains bind to phosphorylated tyrosine residues on other proteins, modifying the function or activity of the SH2-containing protein. The SH2 domain may be considered the prototypical modular protein-protein interaction domain, allowing the transmission of signals controlling a variety of cellular functions. SH2 domains are especially common in adaptor proteins that aid in the signal transduction of receptor tyrosine kinase pathways. Structure and interactions SH2 domains contain about 100 amino acid residues and exhibit a central antiparallel β-sheet flanked by two α-helices. Binding to phosphotyrosine-containing peptides involves a strictly conserved Arg residue that pairs with the negatively charged phosphate on the phosphotyrosine, and a surrounding pocket that recognizes flanking sequences on the target peptide. Compared to other signaling proteins, SH2 domains exhibit only a moderate degree of specificity for their target peptides, due to the relative weakness of the interactions with the flanking sequences. Over 100 human proteins are known to contain SH2 domains. A variety of tyrosine-containing sequences have been found to bind SH2 domains and are conserved across a wide range of organisms, performing similar functions. Binding of a phosphotyrosine-containing protein to an SH2 domain may lead to either activation or inactivation of the SH2-containing protein, depending on the types of interactions formed between the SH2 domain and other domains of the enzyme. Mutations that disrupt the structural stability of the SH2 domain, or that affect the binding of the phosphotyrosine peptide of the target, are involved in a range of diseases including X-linked agammaglobulinemia and severe combined immunodeficiency. Diversity SH2 domains are not present in yeast and appear at the boundary between protozoa and animalia in organisms such as the social amoeba Dictyostelium discoideum. A detailed bioinformatic examination of SH2 domains of human and mouse reveals 120 SH2 domains contained within 115 proteins encoded by the human genome, representing a rapid rate of evolutionary expansion among the SH2 domains. A large number of SH2 domain structures have been solved and many SH2 proteins have been knocked out in mice. Applications SH2 domains, and other binding domains, have been used in protein engineering to create protein assemblies. Protein assemblies are formed when several proteins bind to one another to create a larger structure (called a supramolecular assembly). Using molecular biology techniques, fusion proteins of specific enzymes and SH2 domains have been created, which can bind to each other to form protein assemblies. Since SH2 domains require phosphorylation in order for binding to occur, the use of kinase and phosphatase enzymes gives researchers control over whether protein assemblies will form or not. High-affinity engineered SH2 domains have been developed and utilized for protein assembly applications. The goal of most protein assembly formation is to increase the efficiency of metabolic pathways via enzymatic co-localization. Other applications of SH2 domain mediated protein assemblies have been in the formation of high density fractal-like structures, which have extensive molecular trapping properties.
Examples Human proteins containing this domain include: ABL1; ABL2 BCAR3; BLK; BLNK; BMX; BTK CHN2; CISH; CRK; CRKL; CSK DAPP1 FER; FES; FGR; FRK; FYN GRAP; GRAP2; GRB10; GRB14; GRB2; GRB7 HCK; HSH2D INPP5D; INPPL1; ITK; JAK2; LCK; LCP2; LYN MATK; NCK1; NCK2 PIK3R1; PIK3R2; PIK3R3; PLCG1; PLCG2; PTK6; PTPN11; PTPN6; RASA1 SH2B1; SH2B2; SH2B3; SH2D1A; SH2D1B; SH2D2A; SH2D3A; SH2D3C; SH2D4A; SH2D4B; SH2D5; SH2D6; SH3BP2; SHB; SHC1; SHC3; SHC4; SHD; SHE SLA; SLA2 SOCS1; SOCS2; SOCS3; SOCS4; SOCS5; SOCS6; SOCS7 SRC; SRMS STAT1; STAT2; STAT3; STAT4; STAT5A; STAT5B; STAT6 SUPT6H; SYK TEC; TENC1; TNS; TNS1; TNS3; TNS4; TXK VAV1; VAV2; VAV3 YES1; ZAP70 See also Phosphotyrosine-binding domains also bind phosphorylated tyrosines Anthony Pawson, discoverer of the SH2 Domain References External links SH2 Domain website created by lab of Dr. Piers Nash Protein domains Signal transduction Peripheral membrane proteins
SH2 domain
[ "Chemistry", "Biology" ]
1,119
[ "Protein classification", "Signal transduction", "Protein domains", "Biochemistry", "Neurochemistry" ]
2,544,098
https://en.wikipedia.org/wiki/Orientation%20%28geometry%29
In geometry, the orientation, attitude, bearing, direction, or angular position of an object – such as a line, plane or rigid body – is part of the description of how it is placed in the space it occupies. More specifically, it refers to the imaginary rotation that is needed to move the object from a reference placement to its current placement. A rotation may not be enough to reach the current placement, in which case it may be necessary to add an imaginary translation to change the object's position (or linear position). The position and orientation together fully describe how the object is placed in space. The above-mentioned imaginary rotation and translation may be thought to occur in any order, as the orientation of an object does not change when it translates, and its position does not change when it rotates. Euler's rotation theorem shows that in three dimensions any orientation can be reached with a single rotation around a fixed axis. This gives one common way of representing the orientation using an axis–angle representation. Other widely used methods include rotation quaternions, rotors, Euler angles, or rotation matrices. More specialist uses include Miller indices in crystallography, strike and dip in geology and grade on maps and signs. A unit vector may also be used to represent an object's normal vector direction or the relative direction between two points. Typically, the orientation is given relative to a frame of reference, usually specified by a Cartesian coordinate system. Mathematical representations Three dimensions In general, the position and orientation in space of a rigid body are defined as the position and orientation, relative to the main reference frame, of another reference frame, which is fixed relative to the body, and hence translates and rotates with it (the body's local reference frame, or local coordinate system). At least three independent values are needed to describe the orientation of this local frame. Three other values describe the position of a point on the object. All the points of the body change their position during a rotation except for those lying on the rotation axis. If the rigid body has rotational symmetry not all orientations are distinguishable, except by observing how the orientation evolves in time from a known starting orientation. For example, the orientation in space of a line, line segment, or vector can be specified with only two values, for example two direction cosines. Another example is the position of a point on the Earth, often described using the orientation of a line joining it with the Earth's center, measured using the two angles of longitude and latitude. Likewise, the orientation of a plane can be described with two values as well, for instance by specifying the orientation of a line normal to that plane, or by using the strike and dip angles. Further details about the mathematical methods to represent the orientation of rigid bodies and planes in three dimensions are given in the following sections. Two dimensions In two dimensions the orientation of any object (line, vector, or plane figure) is given by a single value: the angle through which it has rotated. There is only one degree of freedom and only one fixed point about which the rotation takes place. Multiple dimensions When there are n dimensions, the specification of an orientation of an object that does not have any rotational symmetry requires n(n − 1)/2 independent values.
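A quick sketch (a trivial Python illustration of the count just stated, with a function name of my own choosing) makes the dimension count concrete: the single angle in two dimensions and the three values (e.g. Euler angles) in three dimensions both fall out of the formula n(n − 1)/2.

```python
def orientation_dof(n: int) -> int:
    """Independent values needed to specify the orientation of a generic
    (rotationally non-symmetric) object in n dimensions: the dimension
    of the rotation group SO(n), i.e. n*(n-1)/2."""
    return n * (n - 1) // 2

# 2-D: one angle; 3-D: three values; 4-D: six.
print([orientation_dof(n) for n in (2, 3, 4)])  # [1, 3, 6]
```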
Rigid body in three dimensions Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections. Euler angles The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Tait–Bryan angles These are three angles, also known as yaw, pitch and roll, navigation angles, and Cardan angles. Mathematically they constitute a set of six possibilities inside the twelve possible sets of Euler angles, the ordering being the one best suited for describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles. Orientation vector Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed. Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and magnitude equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector. A similar method, called axis–angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle. Orientation matrix With the introduction of matrices, the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix. The above-mentioned Euler vector is an eigenvector of the rotation matrix (every three-dimensional rotation matrix has 1 as an eigenvalue, and the Euler vector is a corresponding eigenvector). The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe. The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × R^n. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation. Orientation quaternion Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. Compared to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions. Usage examples Rigid body The attitude of a rigid body is its orientation as described, for example, by the orientation of a frame fixed in the body relative to a fixed reference frame. The attitude is described by attitude coordinates, and consists of at least three coordinates.
One scheme for orienting a rigid body is based upon body-axes rotation; successive rotations three times about the axes of the body's fixed reference frame, thereby establishing the body's Euler angles. Another is based upon roll, pitch and yaw, although these terms also refer to incremental deviations from the nominal attitude. See also Angular displacement Attitude control Body relative direction Directional statistics Oriented area Plane of rotation Rotation formalisms in three dimensions Signed direction Terms of orientation Triad method References External links Euclidean geometry Rotation in three dimensions
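As a concrete illustration of the axis–angle, rotation matrix, and quaternion representations described above, here is a minimal sketch (a NumPy example of my own; the helper names are not from any standard library) that builds the rotation matrix via Rodrigues' rotation formula and the equivalent unit quaternion, then checks the eigenvector property noted earlier:

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: R = I + sin(t) K + (1 - cos(t)) K^2,
    where K is the skew-symmetric (cross-product) matrix of the unit axis."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def axis_angle_to_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) encoding the same rotation."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    return np.concatenate(([np.cos(angle / 2.0)], np.sin(angle / 2.0) * u))

if __name__ == "__main__":
    axis, angle = [0.0, 0.0, 1.0], np.pi / 3.0     # 60 degrees about z
    R = axis_angle_to_matrix(axis, angle)
    q = axis_angle_to_quaternion(axis, angle)
    # The axis is an eigenvector of R with eigenvalue 1, as noted above.
    print(np.allclose(R @ np.array(axis), axis))   # True
    print(np.allclose(np.linalg.det(R), 1.0))      # proper rotation
    print(q)                                       # approx [0.866, 0, 0, 0.5]
```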
Orientation (geometry)
[ "Physics", "Mathematics" ]
1,452
[ "Topology", "Space", "Geometry", "Spacetime", "Orientation (geometry)" ]
2,544,439
https://en.wikipedia.org/wiki/Coherent%20control
Coherent control is a quantum mechanics-based method for controlling dynamic processes by light. The basic principle is to control quantum interference phenomena, typically by shaping the phase of laser pulses. The basic ideas have proliferated, finding vast application in spectroscopy, mass spectrometry, quantum information processing, laser cooling, ultracold physics and more. Brief history The initial idea was to control the outcome of chemical reactions. Two approaches were pursued: in the time domain, a "pump-dump" scheme, where the control is the time delay between pulses; and in the frequency domain, interfering pathways controlled by one and three photons. The two basic methods eventually merged with the introduction of optimal control theory. Experimental realizations soon followed in the time domain and in the frequency domain. Two interlinked developments accelerated the field of coherent control. The first was experimental: the development of pulse shaping by a spatial light modulator and its employment in coherent control. The second was the idea of automatic feedback control and its experimental realization. Controllability Coherent control aims to steer a quantum system from an initial state to a target state via an external field. For given initial and final (target) states, the coherent control is termed state-to-state control. A generalization is steering simultaneously an arbitrary set of initial pure states to an arbitrary set of final states, i.e. controlling a unitary transformation. Such an application sets the foundation for a quantum gate operation. Controllability of a closed quantum system has been addressed by Tarn and Clark. Their theorem, based in control theory, states that for a finite-dimensional, closed quantum system, the system is completely controllable, i.e. an arbitrary unitary transformation of the system can be realized by an appropriate application of the controls, if the control operators and the unperturbed Hamiltonian generate the Lie algebra of all Hermitian operators. Complete controllability implies state-to-state controllability. The computational task of finding a control field for a particular state-to-state transformation is difficult and becomes more difficult with the increase in the size of the system. This task is in the class of hard inversion problems of high computational complexity. The algorithmic task of finding the field that generates a unitary transformation scales factorially with the size of the system. This is because a larger number of state-to-state control fields have to be found without interfering with the other control fields. It has been shown that solving general quantum optimal control problems is equivalent to solving Diophantine equations. It therefore follows from the negative answer to Hilbert's tenth problem that quantum optimal controllability is in general undecidable. Once constraints are imposed, controllability can be degraded. For example, what is the minimum time required to achieve a control objective? This is termed the "quantum speed limit". The speed limit can be calculated by quantizing Ulam's control conjecture. Constructive approach to coherent control The constructive approach uses a set of predetermined control fields for which the control outcome can be inferred. The pump-dump scheme in the time domain and the three-vs-one photon interference scheme in the frequency domain are prime examples. Another constructive approach is based on adiabatic ideas.
The most well-studied method is stimulated Raman adiabatic passage (STIRAP), which employs an auxiliary state to achieve complete state-to-state population transfer. One of the most prolific generic pulse shapes is the chirped pulse, a pulse whose frequency varies in time. Optimal control Optimal control as applied in coherent control seeks the optimal control field for steering a quantum system to its objective. For state-to-state control the objective is defined as the maximum overlap at the final time T with the state $|\psi_f\rangle$:

$J = |\langle \psi(T) | \psi_f \rangle|^2,$

where the initial state is $|\psi(0)\rangle = |\psi_i\rangle$. The time-dependent control Hamiltonian has the typical form:

$\hat{H}(t) = \hat{H}_0 + \epsilon(t)\,\hat{\mu},$

where $\epsilon(t)$ is the control field and $\hat{\mu}$ is the coupling (dipole) operator. Optimal control solves for the optimal field using the calculus of variations, introducing Lagrange multipliers. A new objective functional is defined:

$\bar{J} = J - \lambda \int_0^T |\epsilon(t)|^2 \, dt - 2\,\mathrm{Re} \int_0^T \left\langle \chi(t) \left| \, \partial_t + \tfrac{i}{\hbar}\hat{H}(t) \, \right| \psi(t) \right\rangle dt,$

where $\chi(t)$ is a wave-function-like Lagrange multiplier and the parameter $\lambda$ regulates the integral intensity of the field. Variation of $\bar{J}$ with respect to $\psi$ and $\chi$ leads to two coupled Schrödinger equations: a forward equation for $\psi(t)$ with initial condition $|\psi(0)\rangle = |\psi_i\rangle$, and a backward equation for the Lagrange multiplier $\chi(t)$ with final condition $|\chi(T)\rangle = |\psi_f\rangle\langle\psi_f|\psi(T)\rangle$. Finding a solution requires an iterative approach. Different algorithms have been applied for obtaining the control field, such as the Krotov method. A local-in-time alternative method has been developed, where at each time step the field is calculated to direct the state to the target. A related method has been called tracking. Experimental applications Some applications of coherent control are unimolecular and bimolecular chemical reactions, the biological photoisomerization of retinal, the field of nuclear magnetic resonance, the field of ultracold matter for photoassociation, quantum information processing, and attosecond physics. Another important issue is the spectral selectivity of two-photon resonant-excitation coherent control. A similar but non-resonant two-photon excitation from the 1s1s to the 1s3s state of the He atom was investigated with ab initio quantum mechanics as well. These concepts can be applied to single-pulse Raman spectroscopy and microscopy. As one of the cornerstones for enabling quantum technologies, optimal quantum control keeps evolving and expanding into areas as diverse as quantum-enhanced sensing, manipulation of single spins, photons, or atoms, optical spectroscopy, photochemistry, magnetic resonance (spectroscopy as well as medical imaging), quantum information processing, and quantum simulation. References Further reading Principles of the Quantum Control of Molecular Processes, by Moshe Shapiro, Paul Brumer, pp. 250, Wiley-VCH (2003). "Quantum Control of Molecular Processes", Moshe Shapiro and Paul Brumer, Wiley-VCH (2012). Rice, Stuart Alan, and Meishan Zhao. Optical Control of Molecular Dynamics. New York: John Wiley, 2000. d'Alessandro, Domenico. Introduction to Quantum Control and Dynamics. CRC Press, 2007. David J. Tannor, "Introduction to Quantum Mechanics: A Time-dependent Perspective", (University Science Books, Sausalito, 2007). Chemical reactions Quantum mechanics Control theory
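To make the state-to-state formulation concrete, here is a minimal numerical sketch of my own (an illustration under stated assumptions, not a published implementation): it maximizes the overlap J = |⟨ψ_f|ψ(T)⟩|² for a two-level system with H(t) = H0 + ε(t)μ, using a piecewise-constant field and crude finite-difference gradient ascent rather than the Krotov or local-tracking methods described above.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)   # coupling (dipole) operator
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
H0 = 0.5 * SZ                                    # drift Hamiltonian (hbar = 1)

def propagate(eps, dt):
    """Propagate |0> under H(t) = H0 + eps_k * SX, piecewise constant in t."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for e in eps:
        w, V = np.linalg.eigh(H0 + e * SX)       # exact 2x2 step propagator
        psi = (V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T) @ psi
    return psi

def fidelity(eps, dt, target):
    """J = |<target|psi(T)>|^2, the state-to-state objective."""
    return abs(np.vdot(target, propagate(eps, dt))) ** 2

T, n = 5.0, 20                                   # total time, field slices
dt = T / n
target = np.array([0.0, 1.0], dtype=complex)     # steer |0> to |1>
eps = 0.1 * np.ones(n)                           # initial guess for the field
step, h = 2.0, 1e-6
for _ in range(150):                             # crude gradient ascent on J
    grad = np.zeros(n)
    for k in range(n):
        d = np.zeros(n); d[k] = h
        grad[k] = (fidelity(eps + d, dt, target) -
                   fidelity(eps - d, dt, target)) / (2 * h)
    eps += step * grad
print("final fidelity:", fidelity(eps, dt, target))  # typically close to 1
```

Production methods such as Krotov or GRAPE replace the finite-difference gradient with analytic gradients from the coupled forward/backward equations above, which is what makes them tractable for larger systems.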
Coherent control
[ "Physics", "Chemistry", "Mathematics" ]
1,269
[ "Applied mathematics", "Control theory", "Theoretical physics", "Quantum mechanics", "nan", "Dynamical systems" ]
2,544,518
https://en.wikipedia.org/wiki/Capacitance%20multiplier
A capacitance multiplier is designed to make a capacitor function like a much larger capacitor. This can be achieved in at least two ways. An active circuit, using a device such as a transistor or operational amplifier. A passive circuit, using autotransformers; these are typically used for calibration standards. The General Radio / IET Labs 1417 is one such example. Capacitance multipliers make low-frequency filters and long-duration timing circuits possible that would be impractical with actual capacitors. Another application is in DC power supplies where very low ripple voltage (under load) is of paramount importance, such as in class-A amplifiers. Transistor-based Here the capacitance of capacitor C1 is multiplied by approximately the transistor's current gain (β). Without Q, R2 would be the load on the capacitor. With Q in place, the loading imposed upon C1 is simply the load current reduced by a factor of (β + 1). Consequently, C1 appears multiplied by a factor of (β + 1) when viewed by the load. Another way to look at this circuit is as an emitter follower in which capacitor C1 holds the base voltage constant; the load seen at the base is the input impedance of Q1, namely R2 multiplied by (1 + β), so the output is stabilized much more effectively against power-line voltage noise. Operational amplifier based Here, the capacitance of capacitor C1 is multiplied by the ratio of resistances: C = C1 × R1 / R2 at the Vi node. The synthesized capacitance also brings a series resistance approximately equal to R2, and a leakage current appears across the capacitance because of the input offsets of the op amp. These problems can be avoided by a circuit with two op amps. In this circuit the input to OP1 can be a.c.-coupled if necessary, and the capacitance can be made variable by making the ratio of R1 to R2 variable: C = C1 × (1 + (R2 / R1)). In the circuits described above the capacitance is grounded, but floating capacitance multipliers are possible. A negative capacitance multiplier can be created with a negative impedance converter. Autotransformer based These permit the synthesis of accurate values of large capacitance (e.g., 1 F) by multiplying the capacitance of a high-precision lower-value capacitor by the use of two transformers. Such a device functions as a reference standard, not as a general-purpose circuit element. The resulting device is a four-terminal element and cannot be used at dc. References IET Labs 1417 FOUR-TERMINAL CAPACITANCE STANDARD Electricity Electronic circuits
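The scaling relations above are easy to tabulate. Here is a small sketch (the helper function names are my own, purely illustrative) computing the effective capacitance of the transistor version, C1·(β + 1), and of the two op-amp versions, C1·R1/R2 and C1·(1 + R2/R1):

```python
def transistor_multiplier(c1_farads: float, beta: float) -> float:
    """Effective capacitance seen by the load: C1 multiplied by (beta + 1)."""
    return c1_farads * (beta + 1.0)

def opamp_multiplier(c1_farads: float, r1_ohms: float, r2_ohms: float) -> float:
    """Single op-amp version: C = C1 * R1 / R2 (with ~R2 series resistance)."""
    return c1_farads * r1_ohms / r2_ohms

def two_opamp_multiplier(c1_farads: float, r1_ohms: float, r2_ohms: float) -> float:
    """Two op-amp version: C = C1 * (1 + R2 / R1)."""
    return c1_farads * (1.0 + r2_ohms / r1_ohms)

# A 1 uF capacitor with beta = 100 behaves like ~101 uF for the load.
print(transistor_multiplier(1e-6, 100))          # 1.01e-04 F
# 100 nF scaled by R1/R2 = 1 Mohm / 1 kohm -> 100 uF.
print(opamp_multiplier(100e-9, 1e6, 1e3))        # 1e-04 F
print(two_opamp_multiplier(100e-9, 1e3, 999e3))  # 1e-04 F
```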
Capacitance multiplier
[ "Engineering" ]
593
[ "Electronic engineering", "Electronic circuits" ]
2,545,328
https://en.wikipedia.org/wiki/Fl%C3%A8che%20%28architecture%29
A flèche is the name given to spires in Gothic architecture. In French, the word is applied to any spire, but in English it has the technical meaning of a spirelet or spike on the rooftop of a building. In particular, the spirelets often built atop the crossings of major churches in mediaeval French Gothic architecture are called flèches. On the ridge of the roof on top of the crossing (the intersection of the nave and the transepts) of a church, flèches were typically light, delicate, timber-framed constructions with a metallic sheath of lead or copper. They are often richly decorated with architectural and sculptural embellishments: tracery, crockets, and miniature buttresses serve to adorn the flèche. Flèches are often very tall: the Gothic Revival spire of Notre-Dame de Paris (1858–2019) by Eugène Viollet-le-Duc stood until its destruction in the Notre-Dame de Paris fire, while the 16th-century flèche of Amiens Cathedral still stands. The highest flèche in the world was built at the end of the 19th century for Rouen Cathedral. A short spire or flèche surrounded by a parapet is common on churches in Hertfordshire; as a result, this type of flèche is called a Hertfordshire spike. See also Flèche faîtière Ridge turret Notes Architectural elements Church architecture
Flèche (architecture)
[ "Technology", "Engineering" ]
293
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
2,545,846
https://en.wikipedia.org/wiki/Enstrophy
In fluid dynamics, the enstrophy can be interpreted as another type of potential density; or, more concretely, the quantity directly related to the kinetic energy in the flow model that corresponds to dissipation effects in the fluid. It is particularly useful in the study of turbulent flows, and is often identified in the study of thrusters as well as in combustion theory and meteorology. Given a domain $\Omega \subseteq \mathbb{R}^n$ and a once-weakly differentiable vector field $\mathbf{u}$ which represents a fluid flow, such as a solution to the Navier-Stokes equations, its enstrophy is given by:

$\mathcal{E}(\mathbf{u}) := \int_\Omega |\nabla \mathbf{u}|^2 \, dx,$

where $|\nabla \mathbf{u}|^2 = \sum_{i,j=1}^{n} |\partial_i u_j|^2$. This quantity is the same as the squared seminorm $|\mathbf{u}|_{H^1(\Omega)}^2$ of the solution in the Sobolev space $H^1(\Omega)$. Incompressible flow In the case that the flow is incompressible, or equivalently that $\nabla \cdot \mathbf{u} = 0$, the enstrophy can be described as the integral of the square of the vorticity $\boldsymbol{\omega}$:

$\mathcal{E}(\boldsymbol{\omega}) = \int_\Omega |\boldsymbol{\omega}|^2 \, dx,$

or, in terms of the flow velocity:

$\mathcal{E}(\mathbf{u}) = \int_\Omega |\nabla \times \mathbf{u}|^2 \, dx.$

In the context of the incompressible Navier-Stokes equations, enstrophy appears in the following useful result:

$\frac{d}{dt}\left(\frac{1}{2}\int_\Omega |\mathbf{u}|^2 \, dx\right) = -\nu\,\mathcal{E}(\mathbf{u}).$

The quantity in parentheses on the left is the kinetic energy in the flow, so the result says that energy declines proportional to the kinematic viscosity $\nu$ times the enstrophy. See also Atmospheric circulation Turbulence References Further reading Continuum mechanics Fluid dynamics Turbulence
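For a discretely sampled velocity field the incompressible-flow definition can be evaluated directly. The sketch below is a minimal NumPy example of my own, under the stated assumptions of a uniform grid and a 2-D flow, where the vorticity reduces to the single component ω = ∂v/∂x − ∂u/∂y; it approximates the integral of ω² by finite differences and simple quadrature, and checks it against a flow whose enstrophy is known in closed form.

```python
import numpy as np

def enstrophy_2d(u, v, dx, dy):
    """Approximate the integral of omega^2 over the domain for a 2-D flow
    sampled on a uniform grid, with omega = dv/dx - du/dy."""
    dvdx = np.gradient(v, dx, axis=1)   # x varies along axis 1 (meshgrid 'xy')
    dudy = np.gradient(u, dy, axis=0)   # y varies along axis 0
    omega = dvdx - dudy
    return np.sum(omega**2) * dx * dy

# Taylor-Green-like vortex on [0, 2*pi]^2; it is divergence-free.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u = np.cos(X) * np.sin(Y)
v = -np.sin(X) * np.cos(Y)
dx = dy = x[1] - x[0]
# Here omega = -2*cos(X)*cos(Y), so the exact integral of omega^2 is 4*pi^2.
print(enstrophy_2d(u, v, dx, dy), 4.0 * np.pi**2)  # the two agree closely
```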
Enstrophy
[ "Physics", "Chemistry", "Engineering" ]
272
[ "Turbulence", "Continuum mechanics", "Chemical engineering", "Classical mechanics", "Piping", "Fluid dynamics" ]
2,546,032
https://en.wikipedia.org/wiki/Light%20of%20Saratoga
The Light of Saratoga is a legend located in the Big Thicket of Southeast Texas. This legend of a mysterious light is also known as the Ghost Road of Saratoga, the Saratoga Light, and the Bragg Light by local residents. Located on a dirt road, it is a light that may appear and disappear at random during the dark of night without explanation. Witness observations There are different beliefs as to what the ghostly light could be, such as swamp gas and similar natural occurrences. The most popular story surrounding this legend is that a railroad worker was decapitated in a railway accident, and the light is that of his lantern as his ghost searches endlessly for his head. Two similar phenomena are the Paulding Light in Michigan's Upper Peninsula just north of Watersmeet and the Maco light in south-eastern North Carolina. Coincidentally, the same story of a headless railroad conductor is also offered as the explanation for these mysterious lights. Geography The road is located in Texas between Beaumont and Livingston, approximately 16 miles west of Kountze, Texas. The dirt road runs north–south, starting at the south end at a bend on Farm-to-Market Road 787 that is 1.7 miles north of the intersection of FM 787-770, near Saratoga, and ending at the north end at Farm-to-Market Road 1293 near the ghost town of Bragg Station. The cause of the light has not been established. One common explanation is that the light is the result of car lights from a nearby highway; however, the light is usually seen when facing north, and the highway can only be seen while facing south. History In 1902, the Gulf, Colorado and Santa Fe Railway hacked a survey line from Bragg to Saratoga, bought right-of-way and opened the Big Thicket forest with a railroad, and the Saratoga train began its daily trips to Beaumont, carrying people, cattle, oil and logs. When the area's oil booms and virgin pine gave out, road crews pulled up the rails in 1934; the right-of-way was purchased by the county, and the tram road became a county road. Cities and villages Kountze, Texas Bragg, Texas (ghost town) See also Ghost light Gallery References External links Bragg Road - The Ghost Road of Hardin County Big Thicket Light - From the Handbook of Texas Online The Ghost Road of Hardin County by Jim King - Historical article with photo of the light and map of the area The Big Thicket Light - From www.texasescapes.com Saratoga Ghost Road - Bragg Road Ghost Light - From www.texasescapes.com Atmospheric ghost lights Weather lore Reportedly haunted locations in Texas American ghosts Texas culture UFO-related phenomena
Light of Saratoga
[ "Physics" ]
539
[ "Weather", "Physical phenomena", "Weather lore" ]
2,546,505
https://en.wikipedia.org/wiki/Mogul%20lamp
A mogul lamp or six-way lamp is a floor lamp which has a large center light bulb surrounded by three (or four) smaller bulbs that may be candelabra-style or standard medium-base bulbs, each mounted base-down. This entire setting is typically covered, at least partially, by a large cylindrical (or bell-shaped) fabric shade which is fitted over the reflector bowl, an upturned, white-colored glass, hemispherical diffuser surrounding the center bulb. The top of the lamp is usually designed to sit just above eye level of an average adult standing next to it, to avoid unpleasant glare from unshaded bulbs. Etymology The lamp is named after the Great Mogul. Details The bulb socket in the center has a larger diameter (an E39 or E40 mogul base) than a regular E26 or E27 Edison screw light socket, and is typically made of cast porcelain for the higher temperatures. Mogul-base lamps are available for industrial use in larger power ratings (250–1500 watts) and in halogen, mercury vapor, high-pressure sodium and metal-halide lamp configurations. Compact fluorescent mogul-base bulbs are also available, as are adaptors to allow medium-base bulbs to be used in mogul sockets. There are usually two three-way switches near the top of the floor lamp to operate the bulbs. One controls the three-way center bulb, and the other turns on one, two, or all three (or four) of the peripheral bulbs. The center bulb may be very high power (often a three-way, 100-200-300 watt bulb), whereas the others are usually 60 watts or less. Some models have a night light in the base operated by a foot switch. One model turns the current light settings on or off by moving the lamp pole up or down. This design allows sixteen different combinations of brightness to be obtained. The result is that one lamp can provide a very soft, diffuse glow, be quickly adjusted to illuminate an entire room, or anything in between. Popular in the 1920s and 1930s, mogul lamps can be obtained in thrift or antique stores and can still be purchased new. Mogul lamps and mathematics Mogul lamps are also the subject of a mathematics problem concerning the number of possible combinations of power that can be obtained. As it turns out, the name "six-way lamp" is somewhat deceiving, since there are in fact 16 possible combinations (without the night light), including combinations with all lamps of either switch off. The term probably comes from the fact that the design incorporates two "three-way" switches. See also Floor lamp Light fixtures
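The combinatorial claim is easy to verify by brute force. Here is a short sketch (assuming the switch model described above: a three-way center bulb with four states including off, and a peripheral switch lighting zero to three outer bulbs) that enumerates all settings:

```python
from itertools import product

# Switch states as described above: each switch has four positions.
center_states = ("off", "low", "medium", "high")   # three-way center bulb
peripheral_states = (0, 1, 2, 3)                   # number of outer bulbs lit

combinations = list(product(center_states, peripheral_states))
print(len(combinations))   # 16, including both switches fully off
for center, outer in combinations:
    print(f"center={center:6s} outer bulbs on={outer}")
```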
Mogul lamp
[ "Engineering" ]
549
[ "Design stubs", "Design" ]
18,490,588
https://en.wikipedia.org/wiki/Drill%20pipe
Drill pipe is hollow, thin-walled, steel or aluminium alloy piping that is used on drilling rigs. It is hollow to allow drilling fluid to be pumped down the hole through the bit and back up the annulus. It comes in a variety of sizes, strengths, and wall thicknesses, but is typically 27 to 32 feet in length (Range 2). Longer lengths, up to 45 feet, exist (Range 3). Background Drill stems must be designed to transfer drilling torque for combined lengths that often exceed several miles down into the Earth's crust, must be able to resist pressure differentials between inside and outside (or vice versa), and must have sufficient strength to suspend the total weight of deeper components. For deep wells this requires tempered steel tubes that are expensive, and owners spend considerable effort to reuse them after finishing a well. A used drill stem is inspected on site or off location. Ultrasonic testing and modified instruments similar to the spherometer are used at inspection sites to identify defects from metal fatigue, in order to preclude fracture of the drill stem during future wellboring. Drill pipe is most often considered premium class, which requires at least 80% remaining body wall (RBW). Once inspection determines that the RBW is below 80%, the pipe is considered to be Class 2 or "yellow band" pipe. Eventually the drill pipe will be graded as scrap and marked with a red band. Drill pipe is a portion of the overall drill string. The drill string consists of both drill pipe and the bottom hole assembly (BHA), which is the tubular portion closest to the bit. The BHA will be made of thicker walled heavy weight drill pipe (HWDP) and drill collars, which have a larger outside diameter and provide weight to the drill bit and stiffness to the drilling assembly. Other BHA components can include a mud motor, measurement while drilling (MWD) apparatus, stabilizers, and various specialty downhole tools. The drill stem includes the entire drill string, plus the kelly that imparts rotation and torque to the drill pipe at the top. See Drilling rig (petroleum) for a diagram of a drilling rig.
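The inspection grading described above is a simple threshold rule. A minimal sketch in Python; the premium/Class 2 cutoff at 80% RBW comes from the text, while the scrap cutoff is an invented placeholder, since the text gives no figure for it:

```python
def classify_drill_pipe(remaining_body_wall_pct: float) -> str:
    """Grade used drill pipe from its remaining body wall (RBW) percentage.

    Premium class keeps at least 80% RBW, per the convention described
    above; below that, the pipe drops to Class 2 ("yellow band").  The
    scrap cutoff is an illustrative assumption, not an API figure.
    """
    SCRAP_CUTOFF = 70.0  # assumed value, for illustration only
    if remaining_body_wall_pct >= 80.0:
        return "premium class"
    if remaining_body_wall_pct >= SCRAP_CUTOFF:
        return "Class 2 (yellow band)"
    return "scrap (red band)"

print(classify_drill_pipe(85.0))  # premium class
print(classify_drill_pipe(75.0))  # Class 2 (yellow band)
```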
Manufacturing process Modern drill pipe is made from the welding of at least three separate pieces: box tool joint, pin tool joint, and the tube. The green tubes are received by the drill pipe manufacturer from the steel mill. The ends of the tubes are then upset to increase the cross-sectional area of the ends. The tube end may be externally upset (EU), internally upset (IU), or internally and externally upset (IEU). Standard maximum upset dimensions are specified in API 5DP, but the exact dimensions of the upset are proprietary to the manufacturer. After upsetting, the tube goes through a heat treating process. Drill pipe steel is commonly quenched and tempered to achieve high yield strengths (135 ksi is a common tube yield strength). The tool joints (connectors) are also received by the manufacturer as green tubes. After a quench and temper heat treat, the tool joints are cut into box (female) and pin (male) threads. Tool joints are commonly 120 ksi Specified Minimum Yield Strength (SMYS), rather than the 135 ksi of the tube. They are generally stiffer than the tube, increasing the likelihood of fatigue failure at the junction; the lower SMYS on the connection increases the fatigue resistance. Higher strength steels are typically harder and more brittle, making them more susceptible to cracking and subsequent stress crack propagation. Tubes and tool joints are welded using rotary inertia or direct drive friction welding. The tube is held stationary while the tool joint is revolved at high RPM. The tool joint is then firmly pressed onto the upset end of the tube while it is rotating; the heat and force of this interaction weld the two together. Once the "ram horns", or excess material, are removed, the weld line can only be seen under a microscope. Inertia friction welding is the traditional, proven method. Direct drive friction welding is controlled and monitored up to 1,000 times a second, resulting in a fine quality weld that does not necessarily need a full heat treat quench and temper regime. References Drilling fluid Piping
Drill pipe
[ "Chemistry", "Engineering" ]
872
[ "Piping", "Chemical engineering", "Mechanical engineering", "Building engineering" ]
18,491,612
https://en.wikipedia.org/wiki/LP%20Aquarii
LP Aquarii is a pulsating variable star in the constellation of Aquarius that varies between magnitudes 6.30 and 6.64. The position of the star near the ecliptic means it is subject to lunar occultations. The star's variability was first detected in the Hipparcos satellite data, and it was given its variable star designation in 1999. References M-type giants Slow irregular variables Aquarius (constellation) Durchmusterung objects 214983 112078 Aquarii, LP
LP Aquarii
[ "Astronomy" ]
110
[ "Constellations", "Aquarius (constellation)" ]
18,492,176
https://en.wikipedia.org/wiki/Cryofixation
Cryofixation is a technique for fixation or stabilisation of biological materials as the first step in specimen preparation for electron microscopy and cryo-electron microscopy. Typical specimens for cryofixation include small samples of plant or animal tissue, cell suspensions of microorganisms or cultured cells, suspensions of viruses or virus capsids and samples of purified macromolecules, especially proteins. Types of cryofixation include freeze-drying, freeze-substitution and freeze-etching. Plunge freezing The method involves ultra-rapid cooling of small tissue or cell samples to the temperature of liquid nitrogen (−196 °C) or below, stopping all motion and metabolic activity and preserving the internal structure by freezing all fluid phases solid. Typically, a sample is plunged into liquid nitrogen or into liquid ethane or liquid propane in a container cooled by liquid nitrogen. The ultimate objective is to freeze the specimen so rapidly (at 10⁴ to 10⁶ K per second) that ice crystals are unable to form, or are prevented from growing big enough to cause damage to the specimen's ultrastructure. The formation of samples containing specimens in amorphous ice is the "holy grail" of biological cryomicroscopy. In practice, it is very difficult to achieve high enough cooling rates to produce amorphous ice in specimens more than a few micrometres in thickness. For this purpose, plunging a specimen into liquid nitrogen at its boiling point (−196 °C) does not always freeze the specimen fast enough, for several reasons. First, the liquid nitrogen boils rapidly around the specimen, forming a film of insulating gas that slows heat transfer to the cryogenic liquid; this is known as the Leidenfrost effect. Cooling rates can be improved by pumping the liquid nitrogen with a rotary vane vacuum pump for a few tens of seconds before plunging the specimen into it. This lowers the temperature of the liquid nitrogen below its boiling point, so that when the specimen is plunged into it, it envelops the specimen closely for a brief period of time and extracts heat from it more efficiently. Even faster cooling can be obtained by plunging specimens into liquid propane or ethane (ethane has been found to be more efficient) cooled very close to their melting points using liquid nitrogen, or by slamming the specimen against highly polished liquid nitrogen-cooled metal surfaces made of copper or silver. Second, two properties of water itself prevent rapid cryofixation in large specimens: the thermal conductivity of ice is very low compared with that of metals, and water releases its latent heat of fusion as it freezes, defeating rapid cooldown in specimens more than a few micrometres thick. High-pressure freezing High pressure helps prevent the formation of large ice crystals. Self-pressurized rapid freezing (SPRF), which can utilize many different cryogens, has recently been touted as an attractive and low-cost alternative to high-pressure freezing (HPF). Cold pressurized nitrogen substitutes for ethanol at temperatures of roughly 123 K. The warm ethanol is then expelled by the freezing LN2, most likely producing an ethanol–nitrogen mixture that gradually becomes colder. Freeze-drying Drying times are reduced by up to 30% with proper freeze-drying. See also Cryopreservation References Microscopy Scientific techniques
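The required cooling rates can be put in perspective with a rough lumped-capacitance estimate for a thin aqueous specimen plunged into liquid ethane. Every number below is an illustrative assumption, not a measured value:

```python
# Back-of-envelope estimate of the initial cooling rate of a thin aqueous
# specimen plunged into liquid ethane, using a lumped-capacitance model:
# rate = h * (T - T_cryogen) / (rho * cp * thickness), per unit area.
h = 1.0e4          # heat transfer coefficient, W/(m^2*K) -- assumed
thickness = 5e-6   # specimen half-thickness, m (a few micrometres)
rho = 1000.0       # density of water, kg/m^3
cp = 4200.0        # specific heat of water, J/(kg*K)
T_specimen = 300.0 # starting temperature, K
T_cryogen = 90.0   # liquid ethane near its melting point, K

rate = h * (T_specimen - T_cryogen) / (rho * cp * thickness)
print(f"initial cooling rate ~ {rate:.1e} K/s")  # ~1e5 K/s, inside the 10^4-10^6 window
```

With these assumed values the estimate lands around 10⁵ K per second, consistent with the range quoted above, and it also shows why thicker specimens (larger thickness in the denominator) cool too slowly to vitrify.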
Cryofixation
[ "Chemistry" ]
668
[ "Microscopy" ]
18,493,095
https://en.wikipedia.org/wiki/Open%20world
In video games, an open world is a virtual world in which the player can approach objectives freely, as opposed to a world with more linear and structured gameplay. Notable games in this category include The Legend of Zelda (1986), Grand Theft Auto V (2013), Red Dead Redemption 2 (2018) and Minecraft (2011). Games with open or free-roaming worlds typically lack level structures like walls and locked doors, or the invisible walls in more open areas that prevent the player from venturing beyond them; only at the bounds of an open-world game will players be limited by geographic features like vast oceans or impassable mountains. Players typically do not encounter loading screens common in linear level designs when moving about the game world, with the open-world game using strategic storage and memory techniques to load the game world dynamically and seamlessly. Open-world games still enforce many restrictions in the game environment, either because of absolute technical limitations or in-game limitations imposed by a game's linearity. While the openness of the game world is an important facet of games featuring open worlds, the main draw of open-world games is providing the player with autonomy: not so much the freedom to do anything they want in the game (which is nearly impossible with current computing technology), but the ability to choose how to approach the game and its challenges in the order and manner the player desires, while still being constrained by gameplay rules. Examples of a high level of autonomy in computer games can be found in massively multiplayer online role-playing games (MMORPGs) or in single-player games adhering to the open-world concept such as the Fallout series. The main appeal of open-world gameplay is that it provides a simulated reality and allows players to develop their character and its behavior in the direction and pace of their own choosing. In these cases, there is often no concrete goal or end to the game, although there may be a main storyline, as with games like The Elder Scrolls V: Skyrim. Gameplay and design An open world is a level or game designed as nonlinear, open areas with many ways to reach an objective. Some games are designed with both traditional and open-world levels. An open world facilitates greater exploration than a series of smaller levels, or a level with more linear challenges. Reviewers have judged the quality of an open world based on whether there are interesting ways for the player to interact with the broader level when they ignore their main objective. Some games use real settings to model an open world, such as New York City. A major design challenge is to balance the freedom of an open world with the structure of a dramatic storyline. Since players may perform actions that the game designer did not expect, the game's writers must find creative ways to impose a storyline on the player without interfering with their freedom. As such, games with open worlds will sometimes break the game's story into a series of missions, or have a much simpler storyline altogether. Other games instead offer side-missions to the player that do not disrupt the main storyline. Most open-world games make the character a blank slate that players can project their own thoughts onto, although several games such as Landstalker: The Treasures of King Nole offer more character development and dialogue.
Writing in 2005, David Braben described the narrative structure of current video games as "little different to the stories of those Harold Lloyd films of the 1920s", and considered genuinely open-ended stories to be the "Holy Grail" for the fifth generation of gaming. Gameplay designer Manveer Heir, who worked on Mass Effect 3 and Mass Effect Andromeda for Electronic Arts, said that there are difficulties in the design of an open-world game because it is hard to predict how players will approach solving the gameplay challenges a design offers, in contrast to a linear progression, and that this needs to be a factor in the game's development from the onset. Heir opined that some of the critical failings of Andromeda were due to the open world being added late in development. Some open-world games, to guide the player towards major story events, do not provide the world's entire map at the start of the game, but require the player to complete a task to obtain part of that map, often identifying missions and points of interest when they view the map. This has been derogatorily referred to as "Ubisoft towers", as this mechanic was promoted in Ubisoft's Assassin's Creed series (the player climbing a large tower to observe the landscape around it and identify waypoints nearby) and reused in other Ubisoft games, including Far Cry, Might & Magic X: Legacy and Watch Dogs. Other games that use this approach include Middle-earth: Shadow of Mordor, The Legend of Zelda: Breath of the Wild, and Marvel's Spider-Man. Rockstar games like GTA IV and the Red Dead Redemption series lock out sections of the map as "barricaded by law enforcement" until a specific point in the story has been reached. Games with open worlds typically give players infinite lives or continues, although some force the player to start from the beginning should they die too many times. There is also a risk that players may get lost as they explore an open world; thus designers sometimes try to break the open world into manageable sections. The scope of open-world games requires the developer to fully detail every possible section of the world the player may be able to access, unless methods like procedural generation are used. The design process, due to its scale, may leave numerous game world glitches, bugs, incomplete sections, or other irregularities that players may find and potentially take advantage of. The term "open world jank" has been applied to games where the incorporation of open-world gameplay elements is poor, incomplete, or unnecessary to the game itself, such that these glitches and bugs become more apparent, though they are generally not game-breaking; this was the case for No Man's Sky near its launch. Distinctions between open world and sandbox games The mechanics of open-world games often overlap with ideas of sandbox games, but these are considered different concepts. Whereas open world refers to the lack of limits on the player's exploration of the game's world, sandbox games are based on giving the player tools for creative freedom within the game to approach objectives, if such objectives are present. For example, Microsoft Flight Simulator is an open-world game, as one can fly anywhere within the mapped world, but it is not considered a sandbox game, as there are few creative aspects brought into the game.
Emergent gameplay The combination of open world and sandbox mechanics can lead towards emergent gameplay, complex reactions that emerge (either expectedly or unexpectedly) from the interaction of relatively simple game mechanics. According to Peter Molyneux, emergent gameplay appears wherever a game has a good simulation system that allows players to play in the world and have it respond realistically to their actions. It is what made SimCity and The Sims compelling to players. Similarly, being able to freely interact with the city's inhabitants in Grand Theft Auto added an extra dimension to the series. In recent years game designers have attempted to encourage emergent play by providing players with tools to expand games through their own actions. Examples include in-game web browsers in EVE Online and The Matrix Online; XML integration tools and programming languages in Second Life; shifting exchange rates in Entropia Universe; and the complex object-and-grammar system used to solve puzzles in Scribblenauts. Other examples of emergence include interactions between physics and artificial intelligence. One challenge that remains to be solved, however, is how to tell a compelling story using only emergent technology. In an op-ed piece for BBC News, David Braben, co-creator of Elite, called truly open-ended game design "The Holy Grail" of modern video gaming, citing games like Elite and the Grand Theft Auto series as early steps in that direction. Peter Molyneux has also stated that he believes emergence (or emergent gameplay) is where video game development is headed in the future. He has attempted to implement emergent gameplay to a great extent in some of his games, particularly Black & White and Fable. Procedural generation of open worlds Procedural generation refers to content generated algorithmically rather than manually, and is often used to generate game levels and other content. While procedural generation does not guarantee that a game or sequence of levels is nonlinear, it is an important factor in reducing game development time, and it makes it possible to generate larger, more or less unique, seamless game worlds on the fly while using fewer resources. This kind of procedural generation is known as worldbuilding, in which general rules are used to construct a believable world. Most 4X and roguelike games make use of procedural generation to some extent to generate game levels. SpeedTree is an example of a developer-oriented tool used in the development of The Elder Scrolls IV: Oblivion, aimed at speeding up the level design process. Procedural generation also made it possible for the developers of Elite, David Braben and Ian Bell, to fit the entire game (including thousands of planets, dozens of trade commodities, multiple ship types and a plausible economic system) into less than 22 kilobytes of memory. More recently, No Man's Sky procedurally generated over 18 quintillion planets, including flora, fauna, and other features that can be researched and explored.
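Both Elite's 22-kilobyte galaxy and No Man's Sky's quintillions of planets rest on the same trick: content is recomputed deterministically from a seed and coordinates rather than stored. The following Python sketch illustrates the idea only; the attribute names and value ranges are invented for the example and are not either game's actual scheme:

```python
import hashlib

def planet_at(galaxy_seed: int, x: int, y: int) -> dict:
    """Derive a planet's attributes deterministically from a seed and its
    coordinates, so nothing needs to be stored: revisiting the same
    coordinates always regenerates the identical planet."""
    digest = hashlib.sha256(f"{galaxy_seed}:{x}:{y}".encode()).digest()
    bits = int.from_bytes(digest[:8], "big")
    return {
        "radius_km": 2000 + bits % 10_000,                        # invented range
        "has_atmosphere": bool((bits >> 16) & 1),
        "economy": ["agrarian", "industrial", "high-tech"][(bits >> 24) % 3],
    }

# The same coordinates always yield the same world -- no storage required.
assert planet_at(42, 7, -3) == planet_at(42, 7, -3)
print(planet_at(42, 7, -3))
```

Because every query is a pure function of the seed, a world of effectively unlimited size costs only the code that generates it, which is how a full galaxy once fit in a few kilobytes.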
History 20th century There is no consensus on what the earliest open-world game is, due to differing definitions of how large or open a world needs to be. Inverse provides some early examples of games that established elements of the open world: Jet Rocket, a 1970 Sega electro-mechanical arcade game that, while not a video game, predated the flight simulator genre to give the player free-roaming capabilities, and dnd, a 1975 text-based adventure game for the PLATO system that offered non-linear gameplay. Ars Technica traces the concept back to the free-roaming exploration of the 1976 text adventure game Colossal Cave Adventure, which inspired the free-roaming exploration of Adventure (1980), but notes that it was not until 1984 that what "we now know as open-world gaming" took on a "definite shape" with the 1984 space simulator Elite, considered a pioneer of the open world; Gamasutra argues that its open-ended sandbox style is rooted in flight simulators, such as SubLOGIC's Flight Simulator (1979/80), noting most flight sims "offer a 'free flight' mode that allows players to simply pilot the aircraft and explore the virtual world". Others trace the concept back to the 1981 CRPG Ultima, which had a free-roaming overworld map inspired by the tabletop RPG Dungeons & Dragons. The overworld maps of the first five Ultima games, released up to 1988, lacked a single, unified scale, with towns and other places represented as icons; this style was adopted by the first three Dragon Quest games, released from 1986 to 1988 in Japan. Early examples of open-world gameplay in adventure games include The Portopia Serial Murder Case (1983) and The Lords of Midnight (1984), with open-world elements also found in The Hobbit (1982) and Valhalla (1983). The strategy video game The Seven Cities of Gold (1984) is also cited as an early open-world game, influencing Sid Meier's Pirates! (1987). Eurogamer also cites British computer games such as Ant Attack (1983) and Sabre Wulf (1984) as early examples. According to Game Informer's Kyle Hilliard, Hydlide (1984) and The Legend of Zelda (1986) were among the first open-world games, along with Ultima. IGN traces the roots of open-world game design to The Legend of Zelda, which it argues is "the first really good game based on exploration", while noting that it was anticipated by Hydlide, which it argues is "the first RPG that rewarded exploration". According to GameSpot, never "had a game so open-ended, nonlinear, and liberating been released for the mainstream market" before The Legend of Zelda. According to The Escapist, The Legend of Zelda was an early example of open-world, nonlinear gameplay, with an expansive and cohesive world, inspiring many games to adopt a similar open-world design. Mercenary (1985) has been cited as the first open-world 3D action-adventure game. There were also other open-world games in the 1980s, such as Back to Skool (1985), Turbo Esprit (1986) and Alternate Reality: The City (1985). Wasteland, released in 1988, is also considered an open-world game. The early 1990s saw open-world games such as The Terminator (1990), The Adventures of Robin Hood (1991), and Hunter (1991), which IGN describes as the first sandbox game to feature full 3D, third-person graphics, and which Ars Technica argues "has one of the strongest claims to the title of GTA forebear". Sierra On-Line's 1992 adventure game King's Quest VI has an open world; almost half of the quests are optional, many have multiple solutions, and players can solve most in any order. The Atari Jaguar launch title Cybermorph (1993) was notable for its open 3D polygonal world and non-linear gameplay. Quarantine (1994) is an example of an open-world driving game from this period, while Iron Soldier (1994) is an open-world mech game. The director of 1997's Blade Runner argues that that game was the first open-world three-dimensional action adventure game.
IGN considers Nintendo's Super Mario 64 (1996) revolutionary for its 3D open-ended free-roaming worlds, which had rarely been seen in 3D games before, along with its analog stick controls and camera control. Other 3D examples include Mystical Ninja Starring Goemon (1997), Ocarina of Time (1998), the DMA Design (Rockstar North) game Body Harvest (1998), the Angel Studios (Rockstar San Diego) games Midtown Madness (1999) and Midnight Club: Street Racing (2000), the Reflections Interactive (Ubisoft Reflections) game Driver (1999), and the Rareware games Banjo-Kazooie (1998), Donkey Kong 64 (1999), and Banjo-Tooie (2000). 1UP considers Sega's adventure Shenmue (1999) the originator of the "open city" subgenre, touted as a "FREE" ("Full Reactive Eyes Entertainment") game giving players the freedom to explore an expansive sandbox city with its own day-night cycles, changing weather, and fully voiced non-player characters going about their daily routines. The game's large interactive environments, wealth of options, level of detail and the scope of its urban sandbox exploration have been compared to later sandbox games like Grand Theft Auto III and its sequels, Sega's own Yakuza series, Fallout 3, and Deadly Premonition. 21st century Grand Theft Auto has had over 200 million sales. Creative director Gary Penn, who previously worked on Frontier: Elite II, cited Elite as a key influence, calling it "basically Elite in a city", and mentioned other team members being influenced by Syndicate and Mercenary. Grand Theft Auto III combined elements from previous games and fused them into a new immersive 3D experience that helped define open-world games for a new generation. Executive producer Sam Houser described it as "Zelda meets Goodfellas", while producer Dan Houser also cited The Legend of Zelda: Ocarina of Time and Super Mario 64 as influences. Radio stations had been implemented earlier in games such as Maxis' SimCopter (1996); the ability to beat or kill non-player characters dates back to games such as The Portopia Serial Murder Case (1983) and Valhalla (1983); and the way in which players run over pedestrians and get chased by police has been compared to Pac-Man (1980). After the release of Grand Theft Auto III, many games which employed a 3D open world, such as Ubisoft's Watch Dogs and Deep Silver's Saints Row series, were labeled, often disparagingly, as Grand Theft Auto clones, much as many early first-person shooters were called "Doom clones". Other examples include the World of Warcraft, The Elder Scrolls and Fallout series of games, which feature a large and diverse world, offering tasks and possibilities to play. In the Assassin's Creed series, which began in 2007, players explore historic open-world settings. These include the Holy Land during the Third Crusade in Assassin's Creed, Renaissance Italy in Assassin's Creed II and Brotherhood, Constantinople during the rise of the Ottoman Empire in Revelations, New England during the American Revolution in Assassin's Creed III, the Caribbean during the Golden Age of Piracy in Black Flag, the North Atlantic during the French and Indian War in Rogue, Paris during the French Revolution in Unity, London at the onset of the Second Industrial Revolution in Syndicate, Ptolemaic Egypt in Origins, Classical Greece during the Peloponnesian War in Odyssey, and Medieval England and Norway during the Viking Age in Valhalla. The series intertwines factual history with a fictional storyline.
In the fictional storyline, the Templars and the Assassins, two secret organisations inspired by their real-life counterparts, have been mortal enemies for all of known history. Their conflict stems from the Templars' desire to have peace through control, which directly contrasts with the Assassins' wish for peace with free will. Their fighting influences much of history, as the sides often back real historical forces. For example, during the American Revolution depicted in Assassin's Creed III, the Templars initially support the British, while the Assassins side with the American colonists. S.T.A.L.K.E.R.: Shadow of Chernobyl was developed by GSC Game World in 2007, followed by two other games, a prequel and a sequel. The free-roaming zone was divided into huge maps, or sectors, and the player can go from one sector to another, depending on required quests or just by choice. In 2011, Dan Ryckert of Game Informer wrote that open-world crime games were "a major force" in the gaming industry for the preceding decade. Another popular sandbox game is Minecraft, which has since become the best-selling video game of all time, selling over 238 million copies worldwide on multiple platforms by April 2021. Minecraft's procedurally generated overworlds cover a virtual 3.6 billion square kilometers. The Outerra Engine is a world rendering engine in development since 2008 that is capable of seamlessly rendering whole planets from space down to ground level. Anteworld is a world-building game and free tech demo of the Outerra Engine that builds upon real-world data to render planet Earth realistically on a true-to-life scale. No Man's Sky, released in 2016, is an open-world game set in a virtually infinite universe. According to the developers, through procedural generation the game is able to produce more than 18 quintillion (1.8×10¹⁹, or 18,000,000,000,000,000,000) planets to explore. Several critics found that the nature of the game can become repetitive and monotonous, with the survival gameplay elements being lackluster and tedious. Jake Swearingen in New York said that one can procedurally generate 18.6 quintillion unique planets, but one cannot procedurally generate 18.6 quintillion unique things to do. Updates have aimed to address these criticisms. In 2017, the open-world design of The Legend of Zelda: Breath of the Wild was described by critics as revolutionary, and by game developers as a paradigm shift. In contrast to the more structured approach of most open-world games, Breath of the Wild features a large and fully interactive world that is generally unstructured and rewards the exploration and manipulation of its world. Inspired by the original 1986 Legend of Zelda, the open world of Breath of the Wild integrates multiplicative gameplay, in which "objects react to the player's actions and the objects themselves also influence each other". Along with a physics engine, the game's open world also integrates a chemistry engine, "which governs the physical properties of certain objects and how they relate to each other", rewarding experimentation. Nintendo has described the game's approach to open-world design as "open-air". See also Nonlinear gameplay Persistent world References Further reading Michael Llewellyn: 15 Open World Games More Mature Than Grand Theft Auto V. thegamer.com. April 24, 2017. Video game terminology
Open world
[ "Technology" ]
4,315
[ "Computing terminology", "Video game terminology" ]
18,493,882
https://en.wikipedia.org/wiki/LY-404187
LY-404187 is an AMPA receptor positive allosteric modulator which was developed by Eli Lilly and Company. It is a member of the biarylpropylsulfonamide class of AMPA receptor potentiators. LY-404187 has been demonstrated to enhance cognitive function in animal studies, and has also shown effects suggesting antidepressant action, as well as possible application in the treatment of schizophrenia, Parkinson's disease and ADHD. These effects appear to be mediated through multiple mechanisms of action secondary to AMPA receptor potentiation, with a prominent effect seen in research being increased levels of BDNF in the brain. The compound may therefore be advanced to human trials, although Eli Lilly has developed a whole family of biarylpropylsulfonamide derivatives, and it is unclear at this stage which compound is most likely to be selected for further development. See also AMPA receptor positive allosteric modulator References AMPA receptor positive allosteric modulators Drugs developed by Eli Lilly and Company Nitriles Sulfonamides Experimental drugs Isopropyl compounds Biphenyls
LY-404187
[ "Chemistry" ]
234
[ "Nitriles", "Functional groups" ]
18,494,381
https://en.wikipedia.org/wiki/MSpot
mSpot Inc. is the developer of Samsung Music Hub, an all-in-one mobile music service that includes a streaming catalog, cloud music storage, radio, and a music store. The service was available for Samsung smart mobile devices in the U.S. and EU countries including the UK, France, Germany, Italy and Spain. mSpot became a wholly owned, independently operated subsidiary of Samsung Electronics in May 2012; in July 2013, mSpot employees became the Music Team for Samsung's Media Solutions Center. mSpot was founded in 2004 by Daren Tsui and Ed Ho, now CEO and CTO respectively. The company initially launched as a mobile radio service. In early 2005, Sprint was building the first 2.5G network and asked mSpot to provide one of the first mobile radio channels for the service. The service initially launched with 8 music channels and 5 talk channels (NPR, AccuWeather) and soon expanded to provide sports and entertainment channels, including premium content channels. Soon after, mSpot launched a white-labeled mobile entertainment platform offering music and video content, which was later licensed by other wireless carriers including AT&T and T-Mobile. In 2006, mSpot began offering full-length mobile movies streamed over wireless networks. Studios that first offered content for the service included Disney and Universal; Scarface was the first mobile movie offered on mSpot Movies, initially on Sprint. In early 2008, Island Def Jam partnered with mSpot to bring label-sponsored radio to mSpot Radio. In May 2010, mSpot Music became one of the first "cloud" music services available in the U.S., ahead of Google Music and similar services from Apple and others. References Mobile technology companies
MSpot
[ "Technology" ]
364
[ "Mobile technology companies" ]
10,457,720
https://en.wikipedia.org/wiki/Engine%E2%80%93generator
An engine–generator is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of equipment. This combination is also called an engine–generator set or a gen-set. In many contexts, the engine is taken for granted and the combined unit is simply called a generator. An engine–generator may be a fixed installation, part of a vehicle, or made small enough to be portable. Components In addition to the engine and generator, engine–generators generally include a fuel supply, a constant engine speed regulator (governor, in diesel units) and a generator voltage regulator, cooling and exhaust systems, and a lubrication system. Units larger than about 1 kW rating often have a battery and electric starter motor; very large units may start with compressed air supplied either to an air-driven starter motor or introduced directly to the engine cylinders to initiate engine rotation. Standby power generating units often include an automatic starting system and a transfer switch to disconnect the load from the utility power source when there is a power failure and connect it to the generator. Types Engine–generators are available in a wide range of power ratings. These include small, hand-portable units that can supply several hundred watts of power, hand-cart mounted units that can supply several thousand watts, and stationary or trailer-mounted units that can supply over a million watts. Regardless of the size, generators may run on gasoline, diesel, natural gas, propane, bio-diesel, water, sewage gas or hydrogen. Most of the smaller units are built to use gasoline (petrol) as a fuel, and the larger ones have various fuel types, including diesel, natural gas and propane (liquid or gas). Some engines may also operate on diesel and gas simultaneously (bi-fuel operation). Engines Many engine–generators use a reciprocating engine running on one of the fuels mentioned above. This can be a steam engine, such as most coal-powered fossil-fuel power plants use. Some engine–generators use a turbine as the engine, such as the industrial gas turbines used in peaking power plants and the microturbines used in some hybrid electric buses. The generator voltage (volts), frequency (Hz) and power (watts) ratings are selected to suit the load that will be connected. Portable engine–generators may require an external power conditioner to safely operate some types of electronic equipment. Engine-driven generators fueled by natural gas often form the heart of small-scale (less than 1,000 kW) combined heat and power installations. Three phase There are only a few portable three-phase generator models available in the US. Most of the portable units available are single-phase generators, and most of the three-phase generators manufactured are large industrial-type generators. In other countries, where three-phase power is more common in households, portable generators are available from a few kW upwards. Inverter generator Small portable generators may use an inverter. Inverter models can run at slower RPM to generate only the power that is necessary, reducing the noise of the engine and making it more fuel-efficient. Inverter generators are best suited to powering sensitive electronic devices such as computers and lights that use a ballast, as they have low total harmonic distortion. Since the load on the electric generator causes the speed of the engine to fall, this has an adverse effect on the frequency and voltage of the electrical output.
By using an electronic inverter to produce the required AC output, the voltage and frequency can be kept stable over the power range of the generator. Another advantage is that the generated electric power from the engine-driven generator can be a polyphase output at a higher frequency and with a waveform more suitable for rectification to produce the DC that feeds the inverter. This reduces the weight and size of the unit. A typical modern inverter–generator produces 3 kVA and weighs c. 26 kg, making it convenient for handling by one person. Mid-size stationary engine–generator An example of a mid-size stationary engine–generator is a 100 kVA set which produces 415 V at around 110 A. It is powered by a 6.7-liter turbocharged Perkins Phaser 1000 Series engine, and consumes approximately 27 liters of fuel an hour from a 400-liter tank. Diesel engines in the UK can run on red diesel and rotate at 1,500 or 3,000 rpm. This produces power at 50 Hz, which is the frequency used in Europe. In regions where the frequency is 60 Hz, such as North America, generators rotate at 1,800 rpm or another divisor of 3,600. Diesel engine–generator sets operated at their peak efficiency point can produce between 3 and 4 kilowatt hours of electrical energy for each liter of diesel fuel consumed, with lower efficiency at partial loads. Large scale generator sets Many generators produce enough kilowatts to power anything from a business to a full-sized hospital. These units are particularly useful in providing backup power solutions for companies which face serious economic costs from a shutdown caused by an unplanned power outage. For example, a hospital is in constant need of electricity, because several life-preserving medical devices run on electricity, like ventilators. A very common use is a railway diesel electric locomotive, some units having over . Large generators are also used on board ships that utilize a diesel-electric powertrain. Voltages and frequencies may vary in different installations. Applications Engine–generators are used to provide electrical power in areas where utility (central station) electricity is unavailable, or where electricity is only needed temporarily. Small generators are sometimes used to provide electricity to power tools at construction sites. Trailer-mounted generators supply temporary installations of lighting, sound amplification systems, amusement rides, etc. A wattage chart can be used to calculate the estimated power usage for different types of equipment to determine how many watts are necessary for a portable generator; a sketch of this calculation follows below. Trailer-mounted or mobile diesel generators are also used for emergencies or backup, where either a redundant system is required or no generator is on-site. To make the hookup faster and safer, a tie-in panel is frequently installed near the building switchgear that contains connectors such as camlocks. The tie-in panel may also contain a phase rotation indicator (for 3-phase systems) and a circuit breaker. Camlock connectors are rated for 400 amps up to 480-volt systems and used with 4/0 type W cable connecting to the generator. Tie-in panel designs are common between 200- and 3000-amp applications.
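The wattage-chart sizing mentioned above amounts to simple arithmetic: add up the running watts of everything that must operate at once, then allow for the largest single surge (starting) load. A minimal sketch with invented appliance figures, not manufacturer ratings:

```python
# Rough portable-generator sizing from a wattage chart.
appliances = {
    # name: (running watts, extra surge watts at startup) -- illustrative values
    "refrigerator": (700, 1500),
    "sump pump":    (800, 1200),
    "lights":       (300, 0),
    "furnace fan":  (600, 1400),
}

running_total = sum(run for run, _ in appliances.values())
largest_surge = max(surge for _, surge in appliances.values())

# Size for all loads running plus the single largest motor starting,
# with ~20% headroom as a safety margin (an assumed rule of thumb).
recommended_watts = (running_total + largest_surge) * 1.2
print(f"running load: {running_total} W")
print(f"recommended generator size: {recommended_watts:.0f} W")
```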
Standby electrical generators are permanently installed and used to immediately provide electricity to critical loads during temporary interruptions of the utility power supply. Hospitals, communications service installations, data processing centers, sewage pumping stations, and many other important facilities are equipped with standby power generators. Some standby power generators can automatically detect the loss of grid power, start the motor, run using fuel from a natural gas line, detect when grid power is restored, and then turn themselves off, with no human interaction. Privately owned generators are especially popular in areas where grid power is undependable or unavailable. Trailer-mounted generators can be towed to disaster areas where grid power has been temporarily disrupted. Safety Every year, incorrectly used portable generators result in deaths from carbon monoxide poisoning. A 5.5 kW portable generator will generate the same amount of carbon monoxide as six cars, which can quickly build up to fatal levels if the generator has been placed indoors. Using portable generators in garages, or near open windows or air conditioning vents, can also result in carbon monoxide poisoning. Additionally, it is important to prevent backfeeding when using a portable engine generator, which can harm utility workers or people in other buildings. Before turning on a diesel- or gasoline-powered generator, users should make sure that the main breaker is in the "off" position, to ensure that the electric current does not reverse. Exhausting extremely hot flue gases from gen-sets can be done by factory-built positive pressure chimneys (certified to the UL 103 test standard) or general utility schedule 40 black iron pipe. It is recommended to use insulation to reduce pipe skin temperature and reduce excessive heat gain into the mechanical room. There are also excess pressure relief valves available to relieve the pressure from potential backfires and to maintain the integrity of the exhaust pipe. See also Diesel electric locomotive Diesel electric multiple unit Diesel generator Electric generator Fuel cell Head end power Motor–generator Standby generator Stationary engine References External links CDC: Electrical Safety and Generators Electrical generators Engines
Engine–generator
[ "Physics", "Technology" ]
1,743
[ "Physical systems", "Electrical generators", "Machines", "Engines" ]
10,463,746
https://en.wikipedia.org/wiki/IPC%20%28electronics%29
IPC is a trade association whose aim is to standardize the assembly and production requirements of electronic equipment and assemblies. IPC is headquartered in Bannockburn, Illinois, United States, with additional offices in Washington, D.C.; Atlanta, Ga.; and Miami, Fla. in the United States, and overseas offices in China, Japan, Thailand, India, Germany, and Belgium. IPC is accredited by the American National Standards Institute (ANSI) as a standards developing organization and is known globally for its standards. It publishes the most widely used acceptability standards in the electronics industry. History It was founded in 1957 as the Institute of Printed Circuits. Its name was later changed to the Institute for Interconnecting and Packaging Electronic Circuits to highlight the expansion from bare boards to packaging and electronic assemblies. In 1999, the organization formally changed its name to IPC with the accompanying tagline, Association Connecting Electronics Industries. Standards IPC standards are used by the electronics manufacturing industry. IPC-A-610, Acceptability of Electronic Assemblies, is used worldwide by original equipment manufacturers and EMS companies. There are more than 3,600 trainers worldwide who are certified to train and test on the standard. Standards are created by committees of industry volunteers. Task groups have been formed in China, the United States, and Denmark. Standards published by IPC include:
General documents
IPC-T-50 Terms and Definitions
IPC-2615 Printed Board Dimensions and Tolerances
IPC-D-325 Documentation Requirements for Printed Boards
IPC-A-31 Flexible Raw Material Test Pattern
IPC-ET-652 Guidelines and Requirements for Electrical Testing of Unpopulated Printed Boards
Design specifications
IPC-2612 Sectional Requirements for Electronic Diagramming Documentation (Schematic and Logic Descriptions)
IPC-2141A Design Guide for High-Speed Controlled Impedance Circuit Boards
IPC-2221 Generic Standard on Printed Board Design
IPC-2223 Sectional Design Standard for Flexible Printed Boards
IPC-2251 Design Guide for the Packaging of High Speed Electronic Circuits
IPC-7351B Generic Requirements for Surface Mount Design and Land Pattern Standards
Material specifications
IPC-FC-234 Pressure Sensitive Adhesives Assembly Guidelines for Single-Sided and Double-Sided Flexible Printed Circuits
IPC-4562 Metal Foil for Printed Wiring Applications
IPC-4101 Laminate Prepreg Materials Standard for Printed Boards
IPC-4202 Flexible Base Dielectrics for Use in Flexible Printed Circuitry
IPC-4203 Adhesive Coated Dielectric Films for Use as Cover Sheets for Flexible Printed Circuitry and Flexible Adhesive Bonding Films
IPC-4204 Flexible Metal-Clad Dielectrics for Use in Fabrication of Flexible Printed Circuitry
Performance and inspection documents
IPC-A-600 Acceptability of Printed Boards
IPC-A-610 Acceptability of Electronic Assemblies
IPC-6011 Generic Performance Specification for Printed Boards
IPC-6012 Qualification and Performance Specification for Rigid Printed Boards
IPC-6013 Specification for Printed Wiring, Flexible and Rigid-Flex
IPC-6018 Qualification and Performance Specification for High Frequency (Microwave) Printed Boards
IPC-6202 IPC/JPCA Performance Guide Manual for Single- and Double-Sided Flexible Printed Wiring Boards
PAS-62123 Performance Guide Manual for Single & Double Sided Flexible Printed Wiring Boards
IPC-TF-870 Qualification and Performance of Polymer Thick Film Printed Boards
Flex assembly and materials standards
IPC-FA-251 Assembly Guidelines for Single and Double Sided Flexible Printed Circuits
IPC-3406 Guidelines for Electrically Conductive Surface Mount Adhesives
IPC-3408 General Requirements for Anisotropically Conductive Adhesive Films
Market research IPC members are eligible to participate in IPC's statistical programs, which provide free monthly or quarterly reports for specific industry and product markets. Statistical programs cover the electronics manufacturing services (EMS), printed circuit board (PCB), laminate, process consumables, solder and assembly equipment segments. Annual reports are distributed for the EMS and PCB segments, covering market size and sales growth, with breakdowns by product type and product mix as well as revenue trends from value-added services, trends in materials, financial metrics, and forecasts for total production in the Americas and the world. Monthly market reports for the EMS and PCB segments provide recent data on market size, sales and order growth, book-to-bill ratios and near-term forecasts. Notes External links Trade associations based in the United States Standards organizations Printed circuit board manufacturing 1957 establishments in Illinois
IPC (electronics)
[ "Engineering" ]
925
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
10,465,001
https://en.wikipedia.org/wiki/Eigenvalue%20perturbation
In mathematics, an eigenvalue perturbation problem is that of finding the eigenvectors and eigenvalues of a system $Ax = \lambda x$ that is perturbed from one with known eigenvectors and eigenvalues $A_0 x_0 = \lambda_0 x_0$. This is useful for studying how sensitive the original system's eigenvectors and eigenvalues are to changes in the system. This type of analysis was popularized by Lord Rayleigh, in his investigation of harmonic vibrations of a string perturbed by small inhomogeneities. The derivations in this article are essentially self-contained and can be found in many texts on numerical linear algebra or numerical functional analysis. This article is focused on the case of the perturbation of a simple eigenvalue (see multiplicity of eigenvalues). Why generalized eigenvalues? In the entry applications of eigenvalues and eigenvectors we find numerous scientific fields in which eigenvalues are used to obtain solutions. Generalized eigenvalue problems are less widespread but are a key in the study of vibrations. They are useful when we use the Galerkin method or Rayleigh-Ritz method to find approximate solutions of partial differential equations modeling vibrations of structures such as strings and plates; the paper of Courant (1943) is fundamental. The Finite element method is a widespread particular case. In classical mechanics, generalized eigenvalues may crop up when we look for vibrations of multiple degrees of freedom systems close to equilibrium; the kinetic energy provides the mass matrix $M$, the potential strain energy provides the rigidity matrix $K$. For further details, see the first section of this article of Weinstein (1941, in French). With both methods, we obtain a system of differential equations or matrix differential equation $M\ddot{x} + B\dot{x} + Kx = 0$ with the mass matrix $M$, the damping matrix $B$ and the rigidity matrix $K$. If we neglect the damping effect, we use $B = 0$; we can look for a solution of the form $x = e^{i\omega t} u$; we obtain that $u$ and $\omega^2$ are solution of the generalized eigenvalue problem $K u = \omega^2 M u$. Setting of perturbation for a generalized eigenvalue problem Suppose we have solutions to the generalized eigenvalue problem $K_0 x_{0i} = \lambda_{0i} M_0 x_{0i} \quad (i = 1, \dots, N), \qquad (0)$ where $K_0$ and $M_0$ are matrices. That is, we know the eigenvalues $\lambda_{0i}$ and eigenvectors $x_{0i}$ for $i = 1, \dots, N$. It is also required that the eigenvalues are distinct. Now suppose we want to change the matrices by a small amount. That is, we want to find the eigenvalues and eigenvectors of $K x_i = \lambda_i M x_i \qquad (1)$ where $K = K_0 + \delta K$ and $M = M_0 + \delta M$, with the perturbations $\delta K$ and $\delta M$ much smaller than $K_0$ and $M_0$ respectively. Then we expect the new eigenvalues and eigenvectors to be similar to the original, plus small perturbations: $\lambda_i = \lambda_{0i} + \delta\lambda_i$, $x_i = x_{0i} + \delta x_i$. Steps We assume that the matrices are symmetric and positive definite, and assume we have scaled the eigenvectors such that $x_{0j}^\top M_0 x_{0i} = \delta_{ij}, \qquad (2)$ where $\delta_{ij}$ is the Kronecker delta. Now we want to solve equation (1). In this article we restrict the study to first order perturbation. First order expansion of the equation Substituting in (1), we get $(K_0 + \delta K)(x_{0i} + \delta x_i) = (\lambda_{0i} + \delta\lambda_i)(M_0 + \delta M)(x_{0i} + \delta x_i),$ which expands to $K_0 x_{0i} + \delta K\, x_{0i} + K_0\,\delta x_i + \delta K\,\delta x_i = \lambda_{0i} M_0 x_{0i} + \lambda_{0i} M_0\,\delta x_i + \lambda_{0i}\,\delta M\, x_{0i} + \delta\lambda_i\, M_0 x_{0i} + \text{higher order terms}.$ Canceling from (0) ($K_0 x_{0i} = \lambda_{0i} M_0 x_{0i}$) leaves $\delta K\, x_{0i} + K_0\,\delta x_i + \delta K\,\delta x_i = \lambda_{0i} M_0\,\delta x_i + \lambda_{0i}\,\delta M\, x_{0i} + \delta\lambda_i\, M_0 x_{0i} + \text{higher order terms}.$ Removing the higher-order terms, this simplifies to $K_0\,\delta x_i + \delta K\, x_{0i} = \lambda_{0i} M_0\,\delta x_i + \lambda_{0i}\,\delta M\, x_{0i} + \delta\lambda_i\, M_0 x_{0i}. \qquad (3)$ In other words, $\delta\lambda_i$ no longer denotes the exact variation of the eigenvalue but its first order approximation. As the matrix is symmetric, the unperturbed eigenvectors are $M_0$-orthogonal and so we use them as a basis for the perturbed eigenvectors. That is, we want to construct $\delta x_i = \sum_{j=1}^N \varepsilon_{ij}\, x_{0j} \qquad (4)$ with $\varepsilon_{ij} \ll 1$, where the $\varepsilon_{ij}$ are small constants that are to be determined. In the same way, substituting in (2), and removing higher order terms, we get $\delta x_j^\top M_0 x_{0i} + x_{0j}^\top\,\delta M\, x_{0i} + x_{0j}^\top M_0\,\delta x_i = 0. \qquad (5)$ The derivation can go on with two forks.
First fork: get first eigenvalue perturbation Eigenvalue perturbation We start with (3); we left multiply with $x_{0i}^\top$ and use (2) as well as its first order variation (5); we get $x_{0i}^\top\,\delta K\, x_{0i} = \lambda_{0i}\, x_{0i}^\top\,\delta M\, x_{0i} + \delta\lambda_i$ or $\delta\lambda_i = x_{0i}^\top\,\delta K\, x_{0i} - \lambda_{0i}\, x_{0i}^\top\,\delta M\, x_{0i}.$ We notice that it is the first order perturbation of the generalized Rayleigh quotient with fixed $x_{0i}$: $R(K, M; x_{0i}) = x_{0i}^\top K x_{0i} \,/\, (x_{0i}^\top M x_{0i})$. Moreover, for $M = I$, the formula $\delta\lambda_i = x_{0i}^\top\,\delta K\, x_{0i}$ should be compared with the Bauer–Fike theorem, which provides a bound for eigenvalue perturbation. Eigenvector perturbation We left multiply (3) with $x_{0j}^\top$ for $j \neq i$ and get $\lambda_{0j}\, x_{0j}^\top M_0\,\delta x_i + x_{0j}^\top\,\delta K\, x_{0i} = \lambda_{0i}\, x_{0j}^\top M_0\,\delta x_i + \lambda_{0i}\, x_{0j}^\top\,\delta M\, x_{0i}.$ We use $x_{0j}^\top K_0 = \lambda_{0j}\, x_{0j}^\top M_0$ and $x_{0j}^\top M_0\,\delta x_i = \varepsilon_{ij}$ for $j \neq i$, or $(\lambda_{0i} - \lambda_{0j})\,\varepsilon_{ij} = x_{0j}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}.$ As the eigenvalues are assumed to be simple, $\lambda_{0i} \neq \lambda_{0j}$ for $j \neq i$, so $\varepsilon_{ij} = \frac{x_{0j}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}, \quad j \neq i.$ Moreover (5) (the first order variation of (2)) yields $\varepsilon_{ii} = x_{0i}^\top M_0\,\delta x_i = -\tfrac12\, x_{0i}^\top\,\delta M\, x_{0i}.$ We have obtained all the components of $\delta x_i$. Second fork: Straightforward manipulations Substituting (4) into (3) and rearranging gives $\sum_{j=1}^N \varepsilon_{ij}\,(K_0 - \lambda_{0i} M_0)\, x_{0j} = \delta\lambda_i\, M_0 x_{0i} + \lambda_{0i}\,\delta M\, x_{0i} - \delta K\, x_{0i}. \qquad (6)$ Because the eigenvectors are $M_0$-orthogonal when $M_0$ is positive definite, we can remove the summations by left-multiplying by $x_{0i}^\top$: $\varepsilon_{ii}\, x_{0i}^\top K_0 x_{0i} - \varepsilon_{ii}\,\lambda_{0i}\, x_{0i}^\top M_0 x_{0i} = \delta\lambda_i\, x_{0i}^\top M_0 x_{0i} + \lambda_{0i}\, x_{0i}^\top\,\delta M\, x_{0i} - x_{0i}^\top\,\delta K\, x_{0i}.$ The two terms containing $\varepsilon_{ii}$ are equal because left-multiplying (0) by $x_{0i}^\top$ gives $x_{0i}^\top K_0 x_{0i} = \lambda_{0i}\, x_{0i}^\top M_0 x_{0i}.$ Canceling those terms in (6) leaves $\delta\lambda_i\, x_{0i}^\top M_0 x_{0i} = x_{0i}^\top\,\delta K\, x_{0i} - \lambda_{0i}\, x_{0i}^\top\,\delta M\, x_{0i}.$ Rearranging gives $\delta\lambda_i = \frac{x_{0i}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}}{x_{0i}^\top M_0 x_{0i}}.$ But by (2), this denominator is equal to 1. Thus $\delta\lambda_i = x_{0i}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}.$ Then, as $\lambda_{0i} \neq \lambda_{0j}$ for $i \neq j$ (assumption: simple eigenvalues), by left-multiplying equation (6) by $x_{0j}^\top$: $\varepsilon_{ij}\,(\lambda_{0j} - \lambda_{0i}) = \lambda_{0i}\, x_{0j}^\top\,\delta M\, x_{0i} - x_{0j}^\top\,\delta K\, x_{0i}.$ Or by changing the name of the indices: $\varepsilon_{ij} = \frac{x_{0j}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}, \quad j \neq i.$ To find $\varepsilon_{ii}$, use the fact that $x_i^\top M x_i = 1$ implies $\varepsilon_{ii} = -\tfrac12\, x_{0i}^\top\,\delta M\, x_{0i}.$ Summary of the first order perturbation result In the case where all the matrices are Hermitian positive definite and all the eigenvalues are distinct, $\lambda_i = \lambda_{0i} + x_{0i}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}$ and $x_i = x_{0i}\bigl(1 - \tfrac12\, x_{0i}^\top\,\delta M\, x_{0i}\bigr) + \sum_{j \neq i} \frac{x_{0j}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}\, x_{0j}$ for infinitesimal $\delta K$ and $\delta M$ (the higher order terms in (3) being neglected). So far, we have not proved that these higher order terms may be neglected. This point may be derived using the implicit function theorem; in the next section, we summarize the use of this theorem in order to obtain a first order expansion. Theoretical derivation Perturbation of an implicit function. In the next paragraph, we shall use the implicit function theorem (statement of the theorem); we notice that for a continuously differentiable function $f(x, y)$, with an invertible Jacobian matrix $J_{f,y}(x_0, y_0)$, from a point $(x_0, y_0)$ solution of $f(x_0, y_0) = 0$, we get solutions of $f(x, y) = 0$ with $x$ close to $x_0$ in the form $y = g(x)$ where $g$ is a continuously differentiable function; moreover the Jacobian matrix of $g$ is provided by the linear system $J_{f,y}(x, g(x))\, J_g(x) = -J_{f,x}(x, g(x)).$ As soon as the hypothesis of the theorem is satisfied, the Jacobian matrix of $g$ may be computed with a first order expansion of $f(x_0 + \delta x,\, y_0 + \delta y) = 0$: we get $J_{f,x}\,\delta x + J_{f,y}\,\delta y = o(\|\delta x\| + \|\delta y\|)$; as $\delta y = J_g\,\delta x + o(\|\delta x\|)$, it is equivalent to the equation $J_{f,y}\, J_g = -J_{f,x}$. Eigenvalue perturbation: a theoretical basis. We use the previous paragraph (Perturbation of an implicit function) with somewhat different notations suited to eigenvalue perturbation; we introduce $\tilde f(\lambda, x; K, M) = \begin{pmatrix} Kx - \lambda Mx \\ x^\top M x - 1 \end{pmatrix},$ with $\tilde f(\lambda_{0i}, x_{0i}; K_0, M_0) = 0$. In order to use the implicit function theorem, we study the invertibility of the Jacobian $J_{\tilde f;\lambda,x}$ with $J_{\tilde f;\lambda,x}(\lambda, x; K, M)\,(\delta\lambda, \delta x) = \begin{pmatrix} (K - \lambda M)\,\delta x - \delta\lambda\, M x \\ 2\, x^\top M\,\delta x \end{pmatrix}.$ Indeed, the solution of $J_{\tilde f;\lambda,x}(\lambda_{0i}, x_{0i}; K_0, M_0)\,(\delta\lambda, \delta x) = (y, y_{N+1})$ may be derived with computations similar to the derivation of the expansion. When $\lambda_{0i}$ is a simple eigenvalue, as the eigenvectors $x_{0j}$ form an $M_0$-orthonormal basis, for any right-hand side we have obtained one solution; therefore, the Jacobian is invertible. The implicit function theorem provides a continuously differentiable function $(K, M) \mapsto (\lambda_i(K, M),\, x_i(K, M))$, hence the expansion with little o notation: $\lambda_i = \lambda_{0i} + \delta\lambda_i + o(\|\delta K\| + \|\delta M\|)$ and $x_i = x_{0i} + \delta x_i + o(\|\delta K\| + \|\delta M\|)$, with $\delta\lambda_i = x_{0i}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}$ and $\delta x_i = -\tfrac12\, x_{0i}^\top\,\delta M\, x_{0i}\; x_{0i} + \sum_{j \neq i} \frac{x_{0j}^\top(\delta K - \lambda_{0i}\,\delta M)\, x_{0i}}{\lambda_{0i} - \lambda_{0j}}\, x_{0j}.$ This is the first order expansion of the perturbed eigenvalues and eigenvectors, which is proved. Results of sensitivity analysis with respect to the entries of the matrices The results This means it is possible to efficiently do a sensitivity analysis on $\lambda_i$ as a function of changes in the entries of the matrices. (Recall that the matrices are symmetric, and so changing $K_{k\ell}$ will also change $K_{\ell k}$, hence the $(2 - \delta_{k\ell})$ term.) $\frac{\partial\lambda_i}{\partial K_{k\ell}} = x_{0i,k}\, x_{0i,\ell}\,(2 - \delta_{k\ell}).$ Similarly $\frac{\partial\lambda_i}{\partial M_{k\ell}} = -\lambda_{0i}\, x_{0i,k}\, x_{0i,\ell}\,(2 - \delta_{k\ell}).$
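These first order formulas are easy to check numerically with a generalized symmetric eigensolver. A minimal sketch; the matrices are arbitrary random test data, not from any particular application:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 4

# Random symmetric positive definite K0 and M0 (test data only).
A = rng.standard_normal((n, n)); K0 = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); M0 = B @ B.T + n * np.eye(n)

# Small symmetric perturbations dK and dM.
dK = rng.standard_normal((n, n)); dK = 1e-6 * (dK + dK.T)
dM = rng.standard_normal((n, n)); dM = 1e-6 * (dM + dM.T)

# eigh(K, M) solves K x = lambda M x and returns eigenvectors scaled so
# that x0j^T M0 x0i = delta_ij, matching the normalization (2) above.
lam0, X0 = eigh(K0, M0)
lam, _ = eigh(K0 + dK, M0 + dM)

# First order prediction: delta lambda_i = x0i^T (dK - lambda_0i dM) x0i.
pred = lam0 + np.array([X0[:, i] @ (dK - lam0[i] * dM) @ X0[:, i]
                        for i in range(n)])

print(np.max(np.abs(lam - pred)))  # error is second order in the perturbation
```

Halving the perturbation size should roughly quarter the printed error, confirming that the neglected terms are indeed second order.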
Eigenvalue sensitivity, a small example A simple case is $K = \begin{bmatrix} 2 & b \\ b & 0 \end{bmatrix}$; however you can compute eigenvalues and eigenvectors with the help of online tools such as WIMS (see the introduction in Wikipedia) or using SageMath. You get the smallest eigenvalue $\lambda = 1 - \sqrt{1 + b^2}$ and an explicit computation $\frac{\partial\lambda}{\partial b} = \frac{-b}{\sqrt{1 + b^2}}$; moreover, an associated eigenvector is $\tilde x_0 = (1 - \sqrt{1 + b^2},\; b)^\top$; it is not a unitary vector, so $\tilde x_0^\top \tilde x_0 = 2\sqrt{1 + b^2}\,(\sqrt{1 + b^2} - 1)$; we get the normalized eigenvector $x_0 = \tilde x_0 / \sqrt{\tilde x_0^\top \tilde x_0}$ and $\frac{\partial K}{\partial b} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$; hence $x_0^\top \frac{\partial K}{\partial b} x_0 = \frac{2b\,(1 - \sqrt{1 + b^2})}{2\sqrt{1 + b^2}\,(\sqrt{1 + b^2} - 1)} = \frac{-b}{\sqrt{1 + b^2}}$; for this example, we have checked that $\frac{\partial\lambda}{\partial b} = x_0^\top \frac{\partial K}{\partial b}\, x_0$, or $\delta\lambda = x_0^\top\,\delta K\, x_0$. Existence of eigenvectors Note that in the above example we assumed that both the unperturbed and the perturbed systems involved symmetric matrices, which guaranteed the existence of linearly independent eigenvectors. An eigenvalue problem involving non-symmetric matrices is not guaranteed to have linearly independent eigenvectors, though a sufficient condition is that $K$ and $M$ be simultaneously diagonalizable. The case of repeated eigenvalues A technical report of Rellich for perturbation of eigenvalue problems provides several examples. The elementary examples are in chapter 2. The report may be downloaded from archive.org. We draw an example in which the eigenvectors have a nasty behavior. Example 1 Consider the following matrix $A(\epsilon) = e^{-1/\epsilon^2} \begin{bmatrix} \cos(2/\epsilon) & \sin(2/\epsilon) \\ \sin(2/\epsilon) & -\cos(2/\epsilon) \end{bmatrix}$ for $\epsilon \neq 0$, and $A(0) = 0$. For $\epsilon \neq 0$, the matrix $A(\epsilon)$ has eigenvectors $u(\epsilon) = (\cos(1/\epsilon),\, \sin(1/\epsilon))^\top$ and $v(\epsilon) = (-\sin(1/\epsilon),\, \cos(1/\epsilon))^\top$ belonging to eigenvalues $e^{-1/\epsilon^2}$ and $-e^{-1/\epsilon^2}$. Since the eigenvalues are distinct for $\epsilon \neq 0$, if $\tilde u(\epsilon), \tilde v(\epsilon)$ are any normalized eigenvectors belonging to $e^{-1/\epsilon^2}$ and $-e^{-1/\epsilon^2}$ respectively, then $\tilde u(\epsilon) = e^{i\alpha(\epsilon)} u(\epsilon)$ and $\tilde v(\epsilon) = e^{i\beta(\epsilon)} v(\epsilon)$, where $\alpha(\epsilon), \beta(\epsilon)$ are real for $\epsilon \neq 0$. It is obviously impossible to define $\tilde u(\epsilon)$, say, in such a way that $\tilde u(\epsilon)$ tends to a limit as $\epsilon \to 0$, because $u(\epsilon) = (\cos(1/\epsilon),\, \sin(1/\epsilon))^\top$ has no limit as $\epsilon \to 0$. Note in this example that $A(\epsilon)$ is not only continuous but also has continuous derivatives of all orders. Rellich draws the following important consequence. << Since in general the individual eigenvectors do not depend continuously on the perturbation parameter even though the operator does, it is necessary to work, not with an eigenvector, but rather with the space spanned by all the eigenvectors belonging to the same eigenvalue. >> Example 2 This example is less nasty than the previous one. Suppose $A(0)$ is the 2x2 identity matrix; any vector is an eigenvector; then $u_0 = (1, 1)^\top/\sqrt{2}$ is one possible eigenvector. But if one makes a small perturbation, such as $A(\epsilon) = \begin{bmatrix} 1 + \epsilon & 0 \\ 0 & 1 - \epsilon \end{bmatrix},$ then the eigenvectors are $u_1 = (1, 0)^\top$ and $u_2 = (0, 1)^\top$; they are constant with respect to $\epsilon$, so that $\|u_1 - u_0\|$ is constant and does not go to zero. See also Perturbation theory (quantum mechanics) Bauer–Fike theorem References Further reading Books Bhatia, R. (1987). Perturbation bounds for matrix eigenvalues. SIAM. Report Rellich, F. (1953). Perturbation theory of eigenvalue problems. Institute of Mathematical Sciences, New York University. Journal papers Simon, B. (1982). Large orders and summability of eigenvalue perturbation theory: a mathematical overview. International Journal of Quantum Chemistry, 21(1), 3-25. Crandall, M. G., & Rabinowitz, P. H. (1973). Bifurcation, perturbation of simple eigenvalues, and linearized stability. Archive for Rational Mechanics and Analysis, 52(2), 161-180. Stewart, G. W. (1973). Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Review, 15(4), 727-764. Löwdin, P. O. (1962). Studies in perturbation theory. IV. Solution of eigenvalue problem by projection operator formalism. Journal of Mathematical Physics, 3(5), 969-982. Perturbation theory Differential calculus Multivariable calculus Linear algebra Numerical linear algebra
Eigenvalue perturbation
[ "Physics", "Mathematics" ]
2,239
[ "Perturbation theory", "Calculus", "Quantum mechanics", "Differential calculus", "Linear algebra", "Multivariable calculus", "Algebra" ]
10,470,165
https://en.wikipedia.org/wiki/Thermal%20contact
In heat transfer and thermodynamics, a thermodynamic system is said to be in thermal contact with another system if it can exchange energy through the process of heat. Perfect thermal isolation is an idealization as real systems are always in thermal contact with their environment to some extent. When two solid bodies are in contact, a resistance to heat transfer exists between the bodies. The study of heat conduction between such bodies is called thermal contact conductance (or thermal contact resistance). References See also Thermal equilibrium - When two objects A and B are in thermal contact and there is no net transfer of thermal energy from A to B or from B to A, they are said to be in thermal equilibrium. The majority of objects experiencing thermal equilibrium still do exchange thermal energy but do so equally so that the net heat transfer is zero. Perfect thermal contact Zeroth law of thermodynamics - When two objects A and B are in thermal equilibrium with a third object C then, A and B are said to be in thermal equilibrium with each other. Thermodynamics Heat transfer
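The contact resistance idea reduces to a one-line calculation: the heat flow across the interface equals the contact conductance times the contact area times the temperature drop. A minimal sketch with assumed, illustrative values:

```python
# Heat flow across an imperfect solid-solid contact, Q = h_c * A * dT.
# All numbers are illustrative assumptions.
h_c = 10_000.0   # contact conductance, W/(m^2*K) -- depends on pressure and finish
area = 0.01      # contact area, m^2
dT = 5.0         # temperature drop across the interface, K

Q = h_c * area * dT
print(f"heat flow across the contact = {Q:.0f} W")  # 500 W with these values
```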
Thermal contact
[ "Physics", "Chemistry", "Mathematics" ]
222
[ "Thermodynamics stubs", "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics", "Physical chemistry stubs", "Dynamical systems" ]
19,515,615
https://en.wikipedia.org/wiki/Carter%20PAV
The Carter PAV (Personal Air Vehicle) is a two-bladed, compound autogyro developed by Carter Aviation Technologies to demonstrate slowed rotor technology. The design has an unpowered rotor mounted on top of the fuselage, wings like a conventional fixed-wing aircraft mounted underneath, and a controllable-pitch pusher propeller at the rear of the fuselage. Heavy weights ( each) are placed in the rotor tips to increase rotational energy and to reduce flapping. Development When the CarterCopter was damaged in 2005 due to a gear-up landing caused by pilot error, the cost of repair was deemed higher than the cost of building a new aircraft, which had the added benefit of incorporating lessons learned from the first aircraft. Design of the PAV began during 2005. Several changes and development problems occurred along the way: the twin boom was deemed unnecessary, so a single boom was constructed, and flaws in the rotor blades and hub were revealed during testing and then corrected. On 16 November 2009, the AAI Corporation (a division of Textron) signed a 40-year exclusive license agreement with the company concerning all unmanned aircraft systems, one of which was intended to deliver of cargo similar to the unmanned Kaman K-MAX, but over a future range of compared to the demonstrated or more of the K-MAX. The agreement committed CarterCopters to developing the technology to maturity, in exchange for exclusive rights to develop UAVs for the next 40 years. The first product in the AAI agreement was to be an autonomous slowed rotor/compound (SR/C) aircraft based on the Carter Personal Air Vehicle. A "Critical Design Review" (CDR) for AAI Corporation was performed around January 2010, when the prototype was already being built; usually a CDR is performed before a vehicle is built. In 2014, Carter said it had bought back the license from AAI and was seeking production partners outside the USA, hoping for production 3–5 years later. Testing The PAV was taxi-tested in the autumn of 2010 at Olney Airport, after receiving an FAA Special Airworthiness Certificate on 27 July 2010, and performed traffic pattern movement on 2 December 2010, with Larry Neal at the controls and Robert Luna as co-pilot. Larry Neal was also one of the pilots of the CarterCopter at Olney in 2005. The first flight occurred on 5 January 2011 at Olney without wings and lasted 36 minutes, which qualified Carter for a milestone payment. Carter stated that the PAV performed its first zero-roll jump take-off on 18 January 2011, to a height of . Eight jump take-offs were performed. Some electrical issues remained with the aircraft, and it is not in volume production. The PAV flew traffic patterns with wings at Olney in January 2012, and has since flown winged test flights. It flew a few hours at a time, but its flight certificate restricted it to within of Olney. As of June 2012, development of the PAV was a year behind schedule due to various technical problems, and a delay of a further year was caused by rotor RPM software control issues. Carter received funding from the Wichita Falls Economic Development Corporation in 2010 to complete the PAV. Carter views the lack of a PAV flight simulator as a mistake, and is attempting to build one; the previous CarterCopter was designed using a flight simulator. Carter says that the PAV has a lift-to-drag ratio of 10–15, and that it reached an advance ratio of 0.85 in 2012. According to Carter, the PAV reached Mu-1 on 7 November 2013. It also achieved a speed of , and the rotor was slowed down to 113 rpm.
The PAV made its first public show flight outside Olney when it flew to Wichita Falls later that month. Carter says the PAV has achieved a speed of at an altitude of , a Mu of 1.13 and an L/D of 11.6–15. Carter has applied to the FAA to change the PAV's certificate from research and development to demonstration. The second PAV (called PAV-II, registration N210AV) was approved for flight in March 2014, and was demonstrated at the Sun 'n Fun air festival and at MacDill Air Force Base in 2014, both in Florida. In July 2014, it was displayed at the Oshkosh Airshow. Carter says it has flown at . The first non-Carter pilots flew the aircraft in 2015. Design Computer-aided design and X-Plane flight simulation were used during development. Unlike the twin-boom CarterCopter, the PAV has a single tailboom. A tilting mast allows the rotor to be tilted 15 degrees forward and 30 degrees aft to accommodate different centres of gravity and wing angles of attack. Helicopter rotors are designed to operate at a fixed RPM (within a narrow range of a few percent), whereas Carter uses RPM ranges between 100 and 350. Most aircraft have two energy parameters (speed and altitude) which the pilot can trade between, but Carter technology attempts to use rotor rotation as a third energy parameter. The purpose of the slowed rotor/compound aircraft is to enhance the flight envelope compared to fixed-wing aircraft, helicopters and traditional autogyros, by minimizing the dangerous areas of the stall speed diagram and height-velocity diagram, as well as by raising the speed limit. The PAV has traditional airplane-like controls (Vernier type), but the stick also controls the rotor. Most controls were automated in 2011, and jump take-off is performed at the push of a button. Materials used include glass fiber, aluminum, titanium, and steel, as well as autoclaved carbon/epoxy prepreg with an aramid honeycomb core on the PAV-II. The tip weights had been made of tungsten, while the current (2013) ones are made of steel. Suppliers for the aircraft include Blue Mountain Avionics for avionics and air-to-ground video and telemetry, and Sky Ox oxygen systems, as the PAV is not pressurized. Sixty channels of information convey sensor measurements from the aircraft to a ground computer, and four video cameras record the flights. The engine is equipped with a performance enhancement system by Nitrous Express. Operation The PAV has flight characteristics similar to other Carter aircraft. When stationary on the ground, the engine spins the flat-pitch rotor up to 370 RPM; the engine is then disengaged from the rotor to provide full power to the propeller. The rotor now has substantial rotational energy due to the tip weights (usable temporary energy equivalent to ), and the rotor blades are pitched to push air down and lift the aircraft in a jump takeoff. As altitude is gained, the aircraft transitions into forward flight using the pusher propeller, and the rotor shifts to autorotation (windmilling) with air flowing up through the rotor. As speed increases, the airflow increases rotor RPM, as in other autogyros. Once sufficient airspeed is reached (around ) for the small wings to provide lift, the rotor blades are feathered to reduce rotor speed to 100 RPM and minimize drag, and lift is provided mostly by the wings when speed reaches . Rotor lift is reduced to 10%, and flight efficiency is somewhat below that of a commercial jet plane.
Specifications (PAV) See also References Notes Bibliography Flight International article External links Carter Aviation PAV webpage 2010s United States experimental aircraft Autogyros Wichita Falls, Texas Pusher aircraft Slowed rotor Single-engined piston helicopters Aircraft first flown in 2011
Carter PAV
[ "Engineering" ]
1,513
[ "Slowed rotor", "Aerospace engineering" ]
19,516,582
https://en.wikipedia.org/wiki/Bailey%20pair
In mathematics, a Bailey pair is a pair of sequences satisfying certain relations, and a Bailey chain is a sequence of Bailey pairs. Bailey pairs were introduced by W. N. Bailey while studying the second proof (Rogers 1917) of the Rogers–Ramanujan identities, and Bailey chains were introduced by Andrews (1984). Definition The q-Pochhammer symbols are defined as: $$(a;q)_n = \prod_{j=0}^{n-1}(1 - aq^j) = (1-a)(1-aq)\cdots(1-aq^{n-1}).$$ A pair of sequences $(\alpha_n, \beta_n)$ is called a Bailey pair (relative to $a$) if they are related by $$\beta_n = \sum_{r=0}^{n} \frac{\alpha_r}{(q;q)_{n-r}\,(aq;q)_{n+r}}$$ or equivalently $$\alpha_n = (1 - aq^{2n}) \sum_{j=0}^{n} \frac{(aq;q)_{n+j-1}\,(-1)^{n-j}\,q^{\binom{n-j}{2}}}{(q;q)_{n-j}}\,\beta_j.$$ Bailey's lemma Bailey's lemma states that if $(\alpha_n, \beta_n)$ is a Bailey pair, then so is $(\alpha'_n, \beta'_n)$ where $$\alpha'_n = \frac{(\rho_1;q)_n\,(\rho_2;q)_n\,(aq/\rho_1\rho_2)^n}{(aq/\rho_1;q)_n\,(aq/\rho_2;q)_n}\,\alpha_n$$ $$\beta'_n = \sum_{j=0}^{n} \frac{(\rho_1;q)_j\,(\rho_2;q)_j\,(aq/\rho_1\rho_2;q)_{n-j}\,(aq/\rho_1\rho_2)^j}{(q;q)_{n-j}\,(aq/\rho_1;q)_n\,(aq/\rho_2;q)_n}\,\beta_j.$$ In other words, given one Bailey pair, one can construct a second using the formulas above. This process can be iterated to produce an infinite sequence of Bailey pairs, called a Bailey chain. Examples An example of a Bailey pair (relative to $a$) is the unit Bailey pair $$\alpha_n = (-1)^n q^{n(n-1)/2}\,\frac{(1 - aq^{2n})\,(a;q)_n}{(1-a)\,(q;q)_n}, \qquad \beta_n = \begin{cases} 1 & n = 0, \\ 0 & n > 0. \end{cases}$$ Slater (1952) gave a list of 130 examples related to Bailey pairs. References Special functions Q-analogs
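The defining relation lends itself to a direct numerical check. The sketch below is a minimal illustration (the helper names are ours, and the unit Bailey pair used as test data is the standard example stated above): it evaluates $\beta_n$ from $\alpha_n$ at numeric values of $a$ and $q$ and confirms $\beta_0 = 1$ and $\beta_n = 0$ for $n \ge 1$.

```python
from functools import lru_cache

A, Q = 0.3, 0.5  # generic numerical test values with |q| < 1

@lru_cache(maxsize=None)
def qpoch(a, q, n):
    """q-Pochhammer symbol (a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    result = 1.0
    for k in range(n):
        result *= 1.0 - a * q**k
    return result

def alpha(n, a=A, q=Q):
    """Unit Bailey pair (relative to a):
    alpha_n = (-1)^n q^(n(n-1)/2) (1 - a q^(2n)) (a;q)_n / ((1-a)(q;q)_n)."""
    return ((-1)**n * q**(n * (n - 1) // 2) * (1 - a * q**(2 * n))
            * qpoch(a, q, n) / ((1 - a) * qpoch(q, q, n)))

def beta(n, a=A, q=Q):
    """Defining relation: beta_n = sum_r alpha_r / ((q;q)_{n-r} (aq;q)_{n+r})."""
    return sum(alpha(r, a, q) / (qpoch(q, q, n - r) * qpoch(a * q, q, n + r))
               for r in range(n + 1))

for n in range(6):
    print(n, beta(n))   # expect 1.0 for n = 0 and ~0.0 (roundoff) for n >= 1
```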
Bailey pair
[ "Mathematics" ]
197
[ "Special functions", "Q-analogs", "Combinatorics" ]
19,518,308
https://en.wikipedia.org/wiki/SimRank
SimRank is a general similarity measure, based on a simple and intuitive graph-theoretic model. SimRank is applicable in any domain with object-to-object relationships, and it measures the similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, SimRank is a measure that says "two objects are considered to be similar if they are referenced by similar objects." Although SimRank is widely adopted, it may output unreasonable similarity scores, which are influenced by different factors; this can be addressed in several ways, such as introducing an evidence weight factor, inserting additional terms that are neglected by SimRank, or using PageRank-based alternatives. Introduction Many applications require a measure of "similarity" between objects. One obvious example is the "find-similar-document" query, on traditional text corpora or the World-Wide Web. More generally, a similarity measure can be used to cluster objects, such as for collaborative filtering in a recommender system, in which "similar" users and items are grouped based on the users' preferences. Various aspects of objects can be used to determine similarity, usually depending on the domain and the appropriate definition of similarity for that domain. In a document corpus, matching text may be used, and for collaborative filtering, similar users may be identified by common preferences. SimRank is a general approach that exploits the object-to-object relationships found in many domains of interest. On the Web, for example, two pages are related if there are hyperlinks between them. A similar approach can be applied to scientific papers and their citations, or to any other document corpus with cross-reference information. In the case of recommender systems, a user's preference for an item constitutes a relationship between the user and the item. Such domains are naturally modeled as graphs, with nodes representing objects and edges representing relationships. The intuition behind the SimRank algorithm is that, in many domains, similar objects are referenced by similar objects. More precisely, objects $a$ and $b$ are considered to be similar if they are pointed to by objects $c$ and $d$, respectively, and $c$ and $d$ are themselves similar. The base case is that objects are maximally similar to themselves. It is important to note that SimRank is a general algorithm that determines only the similarity of structural context. SimRank applies to any domain where there are enough relevant relationships between objects to base at least some notion of similarity on relationships. Obviously, similarity of other domain-specific aspects is important as well; these can, and should, be combined with relational structural-context similarity for an overall similarity measure. For example, for Web pages SimRank can be combined with traditional textual similarity; the same idea applies to scientific papers or other document corpora. For recommendation systems, there may be built-in known similarities between items (e.g., both computers, both clothing, etc.), as well as similarities between users (e.g., same gender, same spending level). Again, these similarities can be combined with the similarity scores that are computed based on preference patterns, in order to produce an overall similarity measure. Basic SimRank equation For a node $v$ in a directed graph, we denote by $I(v)$ and $O(v)$ the set of in-neighbors and out-neighbors of $v$, respectively.
Individual in-neighbors are denoted as $I_i(v)$, for $1 \le i \le |I(v)|$, and individual out-neighbors are denoted as $O_i(v)$, for $1 \le i \le |O(v)|$. Let us denote the similarity between objects $a$ and $b$ by $s(a,b)$. Following the earlier motivation, a recursive equation is written for $s(a,b)$. If $a = b$ then $s(a,b)$ is defined to be $1$. Otherwise, $$s(a,b) = \frac{C}{|I(a)|\,|I(b)|} \sum_{i=1}^{|I(a)|} \sum_{j=1}^{|I(b)|} s(I_i(a), I_j(b))$$ where $C$ is a constant between $0$ and $1$. A slight technicality here is that either $a$ or $b$ may not have any in-neighbors. Since there is no way to infer any similarity between $a$ and $b$ in this case, similarity is set to $s(a,b) = 0$, so the summation in the above equation is defined to be $0$ when $I(a) = \emptyset$ or $I(b) = \emptyset$. Matrix representation of SimRank Given an arbitrary constant $C$ between $0$ and $1$, let $S$ be the similarity matrix whose entry $[S]_{a,b}$ denotes the similarity score $s(a,b)$, and let $A$ be the column normalized adjacency matrix whose entry $[A]_{a,b} = \frac{1}{|I(b)|}$ if there is an edge from $a$ to $b$, and 0 otherwise. Then, in matrix notations, SimRank can be formulated as $$S = \max\{\, C \cdot (A^T S A),\; I \,\}$$ where $I$ is an identity matrix. Computing SimRank A solution to the SimRank equations for a graph $G$ can be reached by iteration to a fixed-point. Let $n$ be the number of nodes in $G$. For each iteration $k$, we can keep $n^2$ entries $R_k(a,b)$, where $R_k(a,b)$ gives the score between $a$ and $b$ on iteration $k$. We successively compute $R_{k+1}(\cdot,\cdot)$ based on $R_k(\cdot,\cdot)$. We start with $R_0(\cdot,\cdot)$, where each $R_0(a,b)$ is a lower bound on the actual SimRank score $s(a,b)$: $$R_0(a,b) = \begin{cases} 1 & \text{if } a = b, \\ 0 & \text{if } a \ne b. \end{cases}$$ To compute $R_{k+1}(a,b)$ from $R_k(\cdot,\cdot)$, we use the basic SimRank equation to get: $$R_{k+1}(a,b) = \frac{C}{|I(a)|\,|I(b)|} \sum_{i=1}^{|I(a)|} \sum_{j=1}^{|I(b)|} R_k(I_i(a), I_j(b))$$ for $a \ne b$, and $R_{k+1}(a,b) = 1$ for $a = b$. That is, on each iteration $k+1$, we update the similarity of $(a,b)$ using the similarity scores of the neighbours of $(a,b)$ from the previous iteration $k$ according to the basic SimRank equation. The values $R_k(a,b)$ are nondecreasing as $k$ increases. It was shown in the original SimRank paper that the values converge to limits satisfying the basic SimRank equation, the SimRank scores $s(a,b)$, i.e., for all $a, b$, $\lim_{k \to \infty} R_k(a,b) = s(a,b)$. The original SimRank proposal suggested choosing the decay factor $C = 0.8$ and a fixed number $K = 5$ of iterations to perform. However, later research showed that these values for $C$ and $K$ generally imply relatively low accuracy of iteratively computed SimRank scores. For guaranteeing more accurate computation results, the latter paper suggests either using a smaller decay factor (in particular, $C = 0.6$) or taking more iterations. CoSimRank CoSimRank is a variant of SimRank with the advantage of also having a local formulation, i.e. CoSimRank can be computed for a single node pair. Let $S$ be the similarity matrix whose entry $[S]_{a,b}$ denotes the similarity score $s(a,b)$, and let $A$ be the column normalized adjacency matrix. Then, in matrix notations, CoSimRank can be formulated as: $$S = C \cdot (A^T S A) + I$$ where $I$ is an identity matrix. To compute the similarity score of only a single node pair, let $p^{(0)}(i) = e_i$, with $e_i$ being a vector of the standard basis, i.e., the $i$-th entry is 1 and all other entries are 0. Then, CoSimRank can be computed in two steps: $$p^{(k)}(i) = A\, p^{(k-1)}(i)$$ $$s(i,j) = \sum_{k=0}^{\infty} C^k\, p^{(k)}(i) \cdot p^{(k)}(j)$$ Step one can be seen as a simplified version of Personalized PageRank. Step two sums up the vector similarity of each iteration. Both the matrix and the local representation compute the same similarity score. CoSimRank can also be used to compute the similarity of sets of nodes, by modifying $p^{(0)}$. Further research on SimRank Fogaras and Racz suggested speeding up SimRank computation through probabilistic computation using the Monte Carlo method. Antonellis et al. extended SimRank equations to take into consideration (i) an evidence factor for incident nodes and (ii) link weights. Yu et al. further improved SimRank computation via a fine-grained memoization method to share small common parts among different partial sums. Chen and Giles discussed the limitations and proper use cases of SimRank. Partial Sums Memoization Lizorkin et al.
proposed three optimization techniques for speeding up the computation of SimRank: Essential nodes selection may eliminate the computation of a fraction of node pairs with a-priori zero scores. Partial sums memoization can effectively reduce repeated calculations of the similarity among different node pairs by caching part of the similarity summations for later reuse. A threshold setting on the similarity enables a further reduction in the number of node pairs to be computed. In particular, the second observation, partial sums memoization, plays a paramount role in greatly speeding up the computation of SimRank from $O(K n^2 d^2)$ to $O(K n^2 d)$, where $K$ is the number of iterations, $d$ is the average degree of the graph, and $n$ is the number of nodes in the graph. The central idea of partial sums memoization consists of two steps: First, the partial sums over $I(a)$ are memoized as $$\mathrm{Partial}^{R_k}_{I(a)}(j) = \sum_{i \in I(a)} R_k(i, j),$$ and then $R_{k+1}(a,b)$ is iteratively computed from $\mathrm{Partial}^{R_k}_{I(a)}(j)$ as $$R_{k+1}(a,b) = \frac{C}{|I(a)|\,|I(b)|} \sum_{j \in I(b)} \mathrm{Partial}^{R_k}_{I(a)}(j).$$ Consequently, the results of $\mathrm{Partial}^{R_k}_{I(a)}(j)$, for all $j$, can be reused later when we compute the similarities $s(a, \ast)$ for a given vertex $a$ as the first argument. See also PageRank Citations Sources Cluster analysis algorithms Similarity measures
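The fixed-point iteration described in "Computing SimRank" above is short to implement. The following Python sketch is a naive reference implementation (function and variable names are ours, and the toy graph at the end is purely illustrative); it follows the basic iterative scheme rather than the optimized partial-sums variant.

```python
import itertools

def simrank(nodes, edges, C=0.8, iterations=5):
    """Naive iterative SimRank.

    nodes: iterable of hashable node ids.
    edges: iterable of (u, v) directed edges u -> v.
    Returns a dict mapping (a, b) -> similarity score.
    """
    nodes = list(nodes)
    in_nbrs = {v: [] for v in nodes}
    for u, v in edges:
        in_nbrs[v].append(u)

    # R_0(a, b) = 1 if a == b else 0 (a lower bound on the true score).
    R = {(a, b): 1.0 if a == b else 0.0
         for a, b in itertools.product(nodes, repeat=2)}

    for _ in range(iterations):
        R_next = {}
        for a, b in itertools.product(nodes, repeat=2):
            if a == b:
                R_next[(a, b)] = 1.0
            elif in_nbrs[a] and in_nbrs[b]:
                # Partial-sums memoization would cache the sums over
                # in_nbrs[a] here and reuse them for every b.
                total = sum(R[(i, j)]
                            for i in in_nbrs[a] for j in in_nbrs[b])
                R_next[(a, b)] = C * total / (len(in_nbrs[a]) * len(in_nbrs[b]))
            else:
                R_next[(a, b)] = 0.0  # no in-neighbors: similarity is 0
        R = R_next
    return R

# Toy example: two "users" pointing at overlapping "items".
scores = simrank(["u1", "u2", "sugar", "flour"],
                 [("u1", "sugar"), ("u1", "flour"), ("u2", "flour")])
print(round(scores[("sugar", "flour")], 4))  # 0.4 with C = 0.8
```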
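The local, single-pair formulation of CoSimRank described above can likewise be written in a few lines. This is a minimal sketch under the column-normalized-adjacency convention used in this article (the truncation depth and the names are ours); the infinite sum is truncated after a fixed number of damped terms.

```python
import numpy as np

def cosimrank(A_colnorm, i, j, C=0.8, iterations=20):
    """CoSimRank for a single node pair (i, j).

    A_colnorm: column-normalized adjacency matrix as a numpy array.
    Follows the two-step local formulation: propagate basis vectors,
    then sum the damped inner products, truncating the infinite sum.
    """
    n = A_colnorm.shape[0]
    p_i = np.zeros(n); p_i[i] = 1.0   # p^(0)(i) = e_i
    p_j = np.zeros(n); p_j[j] = 1.0   # p^(0)(j) = e_j
    score = p_i @ p_j                 # k = 0 term
    for k in range(1, iterations):
        p_i = A_colnorm @ p_i         # p^(k) = A p^(k-1)
        p_j = A_colnorm @ p_j
        score += C**k * (p_i @ p_j)
    return score

# Toy graph: node 2 points to both node 0 and node 1, so columns 0
# and 1 each contain a single entry 1/|I(.)| = 1 at row 2.
A = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])
print(round(cosimrank(A, 0, 1), 4))  # 0.8: one shared in-neighbor, k = 1
```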
SimRank
[ "Physics" ]
1,591
[ "Similarity measures", "Physical quantities", "Distance" ]
19,519,695
https://en.wikipedia.org/wiki/Board-to-board%20connector
Board-to-board (BTB) connectors are used to connect printed circuit boards (PCBs), electronic components that contain a conductive pattern printed on the surface of an insulating base in an accurate and repeatable manner. Each terminal on a BTB connector is connected to a PCB. A BTB connector consists of a housing and a specific number of terminals. The terminals are made from a conductive material (mostly copper alloy) and are plated to improve conductivity and to resist corrosion. The terminals transmit the current or signal between the PCBs joined by the BTB connector; the housing is made of an insulating material (mostly plastic). Classification BTB connectors are divided into four mounting types: Through-hole technology Surface-mount technology Plug-in technology Solderless stacking mezzanine technology BTB connectors are selected by considering the mounting method, pin pitch, number of rows (also known as the number of ways), pin length, stacking height, etc. See also Pin header Signal integrity Bus slot Edge connector References External links FPC Prototyping Printed circuit board manufacturing Electrical signal connectors
Board-to-board connector
[ "Engineering" ]
226
[ "Electrical engineering", "Electronic engineering", "Printed circuit board manufacturing" ]
12,754,000
https://en.wikipedia.org/wiki/Complex%20metal%20hydride
Complex metal hydrides are salts wherein the anions contain hydrides. In older chemical literature and even in contemporary materials science textbooks, a "metal hydride" is assumed to be nonmolecular, i.e., a three-dimensional lattice of atomic ions. In such systems, hydrides are often interstitial and nonstoichiometric, and the bonding between the metal and hydrogen atoms is significantly ionic. In contrast, complex metal hydrides typically contain more than one type of metal or metalloid and may be soluble, but they invariably react with water. They exhibit ionic bonding between a positive metal ion and molecular anions containing the hydride. In such materials the hydrogen is bonded with significant covalent character to the second metal or metalloid atom. Examples In general, complex metal hydrides have the formula MxM'yHn, where M is an alkali metal cation or cation complex and M' is a metal or metalloid. Well-known examples feature group 13 elements, especially boron and aluminium, including sodium aluminium hydride (NaAlH4), lithium aluminium hydride (LiAlH4), and lithium borohydride (LiBH4). Complex metal hydrides are often soluble in ethereal solvents. Other complex metal hydrides are numerous. Illustrative examples include the salts [MgBr(THF)2]4FeH6 and K2ReH9. See also Ionic hydrides Hydrogen storage References Metal hydrides Inorganic chemistry Hydrogen storage
Complex metal hydride
[ "Chemistry" ]
327
[ "Metal hydrides", "Inorganic compounds", "nan", "Reducing agents" ]
12,755,108
https://en.wikipedia.org/wiki/Patterning%20by%20etching%20at%20the%20nanoscale
Patterning by etching at the nanoscale (PENs) is a soft lithographic technique in which the bonds in the polydimethylsiloxane (PDMS) matrix are broken to controllably etch (i.e., dissolve) PDMS at a slow rate along the outside of a PDMS channel, formed with a patterned PDMS stamp applied to a surface. The channel in the stamp can be enlarged on the order of tens of nanometers to several micrometres, exposing a fresh area of the surface that can then be reacted. Summary PDMS contains polymer chains of silicon-oxygen bonds; these bonds can be broken by fluoride-containing species, in the same way that silicon wafers are prepared by etching with hydrofluoric acid, ammonium fluoride and related compounds. By placing a PDMS stamp containing an externally fillable channel onto a surface, that surface can be functionalised in the area of the channel. By then running an etching solution through the channel, part of the PDMS is removed, exposing a fresh area of the surface, which can then be functionalised by appropriate chemistry. The width of the feature produced is controlled by the etchant and the etching time. To apply this technique for the production of small patterned features, it is necessary that the surface can first be reacted to passivate it in the area exposed by the channel, followed by etching, and then reacted in a way that will only occur in the newly exposed area. References Perring, M.; Mitchell, M.; Kenis, P. J. A.; Bowden, N. B. Chem. Mater. 2007, 19 (11), 2903. Nanoelectronics
Patterning by etching at the nanoscale
[ "Materials_science" ]
350
[ "Nanotechnology", "Nanoelectronics" ]
12,756,281
https://en.wikipedia.org/wiki/Clinostat
A clinostat is a device which uses rotation to negate the effects of gravitational pull on plant growth (gravitropism) and development (gravimorphism). It has also been used to study the effects of microgravity on cell cultures, animal embryos and spider webs. Description A single-axis (or horizontal) clinostat consists of a disc attached to a motor. They were originally clockwork, but nowadays an electric motor is used. The disc is held vertically and the motor rotates it slowly, at rates on the order of one revolution per minute. A plant is attached to the disc so that it is held horizontally. The slow rotation means that the plant experiences a gravitational pull that is averaged over 360 degrees, thus approximating a weightless environment. Clinostats have also been used to cancel out the effects of sunlight and other stimuli besides gravity. This type of clinostat must be exactly horizontal to simulate the absence of gravity. If the clinostat is at an angle from horizontal, a net gravity vector is perceived, the magnitude of which depends on the angle. This can be used to simulate lunar gravity (ca. 1/6 g), which requires an angle from the horizontal of ca. 10 degrees, i.e. $\sin^{-1}(1/6)$. A plant only reacts to gravity if the gravistimulation is maintained for longer than a critical amount of time, called the minimal presentation time (MPT). For many plant organs the MPT lies somewhere between 10 and 200 seconds, and therefore a clinostat should rotate on a comparable timescale in order to avoid a gravitropic response. However, presentation time is cumulative, and if a clinostat's rotation is repeatedly stopped at a single position, even for periods as short as 0.5 s, a gravitropic response can result. The presentation time for animals is one or two orders of magnitude shorter than this, thus precluding the use of the slow rotation clinostat for most animal studies. However, the fast rotation clinostat can be, and is, used for the study of animal cell cultures and embryos. Types and application The usual type of clinostat turns slowly to avoid centrifugal effects, and this is called the "slow rotation clinostat". There has been debate as to the most suitable speed of rotation: if it is too slow, the plant has time to begin physiological responses to gravity; if it is too fast, centrifugal forces and mechanical strains introduce artifacts. The optimal rotational speed has been investigated by comparison to 'true' responses to microgravity as seen in space-grown plants, and determined to be between 0.3 and 3 rpm for most plant systems. The fast rotating clinostat (generally turning at between 30 and 150 rpm) can only be used for small samples (cell cultures in vials a few mm in diameter), typically in liquid media. Under these conditions, excessive centrifugal effects, which would otherwise preclude its use on larger samples, are avoided. A single-axis clinostat only produces the effect of weightlessness along its axis of rotation. A 3D or two-axis clinostat (generally called a random positioning machine or RPM) can average the gravitational pull over all directions. These machines often consist of two frames, one positioned inside the other, each rotating independently. An alternative to the clinostat for simulating microgravity is the free fall machine (FFM). Small samples (such as cell suspensions) are allowed to free fall under gravity for about a metre, with the period of free fall lasting just under a second. They are then pushed back to the top of the apparatus by a briefly applied large force
(c. 20 g for 20 ms, the "bounce"), and allowed to fall again, and so on. The principle of the machine is that most of the time is spent in zero-g free fall. The periods spent under high g are assumed to be too short to be detected by the physiological mechanisms of the biological samples, which consequently only perceive the time spent in free fall. Problems associated with the use of the horizontal clinostat A number of problems have been pointed out in the use of clinostats to simulate microgravity: gravitational effects still occur; they just have no net direction, so rather than simulating microgravity, clinostats are best thought of as inducing omnilateral gravistimulation. The leaves of large plants flop about as they rotate; this may cause an increase in ethylene production, which may in turn cause some of the phenomena otherwise attributed to agravitropism. Other researchers have questioned this interpretation, and it has been suggested that ethylene may have a role in the gravitropic response. Vibration from the motor and other motion effects may lead to artifacts. History The clinostat was invented in 1879 by Julius von Sachs, who built a clockwork-powered machine. However, a similar concept had been pioneered as early as 1703 by Denis Dodart. The first electric-powered clinostat (1897) was made by Newcombe. See also Gravitropism Large diameter centrifuge Random positioning machine Free fall machine References Citations External links Clinostat Page: A web site dedicated to space biology studies on Earth The Clinopage Gravity experimentation website Clinostats Patents Cell culture (Class 435/297.400), Rhodes, Percy H. (Huntsville, AL); Miller, Teresa Y. (Falkville, AL); Snyder, Robert S. (Huntsville, AL), "Hollow fiber clinostat for simulating microgravity in cell culture" Gravitational instruments Laboratory equipment
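As a worked example of the tilt-angle relation mentioned in the description above, the following Python sketch (a minimal illustration; the Mars figure of 0.38 g is the commonly quoted value and is our addition) computes the clinostat tilt from horizontal needed to leave a given net fraction of Earth gravity along the rotation axis.

```python
import math

def clinostat_tilt_deg(g_fraction):
    """Tilt from horizontal that leaves a net gravity vector of
    g_fraction * g along the rotation axis: angle = arcsin(g_fraction)."""
    return math.degrees(math.asin(g_fraction))

print(f"Moon (1/6 g):  {clinostat_tilt_deg(1/6):.1f} deg")   # ~9.6 deg (ca. 10)
print(f"Mars (0.38 g): {clinostat_tilt_deg(0.38):.1f} deg")  # ~22.3 deg
```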
Clinostat
[ "Technology", "Engineering" ]
1,182
[ "Measuring instruments", "Gravitational instruments" ]