Dataset fields per record: id (int64, 39 to 79M), url (string, length 32 to 168), text (string, length 7 to 145k), source (string, length 2 to 105), categories (list, length 1 to 6), token_count (int64, 3 to 32.2k), subcategories (list, length 0 to 27).
7,066,763
https://en.wikipedia.org/wiki/Shack%E2%80%93Hartmann%20wavefront%20sensor
A Shack–Hartmann (or Hartmann–Shack) wavefront sensor (SHWFS) is an optical instrument used for characterizing an imaging system. It is a wavefront sensor commonly used in adaptive optics systems. It consists of an array of lenses (called lenslets) of the same focal length. Each is focused onto a photon sensor (typically a CCD array, CMOS array or quad-cell). If the sensor is placed at the geometric focal plane of the lenslet and is uniformly illuminated, then the integrated gradient of the wavefront across the lenslet is proportional to the displacement of the spot centroid. Consequently, any phase aberration can be approximated by a set of discrete tilts. By sampling the wavefront with an array of lenslets, all of these local tilts can be measured and the whole wavefront reconstructed. Since only tilts are measured, the Shack–Hartmann sensor cannot detect discontinuous steps in the wavefront. The design of this sensor improves upon an array of holes in a mask that had been developed in 1904 by Johannes Franz Hartmann as a means of tracing individual rays of light through the optical system of a large telescope, thereby testing the quality of the image. In the late 1960s, Roland Shack and Ben Platt modified the Hartmann screen by replacing the apertures in an opaque screen with an array of lenslets. The terminology as proposed by Shack and Platt was Hartmann screen. The fundamental principle seems to be documented even before Huygens by the Jesuit philosopher, Christopher Scheiner, in Austria. Shack–Hartmann sensors are used in astronomy to characterize telescope optics and in medicine to characterize eyes for corneal treatment of complex refractive errors. Recently, Pamplona et al. developed and patented an inverse of the Shack–Hartmann system to measure one's eye lens aberrations. While Shack–Hartmann sensors measure the localized slope of the wavefront error using spot displacement in the sensor plane, Pamplona et al. replace the sensor plane with a high-resolution visual display (e.g. a mobile phone screen) that displays spots that the user views through a lenslet array. The user then manually shifts the displayed spots (i.e. the generated wavefront) until the spots align. The magnitude of this shift provides data to estimate first-order parameters such as radius of curvature and hence the error due to defocus and spherical aberration. References See also Optical Telescope Element (used this sensor in development of the James Webb Space Telescope) Sensors Optical metrology
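The measurement principle described above (each lenslet's spot displacement is proportional to the average wavefront slope over that lenslet) can be sketched numerically. The snippet below is a minimal illustration only; the focal length, pixel pitch, array size and centroid values are assumed example numbers, not parameters of any particular instrument.

```python
import numpy as np

# Assumed example parameters (illustrative, not from a specific sensor)
focal_length = 5e-3   # lenslet focal length in metres
pixel_pitch = 10e-6   # detector pixel size in metres

# Reference (unaberrated) and measured spot centroids, in pixels,
# for a small 3x3 lenslet array; the last axis is (x, y).
reference = np.zeros((3, 3, 2))
measured = reference + np.array([0.8, -0.3])  # a uniform shift, i.e. a pure tilt

# Centroid displacement in metres for each lenslet
displacement = (measured - reference) * pixel_pitch

# Local average wavefront slope per lenslet (small-angle approximation):
# slope = centroid displacement / focal length
slopes = displacement / focal_length  # shape (3, 3, 2)

# A uniform slope over all lenslets corresponds to a global tilt; spatially
# varying slopes would be integrated (zonal or modal reconstruction) to
# recover the wavefront shape.
print(slopes[0, 0])  # approximately [0.0016, -0.0006]
```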
Shack–Hartmann wavefront sensor
[ "Technology", "Engineering" ]
522
[ "Sensors", "Measuring instruments" ]
7,067,473
https://en.wikipedia.org/wiki/Industrial%20applications%20of%20nanotechnology
Nanotechnology is having an impact on the field of consumer goods: several products that incorporate nanomaterials are already available in a variety of items, many of which people do not even realize contain nanoparticles, with novel functions ranging from easy-to-clean to scratch-resistant surfaces. Examples include car bumpers that are made lighter, clothing that is more stain-repellent, sunscreen that is more radiation-resistant, synthetic bones that are stronger, cell phone screens that are lighter in weight, glass packaging for drinks that gives a longer shelf-life, and balls for various sports that are made more durable. Using nanotech, modern textiles are expected in the mid-term to become "smart" through embedded "wearable electronics"; such novel products also have promising potential, especially in the field of cosmetics, and numerous potential applications in heavy industry. Nanotechnology is predicted to be a main driver of technology and business in this century and holds the promise of higher-performance materials, intelligent systems and new production methods with significant impact for all aspects of society. Foods A complex set of engineering and scientific challenges in the food and bioprocessing industry for manufacturing high-quality and safe food through efficient and sustainable means can be addressed through nanotechnology. Bacteria identification and food quality monitoring using biosensors; intelligent, active, and smart food packaging systems; and nanoencapsulation of bioactive food compounds are a few examples of emerging applications of nanotechnology for the food industry. Nanotechnology can be applied in the production, processing, safety and packaging of food. A nanocomposite coating process could improve food packaging by placing anti-microbial agents directly on the surface of the coated film. Nanocomposites could increase or decrease the gas permeability of different fillers as needed for different products. They can also improve the mechanical and heat-resistance properties and lower the oxygen transmission rate. Research is being performed to apply nanotechnology to the detection of chemical and biological substances for sensing changes in foods. Nano-foods New foods are among the nanotechnology-created consumer products coming onto the market at the rate of 3 to 4 per week, according to the Project on Emerging Nanotechnologies (PEN), based on an inventory it has drawn up of 609 known or claimed nano-products. 
On PEN's list are three foods—a brand of canola cooking oil called Canola Active Oil, a tea called Nanotea and a chocolate diet shake called Nanoceuticals Slim Shake Chocolate. According to company information posted on PEN's Web site, the canola oil, by Shemen Industries of Israel, contains an additive called "nanodrops" designed to carry vitamins, minerals and phytochemicals through the digestive system. The shake, according to U.S. manufacturer RBC Life Sciences Inc., uses cocoa-infused "NanoClusters" to enhance the taste and health benefits of cocoa without the need for extra sugar. Consumer goods Surfaces and coatings The most prominent application of nanotechnology in the household is self-cleaning or "easy-to-clean" surfaces on ceramics or glasses. Nanoceramic particles have improved the smoothness and heat resistance of common household equipment such as the flat iron. The first sunglasses using protective and anti-reflective ultrathin polymer coatings are on the market. For optics, nanotechnology also offers scratch-resistant surface coatings based on nanocomposites. Nano-optics could allow for an increase in precision of pupil repair and other types of laser eye surgery. Textiles The use of engineered nanofibers already makes clothes water- and stain-repellent or wrinkle-free. Textiles with a nanotechnological finish can be washed less frequently and at lower temperatures. Nanotechnology has been used to integrate tiny carbon particles into the fabric membrane and guarantee full-surface protection from electrostatic charges for the wearer. Many other applications have been developed by research institutions such as the Textiles Nanotechnology Laboratory at Cornell University, and the UK's Dstl and its spin-out company P2i. Sports Nanotechnology may also play a role in sports such as soccer, football, and baseball. Materials for new athletic shoes may be engineered to make the shoe lighter (and the athlete faster). Baseball bats already on the market are made with carbon nanotubes that reinforce the resin, which is said to improve performance by making the bat lighter. Other items such as sport towels, yoga mats and exercise mats are on the market and used by players in the National Football League; these use antimicrobial nanotechnology to prevent illnesses caused by bacteria such as Methicillin-resistant Staphylococcus aureus (commonly known as MRSA). Aerospace and vehicle manufacturers Lighter and stronger materials will be of immense use to aircraft manufacturers, leading to increased performance. Spacecraft will also benefit, where weight is a major factor. Nanotechnology might thus help to reduce the size of equipment and thereby decrease the fuel consumption required to get it airborne. Hang gliders may be able to halve their weight while increasing their strength and toughness through the use of nanotech materials. Nanotech is lowering the mass of supercapacitors that will increasingly be used to give power to assistive electrical motors for launching hang gliders off flatland to thermal-chasing altitudes. Much like aerospace, lighter and stronger materials would be useful for creating vehicles that are both faster and safer. Combustion engines might also benefit from parts that are more hard-wearing and more heat-resistant. Military Biological sensors Nanotechnology can improve the military's ability to detect biological agents. Using nanotechnology, the military would be able to create sensor systems that detect biological agents. 
The sensor systems are already well developed and will be one of the first forms of nanotechnology that the military will start to use. Uniform material Nanoparticles can be injected into the material of soldiers’ uniforms not only to make the material more durable, but also to protect soldiers from many different dangers such as high temperatures, impacts and chemicals. The nanoparticles in the material protect soldiers from these dangers by grouping together when something strikes the armor and stiffening the area of impact. This stiffness helps lessen the impact of whatever hit the armor, whether it was extreme heat or a blunt force. By reducing the force of the impact, the nanoparticles protect the soldier wearing the uniform from any injury the impact could have caused. Another way nanotechnology can improve soldiers’ uniforms is by creating a better form of camouflage. Mobile pigment nanoparticles injected into the material can produce a better form of camouflage. These mobile pigment particles would be able to change the color of the uniforms depending upon the area that the soldiers are in. There is still much research being done on this self-changing camouflage. Nanotechnology can also improve thermal camouflage, which helps protect soldiers from people who are using night vision technology. Surfaces of many different military items can be designed so that they manipulate electromagnetic radiation to lower the infrared signature of the object that the surface is on. Surfaces of soldiers’ uniforms and surfaces of military vehicles are a few surfaces that can be designed in this way. Lowering the infrared signature of both the soldiers and the military vehicles they are using provides better protection from infrared-guided weapons and infrared surveillance sensors. Communication method There is a way to use nanoparticles to create coated polymer threads that can be woven into soldiers’ uniforms. These polymer threads could be used as a form of communication between the soldiers. The system of threads in the uniforms could be set to different light wavelengths, eliminating the ability for anyone else to listen in. This would lower the risk of having anything intercepted by unwanted listeners. Medical system A medical surveillance system for soldiers to wear can be made using nanotechnology. This system would be able to watch over their health and stress levels. The systems would be able to react to medical situations by releasing drugs or compressing wounds as necessary. This means that if the system detected an injury that was bleeding, it would be able to compress around the wound until further medical treatment could be received. The system would also be able to release drugs into the soldier's body for health reasons, such as painkillers for an injury. The system would be able to inform the medics at base of the soldier's health status at all times that the soldier is wearing the system. The energy needed to communicate this information back to base would be produced through the soldier's body movements. Weapons Nanoweapon is the name given to military technology currently under development which seeks to exploit the power of nanotechnology in the modern battlefield. Risks in military Actors such as state agencies, criminals and enterprises could use nano-robots to eavesdrop on conversations held in private. Grey goo: an uncontrollable, self-replicating nano-machine or robot. 
Nanoparticles used in different military materials could potentially be a hazard to the soldiers wearing the material, if the material is allowed to wear out. As the uniforms wear down it is possible for nanomaterial to break off and enter the soldiers’ bodies. Having nanoparticles entering the soldiers’ bodies would be very unhealthy and could seriously harm them. There is not a lot of information on what the actual damage to the soldiers would be, but there have been studies on the effect of nanoparticles entering a fish through its skin. The studies showed that the different fish in the study suffered from varying degrees of brain damage. Although brain damage would be a serious negative effect, the studies also say that the results cannot be taken as an accurate example of what would happen to soldiers if nanoparticles entered their bodies. There are very strict regulations on the scientists that manufacture products with nanoparticles. With these strict regulations, they are able to largely decrease the danger of nanoparticles wearing off of materials and entering the soldiers’ systems. Catalysis Chemical catalysis benefits especially from nanoparticles, due to their extremely large surface-to-volume ratio. The application potential of nanoparticles in catalysis ranges from fuel cells to catalytic converters and photocatalytic devices. Catalysis is also important for the production of chemicals, for example using nanoparticles with a distinct chemical surrounding (ligands) or specific optical properties. Platinum nanoparticles are being considered for the next generation of automotive catalytic converters because the very high surface area of nanoparticles could reduce the amount of platinum required. However, some concerns have been raised due to experiments demonstrating that they will spontaneously combust if methane is mixed with the ambient air. Ongoing research at the Centre National de la Recherche Scientifique (CNRS) in France may establish their true usefulness for catalytic applications. Nanofiltration may come to be an important application, although future research must be careful to investigate possible toxicity. Construction Nanotechnology has the potential to make construction faster, cheaper, safer, and more varied. Automation of nanotechnology construction can allow for the creation of structures from advanced homes to massive skyscrapers much more quickly and at much lower cost. In the near future, nanotechnology may be used to sense cracks in the foundations of structures and to send nanobots to repair them. Nanotechnology is an active research area that encompasses a number of disciplines such as electronics, bio-mechanics and coatings. These disciplines assist in the areas of civil engineering and construction materials. If nanotechnology is implemented in the construction of homes and infrastructure, such structures will be stronger. If buildings are stronger, then fewer of them will require reconstruction and less waste will be produced. Nanotechnology in construction involves using nanoparticles such as alumina and silica. Manufacturers are also investigating methods of producing nano-cement. If cement with nano-size particles can be manufactured and processed, it will open up a large number of opportunities in the fields of ceramics, high-strength composites and electronic applications. Nanomaterials still have a high cost relative to conventional materials, meaning that they are not likely to feature in high-volume building materials. 
The day when this technology slashes the consumption of structural steel has not yet been contemplated. Cement Much analysis of concrete is being done at the nano-level in order to understand its structure. Such analysis uses various techniques developed for study at that scale, such as Atomic Force Microscopy (AFM), Scanning Electron Microscopy (SEM) and Focused Ion Beam (FIB). This has come about as a side benefit of the development of these instruments to study the nanoscale in general, but the understanding of the structure and behavior of concrete at the fundamental level is an important and very appropriate use of nanotechnology. One of the fundamental aspects of nanotechnology is its interdisciplinary nature, and there has already been cross-over research between the mechanical modeling of bones for medical engineering and that of concrete, which has enabled the study of chloride diffusion in concrete (which causes corrosion of reinforcement). Concrete is, after all, a macro-material strongly influenced by its nano-properties, and understanding it at this new level is yielding new avenues for improvement of strength, durability and monitoring, as outlined in the following paragraphs. Silica (SiO2) is present in conventional concrete as part of the normal mix. However, one of the advancements made by the study of concrete at the nanoscale is that particle packing in concrete can be improved by using nano-silica, which leads to a densifying of the micro- and nanostructure, resulting in improved mechanical properties. Nano-silica addition to cement-based materials can also control the degradation of the fundamental C-S-H (calcium-silicate-hydrate) reaction of concrete caused by calcium leaching in water, as well as block water penetration, and therefore lead to improvements in durability. Related to improved particle packing, high-energy milling of ordinary Portland cement (OPC) clinker and standard sand produces a greater particle size diminution with respect to conventional OPC and, as a result, the compressive strength of the refined material is also 3 to 6 times higher (at different ages). Steel Steel is a widely available material that has a major role in the construction industry. The use of nanotechnology in steel helps to improve the physical properties of steel. Fatigue, or the structural failure of steel, is due to cyclic loading. Current steel designs are based on the reduction in the allowable stress, service life or regular inspection regime. This has a significant impact on the life-cycle costs of structures and limits the effective use of resources. Stress risers are responsible for initiating cracks from which fatigue failure results. The addition of copper nanoparticles reduces the surface unevenness of steel, which then limits the number of stress risers and hence fatigue cracking. Advancements in this technology through the use of nanoparticles would lead to increased safety, less need for regular inspection, and more efficient materials free from fatigue issues for construction. Steel cables can be strengthened using carbon nanotubes. Stronger cables reduce the costs and period of construction, especially in suspension bridges, as the cables are run from end to end of the span. The use of vanadium and molybdenum nanoparticles improves the delayed fracture problems associated with high-strength bolts. This reduces the effects of hydrogen embrittlement and improves the steel microstructure by reducing the effects of the inter-granular cementite phase. 
Welds and the Heat Affected Zone (HAZ) adjacent to welds can be brittle and fail without warning when subjected to sudden dynamic loading. The addition of nanoparticles such as magnesium and calcium makes the HAZ grains finer in plate steel. This nanoparticle addition leads to an increase in weld strength. The increase in strength results in a smaller resource requirement because less material is required in order to keep stresses within allowable limits. Wood Nanotechnology represents a major opportunity for the wood industry to develop new products, substantially reduce processing costs, and open new markets for biobased materials. Wood is also composed of nanotubes or “nanofibrils”; namely, lignocellulosic (woody tissue) elements which are twice as strong as steel. Harvesting these nanofibrils would lead to a new paradigm in sustainable construction, as both the production and use would be part of a renewable cycle. Some developers have speculated that building functionality onto lignocellulosic surfaces at the nanoscale could open new opportunities for such things as self-sterilizing surfaces, internal self-repair, and electronic lignocellulosic devices. These non-obtrusive active or passive nanoscale sensors would provide feedback on product performance and environmental conditions during service by monitoring structural loads, temperatures, moisture content, decay fungi, heat losses or gains, and loss of conditioned air. Currently, however, research in these areas appears limited. Due to its natural origins, wood is leading the way in cross-disciplinary research and modelling techniques. BASF have developed a highly water-repellent coating based on the actions of the lotus leaf, resulting from the incorporation of silica and alumina nanoparticles and hydrophobic polymers. Mechanical studies of bones have been adapted to model wood, for instance in the drying process. Glass Research is being carried out on the application of nanotechnology to glass, another important material in construction. Titanium dioxide (TiO2) nanoparticles are used to coat glazing since they have sterilizing and anti-fouling properties. The particles catalyze powerful reactions that break down organic pollutants, volatile organic compounds and bacterial membranes. TiO2 is hydrophilic (attracted to water), so it attracts rain drops that then wash off the dirt particles. Thus the introduction of nanotechnology in the glass industry provides glass with a self-cleaning property. Fire-protective glass is another application of nanotechnology. This is achieved by using a clear intumescent layer sandwiched between glass panels (an interlayer) formed of silica nanoparticles (SiO2), which turns into a rigid and opaque fire shield when heated. Most glass in construction is on the exterior surface of buildings, so the light and heat entering the building through the glass have to be controlled. Nanotechnology can provide better solutions for blocking light and heat coming through windows. Coatings Coatings are an important area in construction; they are extensively used to paint walls, doors, and windows. Coatings should provide a protective layer bound to the base material to produce a surface of the desired protective or functional properties. The coatings should have self-healing capabilities through a process of "self-assembly". Nanotechnology is being applied to paints to obtain coatings with self-healing capabilities and corrosion protection under insulation. 
These coatings are hydrophobic, repel water from the metal pipe, and can also protect metal from salt-water attack. Nanoparticle-based systems can provide better adhesion and transparency. The TiO2 coating captures and breaks down organic and inorganic air pollutants by a photocatalytic process, which allows roads to be put to good environmental use. Fire Protection and detection Fire resistance of steel structures is often provided by a coating produced by a spray-on cementitious process. Nano-cement has the potential to create a new paradigm in this area of application because the resulting material can be used as a tough, durable, high-temperature coating. It provides a good method of increasing fire resistance and is a cheaper option than conventional insulation. Risks in construction In building construction, nanomaterials are widely used, from self-cleaning windows to flexible solar panels to Wi-Fi-blocking paint. Self-healing concrete, materials that block ultraviolet and infrared radiation, smog-eating coatings, and light-emitting walls and ceilings are among the new nanomaterials in construction. Nanotechnology holds promise for making the "smart home" a reality. Nanotech-enabled sensors can monitor temperature, humidity, and airborne toxins, which requires improved nanotech-based batteries. Building components will be intelligent and interactive; since the sensors use wireless components, they can collect a wide range of data. If nanosensors and nanomaterials become an everyday part of buildings, as with smart homes, what are the consequences of these materials for human beings? Effect of nanoparticles on health and environment: Nanoparticles may also enter the body if building water supplies are filtered through commercially available nanofilters. Airborne and waterborne nanoparticles enter from building ventilation and wastewater systems. Effect of nanoparticles on societal issues: As sensors become commonplace, a loss of privacy and autonomy may result from users interacting with increasingly intelligent building components. References External links Overview of Nanotechnology Applications Project on Emerging Nanotechnologies Nanotechnology
Industrial applications of nanotechnology
[ "Materials_science", "Engineering" ]
4,495
[ "Nanotechnology", "Materials science" ]
7,071,096
https://en.wikipedia.org/wiki/Engineering%20design%20process
The engineering design process, also known as the engineering method, is a common series of steps that engineers use in creating functional products and processes. The process is highly iterative – parts of the process often need to be repeated many times before another can be entered – though the part(s) that get iterated and the number of such cycles in any given project may vary. It is a decision-making process (often iterative) in which the engineering sciences, basic sciences and mathematics are applied to convert resources optimally to meet a stated objective. Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing and evaluation. Common stages of the engineering design process There are various framings/articulations of the engineering design process. Different terminology employed may have varying degrees of overlap, which affects what steps get stated explicitly or deemed "high level" versus subordinate in any given model. This applies equally to the example sequences given here. One example framing of the engineering design process delineates the following stages: research, conceptualization, feasibility assessment, establishing design requirements, preliminary design, detailed design, production planning and tool design, and production. Others, noting that "different authors (in both research literature and in textbooks) define different phases of the design process with varying activities occurring within them," have suggested more simplified/generalized models – such as problem definition, conceptual design, preliminary design, detailed design, and design communication. Another summary of the process, from European engineering design literature, includes clarification of the task, conceptual design, embodiment design, and detail design. (NOTE: In these examples, other key aspects – such as concept evaluation and prototyping – are subsets and/or extensions of one or more of the listed steps.) Research Various stages of the design process (and even earlier) can involve a significant amount of time spent on locating information and research. Consideration should be given to the existing applicable literature, problems and successes associated with existing solutions, costs, and marketplace needs. The source of information should be relevant. Reverse engineering can be an effective technique if other solutions are available on the market. Other sources of information include the Internet, local libraries, available government documents, professional organizations, trade journals, vendor catalogs and available individual experts. Design requirements Establishing design requirements and conducting requirement analysis, sometimes termed problem definition (or deemed a related activity), is one of the most important elements in the design process in certain industries, and this task is often performed at the same time as a feasibility analysis. The design requirements control the design of the product or process being developed, throughout the engineering design process. These include basic things like the functions, attributes, and specifications – determined after assessing user needs. Some design requirements include hardware and software parameters, maintainability, availability, and testability. 
Feasibility In some cases, a feasibility study is carried out, after which schedules, resource plans and estimates for the next phase are developed. The feasibility study is an evaluation and analysis of the potential of a proposed project to support the process of decision making. It outlines and analyses alternatives or methods of achieving the desired outcome. The feasibility study helps to narrow the scope of the project to identify the best scenario. A feasibility report is generated, following which a post-feasibility review is performed. The purpose of a feasibility assessment is to determine whether the engineer's project can proceed into the design phase. This is based on two criteria: the project needs to be based on an achievable idea, and it needs to be within cost constraints. It is important that engineers with experience and good judgment be involved in this portion of the feasibility study. Concept generation A concept study (conceptualization, conceptual design) is often a phase of project planning that includes producing ideas and taking into account the pros and cons of implementing those ideas. This stage of a project is done to minimize the likelihood of error, manage costs, assess risks, and evaluate the potential success of the intended project. In any event, once an engineering issue or problem is defined, potential solutions must be identified. These solutions can be found by using ideation, the mental process by which ideas are generated. In fact, this step is often termed Ideation or "Concept Generation." The following are widely used techniques: trigger word – a word or phrase associated with the issue at hand is stated, and subsequent words and phrases are evoked. morphological analysis – independent design characteristics are listed in a chart, and different engineering solutions are proposed for each characteristic. Normally, a preliminary sketch and short report accompany the morphological chart. synectics – the engineer imagines him or herself as the item and asks, "What would I do if I were the system?" This unconventional method of thinking may find a solution to the problem at hand. The vital aspect of the conceptualization step is synthesis: the process of taking the elements of the concept and arranging them in the proper way. This creative process is present in every design. brainstorming – this popular method involves thinking of different ideas, typically as part of a small group, and adopting these ideas in some form as a solution to the problem. Various generated ideas must then undergo a concept evaluation step, which utilizes various tools to compare and contrast the relative strengths and weaknesses of possible alternatives. Preliminary design The preliminary design, or high-level design (also called FEED or basic design), often bridges the gap between design conception and detailed design, particularly in cases where the level of conceptualization achieved during ideation is not sufficient for full evaluation. In this task, the overall system configuration is defined, and schematics, diagrams, and layouts of the project may provide early project configuration. (This notably varies a lot by field, industry, and product.) During detailed design and optimization, the parameters of the part being created will change, but the preliminary design focuses on creating the general framework to build the project on. B. S. Blanchard and W. J. 
Fabrycky describe it as: “The ‘whats’ initiating conceptual design produce ‘hows’ from the conceptual design evaluation effort applied to feasible conceptual design concepts. Next, the ‘hows’ are taken into preliminary design through the means of allocated requirements. There they become ‘whats’ and drive preliminary design to address ‘hows’ at this lower level.” Detailed design Following FEED is the Detailed Design (Detailed Engineering) phase, which may also include the procurement of materials. This phase further elaborates each aspect of the project/product through a complete description using solid modeling, drawings, and specifications. Computer-aided design (CAD) programs have made the detailed design phase more efficient. For example, a CAD program can provide optimization to reduce volume without hindering a part's quality. It can also calculate stress and displacement using the finite element method to determine stresses throughout the part. Production planning Production planning and tool design consist of planning how to mass-produce the product and which tools should be used in the manufacturing process. Tasks to complete in this step include selection of materials, selection of the production processes, determination of the sequence of operations, and selection of tools such as jigs, fixtures, metal cutting and metal or plastics forming tools. This task also involves additional prototype testing iterations to ensure the mass-produced version meets qualification testing standards. Comparison with the scientific method Engineering is formulating a problem that can be solved through design. Science is formulating a question that can be solved through investigation. The engineering design process bears some similarity to the scientific method. Both processes begin with existing knowledge, and gradually become more specific in the search for knowledge (in the case of "pure" or basic science) or a solution (in the case of "applied" science, such as engineering). The key difference between the engineering process and the scientific process is that the engineering process focuses on design, creativity and innovation while the scientific process emphasizes explanation, prediction and discovery (observation). Degree programs Methods are being taught and developed at universities including: Engineering Design, University of Bristol Faculty of Engineering Dyson School of Design Engineering, Imperial College London TU Delft, Industrial Design Engineering. University of Waterloo, Systems Design Engineering See also Applied science Computer-automated design Design engineer Engineering analysis Engineering optimization New product development Systems engineering process Surrogate model Traditional engineering References External links Ullman, David G. (2009) The Mechanical Design Process, McGraw-Hill, 4th edition; Eggert, Rudolph J. (2010) Engineering Design, Second Edition, High Peak Press, Meridian, Idaho. Engineering concepts Mechanical engineering Systems engineering
Engineering design process
[ "Physics", "Engineering" ]
1,757
[ "Systems engineering", "nan", "Applied and interdisciplinary physics", "Mechanical engineering" ]
7,073,120
https://en.wikipedia.org/wiki/Nuclear%20criticality%20safety
Nuclear criticality safety is a field of nuclear engineering dedicated to the prevention of nuclear and radiation accidents resulting from an inadvertent, self-sustaining nuclear chain reaction. Nuclear criticality safety is concerned with mitigating the consequences of a nuclear criticality accident. A nuclear criticality accident occurs from operations that involve fissile material and results in a sudden and potentially lethal release of radiation. Nuclear criticality safety practitioners attempt to prevent nuclear criticality accidents by analyzing normal and credible abnormal conditions in fissile material operations and designing safe arrangements for the processing of fissile materials. A common practice is to apply a double contingency analysis to the operation in which two or more independent, concurrent and unlikely changes in process conditions must occur before a nuclear criticality accident can occur. For example, the first change in conditions may be complete or partial flooding and the second change a re-arrangement of the fissile material. Controls (requirements) on process parameters (e.g., fissile material mass, equipment) result from this analysis. These controls, either passive (physical), active (mechanical), or administrative (human), are implemented by inherently safe or fault-tolerant plant designs, or, if such designs are not practicable, by administrative controls such as operating procedures, job instructions and other means to minimize the potential for significant process changes that could lead to a nuclear criticality accident. Principles As a simplistic analysis, a system will be exactly critical if the rate of neutron production from fission is exactly balanced by the rate at which neutrons are either absorbed or lost from the system due to leakage. Safely subcritical systems can be designed by ensuring that the potential combined rate of absorption and leakage always exceeds the potential rate of neutron production (a compact formulation of this balance is given after the list of parameters below). The parameters affecting the criticality of the system may be remembered using the mnemonic MAGICMERV. Some of these parameters are not independent from one another; for example, changing mass will result in a change of volume, among others. Mass: The probability of fission increases as the total number of fissile nuclei increases. The relationship is not linear. If a fissile body has a given size and shape but varying density and mass, there is a threshold below which criticality cannot occur. This threshold is called the critical mass. Absorption: Absorption removes neutrons from the system. Large amounts of absorbers are used to control or reduce the probability of a criticality. Good absorbers are boron, cadmium, gadolinium, silver, and indium. Geometry/shape: The shape of the fissile system affects how easily neutrons can escape (leak out) from it, in which case they are not available to cause fission events in the fissile material. Therefore, the shape of the fissile material affects the probability of occurrence of fission events. A shape with a large surface area, such as a thin slab, favors leakage and is safer than the same amount of fissile material in a small, compact shape such as a cube or sphere. Interaction of units: Neutrons leaking from one unit can enter another. Two units, which by themselves are sub-critical, could interact with each other to form a critical system. The distance separating the units and any material between them influences the effect. 
Concentration/Density: Neutron reactions leading to scattering, capture or fission reactions are more likely to occur in dense materials; conversely, neutrons are more likely to escape (leak) from low density materials. Moderation: Neutrons resulting from fission are typically fast (high energy). These fast neutrons do not cause fission as readily as slower (less energetic) ones. Neutrons are slowed down (moderated) by collision with atomic nuclei. The most effective moderating nuclei are hydrogen, deuterium, beryllium and carbon. Hence hydrogenous materials including oil, polyethylene, water, wood, paraffin, and the human body are good moderators. Note that moderation comes from collisions; therefore most moderators are also good reflectors. Enrichment: The probability of a neutron reacting with a fissile nucleus is influenced by the relative numbers of fissile and non-fissile nuclei in a system. The process of increasing the relative number of fissile nuclei in a system is called enrichment. Typically, low enrichment means less likelihood of a criticality and high enrichment means a greater likelihood. Reflection: When neutrons collide with other atomic particles (primarily nuclei) and are not absorbed, they are scattered (i.e. they change direction). If the change in direction is large enough, neutrons that have just escaped from a fissile body may be deflected back into it, increasing the likelihood of fission. This is called 'reflection'. Good reflectors include hydrogen, beryllium, carbon, lead, uranium, water, polyethylene, concrete, Tungsten carbide and steel. Volume: For a body of fissile material in any given shape, increasing the size of the body increases the average distance that neutrons must travel before they can reach the surface and escape. Hence, increasing the size of the body increases the likelihood of fission and decreases the likelihood of leakage. Hence, for any given shape (and reflection conditions - see below) there will be a size that gives an exact balance between the rate of neutron production and the combined rate of absorption and leakage. This is the critical size. Other parameters include: Temperature: This particular parameter is less commonly considered by criticality safety practitioners, as variations in temperature in a typical operating environment are often minimal or unlikely to adversely affect the criticality of the system. Often, it is assumed the actual temperature of the system being analyzed is close to room temperature. Notable exceptions to this assumption include high-temperature reactors and low-temperature cryogenic experiments. Heterogeneity: Blending fissile powders into solution, milling of powders or scraps, or other processes that affect the small-scale structure of fissile materials is important. While normally referred to as heterogeneity control, generally the concern is maintaining homogeneity because the homogeneous case is usually less reactive. Particularly, at lower enrichment, a system may be more reactive in a heterogeneous configuration compared to a homogeneous configuration. Physicochemical Form: Consists of controlling the physical state (i.e., solid, liquid, or gas) and form (e.g., solution, powder, green or sintered pellets, or metal) and/or chemical composition (e.g., uranium hexafluoride, uranyl fluoride, plutonium nitrate, or mixed oxide) of a particular fissile material. The physicochemical form could indirectly affect other parameters, such as density, moderation, and neutron absorption. 
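The neutron balance stated at the start of this section is usually expressed through the effective multiplication factor. The following is the standard textbook formulation, not a result specific to any facility or analysis code:

```latex
k_{\mathrm{eff}} \;=\; \frac{\text{rate of neutron production by fission}}{\text{rate of absorption} \;+\; \text{rate of leakage}},
\qquad
\begin{cases}
k_{\mathrm{eff}} < 1 & \text{subcritical (the condition criticality safety seeks to guarantee)}\\
k_{\mathrm{eff}} = 1 & \text{exactly critical}\\
k_{\mathrm{eff}} > 1 & \text{supercritical}
\end{cases}
```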
Calculations and analyses To determine if any given system containing fissile material is safe, its neutron balance must be calculated. In all but very simple cases, this usually requires the use of computer programs to model the system geometry and its material properties. The analyst describes the geometry of the system and the materials, usually with conservative or pessimistic assumptions. The density and size of any neutron absorbers is minimised while the amount of fissile material is maximised. As some moderators are also absorbers, the analyst must be careful when modelling these to be pessimistic. Computer codes allow analysts to describe a three-dimensional system with boundary conditions. These boundary conditions can represent real boundaries such as concrete walls or the surface of a pond, or can be used to represent an artificial infinite system using a periodic boundary condition. These are useful when representing a large system consisting of many repeated units. Computer codes used for criticality safety analyses include OPENMC (MIT), COG (US), MONK (UK), SCALE/KENO (US), MCNP (US), and CRISTAL (France). Burnup credit Traditional criticality analyses assume that the fissile material is in its most reactive condition, which is usually at maximum enrichment, with no irradiation. For spent nuclear fuel storage and transport, burnup credit may be used to allow fuel to be more closely packed, reducing space and allowing more fuel to be handled safely. In order to implement burnup credit, fuel is modeled as irradiated using pessimistic conditions which produce an isotopic composition representative of all irradiated fuel. Fuel irradiation produces actinides consisting of both neutron absorbers and fissionable isotopes as well as fission products which absorb neutrons. In fuel storage pools using burnup credit, separate regions are designed for storage of fresh and irradiated fuel. In order to store fuel in the irradiated fuel store it must satisfy a loading curve which is dependent on initial enrichment and irradiation. See also Critical mass Criticality accident Nuclear and radiation accidents and incidents World Association of Nuclear Operators References Nuclear safety and security Nuclear technology
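The codes listed above solve the full neutron transport problem, but the balance they evaluate can be illustrated with a deliberately oversimplified one-group diffusion estimate for a bare sphere. This is a teaching-style sketch only: the cross-section values are placeholders chosen to make the example run, not evaluated nuclear data, and no real criticality safety assessment would rely on such a model.

```python
import math

def k_eff_bare_sphere(nu_sigma_f, sigma_a, diffusion_coeff, radius_cm):
    """One-group, bare-sphere diffusion estimate of the multiplication factor.

    k_inf = nu*Sigma_f / Sigma_a
    B^2   = (pi / R)^2                 (geometric buckling; extrapolation length ignored)
    k_eff = k_inf / (1 + L^2 * B^2),   with L^2 = D / Sigma_a
    """
    k_inf = nu_sigma_f / sigma_a
    buckling = (math.pi / radius_cm) ** 2
    l_squared = diffusion_coeff / sigma_a
    return k_inf / (1.0 + l_squared * buckling)

# Placeholder one-group constants (illustrative only)
print(k_eff_bare_sphere(nu_sigma_f=0.16, sigma_a=0.12,
                        diffusion_coeff=0.9, radius_cm=25.0))
```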
Nuclear criticality safety
[ "Physics" ]
1,846
[ "Nuclear technology", "Nuclear physics" ]
7,075,678
https://en.wikipedia.org/wiki/Difference%20density%20map
In X-ray crystallography, a difference density map or Fo–Fc map shows the spatial distribution of the difference between the measured electron density of the crystal and the electron density explained by the current model. A way to compute this map has also been formulated for cryo-EM. Display Conventionally, they are displayed as isosurfaces with positive density—electron density where there is nothing in the model, usually corresponding to some constituent of the crystal that has not been modelled, for example a ligand or a crystallisation adjuvant—in green, and negative density—parts of the model not backed up by electron density, indicating either that an atom has been disordered by radiation damage or that it is modelled in the wrong place—in red. The typical contouring (display threshold) is set at 3σ. Calculation Difference density maps are usually calculated using Fourier coefficients which are the differences between the observed structure factor amplitudes from the X-ray diffraction experiment and the calculated structure factor amplitudes from the current model, using the phase from the model for both terms (since no phases are available for the observed data). The two sets of structure factors must be on the same scale. It is now normal to also include maximum-likelihood weighting terms which take into account the estimated errors in the current model, giving coefficients of the form m|Fo| − D|Fc| (with the model phase applied to both terms), where m is a figure of merit which is an estimate of the cosine of the error in the phase, and D is a "σA" scale factor. These coefficients are derived from the gradient of the likelihood function of the observed structure factors on the basis of the current model. A difference map built with m and D is known as a mFo – DFc map. The use of ML weighting reduces model bias (due to using the model's phase) in the 2Fo–Fc map, which is the main estimate of the true density. However, it does not fully eliminate such bias. References Further reading Electron density maps on Proteopedia Crystallography
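As a minimal sketch of the weighted-coefficient construction described above, assuming per-reflection arrays of observed amplitudes, calculated structure factors, and m and D estimates (all values below are made up for illustration), the coefficients could be formed as follows. Producing the actual map would additionally require a Fourier synthesis over the reflection indices, which crystallographic packages handle in practice.

```python
import numpy as np

# Illustrative inputs, one entry per reflection (hypothetical values)
f_obs = np.array([120.0, 85.0, 40.0])           # observed amplitudes |Fo|
f_calc = np.array([110 + 15j, -80 + 5j, 10 - 35j])  # calculated structure factors Fc
m = np.array([0.95, 0.90, 0.60])                # figure of merit (estimated cos of phase error)
D = np.array([0.98, 0.97, 0.90])                # sigma-A scale factor

# Model phases are the only phases available, so they are applied to both terms
phase = np.exp(1j * np.angle(f_calc))

# Weighted coefficients for the mFo - DFc difference map
diff_coeff = (m * f_obs - D * np.abs(f_calc)) * phase

# Coefficients for the 2mFo - DFc map, the main estimate of the true density
best_coeff = (2.0 * m * f_obs - D * np.abs(f_calc)) * phase
```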
Difference density map
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
412
[ "Crystallography", "Condensed matter physics", "Materials science" ]
7,076,870
https://en.wikipedia.org/wiki/Myc
Myc is a family of regulator genes and proto-oncogenes that code for transcription factors. The Myc family consists of three related human genes: c-myc (MYC), l-myc (MYCL), and n-myc (MYCN). c-myc (also sometimes referred to as MYC) was the first gene to be discovered in this family, due to homology with the viral gene v-myc. In cancer, c-myc is often constitutively (persistently) expressed. This leads to the increased expression of many genes, some of which are involved in cell proliferation, contributing to the formation of cancer. A common human translocation involving c-myc is critical to the development of most cases of Burkitt lymphoma. Constitutive upregulation of Myc genes has also been observed in carcinoma of the cervix, colon, breast, lung and stomach. Myc is thus viewed as a promising target for anti-cancer drugs. Unfortunately, Myc possesses several features that have rendered it difficult to drug to date, such that any anti-cancer drugs aimed at inhibiting Myc may continue to require perturbing the protein indirectly, such as by targeting the mRNA for the protein rather than via a small molecule that targets the protein itself. c-Myc also plays an important role in stem cell biology and was one of the original Yamanaka factors used to reprogram somatic cells into induced pluripotent stem cells. In the human genome, c-myc is located on chromosome 8 and is believed to regulate expression of 15% of all genes through binding to enhancer box sequences (E-boxes). In addition to its role as a classical transcription factor, N-myc may recruit histone acetyltransferases (HATs). This allows it to regulate global chromatin structure via histone acetylation. Discovery The Myc family was first established after discovery of homology between an oncogene carried by the avian myelocytomatosis virus (v-myc) and a human gene over-expressed in various cancers, cellular Myc (c-Myc). Later, discovery of further homologous genes in humans led to the addition of n-Myc and l-Myc to the family of genes. The most frequently discussed example of c-Myc as a proto-oncogene is its implication in Burkitt's lymphoma. In Burkitt's lymphoma, cancer cells show chromosomal translocations, most commonly between chromosome 8 and chromosome 14 [t(8;14)]. This causes c-Myc to be placed downstream of the highly active immunoglobulin (Ig) promoter region, leading to overexpression of Myc. Structure The protein products of Myc family genes all belong to the Myc family of transcription factors, which contain bHLH (basic helix-loop-helix) and LZ (leucine zipper) structural motifs. The bHLH motif allows Myc proteins to bind with DNA, while the leucine zipper TF-binding motif allows dimerization with Max, another bHLH transcription factor. Myc mRNA contains an IRES (internal ribosome entry site) that allows the RNA to be translated into protein when 5' cap-dependent translation is inhibited, such as during viral infection. Function Myc proteins are transcription factors that activate expression of many pro-proliferative genes through binding enhancer box sequences (E-boxes) and recruiting histone acetyltransferases (HATs). Myc is thought to function by upregulating transcript elongation of actively transcribed genes through the recruitment of transcriptional elongation factors. It can also act as a transcriptional repressor. By binding the Miz-1 transcription factor and displacing the p300 co-activator, it inhibits expression of Miz-1 target genes. In addition, Myc has a direct role in the control of DNA replication. 
This activity could contribute to DNA amplification in cancer cells. Myc is activated upon various mitogenic signals such as serum stimulation or by Wnt, Shh and EGF (via the MAPK/ERK pathway). By modifying the expression of its target genes, Myc activation results in numerous biological effects. The first to be discovered was its capability to drive cell proliferation (upregulates cyclins, downregulates p21), but it also plays a very important role in regulating cell growth (upregulates ribosomal RNA and proteins), apoptosis (downregulates Bcl-2), differentiation, and stem cell self-renewal. Nucleotide metabolism genes, which are necessary for Myc-induced proliferation and cell growth, are upregulated by Myc. There have been several studies that have clearly indicated Myc's role in cell competition. A major effect of c-myc is B cell proliferation, and gain of MYC has been associated with B cell malignancies and their increased aggressiveness, including histological transformation. In B cells, Myc acts as a classical oncogene by regulating a number of pro-proliferative and anti-apoptotic pathways; this also includes tuning of BCR signaling and CD40 signaling in regulation of microRNAs (miR-29, miR-150, miR-17-92). c-Myc induces MTDH (AEG-1) gene expression and in turn requires the AEG-1 oncogene for its own expression. Myc-nick Myc-nick is a cytoplasmic form of Myc produced by a partial proteolytic cleavage of full-length c-Myc and N-Myc. Myc cleavage is mediated by the calpain family of calcium-dependent cytosolic proteases. The cleavage of Myc by calpains is a constitutive process but is enhanced under conditions that require rapid downregulation of Myc levels, such as during terminal differentiation. Upon cleavage, the C-terminus of Myc (containing the DNA binding domain) is degraded, while Myc-nick, the N-terminal 298-residue segment, remains in the cytoplasm. Myc-nick contains binding domains for histone acetyltransferases and for ubiquitin ligases. The functions of Myc-nick are currently under investigation, but this new Myc family member was found to regulate cell morphology, at least in part, by interacting with acetyltransferases to promote the acetylation of α-tubulin. Ectopic expression of Myc-nick accelerates the differentiation of committed myoblasts into muscle cells. Clinical significance A large body of evidence shows that Myc genes and proteins are highly relevant for treating tumors. Except for early response genes, Myc universally upregulates gene expression. Furthermore, the upregulation is nonlinear. Genes for which expression is already significantly upregulated in the absence of Myc are strongly boosted in the presence of Myc, whereas genes for which expression is low in the absence of Myc get only a small boost when Myc is present. Inactivation of SUMO-activating enzyme (SAE1/SAE2) in the presence of Myc hyperactivation results in mitotic catastrophe and cell death in cancer cells. Hence inhibitors of SUMOylation may be a possible treatment for cancer. Amplification of the MYC gene was found in a significant number of epithelial ovarian cancer cases. In TCGA datasets, the amplification of Myc occurs in several cancer types, including breast, colorectal, pancreatic, gastric, and uterine cancers. In the experimental transformation process of normal cells into cancer cells, the MYC gene can cooperate with the RAS gene. Expression of Myc is highly dependent on BRD4 function in some cancers. 
BET inhibitors have been used to successfully block Myc function in pre-clinical cancer models and are currently being evaluated in clinical trials. MYC expression is controlled by a wide variety of noncoding RNAs, including miRNA, lncRNA, and circRNA. Some of these RNAs have been shown to be specific for certain types of human tissues and tumors. Changes in the expression of such RNAs can potentially be used to develop targeted tumor therapy. MYC rearrangements MYC chromosomal rearrangements (MYC-R) occur in 10% to 15% of diffuse large B-cell lymphomas (DLBCLs), an aggressive non-Hodgkin lymphoma (NHL). Patients with MYC-R have inferior outcomes and can be classified as single-hit when they only have MYC-R; as double-hit when the rearrangement is accompanied by a translocation of BCL2 or BCL6; and as triple-hit when MYC-R is accompanied by translocations of both BCL2 and BCL6. Double- and triple-hit lymphomas have recently been classified as high-grade B-cell lymphoma (HGBCL), which is associated with a poor prognosis. MYC-R in DLBCL/HGBCL is believed to arise through the aberrant activity of activation-induced cytidine deaminase (AICDA), which facilitates somatic hypermutation (SHM) and class-switch recombination (CSR). Although AICDA primarily targets IG loci for SHM and CSR, its off-target mutagenic effects can impact lymphoma-associated oncogenes like MYC, potentially leading to oncogenic rearrangements. The breakpoints in MYC rearrangements show considerable variability within the MYC region. These breakpoints may occur within the so-called “genic cluster,” a region spanning approximately 1.5 kb upstream of the transcription start site, as well as the first exon and intron of MYC. Fluorescence in situ hybridization (FISH) has become a routine practice in many clinical laboratories for lymphoma characterization. A break-apart (BAP) FISH probe is commonly utilized for the detection of MYC-R due to the variability of breakpoints in the MYC locus and the diversity of rearrangement partners, including immunoglobulin (IG) and non-IG partners (i.e. BCL2/BCL6). The MYC BAP probe includes a red and a green probe which hybridize 5’ and 3’ to the MYC gene, respectively. In an intact MYC locus, these probes yield a fusion signal. When MYC-R occurs, two types of signals can be observed: balanced patterns, which present separate red and green signals, and unbalanced patterns, in which an isolated red or green signal is observed in the absence of the corresponding partner signal. Unbalanced MYC-R are frequently associated with increased MYC expression. There is large variability in the interpretation of unbalanced MYC BAP results among scientists, which can impact diagnostic classification and therapeutic management of patients. Animal models In Drosophila, Myc is encoded by the diminutive locus (which was known to geneticists prior to 1935). Classical diminutive alleles resulted in a viable animal with small body size. Drosophila has subsequently been used to implicate Myc in cell competition, endoreplication, and cell growth. During the discovery of the Myc gene, it was realized that chromosomes that reciprocally translocate to chromosome 8 contained immunoglobulin genes at the break-point. To study the mechanism of tumorigenesis in Burkitt lymphoma by mimicking the expression pattern of Myc in these cancer cells, transgenic mouse models were developed. The Myc gene placed under the control of the IgM heavy chain enhancer in transgenic mice gives rise mainly to lymphomas. 
Later on, to study the effects of Myc in other types of cancer, transgenic mice that overexpress Myc in different tissues (liver, breast) were also made. In all these mouse models overexpression of Myc causes tumorigenesis, illustrating the potency of the Myc oncogene. In a study in mice, reduced expression of Myc was shown to induce longevity, with significantly extended median and maximum lifespans in both sexes and a reduced mortality rate across all ages. The mice also showed better health, slower cancer progression, improved metabolism and smaller bodies, together with reduced TOR, AKT and S6K activity and other changes in energy and metabolic pathways (such as altered AMPK activity, higher oxygen consumption and more body movement). The study by John M. Sedivy and others used Cre-loxP recombination to knock out one copy of Myc, which resulted in a haploinsufficient genotype noted as Myc+/-. The phenotypes seen oppose the effects of normal aging and are shared with many other long-lived mouse models such as CR (calorie restriction), Ames dwarf, rapamycin, metformin and resveratrol. One study found that Myc and p53 genes were key to the survival of chronic myeloid leukaemia (CML) cells. Targeting Myc and p53 proteins with drugs gave positive results in mice with CML. Relationship to stem cells Myc genes play a number of normal roles in stem cells including pluripotent stem cells. In neural stem cells, N-Myc promotes a rapidly proliferative stem cell and precursor-like state in the developing brain, while inhibiting differentiation. In hematopoietic stem cells, Myc controls the balance between self-renewal and differentiation. In particular, long-term hematopoietic stem cells (LT-HSCs) express low levels of c-Myc, ensuring self-renewal. Enforced expression of c-Myc in LT-HSCs promotes differentiation at the expense of self-renewal, resulting in stem cell exhaustion. In pathological states and specifically in acute myeloid leukemia, oxidant stress can trigger higher levels of Myc expression that affect the behavior of leukemia stem cells. c-Myc plays a major role in the generation of induced pluripotent stem cells (iPSCs). It is one of the original factors discovered by Yamanaka et al. to encourage cells to return to a 'stem-like' state, alongside the transcription factors Oct4, Sox2 and Klf4. It has since been shown that it is possible to generate iPSCs without c-Myc. Interactions Myc has been shown to interact with: ACTL6A BRCA1 Bcl-2 Cyclin T1 CHD8 DNMT3A EP400 GTF2I HTATIP let-7 MAPK1 MAPK8 MAX MLH1 MYCBP2 MYCBP NMI NFYB NFYC P73 PCAF PFDN5 RuvB-like 1 SAP130 SMAD2 SMAD3 SMARCA4 SMARCB1 SUPT3H TIAM1 TADA2L TAF9 TFAP2A TRRAP WDR5 YY1 and ZBTB17. C2orf16 See also Myc-tag C-myc mRNA References Further reading External links InterPro signatures for the protein family The Myc Protein NCBI Human Myc protein Myc cancer gene Generating iPS Cells from MEFS through Forced Expression of Sox-2, Oct-4, c-Myc, and Klf4 Drosophila Myc - The Interactive Fly PDBe-KB provides an overview of all the structure information available in the PDB for Human Myc proto-oncogene protein Oncogenes Transcription factors Human proteins
Myc
[ "Chemistry", "Biology" ]
3,358
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
7,077,416
https://en.wikipedia.org/wiki/Avidity
In biochemistry, avidity refers to the accumulated strength of multiple affinities of individual non-covalent binding interactions, such as between a protein receptor and its ligand, and is commonly referred to as functional affinity. Avidity differs from affinity, which describes the strength of a single interaction. However, because individual binding events increase the likelihood of occurrence of other interactions (i.e., increase the local concentration of each binding partner in proximity to the binding site), avidity should not be thought of as the mere sum of its constituent affinities but as the combined effect of all affinities participating in the biomolecular interaction. A particularly important aspect relates to the phenomenon of 'avidity entropy'. Biomolecules often form heterogeneous complexes or homogeneous oligomers and multimers or polymers. If clustered proteins form an organized matrix, such as the clathrin coat, the interaction is described as a matricity. Antibody-antigen interaction Avidity is commonly applied to antibody interactions in which multiple antigen-binding sites simultaneously interact with the target antigenic epitopes, often in multimerized structures. Individually, each binding interaction may be readily broken; however, when many binding interactions are present at the same time, transient unbinding of a single site does not allow the molecule to diffuse away, and binding of that weak interaction is likely to be restored. Each antibody has at least two antigen-binding sites; therefore, antibodies are bivalent to multivalent. Avidity (functional affinity) is the accumulated strength of multiple affinities. For example, IgM is said to have low affinity but high avidity because it has 10 weak binding sites for antigen, as opposed to the 2 stronger binding sites of IgG, IgE and IgD with higher single binding affinities. Affinity Binding affinity is a measure of dynamic equilibrium, given by the ratio of the on-rate (kon) and off-rate (koff) under specific concentrations of reactants. The affinity constant, Ka, is the inverse of the dissociation constant, Kd. The strength of complex formation in solution is related to the stability constants of complexes; however, in the case of large biomolecules, such as receptor-ligand pairs, their interaction also depends on other structural and thermodynamic properties of the reactants, as well as their orientation and immobilization. Several methods exist to investigate protein–protein interactions, differing in how each reactant is immobilized in a 2D or 3D orientation. The measured affinities are stored in public databases, such as the Ki Database and BindingDB. As an example, affinity is the binding strength between the epitope of an antigenic determinant and the paratope of the antigen-binding site of an antibody. Participating non-covalent interactions may include hydrogen bonds, electrostatic bonds, van der Waals forces and hydrophobic effects. Calculation of binding affinity for a bimolecular reaction (1 antibody binding site per 1 antigen): [Ab] + [Ag] <=> [AbAg], where [Ab] is the antibody concentration and [Ag] is the antigen concentration, either in free ([Ab], [Ag]) or bound ([AbAg]) state. 
calculation of the association constant (or equilibrium constant): Ka = [AbAg] / ([Ab][Ag]) = kon/koff; calculation of the dissociation constant: Kd = [Ab][Ag] / [AbAg] = koff/kon. Application Avidity tests for rubella virus, Toxoplasma gondii, cytomegalovirus (CMV), varicella zoster virus, human immunodeficiency virus (HIV), hepatitis viruses, Epstein–Barr virus, and others have been developed. These tests help to distinguish acute, recurrent or past infection by the avidity of marker-specific IgG. Currently there are two avidity assays in use: the well-known chaotropic (conventional) assay and the recently developed AVIcomp (avidity competition) assay. A number of technologies exist to characterise the avidity of molecular interactions, including switchSENSE and surface plasmon resonance. See also Amino acid residue Epitope Fab region Hapten References Further reading External links Biophysics Protein structure
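As a rough numerical illustration of the mass-action relationships above, the following Python sketch uses purely hypothetical rate constants and concentrations (not values for any real antibody) to compute Kd, Ka and the fraction of antigen bound at equilibrium for a single binding site.

```python
# Illustrative only: hypothetical rate constants for a single antibody binding site.
k_on = 1.0e5    # association rate constant, 1/(M*s)  (assumed value)
k_off = 1.0e-3  # dissociation rate constant, 1/s      (assumed value)

K_d = k_off / k_on   # dissociation constant, M
K_a = 1.0 / K_d      # association (affinity) constant, 1/M

# Fraction of antigen bound at equilibrium for a given free antibody concentration,
# from [AbAg] / ([Ag] + [AbAg]) = [Ab] / (K_d + [Ab]).
ab_free = 5.0e-8     # free antibody concentration, M (assumed value)
fraction_bound = ab_free / (K_d + ab_free)

print(f"Kd = {K_d:.2e} M, Ka = {K_a:.2e} 1/M, bound fraction = {fraction_bound:.2f}")
```

With these assumed numbers the sketch prints a Kd of 1e-8 M and a bound fraction of about 0.83; avidity effects in multivalent binding are not modelled here.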
Avidity
[ "Physics", "Chemistry", "Biology" ]
861
[ "Structural biology", "Applied and interdisciplinary physics", "Biophysics", "Protein structure" ]
8,566,056
https://en.wikipedia.org/wiki/Chain%20rule%20for%20Kolmogorov%20complexity
The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states: H(X,Y) = H(X) + H(Y|X). That is, the combined randomness of two sequences X and Y is the sum of the randomness of X plus whatever randomness is left in Y once we know X. This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability: P(X,Y) = P(X) P(Y|X). The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term: K(x,y) = K(x) + K(y|x) + O(log K(x,y)). (An exact version, KP(x,y) = KP(x) + KP(y|x*) + O(1), holds for the prefix complexity KP, where x* is a shortest program for x.) It states that the shortest program printing X and Y is obtained by concatenating a shortest program printing X with a program printing Y given X, plus at most a logarithmic factor. The result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric: I(x:y) = I(y:x) + O(log K(x,y)) for all x, y. Proof The ≤ direction is obvious: we can write a program to produce x and y by concatenating a program to produce x, a program to produce y given access to x, and (whence the log term) the length of one of the programs, so that we know where to separate the two programs for x and for y given x (log K(x,y) upper-bounds this length). For the ≥ direction, it suffices to show that for all k, l such that k + l = K(x,y), we have that either K(x) ≤ k + O(log K(x,y)) or K(y|x) ≤ l + O(log K(x,y)). Consider the list (a1,b1), (a2,b2), ..., (ae,be) of all pairs produced by programs of length exactly K(x,y) [hence K(a,b) ≤ K(x,y) for every pair (a,b) in the list]. Note that this list contains the pair (x,y), can be enumerated given K(x,y) (by running all programs of length K(x,y) in parallel), and has at most 2^K(x,y) elements (because there are at most 2^n programs of length n). First, suppose that x appears less than 2^l times as first element. We can specify y given x by enumerating (a1,b1), (a2,b2), ... and then selecting (x,y) in the sub-list of pairs (x,b). By assumption, the index of (x,y) in this sub-list is less than 2^l and hence there is a program for y given x of length l + O(log K(x,y)). Now, suppose that x appears at least 2^l times as first element. This can happen for at most 2^(K(x,y)−l) = 2^k different strings. These strings can be enumerated given K(x,y) and l, and hence x can be specified by its index in this enumeration. The corresponding program for x has size k + O(log K(x,y)). Theorem proved. References Computability theory Theory of computation Articles containing proofs
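Kolmogorov complexity itself is uncomputable, so the chain rule cannot be checked directly in code. The following Python sketch instead verifies the Shannon-entropy identity H(X,Y) = H(X) + H(Y|X) that the article uses as the analogue, on a small made-up joint distribution.

```python
import math
from collections import defaultdict

# Toy joint distribution over (X, Y); the probabilities are made up for illustration.
p_xy = {("a", 0): 0.3, ("a", 1): 0.2, ("b", 0): 0.1, ("b", 1): 0.4}

def entropy(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Marginal distribution of X.
p_x = defaultdict(float)
for (x, _), p in p_xy.items():
    p_x[x] += p

# Conditional entropy H(Y|X) = sum over x of p(x) * H(Y | X = x).
h_y_given_x = 0.0
for x, px in p_x.items():
    cond = {y: p / px for (xx, y), p in p_xy.items() if xx == x}
    h_y_given_x += px * entropy(cond)

print(entropy(p_xy))               # H(X,Y)
print(entropy(p_x) + h_y_given_x)  # H(X) + H(Y|X), equal to H(X,Y)
```

Both printed values agree (about 1.846 bits for this toy distribution), which is exactly the equality that only holds up to a logarithmic term in the Kolmogorov-complexity setting.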
Chain rule for Kolmogorov complexity
[ "Mathematics", "Technology", "Engineering" ]
539
[ "Telecommunications engineering", "Applied mathematics", "Mathematical logic", "Computer science", "Information theory", "Computability theory", "Articles containing proofs" ]
8,567,316
https://en.wikipedia.org/wiki/Cyclonic%20spray%20scrubber
Cyclonic spray scrubbers are an air pollution control technology. They use the features of both the dry cyclone and the spray chamber to remove pollutants from gas streams. Generally, the inlet gas enters the chamber tangentially, swirls through the chamber in a corkscrew motion, and exits. At the same time, liquid is sprayed inside the chamber. As the gas swirls around the chamber, pollutants are removed when they impact on liquid droplets, are thrown to the walls, and washed back down and out. Cyclonic scrubbers are generally low- to medium-energy devices, with pressure drops of 4 to 25 cm (1.5 to 10 in) of water. Commercially available designs include the irrigated cyclone scrubber and the cyclonic spray scrubber. In the irrigated cyclone (Figure 1), the inlet gas enters near the top of the scrubber into the water sprays. The gas is forced to swirl downward, then change directions, and return upward in a tighter spiral. The liquid droplets produced capture the pollutants, are eventually thrown to the side walls, and carried out of the collector. The "cleaned" gas leaves through the top of the chamber. The cyclonic spray scrubber (Figure 2) forces the inlet gas up through the chamber from a bottom tangential entry. Liquid sprayed from nozzles on a center post (manifold) is directed toward the chamber walls and through the swirling gas. As in the irrigated cyclone, liquid captures the pollutant, is forced to the walls, and washes out. The "cleaned" gas continues upward, exiting through the straightening vanes at the top of the chamber. This type of technology is a part of the group of air pollution controls collectively referred to as wet scrubbers. Particulate collection Cyclonic spray scrubbers are more efficient than spray towers, but not as efficient as venturi scrubbers, in removing particulate from the inlet gas stream. Particulates larger than 5 μm are generally collected by impaction with 90% efficiency. In a simple spray tower, the velocity of the particulates in the gas stream is low: 0.6 to 1.5 m/s (2 to 5 ft/s). By introducing the inlet gas tangentially into the spray chamber, the cyclonic scrubber increases gas velocities (thus, particulate velocities) to approximately 60 to 180 m/s (200 to 600 ft/s). The velocity of the liquid spray is approximately the same in both devices. This higher particulate-to-liquid relative velocity increases particulate collection efficiency for this device over that of the spray chamber. Gas velocities of 60 to 180 m/s are equivalent to those encountered in a venturi scrubber. However, cyclonic spray scrubbers are not as efficient as venturi scrubbers because they are not capable of producing the same degree of useful turbulence. Gas collection High gas velocities through these devices reduce the gas-liquid contact time, thus reducing absorption efficiency. Cyclonic spray scrubbers are capable of effectively removing some gases; however, they are rarely chosen when gaseous pollutant removal is the only concern. Maintenance problems The main maintenance problems with cyclonic scrubbers are nozzle plugging and corrosion or erosion of the side walls of the cyclone body. Nozzles have a tendency to plug from particulates that are in the recycled liquid and/or particulates that are in the gas stream. The best solution is to install the nozzles so that they are easily accessible for cleaning or removal. Due to high gas velocities, erosion of the side walls of the cyclone can also be a problem. 
Abrasion-resistant materials may be used to protect the cyclone body, especially at the inlet. Summary The pressure drops across cyclonic scrubbers are usually 4 to 25 cm (1.5 to 10 in) of water; therefore, they are low- to medium-energy devices and are most often used to control large-sized particulates. Relatively simple devices, they resist plugging because of their open construction. They also have the additional advantage of acting as entrainment separators because of their shape. The liquid droplets are forced to the sides of the cyclone and removed prior to exiting the vessel. Their biggest disadvantages are that they are not capable of removing submicrometer particulates and they do not efficiently absorb most pollutant gases. Table 1 lists typical operating characteristics of cyclonic scrubbers. Bibliography Bethea, R. M. 1978. Air Pollution Control Technology. New York: Van Nostrand Reinhold. McIlvaine Company. 1974. The Wet Scrubber Handbook. Northbrook, IL: McIlvaine Company. Richards, J. R. 1995. Control of Particulate Emissions (APTI Course 413). U.S. Environmental Protection Agency. Richards, J. R. 1995. Control of Gaseous Emissions. (APTI Course 415). U.S. Environmental Protection Agency. U.S. Environmental Protection Agency. 1969. Control Techniques for Particulate Air Pollutants. AP-51. References Pollution control technologies Air pollution control systems Wet scrubbers Liquid-phase and gas-phase contacting scrubbers
Cyclonic spray scrubber
[ "Chemistry", "Engineering" ]
1,094
[ "Scrubbers", "Wet scrubbers", "Pollution control technologies", "Environmental engineering" ]
8,567,475
https://en.wikipedia.org/wiki/Roller-compacted%20concrete
Roller-compacted concrete (RCC) or rolled concrete (rollcrete) is a special blend of concrete that has essentially the same ingredients as conventional concrete but in different ratios, and increasingly with partial substitution of fly ash for portland cement. The partial substitution of fly ash for Portland Cement is an important aspect of RCC dam construction because the heat generated by fly ash hydration is significantly less than the heat generated by portland cement hydration. This in turn reduces the thermal loads on the concrete and reduces the potential for thermal cracking to occur. RCC is a mix of cement/fly ash, water, sand, aggregate and common additives, but contains much less water. The produced mix is drier and essentially has no slump. RCC is placed in a manner similar to road paving; the material is delivered by dump trucks or conveyors, spread by small bulldozers or specially modified asphalt pavers, and then compacted by vibratory rollers. In dam construction, roller-compacted concrete began its initial development with the construction of the Alpe Gera Dam near Sondrio in North Italy between 1961 and 1964. Concrete was laid in a similar form and method but not rolled. RCC had been touted in engineering journals during the 1970s as a revolutionary material suitable for, among other things, dam construction. Initially and generally, RCC was used for backfill, sub-base and concrete pavement construction, but increasingly it has been used to build concrete gravity dams because the low cement content and use of fly ash cause less heat to be generated while curing than do conventional mass concrete placements. Roller-compacted concrete has many time and cost benefits over conventional mass concrete dams; these include higher rates of concrete placement, lower material costs and lower costs associated with post-cooling and formwork. Dam applications For dam applications, RCC sections are built lift-by-lift in successive horizontal layers resulting in a downstream slope that resembles a concrete staircase. Once a layer is placed, it can immediately support the earth-moving equipment to place the next layer. After RCC is deposited on the lift surface, small dozers typically spread it in one-foot-thick (about 30 cm) layers. The first RCC dam built in the United States was the Willow Creek Dam on Willow Creek, a tributary in Oregon of the Columbia River. It was constructed by the US Army Corps of Engineers between November 1981 and February 1983. Construction proceeded well, within a fast schedule and under budget (estimated US$50 million, actual US$35 million). On initial filling though, it was found that the leakage between the compacted layers within the dam body was unusually high. This condition was treated by traditional remedial grouting at a further cost of US$2 million, which initially reduced the leakage by nearly 75%; over the years, seepage has since decreased to less than 10% of its initial flow. Concern over the dam's long-term safety has continued however, although only indirectly related to its RCC construction. Within a few years of construction, problems were noted with stratification of the reservoir water, caused by upstream pollution and anoxic decomposition, which produced hydrogen sulfide gas. Concerns were expressed that this could in turn give rise to sulfuric acid, and thus accelerate damage to the concrete. The controversy itself, as well as its handling, continued for some years. 
In 2004 an aeration plant was installed to address the root cause in the reservoir, as had been suggested 18 years earlier. In the quarter century following the construction of the Willow Creek Dam, considerable research and experimentation yielded many improvements in concrete mix designs, dam designs and construction methods for roller-compacted concrete dams. By 2008, about 350 RCC dams existed worldwide. As of 2018, the highest dam of this type was the Gilgel Gibe III Dam in Ethiopia, with the Pakistani Diamer-Bhasha Dam under construction. See also List of roller-compacted concrete dams Asphalt concrete Further reading References External links History of Concrete Database of Worldwide Roller Compacted Concrete Dams Concrete Concrete buildings and structures Building materials
Roller-compacted concrete
[ "Physics", "Engineering" ]
842
[ "Structural engineering", "Building engineering", "Construction", "Materials", "Building materials", "Concrete", "Matter", "Architecture" ]
8,568,920
https://en.wikipedia.org/wiki/Dupuit%E2%80%93Forchheimer%20assumption
The Dupuit–Forchheimer assumption holds that groundwater flows horizontally in an unconfined aquifer and that the groundwater discharge is proportional to the saturated aquifer thickness. It was formulated by Jules Dupuit and Philipp Forchheimer in the late 1800s to simplify groundwater flow equations for analytical solutions. The Dupuit–Forchheimer assumption requires that the water table be relatively flat and that the groundwater be hydrostatic (that is, that the equipotential lines are vertical): ∂h/∂z = (1/γ)(∂p/∂z) + 1 = 0, where ∂p/∂z is the vertical pressure gradient, γ = ρg is the specific weight, ρ is the density of water, g is the standard gravity, and ∂h/∂z is the vertical hydraulic gradient. References Aquifers Hydraulic engineering Hydrology
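A minimal numerical check of the hydrostatic condition above, written in Python with assumed values (a water table at 10 m elevation and standard properties of water): it verifies that with a hydrostatic pore-pressure profile the hydraulic head, and hence the vertical hydraulic gradient, does not vary with depth, so flow can be treated as purely horizontal.

```python
RHO = 1000.0      # density of water, kg/m^3
G = 9.81          # standard gravity, m/s^2
GAMMA = RHO * G   # specific weight, N/m^3

water_table = 10.0  # elevation of the water table, m (assumed value)

def pressure(z):
    """Hydrostatic pore pressure at elevation z below the water table, Pa."""
    return GAMMA * (water_table - z)

def head(z):
    """Hydraulic head h = z + p/gamma, m."""
    return z + pressure(z) / GAMMA

# The head is identical at every elevation, so the vertical hydraulic gradient is zero,
# consistent with the Dupuit-Forchheimer assumption of horizontal flow.
for z in (0.0, 2.5, 5.0, 7.5, 10.0):
    print(z, head(z))   # prints 10.0 for every elevation
```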
Dupuit–Forchheimer assumption
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
144
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Aquifers", "Environmental engineering", "Hydraulic engineering" ]
8,569,148
https://en.wikipedia.org/wiki/Cascade%20filling%20system
A cascade filling system is a high-pressure gas cylinder storage system that is used for the refilling of smaller compressed gas cylinders. In some applications, each of the large cylinders is filled by a compressor; otherwise, they may be filled remotely and replaced when the pressure is too low for effective transfer. The cascade system allows small cylinders to be filled without a compressor. In addition, a cascade system is useful as a reservoir to allow a low-capacity compressor to meet the demand of filling several small cylinders in close succession, with longer intermediate periods during which the storage cylinders can be recharged. Principle of operation When gas in a cylinder at high pressure is allowed to flow to another cylinder containing gas at a lower pressure, the pressures will equalize to a value somewhere between the two initial pressures. The equilibrium pressure is affected by the transfer rate, since the transfer influences the gas temperature, but at a constant temperature the equilibrium pressure is described by Dalton's law of partial pressures and Boyle's law for ideal gases. The formula for the equilibrium pressure is: P3 = (P1×V1 + P2×V2)/(V1 + V2), where P1 and V1 are the initial pressure and volume of one cylinder, P2 and V2 are the initial pressure and volume of the other cylinder, and P3 is the equilibrium pressure. An example could be a 100-litre (internal volume) cylinder (V1) pressurised to 200 bar (P1) filling a 10-litre (internal volume) cylinder (V2) which was unpressurised (P2 = 1 bar), resulting in both cylinders equalising at approximately 180 bar (P3). If another 100-litre cylinder pressurised this time to 250 bar were then used to "top up" the 10-litre cylinder, both of these cylinders would equalise to about 240 bar. However, if the higher-pressure 100-litre cylinder were used first, the 10-litre cylinder would equalise to about 225 bar and the lower-pressure 100-litre cylinder could not be used to top it up. In a cascade storage system, several large cylinders are used to bring a small cylinder up to the desired pressure, by always using the supply cylinder with the lowest usable pressure first, then the cylinder with the next lowest pressure, and so on. In practice, the theoretical transfers can only be achieved if the gases are allowed to reach a temperature equilibrium before disconnection. This requires significant time, and a lower efficiency may be accepted to save time. Actual transfer can be calculated using the general gas equation of state if the temperature of the gas in the cylinder is accurately measured. Uses Breathing sets A breathing set cylinder may be filled to its working pressure by decanting from larger (often 50-litre) cylinders. (To make this easy the neck of the cylinder of the Siebe Gorman Salvus rebreather had the same thread as an oxygen storage cylinder, but the opposite gender, for direct decanting.) The storage cylinders are available in a variety of sizes, typically from 50 litres internal capacity to well over 100 litres. In the more general case, a high-pressure hose known as a filling whip is used to connect the filling panel or storage cylinder to the receiving cylinder. Cascade filling is often used for partial pressure blending of breathing gas mixtures for diving, to economize on the relatively expensive oxygen, for nitrox, and the even more expensive helium in trimix or heliox mixtures. Compressed natural gas fueling Cascade storage is used at compressed natural gas (CNG) fueling stations. 
Typically three CNG tanks will be used, and a vehicle will first be fueled from one of them, which will result in an incomplete fill, perhaps to 2000 psig for a 3000 psig tank. The second and third tanks will bring the vehicle's tank closer to 3000 psi. The station normally has a compressor, which refills the station's tanks, using natural gas from a utility line. This prevents accidentally overfilling the tank, which could happen with a system using a single fueling tank at a higher pressure than the target pressure for the vehicle. Hydrogen storage In cascade storage systems for hydrogen storage, for example at hydrogen stations, fuel dispenser A draws hydrogen from tank A, while dispenser B draws fuel from hydrogen tank B. If dispenser A is over-utilized, tank A will become depleted before tank B. At this point dispenser A is switched to tank C. Tank C will then supply dispensers A and B and tank A until tank A is filled to the same pressure as tank B and the dispensers are disconnected, after which the control system will close the control valves to switch to its former state. Arrangement of system The storage cylinders may be used independently in sequence using a portable transfer whip with a pressure gauge and manual bleed valve, to transfill the receiving cylinder until the appropriate fill pressure has been reached, or the storage cylinders may be connected to a manifold system and a filling control panel with one or more filling whips. Ideally, each storage cylinder has an independent connection to the filling panel with a contents pressure gauge and supply valve dedicated to that cylinder, and a filling gauge connected to the filling whip, so the operator can see at a glance the next higher storage cylinder pressure compared to the receiving cylinder pressure. The storage cylinders may be filled remotely and connected to the manifold by a flexible hose when in use, or may be permanently connected and refilled by a compressor through a dedicated filling system, which may be automated or manually controlled. An over-pressure safety valve is usually installed inline between the compressor and the storage units to protect the cylinders from overfilling, and each cylinder may also be protected by a rupture disc. References External links Breathing gases Diving support equipment Gas technologies Hydrogen storage Pressure vessels
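The equalisation formula from the principle-of-operation section lends itself to a short simulation. The Python sketch below assumes ideal-gas behaviour and complete temperature equalisation, and takes the cylinder sizes and pressures from the worked example above; it fills a 10-litre cylinder from two 100-litre storage cylinders, lowest usable pressure first.

```python
def equalise(p1, v1, p2, v2):
    """Equilibrium pressure of two connected cylinders: P3 = (P1*V1 + P2*V2) / (V1 + V2)."""
    return (p1 * v1 + p2 * v2) / (v1 + v2)

# Storage bank: (pressure in bar, internal volume in litres); ideal-gas behaviour assumed.
storage = [(200.0, 100.0), (250.0, 100.0)]

receiver_pressure = 1.0   # unpressurised 10-litre receiving cylinder
receiver_volume = 10.0

# Cascade principle: always draw from the supply cylinder with the lowest usable pressure first.
for pressure, volume in sorted(storage):
    if pressure <= receiver_pressure:
        continue  # this cylinder can no longer push gas into the receiver
    receiver_pressure = equalise(pressure, volume, receiver_pressure, receiver_volume)
    print(f"receiver now at about {receiver_pressure:.0f} bar")

# Prints roughly 182 bar and then 244 bar, matching the "approximately 180 bar"
# and "about 240 bar" figures in the worked example above.
```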
Cascade filling system
[ "Physics", "Chemistry", "Engineering" ]
1,203
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
8,571,733
https://en.wikipedia.org/wiki/Noise%2C%20vibration%2C%20and%20harshness
Noise, vibration, and harshness (NVH), also known as noise and vibration (N&V), is the study and modification of the noise and vibration characteristics of vehicles, particularly cars and trucks. While noise and vibration can be readily measured, harshness is a subjective quality, and is measured either via jury evaluations, or with analytical tools that can provide results reflecting human subjective impressions. The latter tools belong to the field of psychoacoustics. Interior NVH deals with noise and vibration experienced by the occupants of the cabin, while exterior NVH is largely concerned with the noise radiated by the vehicle, and includes drive-by noise testing. NVH is mostly engineering, but objective measurements often fail to predict or correlate well with the subjective impression on human observers. For example, although the ear's response at moderate noise levels is approximated by A-weighting, two different noises with the same A-weighted level are not necessarily equally disturbing. The field of psychoacoustics is partly concerned with this correlation. In some cases, the NVH engineer is asked to change the sound quality, by adding or subtracting particular harmonics, rather than making the vehicle quieter. Noise, vibration, and harshness for vehicles can be distinguished easily by quantifying the frequency. Vibration is between 0.5 Hz and 50 Hz, noise is between 20 Hz and 5000 Hz, and harshness involves the coupling of noise and vibration. Sources of NVH The sources of noise in a vehicle can be classified as: Aerodynamic (e.g., wind, cooling fans of HVAC) Mechanical (e.g., engine, driveline, tire contact patch and road surface, brakes) Electrical (e.g., electromagnetically induced acoustic noise and vibration coming from electrical actuators, alternator, or traction motor in electric cars) Mainly, noise is either structure-borne noise or airborne noise. Many problems are generated as either vibration or noise, transmitted via a variety of paths, and then radiated acoustically into the cabin. These are classified as "structure-borne" noise. Others are generated acoustically and propagated by airborne paths. Structure-borne noise is attenuated by isolation, while airborne noise is reduced by absorption or through the use of barrier materials. Vibrations are sensed at the steering wheel, the seat, armrests, or the floor and pedals. Some problems are sensed visually, such as the vibration of the rear-view mirror or header rail on open-topped cars. Tonal versus broadband NVH is normally either tonal, such as engine noise, or broadband, such as road noise or wind noise. Some resonant systems respond at characteristic frequencies, but in response to random excitation. Therefore, although they look like tonal problems on any one spectrum, their amplitude varies considerably. Other problems are self-resonant, such as whistles from antennas. Tonal noises often have harmonics. One example is the noise spectrum of Michael Schumacher's Ferrari at 16680 rpm, showing the various harmonics; the x-axis is given in terms of multiples of engine speed, and the y-axis is logarithmic and uncalibrated. Instrumentation Typical instrumentation used to measure NVH includes microphones, accelerometers, and force gauges or load cells. Many NVH facilities have semi-anechoic chambers and rolling road dynamometers. Typically, signals are recorded directly to the hard drive via an analog-to-digital converter. In the past, magnetic or DAT tape recorders were used. 
The integrity of the signal chain is very important: typically, each of the instruments used is fully calibrated in a laboratory once per year, and any given setup is calibrated as a whole once per day. Laser scanning vibrometry is an essential tool for effective NVH optimization. The vibrational characteristics of a sample are acquired full-field under operational or excited conditions. The results represent the actual vibrations. No added mass influences the measurement, as light itself is the sensor. Investigative techniques Techniques used to help identify NVH include part substitution, modal analysis, rig squeak and rattle tests (complete vehicle or component/system tests), lead cladding, acoustic intensity, transfer path analysis, and partial coherence. Most NVH work is done in the frequency domain, using fast Fourier transforms to convert the time domain signals into the frequency domain. Wavelet analysis, order analysis, statistical energy analysis, and subjective evaluation of signals modified in real time are also used. Computer-based modeling NVH analysis needs good representative prototypes of the production vehicle for testing. These are needed early in the design process, as the solutions often need substantial modification to the design, forcing engineering changes which are much less expensive when made early. These early prototypes are very expensive, so there has been great interest in computer-aided predictive techniques for NVH. One example is modeling work for structure-borne noise and vibration analysis. When the phenomenon being considered occurs at low frequency (below, for example, 25–30 Hz, such as the idle shake of the powertrain), a multi-body model can be used. In contrast, when the phenomenon being considered occurs at relatively high frequency – for example, above 1 kHz – a statistical energy analysis (SEA) model may be a better approach. For the mid-frequency band, various methodologies exist, such as vibro-acoustic finite element analysis and boundary element analysis. The structure can be coupled to the interior cavity and form a fully coupled equation system. Also, other techniques exist that can mix measured data with finite element or boundary element data. Typical solutions There are three principal means of improving NVH: Reducing the source strength, as in making a noise source quieter with a muffler, or improving the balance of a rotating mechanism Interrupting the noise or vibration path, with barriers (for noise) or isolators (for vibration) Absorption of the noise or vibration energy, as for example with foam noise absorbers, or tuned vibration dampers Deciding which of these (or what combination) to use in solving a particular problem is one of the challenges facing the NVH engineer. Specific methods for improving NVH include the use of tuned mass dampers, subframes, balancing, modifying the stiffness or mass of structures, retuning exhausts and intakes, modifying the characteristics of elastomeric isolators, adding sound deadening or absorbing materials, and using active noise control. In some circumstances, substantial changes in vehicle architecture may be the only way to cure some problems cost-effectively. Not-for-profit organizations such as the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) and the Vibration Isolation and Seismic Control Manufacturers Association (VISCMA) provide specifications, standards, and requirements that cover a wide array of industries including electrical, mechanical, plumbing, and HVAC. 
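As a minimal illustration of the frequency-domain workflow described under Investigative techniques, the following Python sketch uses synthetic data only (the engine speed, order numbers and amplitudes are made-up values, not measurements): it builds a vibration signal containing a few engine orders plus broadband noise and recovers their frequencies with a fast Fourier transform.

```python
import numpy as np

fs = 8192                      # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # one second of synthetic data

rpm = 3000.0                       # assumed engine speed
f1 = rpm / 60.0                    # first-order (once-per-revolution) frequency, Hz
orders = {1: 0.2, 2: 1.0, 4: 0.5}  # order number -> amplitude (made-up values)

# Synthetic "accelerometer" signal: a few engine orders plus broadband noise.
signal = sum(a * np.sin(2 * np.pi * order * f1 * t) for order, a in orders.items())
signal += 0.05 * np.random.randn(len(t))

# Frequency-domain view, as used in most NVH work.
spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Report the strongest spectral lines as engine orders (frequency / first-order frequency).
peaks = np.argsort(spectrum)[-3:]
for idx in sorted(peaks):
    print(f"{freqs[idx]:6.1f} Hz  ~ order {freqs[idx] / f1:.1f}  amplitude {spectrum[idx]:.2f}")
```

With these assumed values the three strongest lines come out at 50, 100 and 200 Hz, i.e. orders 1, 2 and 4 of the 3000 rpm engine speed, which is the kind of harmonic structure described above for tonal sources.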
See also Acoustic camera Acoustic quieting Engine balance Health effects from noise Noise control Noise mitigation Vibration calibrator Vibration isolation Acoustical measurements and instrumentation References Bibliography Baxa (1982). Noise Control in Internal Combustion Engines. Beranek. Acoustics. Griffin. Handbook of Human Vibration. Harris. Shock and Vibration Handbook. Thomson. Theory of Vibration with Applications. External links Agilent's Fundamentals of Signal Analysis Basics of NVH Dr. Pawan Pingle Mechanical vibrations Automotive engineering Noise control
Noise, vibration, and harshness
[ "Physics", "Engineering" ]
1,530
[ "Structural engineering", "Automotive engineering", "Mechanics", "Mechanical engineering by discipline", "Mechanical vibrations" ]
8,573,406
https://en.wikipedia.org/wiki/Phenothrin
Phenothrin, also called sumithrin and d-phenothrin, is a synthetic pyrethroid that kills adult fleas and ticks. It has also been used to kill head lice in humans. d-Phenothrin is used as a component of aerosol insecticides for domestic use. It is often used with methoprene, an insect growth regulator that interrupts the insect's biological lifecycle by killing the eggs. Effects Phenothrin is primarily used to kill fleas and ticks. It is also used to kill head lice in humans, but studies conducted in Paris and the United Kingdom have shown widespread resistance to phenothrin. It is extremely toxic to bees. A U.S. Environmental Protection Agency (EPA) study found that 0.07 micrograms were enough to kill honey bees. It is also extremely toxic to aquatic life, with one study showing that concentrations of 0.03 ppb killed mysid shrimp. It increased the risk of liver cancer in rats and mice under long-term exposure at doses of around 100 milligrams per kilogram of body weight per day or above. It is capable of killing mosquitoes, although it remains poisonous to cats and dogs, with seizures and deaths reported due to poisoning. Specific data on concentrations or exposure are lacking. Phenothrin has been found to possess antiandrogen properties, and was responsible for a small epidemic of gynecomastia via isolated environmental exposure. The EPA has not assessed its effect on cancer in humans. However, one study performed by the Mount Sinai School of Medicine linked sumithrin with breast cancer; the link was attributed to its effect of increasing the expression of a gene responsible for mammary tissue proliferation. EPA action In 2005, the U.S. EPA cancelled permission to use phenothrin in several flea and tick products, at the request of the manufacturer, Hartz Mountain Industries. The products were linked to a range of adverse reactions, including hair loss, salivation, tremors, and numerous deaths in cats and kittens. In the short term, the agreement called for new warning labels on the products. As of March 31, 2006, the sale and distribution of Hartz's phenothrin-containing flea and tick products for cats were terminated. However, the EPA's product cancellation order did not apply to Hartz flea and tick products for dogs, and Hartz continues to produce many of its flea and tick products for dogs. See also Permethrin Resmethrin Deltamethrin References External links d-Phenothrin general information – National Pesticide Information Center Pyrethrins and Pyrethroids Fact Sheet – National Pesticide Information Center Pyrethrins and Pyrethroids Pesticide Information Profile – Extension Toxicology Network Chrysanthemate esters Endocrine disruptors Nonsteroidal antiandrogens Pest control (3-phenoxyphenyl)methyl 2,2,3-trimethylcyclopropane-1-carboxylates
Phenothrin
[ "Chemistry", "Biology" ]
639
[ "Endocrine disruptors", "Pests (organism)", "Pest control" ]
8,573,560
https://en.wikipedia.org/wiki/Ferrofluidic%20seal
Ferrofluidic is the brand name of a staged magnetic liquid rotary sealing mechanism made by the Ferrotec Corporation. Ferrofluidic seals, also known as magnetic liquid rotary seals, are employed in various rotating equipment to facilitate rotary motion while ensuring a hermetic seal. This is achieved through a physical barrier constituted by a ferrofluid, which is held in position by a permanent magnet. Developed in the 1970s, ferrofluidic seals have been utilized in various specialized applications, including computer disk drives, vacuum systems, and nuclear technologies. Origins Ferrofluidic seals rely on the general principle of ferrofluids - fluids that display magnetic attraction. Following research on ferrofluids during the 1960s, the ferrofluidic seal was first patented in 1971 by R.E. Rosensweig (USP 3,620,584), who subsequently founded Ferrofluidics Corporation with R. Moskowitz. Benefits and limitations Magnetic liquid rotary seals operate with little maintenance and minimal leakage in a range of applications. Ferrofluid-based seals used in industrial and scientific applications are most often packaged in mechanical seal assemblies called rotary feed-throughs, which also contain a central shaft, ball bearings, and an outer housing. The ball bearings provide two functions: maintaining the shaft's centering within the seal gap and supporting external loads. The bearings are the only mechanical wear items, as the dynamic seal is formed with a series of rings of ultra-low vapor pressure, oil-based liquid, held magnetically between the rotor and stator. As the ferrofluid retains its liquid properties even when magnetized, drag torque is very low. With the use of permanent magnets, the operating life and equipment maintenance cycles are generally very long. Ferrofluid-sealed feed-throughs reach their greatest performance levels by optimizing features such as ferrofluid viscosity and magnetic strength, magnet and steel materials, and bearing arrangements, and by using water cooling for applications with extremely high speeds or temperatures. Ferrofluid-sealed feed-throughs can operate in environments including ultra-high vacuum (below 10⁻⁸ mbar), temperatures over 1,000 °C, tens of thousands of RPM, and multiple-atmosphere pressures. Magnetic liquid seals can be engineered for a range of applications and exposures, but are generally limited to sealing gases and vapors, not pressurized liquids directly. This is due to premature failure of the ferrofluid seal when it seals a liquid. In 2020, research was underway to try to solve this problem. Each particular combination of construction materials and design features has practical limits concerning temperature, differential pressure, speed, applied loads, and operating environment, and as such devices must be designed to meet the criteria for their applications. Necessary features may include multiple ferrofluid stages, water cooling, customized materials, permanent magnets, and exotic bearings. Ferrofluid-based seals have extremely low leak rates; however, they cannot reach the levels of welded connections or other all-metal, static (non-rotating) seals. References External links Video demonstration of a magnetic liquid rotary seal Seals (mechanical) Magnetic devices
Ferrofluidic seal
[ "Physics" ]
654
[ "Seals (mechanical)", "Materials", "Matter" ]
6,910,700
https://en.wikipedia.org/wiki/King%20road%20drag
The King road drag (also known as the Missouri road drag and the split log road drag) was a simple form of road grader used for grading dirt roads. It revolutionized the maintenance of dirt roads in the early 1900s. It was invented by David Ward King, who went by "D. Ward King" and who was a farmer in Holt Township, near Maitland, Missouri. It started out as two parallel logs with the cut side facing the front, separated by three feet with rigid separators, and pulled by a team of two horses. Variations of the two-plank drag design, now pulled by trucks or tractors, are still used today to smooth the dirt infields of baseball diamonds. In this simple design, the first log would remove clods and the second log would smooth the road. The logs were staggered so that dirt would be pushed to the center to create a crown, so that water would run off. The very simple design replaced the old practice of dragging a road with a single log, which left the surface unrepaired and rut-filled. It also made it possible for farmers to improve roads near their homes without having to wait for government graders. D. Ward King of Maitland, Missouri requested a patent for the process in 1907 and received Patent 884,497 in 1908. He widely publicized the process in U.S. Department of Agriculture Farmers' Bulletin #321 in 1908, under the title The use of the split-log drag on earth roads. An important component of the grading process was that it had to occur when the road was wet. This invention was the horse-drawn forerunner of the modern-day road grader. It was a sensation in its day. States passed laws requiring its use. The design was so simple that King did not enforce his patent rights. However, he did tour the country explaining how to use it. He also wrote articles such as one that appeared in the May 7, 1910 issue of the Saturday Evening Post entitled "Good Roads Without Money." King would further enhance his invention with his 1914 patent 1,102,671, which included four bars and two triangular scrapers. Before the King Road Drag, dirt roads turned into a quagmire when they were wet, especially in the winter. The widespread use of the King Road Drag came along during the Good Roads Movement, driven by bicyclists and later by automobile drivers. Automobiles benefitted since macadam roads were rapidly destabilized by cars, which sucked the cementing dust out of smooth macadam roads. Solid roads meant people could use their automobiles on the roads between cities. Solid rural roads also made possible reliable rural mail delivery, which did much to promote commerce in the United States between city-based businesses and the rural population. For instance, they allowed Sears, Roebuck to start sending out its catalogs to small towns and farms and thereby vastly increase the size of its customer base. References Road construction
King road drag
[ "Engineering" ]
596
[ "Construction", "Road construction" ]
6,916,790
https://en.wikipedia.org/wiki/Aeroacoustic%20analogy
Acoustic analogies are applied mostly in numerical aeroacoustics to reduce aeroacoustic sound sources to simple emitter types. They are therefore often also referred to as aeroacoustic analogies. In general, aeroacoustic analogies are derived from the compressible Navier–Stokes equations (NSE). The compressible NSE are rearranged into various forms of the inhomogeneous acoustic wave equation. Within these equations, source terms describe the acoustic sources. They consist of pressure and velocity fluctuations as well as stress tensor and force terms. Approximations are introduced to make the source terms independent of the acoustic variables. In this way, linearized equations are derived which describe the propagation of the acoustic waves in a homogeneous medium at rest. The latter is excited by the acoustic source terms, which are determined from the turbulent fluctuations. Since the aeroacoustic field is then described by the equations of classical acoustics, the methods are called aeroacoustic analogies. Types The Lighthill analogy considers a free flow, as for example with an engine jet. The nonstationary fluctuations of the stream are represented by a distribution of quadrupole sources in the same volume. The Curle analogy is a formal solution of the Lighthill analogy which additionally takes hard surfaces into consideration. The Ffowcs Williams–Hawkings analogy is valid for aeroacoustic sources in relative motion with respect to a hard surface, as is the case in many technical applications, for example in the automotive industry or in air travel. The calculation involves quadrupole, dipole and monopole terms. References Further reading Blumrich, R.: Berechnungsmethoden für die Aeroakustik von Fahrzeugen. Tagungsband der ATZ/MTZ-Konferenz Akustik 2006, Stuttgart, 17–18.5.2006. Contribution of the Technical University of Dresden to the modeling of flow sound sources with elementary emitters. Contribution of the Technical University of Dresden to the history of aeroacoustics. Computational fluid dynamics Fluid mechanics Acoustics Analogy
Aeroacoustic analogy
[ "Physics", "Chemistry", "Engineering" ]
434
[ "Computational fluid dynamics", "Classical mechanics", "Acoustics", "Computational physics", "Civil engineering", "Fluid mechanics", "Fluid dynamics stubs", "Fluid dynamics" ]
6,917,139
https://en.wikipedia.org/wiki/Band%20diagram
In solid-state physics of semiconductors, a band diagram is a diagram plotting various key electron energy levels (Fermi level and nearby energy band edges) as a function of some spatial dimension, which is often denoted x. These diagrams help to explain the operation of many kinds of semiconductor devices and to visualize how bands change with position (band bending). The bands may be coloured to distinguish level filling. A band diagram should not be confused with a band structure plot. In both a band diagram and a band structure plot, the vertical axis corresponds to the energy of an electron. The difference is that in a band structure plot the horizontal axis represents the wave vector of an electron in an infinitely large, homogeneous material (a crystal or vacuum), whereas in a band diagram the horizontal axis represents position in space, usually passing through multiple materials. Because a band diagram shows the changes in the band structure from place to place, the resolution of a band diagram is limited by the Heisenberg uncertainty principle: the band structure relies on momentum, which is only precisely defined for large length scales. For this reason, the band diagram can only accurately depict evolution of band structures over long length scales, and has difficulty in showing the microscopic picture of sharp, atomic scale interfaces between different materials (or between a material and vacuum). Typically, an interface must be depicted as a "black box", though its long-distance effects can be shown in the band diagram as asymptotic band bending. Anatomy The vertical axis of the band diagram represents the energy of an electron, which includes both kinetic and potential energy. The horizontal axis represents position, often not being drawn to scale. Note that the Heisenberg uncertainty principle prevents the band diagram from being drawn with a high positional resolution, since the band diagram shows energy bands (as resulting from a momentum-dependent band structure). While a basic band diagram only shows electron energy levels, often a band diagram will be decorated with further features. It is common to see cartoon depictions of the motion in energy and position of an electron (or electron hole) as it drifts, is excited by a light source, or relaxes from an excited state. The band diagram may be shown connected to a circuit diagram showing how bias voltages are applied, how charges flow, etc. The bands may be colored to indicate filling of energy levels, or sometimes the band gaps will be colored instead. Energy levels Depending on the material and the degree of detail desired, a variety of energy levels will be plotted against position: EF or μ: Although it is not a band quantity, the Fermi level (total chemical potential of electrons) is a crucial level in the band diagram. The Fermi level is set by the device's electrodes. For a device at equilibrium, the Fermi level is a constant and thus will be shown in the band diagram as a flat line. Out of equilibrium (e.g., when voltage differences are applied), the Fermi level will not be flat. Furthermore, in semiconductors out of equilibrium it may be necessary to indicate multiple quasi-Fermi levels for different energy bands, whereas in an out-of-equilibrium insulator or vacuum it may not be possible to give a quasi-equilibrium description, and no Fermi level can be defined. 
EC: The conduction band edge should be indicated in situations where electrons might be transported at the bottom of the conduction band, such as in an n-type semiconductor. The conduction band edge may also be indicated in an insulator, simply to demonstrate band bending effects. EV: The valence band edge likewise should be indicated in situations where electrons (or holes) are transported through the top of the valence band such as in a p-type semiconductor. Ei: The intrinsic Fermi level may be included in a semiconductor, to show where the Fermi level would have to be for the material to be neutrally doped (i.e., an equal number of mobile electrons and holes). Eimp: Impurity energy level. Many defects and dopants add states inside the band gap of a semiconductor or insulator. It can be useful to plot their energy level to see whether they are ionized or not. Evac: In a vacuum, the vacuum level shows the energy Evac = −eϕ, where ϕ is the electrostatic potential and e is the elementary charge. The vacuum can be considered as a sort of insulator, with Evac playing the role of the conduction band edge. At a vacuum-material interface, the vacuum energy level is fixed by the sum of work function and Fermi level of the material. Electron affinity level: Occasionally, a "vacuum level" is plotted even inside materials, at a fixed height above the conduction band, determined by the electron affinity. This "vacuum level" does not correspond to any actual energy band and is poorly defined (electron affinity strictly speaking is a surface, not bulk, property); however, it may be a helpful guide in the use of approximations such as Anderson's rule or the Schottky–Mott rule. Band bending When looking at a band diagram, the electron energy states (bands) in a material can curve up or down near a junction. This effect is known as band bending. It does not correspond to any physical (spatial) bending. Rather, band bending refers to the local changes in electronic structure, in the energy offset of a semiconductor's band structure near a junction, due to space charge effects. The primary principle underlying band bending inside a semiconductor is space charge: a local imbalance in charge neutrality. Poisson's equation gives a curvature to the bands wherever there is an imbalance in charge neutrality. The reason for the charge imbalance is that, although a homogeneous material is charge neutral everywhere (since it must be charge neutral on average), there is no such requirement for interfaces. Practically all types of interface develop a charge imbalance, though for different reasons: At the junction of two different types of the same semiconductor (e.g., p-n junction) the bands vary continuously since the dopants are sparsely distributed and only perturb the system. At the junction of two different semiconductors there is a sharp shift in band energies from one material to the other; the band alignment at the junction (e.g., the difference in conduction band energies) is fixed. At the junction of a semiconductor and metal, the bands of the semiconductor are pinned to the metal's Fermi level. At the junction of a conductor and vacuum, the vacuum level (from vacuum electrostatic potential) is set by the material's work function and Fermi level. This also (usually) applies for the junction of a conductor to an insulator. Knowing how bands will bend when two different types of materials are brought in contact is key to understanding whether the junction will be rectifying (Schottky) or ohmic. 
The degree of band bending depends on the relative Fermi levels and carrier concentrations of the materials forming the junction. In an n-type semiconductor the band bends upward, while in p-type the band bends downward. Note that band bending is due neither to magnetic field nor temperature gradient. Rather, it only arises in conjunction with the force of the electric field. See also Anderson's rule – approximate rule for band alignment of heterojunctions based on vacuum electron affinity Schottky–Mott rule – approximate rule for band alignment of metal–semiconductor junctions based on vacuum electron affinity and work function Field effect (semiconductor) – band bending induced by an electric field at the vacuum (or insulator) surface of a semiconductor Thomas–Fermi screening – rudimentary theory of the band bending that occurs around a charged defect Quantum capacitance – special case of band bending in field effect, for a material system containing a two-dimensional electron gas References James D. Livingston, Electronic Properties of Engineering Materials, Wiley (December 21, 1999). Electronic band structures Semiconductor structures
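As a rough companion to the band bending discussion, the Python sketch below uses the standard textbook depletion approximation rather than anything specified in this article, with assumed silicon-like parameter values and doping levels; it estimates the built-in potential and depletion widths of an abrupt p-n junction, which set how much and how far the bands bend on each side.

```python
import math

# Assumed, silicon-like values at room temperature (illustrative only).
kT = 0.0259              # thermal energy, eV
ni = 1.0e10              # intrinsic carrier concentration, cm^-3
eps = 11.7 * 8.854e-14   # permittivity of silicon, F/cm
q = 1.602e-19            # elementary charge, C

Na = 1.0e16              # acceptor doping on the p side, cm^-3 (assumed)
Nd = 1.0e15              # donor doping on the n side, cm^-3 (assumed)

# Built-in potential: total band bending across the junction, in volts.
V_bi = kT * math.log(Na * Nd / ni**2)

# Total depletion width and its split between the two sides (depletion approximation).
W = math.sqrt(2 * eps * V_bi * (Na + Nd) / (q * Na * Nd))
x_n = W * Na / (Na + Nd)   # extent of band bending on the lightly doped n side
x_p = W * Nd / (Na + Nd)   # extent of band bending on the heavily doped p side

print(f"built-in potential ~ {V_bi:.2f} V")
print(f"depletion width ~ {W * 1e4:.2f} um (n side {x_n * 1e4:.2f}, p side {x_p * 1e4:.2f})")
```

For these assumed doping levels the bands bend by roughly 0.66 eV in total, mostly on the lightly doped n side, consistent with the qualitative statement above that the charge imbalance near the junction controls the bending.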
Band diagram
[ "Physics", "Chemistry", "Materials_science" ]
1,624
[ "Electron", "Electronic band structures", "Condensed matter physics" ]
9,190,726
https://en.wikipedia.org/wiki/N%C3%A9ron%20model
In algebraic geometry, the Néron model (or Néron minimal model, or minimal model) for an abelian variety AK defined over the field of fractions K of a Dedekind domain R is the "push-forward" of AK from Spec(K) to Spec(R), in other words the "best possible" group scheme AR defined over R corresponding to AK. They were introduced by André Néron for abelian varieties over the quotient field of a Dedekind domain R with perfect residue fields; the construction was later extended to semiabelian varieties over all Dedekind domains. Definition Suppose that R is a Dedekind domain with field of fractions K, and suppose that AK is a smooth separated scheme over K (such as an abelian variety). Then a Néron model of AK is defined to be a smooth separated scheme AR over R with fiber AK that is universal in the following sense. If X is a smooth separated scheme over R then any K-morphism from XK to AK can be extended to a unique R-morphism from X to AR (Néron mapping property). In particular, the canonical map AR(R) → AK(K) is an isomorphism. If a Néron model exists then it is unique up to unique isomorphism. In terms of sheaves, any scheme A over Spec(K) represents a sheaf on the category of schemes smooth over Spec(K) with the smooth Grothendieck topology, and this has a pushforward by the injection map from Spec(K) to Spec(R), which is a sheaf over Spec(R). If this pushforward is representable by a scheme, then this scheme is the Néron model of A. In general the scheme AK need not have any Néron model. For abelian varieties AK, Néron models exist and are unique (up to unique isomorphism) and are commutative quasi-projective group schemes over R. The fiber of a Néron model over a closed point of Spec(R) is a smooth commutative algebraic group, but need not be an abelian variety: for example, it may be disconnected or a torus. Néron models exist as well for certain commutative groups other than abelian varieties, such as tori, but these are only locally of finite type. Néron models do not exist for the additive group. Properties The formation of Néron models commutes with products. The formation of Néron models commutes with étale base change. An abelian scheme AR is the Néron model of its generic fibre. The Néron model of an elliptic curve The Néron model of an elliptic curve AK over K can be constructed as follows. First form the minimal model over R in the sense of algebraic (or arithmetic) surfaces. This is a regular proper surface over R but is not in general smooth over R or a group scheme over R. Its subscheme of smooth points over R is the Néron model, which is a smooth group scheme over R but not necessarily proper over R. The fibers in general may have several irreducible components, and to form the Néron model one discards all multiple components, all points where two components intersect, and all singular points of the components. Tate's algorithm calculates the special fiber of the Néron model of an elliptic curve, or more precisely the fibers of the minimal surface containing the Néron model. See also Minimal model program References W. Stein, What are Néron models? (2003) Algebraic geometry Number theory
Néron model
[ "Mathematics" ]
722
[ "Fields of abstract algebra", "Number theory", "Discrete mathematics", "Algebraic geometry" ]
9,194,858
https://en.wikipedia.org/wiki/Area%20of%20a%20triangle
In geometry, calculating the area of a triangle is an elementary problem encountered often in many different situations. The best known and simplest formula is T = bh/2, where b is the length of the base of the triangle, and h is the height or altitude of the triangle. The term "base" denotes any side, and "height" denotes the length of a perpendicular from the vertex opposite the base onto the line containing the base. Euclid proved that the area of a triangle is half that of a parallelogram with the same base and height in his book Elements in 300 BCE. In 499 CE, Aryabhata used this method in the Aryabhatiya (section 2.6). Although simple, this formula is only useful if the height can be readily found, which is not always the case. For example, the land surveyor of a triangular field might find it relatively easy to measure the length of each side, but relatively difficult to construct a 'height'. Various methods may be used in practice, depending on what is known about the triangle. Other frequently used formulas for the area of a triangle use trigonometry, side lengths (Heron's formula), vectors, coordinates, line integrals, Pick's theorem, or other properties. History Heron of Alexandria found what is known as Heron's formula for the area of a triangle in terms of its sides, and a proof can be found in his book, Metrica, written around 60 CE. It has been suggested that Archimedes knew the formula over two centuries earlier, and since Metrica is a collection of the mathematical knowledge available in the ancient world, it is possible that the formula predates the reference given in that work. In 300 BCE the Greek mathematician Euclid proved that the area of a triangle is half that of a parallelogram with the same base and height in his book Elements of Geometry. In 499 Aryabhata, a great mathematician-astronomer from the classical age of Indian mathematics and Indian astronomy, expressed the area of a triangle as one-half the base times the height in the Aryabhatiya. A formula equivalent to Heron's was discovered by the Chinese independently of the Greeks. It was published in 1247 in Shushu Jiuzhang ("Mathematical Treatise in Nine Sections"), written by Qin Jiushao. Using trigonometry The area of a triangle can be found through the application of trigonometry. Knowing SAS (side-angle-side) With the usual labelling (each side denoted by the lower-case letter of the opposite vertex), the height or altitude is h = a sin γ. Substituting this in the area formula derived above, the area of the triangle can be expressed as T = ½ ab sin γ, where: a is the line BC, b is the line AC, c is the line AB; and: α is the interior angle at A, β is the interior angle at B, γ is the interior angle at C. Furthermore, since sin α = sin (π − α) = sin (β + γ), and similarly for the other two angles: T = ½ ab sin (α + β) = ½ bc sin (β + γ) = ½ ca sin (γ + α). Knowing AAS (angle-angle-side) and analogously if the known side is a or c. Knowing ASA (angle-side-angle) and analogously if the known side is b or c. Using side lengths (Heron's formula) A triangle's shape is uniquely determined by the lengths of the sides, so its metrical properties, including area, can be described in terms of those lengths. By Heron's formula, T = √(s(s − a)(s − b)(s − c)), where s = (a + b + c)/2 is the semiperimeter, or half of the triangle's perimeter. Three other equivalent ways of writing Heron's formula exist. Formulas resembling Heron's formula Three formulas have the same structure as Heron's formula but are expressed in terms of different variables.
First, denoting the medians from sides a, b, and c respectively as ma, mb, and mc and their semi-sum as σ, we have Next, denoting the altitudes from sides a, b, and c respectively as ha, hb, and hc, and denoting the semi-sum of the reciprocals of the altitudes as we have And denoting the semi-sum of the angles' sines as , we have where D is the diameter of the circumcircle: Using vectors The area of triangle ABC is half of the area of a parallelogram: where , , and are vectors to the triangle's vertices from any arbitrary origin point, so that and are the translation vectors from vertex to each of the others, and is the wedge product. If vertex is taken to be the origin, this simplifies to . The oriented relative area of a parallelogram in any affine space, a type of bivector, is defined as where and are translation vectors from one vertex of the parallelogram to each of the two adjacent vertices. In Euclidean space, the magnitude of this bivector is a well-defined scalar number representing the area of the parallelogram. (For vectors in three-dimensional space, the bivector-valued wedge product has the same magnitude as the vector-valued cross product, but unlike the cross product, which is only defined in three-dimensional Euclidean space, the wedge product is well-defined in an affine space of any dimension.) The area of triangle ABC can also be expressed in terms of dot products. Taking vertex to be the origin and calling translation vectors to the other vertices and , where for any Euclidean vector . This area formula can be derived from the previous one using the elementary vector identity In two-dimensional Euclidean space, for a vector with coordinates and vector with coordinates , the magnitude of the wedge product is (See the following section.) Using coordinates If vertex A is located at the origin (0, 0) of a Cartesian coordinate system and the coordinates of the other two vertices are given by and , then the area can be computed as times the absolute value of the determinant For three general vertices, the equation is: which can be written as If the points are labeled sequentially in the counterclockwise direction, the above determinant expressions are positive and the absolute value signs can be omitted. The above formula is known as the shoelace formula or the surveyor's formula. If we locate the vertices in the complex plane and denote them in counterclockwise sequence as , , and , and denote their complex conjugates as , , and , then the formula is equivalent to the shoelace formula. In three dimensions, the area of a general triangle , and ) is the Pythagorean sum of the areas of the respective projections on the three principal planes (i.e. x = 0, y = 0 and z = 0): Using line integrals The area within any closed curve, such as a triangle, is given by the line integral around the curve of the algebraic or signed distance of a point on the curve from an arbitrary oriented straight line L. Points to the right of L as oriented are taken to be at negative distance from L, while the weight for the integral is taken to be the component of arc length parallel to L rather than arc length itself. This method is well suited to computation of the area of an arbitrary polygon. Taking L to be the x-axis, the line integral between consecutive vertices (xi,yi) and (xi+1,yi+1) is given by the base times the mean height, namely . The sign of the area is an overall indicator of the direction of traversal, with negative area indicating counterclockwise traversal. 
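The side-length and coordinate formulas above translate directly into code. The following Python sketch is illustrative (the function names and the test triangle are not from the article) and cross-checks Heron's formula against the shoelace formula:

```python
import math

def area_heron(a, b, c):
    # Heron's formula: T = sqrt(s(s - a)(s - b)(s - c)), with s the semiperimeter.
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_shoelace(A, B, C):
    # Shoelace (surveyor's) formula from the vertices' (x, y) coordinates.
    (x1, y1), (x2, y2), (x3, y3) = A, B, C
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# A 3-4-5 right triangle has area 6 by either route.
print(area_heron(3, 4, 5))                    # 6.0
print(area_shoelace((0, 0), (3, 0), (0, 4)))  # 6.0
```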
The area of a triangle then falls out as the case of a polygon with three sides. While the line integral method has in common with other coordinate-based methods the arbitrary choice of a coordinate system, unlike the others it makes no arbitrary choice of vertex of the triangle as origin or of side as base. Furthermore, the choice of coordinate system defined by L commits to only two degrees of freedom rather than the usual three, since the weight is a local distance, whence the method does not require choosing an axis normal to L. When working in polar coordinates it is not necessary to convert to Cartesian coordinates to use line integration, since the line integral between consecutive vertices (ri,θi) and (ri+1,θi+1) of a polygon is given directly by . This is valid for all values of θ, with some decrease in numerical accuracy when |θ| is many orders of magnitude greater than π. With this formulation negative area indicates clockwise traversal, which should be kept in mind when mixing polar and cartesian coordinates. Just as the choice of y-axis is immaterial for line integration in cartesian coordinates, so is the choice of zero heading immaterial here. Using Pick's theorem See Pick's theorem for a technique for finding the area of any arbitrary lattice polygon (one drawn on a grid with vertically and horizontally adjacent lattice points at equal distances, and with vertices on lattice points). The theorem states: T = I + B/2 − 1, where I is the number of internal lattice points and B is the number of lattice points lying on the border of the polygon. Other area formulas Numerous other area formulas exist, such as T = rs, where r is the inradius, and s is the semiperimeter (in fact, this formula holds for all tangential polygons), and where ra, rb, and rc are the radii of the excircles tangent to sides a, b, c respectively. We also have and for circumdiameter D; and for angle α ≠ 90°. The area can also be expressed as In 1885, Baker gave a collection of over a hundred distinct area formulas for the triangle. These include: for circumradius (radius of the circumcircle) R, and Upper bound on the area The area T of any triangle with perimeter p satisfies T ≤ p²/(12√3), with equality holding if and only if the triangle is equilateral. Other upper bounds on the area T are given by and both again holding if and only if the triangle is equilateral. Bisecting the area There are infinitely many lines that bisect the area of a triangle. Three of them are the medians, which are the only area bisectors that go through the centroid. Three other area bisectors are parallel to the triangle's sides. Any line through a triangle that splits both the triangle's area and its perimeter in half goes through the triangle's incenter. There can be one, two, or three of these for any given triangle. See also Area of a circle Congruence of triangles References Area Triangles
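Pick's theorem, stated above, is straightforward to check numerically. In the sketch below the lattice triangle and the interior and boundary counts are illustrative and supplied by hand:

```python
def pick_area(interior, boundary):
    # Pick's theorem for a lattice polygon: T = I + B/2 - 1.
    return interior + boundary / 2 - 1

# Lattice triangle with vertices (0, 0), (4, 0) and (0, 4):
# B = 12 boundary lattice points, I = 3 interior lattice points.
print(pick_area(3, 12))   # 8.0, matching (1/2) * base * height = (1/2) * 4 * 4
```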
Area of a triangle
[ "Physics", "Mathematics" ]
2,173
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities", "Area" ]
10,694,451
https://en.wikipedia.org/wiki/DSS%20%28NMR%20standard%29
Sodium trimethylsilylpropanesulfonate (DSS) is the organosilicon compound with the formula (CH3)3SiCH2CH2CH2SO3−Na+. It is the sodium salt of trimethylsilylpropanesulfonic acid. A white, water-soluble solid, it is used as a chemical shift standard for proton NMR spectroscopy of aqueous solutions. The chemical shift, specifically the signal for the trimethylsilyl group, is relatively insensitive to pH. The proton spectrum of DSS also exhibits resonances at 2.91 ppm (m), 1.75 ppm (m), and 0.63 ppm (m) at an intensity of 22% of the reference resonance at 0 ppm. Alternatives Sodium trimethylsilyl propionate (TSP) is a related compound used as an NMR standard. It uses a carboxylic acid instead of the sulfonic acid found in DSS to confer water solubility. As a weak acid, TSP is more sensitive to changes in pH. 4,4-Dimethyl-4-silapentane-1-ammonium trifluoroacetate (DSA) has also been proposed as an alternative, to overcome certain drawbacks of DSS. References Sulfonic acids Trimethylsilyl compounds Organic sodium salts Nuclear magnetic resonance
DSS (NMR standard)
[ "Physics", "Chemistry" ]
298
[ "Sulfonic acids", "Nuclear magnetic resonance", "Functional groups", "Trimethylsilyl compounds", "Salts", "Organic sodium salts", "Nuclear physics" ]
10,696,110
https://en.wikipedia.org/wiki/Supralateral%20arc
A supralateral arc is a comparatively rare member of the halo family which in its complete form appears as a large, faintly rainbow-colored band in a wide arc above the sun and appearing to encircle it, at about twice the distance as the familiar 22° halo. In reality, however, the supralateral arc does not form a circle and never reaches below the sun. When present, the supralateral arc touches the (much more common) circumzenithal arc from below. As in all colored halos, the arc has its red side directed towards the sun, its blue part away from it. Formation Supralateral arcs form when sun light enters horizontally oriented, rod-shaped hexagonal ice crystals through a hexagonal base and exits through one of the prism sides. Supralateral arcs occur about once a year. Confusion with the 46° halo Due to its apparent circular shape and nearly identical location in the sky, the supralateral arc is often mistaken for the 46° halo, which does form a complete circle around the sun at approximately the same distance, but which is much rarer and fainter. Distinguishing between the two phenomena can be difficult, requiring the combination of several subtle indicators for proper identification. In contrast to the static 46° halo, the shape of a supralateral arc varies with the elevation of the sun. Before the sun reaches 15°, the bases of the arc touch the lateral (oriented sidewise) sides of the 46° halo. As the sun rises from 15° to 27°, the supralateral arc almost overlaps the upper half of the 46° halo, which is why many reported observations of the latter most likely are observations of the former. As the sun goes from 27° to 32°, the apex of the arc touches the circumzenithal arc centered on zenith (as does the 46° halo when the sun is located between 15° and 27°). In addition, the supralateral arc is always located above the parhelic circle (the arc located below it is the infralateral arc), and is never perfectly circular. Arguably the best way of distinguishing the halo from the arc is to carefully study the difference in colour and brightness. The 46° halo is six times fainter than the 22° halo and generally white with a possible red inner edge. The supralateral arc, in contrast, can even be confused with the rainbow with clear blue and green strokes. Gallery See also Infralateral arc Parry arc Lowitz arc References External links Atmospheric Optics - Supralateral & infralateral arcs - including HaloSim computer simulations and crystal illustrations. Paraselene.de - Gallery of images from March 2002 Paraselene.de - Gallery of images from December 2007 Atmospheric optical phenomena
Supralateral arc
[ "Physics" ]
592
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,696,700
https://en.wikipedia.org/wiki/Infralateral%20arc
An infralateral arc (or lower lateral tangent arc) is a rare halo, an optical phenomenon appearing similar to a rainbow under a white parhelic circle. Together with the supralateral arc they are always located outside the seldom observable 46° halo, but in contrast to supralateral arcs, infralateral arcs are always located below the parhelic circle. The shape of an infralateral arc varies with the elevation of the Sun. From sunrise until the observed Sun reaches about 50° above Earth's horizon, two infralateral arcs are located on either side (i.e. lateral) of the 46° halo, their convex apexes lying tangential to the 46° halo. As the observed Sun rises above 68°, the two arcs unite into a single concave arc tangent to the 46° halo vertically under the Sun. Infralateral arcs form when sunlight enters horizontally oriented, rod-shaped hexagonal ice crystals through a hexagonal base and exits through one of the prism sides. Infralateral arcs occur about once a year. They are often observed together with circumscribed halos and upper tangent arcs. See also Circumzenithal arc Tangent arc Parry arc References External links Atmospheric Optics - Supralateral & infralateral arcs - including HaloSim computer simulations and crystal illustrations. Atmospheric optical phenomena
Infralateral arc
[ "Physics" ]
307
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
10,699,615
https://en.wikipedia.org/wiki/Water%20efficiency
Water efficiency is the practice of reducing water consumption by comparing the amount of water actually used with the minimum amount required for a particular purpose. Water efficiency differs from water conservation in that it focuses on reducing waste, not restricting use. Solutions for water efficiency not only focus on reducing the amount of potable water used but also on reducing the use of non-potable water where appropriate (e.g. flushing toilets, watering landscapes, etc.). It also emphasizes the influence consumers can have on water efficiency by making small behavioral changes to reduce water wastage, and by choosing more water-efficient products. Importance According to the UN World Water Development Report, over the past 100 years, global water use has increased by a factor of six. Use continues to grow at an estimated rate of one percent per year as a result of population increase, economic development and changing consumption patterns. Increasing human demand for water, coupled with the effects of climate change, means that the future of water supply is not secure. A large share of the world's population does not have safe drinking water. In addition, changes in climate, population growth, and lifestyles compound the problem: modern lifestyles and activities require more water per capita. This creates competition for water among agricultural, industrial, and domestic consumption. Organizations Many countries recognize water scarcity as a growing problem. Global organizations such as the World Water Council continue to prioritize water efficiency alongside water conservation. The Alliance for Water Efficiency, Waterwise, the California Water Efficiency Partnership (formerly the California Urban Water Conservation Council), Smart Approved WaterMark in Australia, and the Partnership for Water Sustainability in British Columbia in Canada are non-governmental organizations that support water efficiency at national and regional levels. Governmental organizations such as Environment Canada, the EPA in the USA, the Environment Agency in the UK, and DEWR in Australia have recognized the problem and created policies and strategies to raise water efficiency awareness. The EPA established WaterSense in 2006, a voluntary program that encourages water efficiency in the United States by identifying and testing products that demonstrate improvement over standard models for toilets, bathroom faucets and faucet accessories, urinals, and residential shower heads, through the use of the WaterSense label. The government of China created a five-year (2010-2015) plan to deliver safe drinking water to about 54 percent of the population by 2015. It would cost about US$66 billion (¥410 billion) to upgrade about 57,353 miles (92,300 kilometers) of main pipes and water treatment plants. The government hopes these steps will help to better conserve water and meet demand. The Indian state of Haryana implemented the State Rural Water Policy 2012; under this policy, individual household metered connections would be provided to 50% of the rural population by 2017, to stop water wastage in villages.
Water efficient solutions Residential Water efficiency solutions in residences include: Turning off the faucet sink while brushing teeth — saves approximately five gallons (about 19 liters) of water Installing faucet aerators Fixing a water valve leakage Only running the dishwasher and washing machine with a full load Taking a shower instead of a bath Washing fruits and vegetables in a bowl rather than continuously running the tap water Using leftover water for houseplants Using a watering can or garden hose with a trigger nozzle instead of a sprinkler Using a bucket and sponge when washing a car rather than a running a hose Washing clothes and linens in a washing machine rather than washing them by hand Recycling greywater for toilet flushing water and garden use Watering outdoor plants in the morning or in the evening when temperatures are cooler Consumers can voluntarily, or with government incentives or mandates, purchase water-efficient appliances such as low-flush toilets and washing machines. Manufacturers Water efficiency solutions in manufacturing: Identifying and eliminating wastage (such as leaks) and inefficient processes (such as continual spray devices on stop-start production lines). This may be the most low-cost area for water savings, as it involves minimal capital outlay. Savings can be made through implementing procedural changes, such as cleaning plant areas with brooms rather than water. Changing processes and plant machinery. A retrofit of key plant equipment may increase efficiency. Alternatively, upgrades to more efficient models can be factored into planned maintenance and replacement schedules. Reusing wastewater. As well as saving on mains water, this option may improve the reliability of supply, whilst reducing trade waste charges and associated environmental risks. Waterless products Using waterless car wash products to wash cars, boats, motorcycles, and bicycles. This could save up to of water per wash. Utilities The United States Environmental Protection Agency (EPA) makes the following recommendations for communities and utilities: Implementing a water-loss management program (e.g. locate and repair leaks). Universal metering. Ensuring that fire hydrants are tamperproof. Equipment changes — setting a good example by using water-efficient equipment. Installing faucet aerators and low-flow shower heads in municipal buildings. Replacing worn-out plumbing fixtures, appliances, and equipment with water-saving models. Minimizing the water used in space cooling equipment in accordance with the manufacturer's recommendations. Shutting off cooling units when not needed. Encouraging the use of urinals instead of toilet stalls in school (boys') and work office (men's) restrooms. Utilities can also modify their billing software to track customers who have taken advantage of various utility-sponsored water conservation initiatives (toilet rebates, irrigation rebates, etc.) to see which initiatives provide the greatest water savings for the least cost. Data centers Water policies and impact assessments Environmental policies and the difference usages of models that are generated by these enforcement can have significant impacts on society. Hence, improving policies regarding environmental justice issues often require local government's decision-making, public awareness, and a significant amount of scientific tools. 
Furthermore, it is important to understand that positively impacting policy decisions require more than good intentions, and they necessitate analysis of risk-related information along with consideration of economic issues, ethical and moral principles, legal precedents, political realities, cultural beliefs, societal values, and bureaucratic impediments. Also, ensuring that the rights of people regardless of their age, race, and background are being protected should not be neglected according to "The Role of Cumulative risk Assessment in Decisions about Environmental Justice." If a policy protects the natural environment but negatively affects those who are in the reach of the enforcement of the policy, that policy is subjected to revaluation. Researchers suggest racial and socioeconomic disparities in exposure to environmental hazards describing the demographic composition of areas and their proximity to hazardous sites. Then, any improvements of a social policy and models that are generated by these improvements should reflect the policy-makers' and researchers' environmental justice beliefs. Therefore, researchers and social changes should examine the promises and pitfalls associated with the environmental justice struggles, explore implications of proposed solutions, and recognize the fact that tools necessary to sufficiently carry preceding requirements are underdeveloped. Examples Reef Plan (Australia) The Reef Plan began to incorporate new ways to create models that integrate environmental, economical, and social consequences. Pre-existing Australian water policies were often criticized by previous models for investment prioritization and economic dimensions when it came to policy impact assessment. However, the policy makers and researchers in Australia now suggest that "sustainability focused policy requires multi-dimensional indicators" that combine different disciplines. The Reef Plan allows the policy makers to identify issues relating to Reef water quality and implement management strategies and actions to conserve and rehabilitate areas such as riparian zones and wetlands. With the Reef Plan, Nine strategies were implemented in the Great Barrier Reef region. They include self-management approaches, education, and extension, economic incentives, planning for natural resources management and land use, regulatory frameworks, research and information sharing, partnership, priorities and targets, and monitoring and evaluation. And such improvements invoked benefits such as: A more comprehensive picture of the policy impacts. New models projected possible outcomes of different simulations of the proposed policies under various circumstances. In addition, they provided the optimal decisions to be made regarding each outcome through the usage of what is known as computable general equilibrium (CGE) which "integrate dynamics on a catchment scale" Helping the aggregation of both economic aspects of water and non-monetary elements of water usage. Acknowledging the fact that farm production should depend on the global dynamics Conserved Water Statutes (United States) Conserved Water Statutes are state laws enacted by California, Montana, Washington, and Oregon to conserve water and allocate water resources to meet the needs of increasing demand for water in the dry lands where irrigation is or was occurring. These laws help the states to dismiss the disincentives to conserve water and do so without damaging pre-existing water rights. 
Because any extra amount of water after applying water to beneficiaries of the pre-existing water policies does not belong to the appropriators, such a condition creates an incentive to use as much water as possible rather than saving. This obviously causes the costs of irrigation to be greater than the optimal amount which makes the policy very inefficient. However, by enacting Conserved Water Statutes, state legislatures are able to address the disincentives to save water. The policy allows the appropriators to have rights over the surplus water and enforces them to verify their water savings by the water resources department. Out of the four states that adopted the Conserved Water Statutes, Oregon is often renowned to be the most successful. According to "How Expanding The Productivity of Water Rights Could Lessen Our Water Woes," The Oregon Water Resources Department (OWRD) has been a success because a high percentage of submitted applications submitted, and the OWRD serves as a good intermediaries that help appropriators to conserve water. OWRD's programs are not only a success because its effectiveness but also because of their efforts to improve the workers' working conditions. According to OWRD's website, the state policies regarding the water rights are divided into Cultural Competency, Traditional Health Worker, Coordinated Care Organizations, and Race, Ethnicity and Language Data Collection. Water pollution in Malaysia In Malaysia, the citizens have been experiencing harm from water pollutants in the river that have been accumulating over decades due to fast-growing urbanization and industrialization. The planners of Malaysia have been trying to come up with models that indicate the amount of pollutants has grown over time as cities became more industrialized and how these chemicals are distributed in various regions with the usage of econometrics and various scientific tools. Such an attempt is to encourage in-depth researches because sources should be able to analyzed numerically and give economic evaluations while also evaluating the environment. With an abundance of evidence provided by models which reveal the inadequacy of current policies, the Malaysian decision-makers now recognize that appropriate treatments are necessary for regions that are industrialized to protect the residents from water pollutants. As a result, the government seeks to increase public awareness and provide affordable water services to residents by year 2020. Benefits of impact assessments Successful policies and assessments integrate environmental, economical, and social consequences which provide better models and potential future improvements of the policies. Understanding the importance of water policies and impact assessments is a crucial part of both water justice and environmental justice issues. Not only does it help to protect the quality of water but also the quality of living for humans who are directly affected by the environment. In addition, successful policies go beyond water issues. Beneficial policies that are intended to benefit the general public touch upon subjects such as transportation and other environmental policies that may have a significant impact on the surrounding environment. Instead of mere cost-benefit analysis, decisions are made so that they account for the priorities of the people. Notable benefits of impact assessments: comprehensive picture of the policy impacts. 
New models projected possible outcomes of different simulations of the proposed policies under various circumstances. In addition, they provided the optimal decisions to be made regarding each outcome through the usage of what is known as computable general equilibrium (CGE) which "integrate dynamics on a catchment scale" aggregation of both economics aspects of water and non-monetary elements of water usage. acknowledging the fact that farm production should depend on the global dynamics. Protection of the human rights of the workers and improvements in working conditions. Provision of data that can be analyzed in terms of the economy, health impacts, and recognition of the need for appropriate treatments. See also Deficit irrigation Nonresidential water use in the U.S. Rainwater collection Residential water use in the U.S. and Canada Water conservation Water resource management Tap water References External links Water Conservation for Small and Medium-Sized Utilities Water Efficiency A Free Trade Publication Savewater Water conservation Water and the environment Water supply
Water efficiency
[ "Chemistry", "Engineering", "Environmental_science" ]
2,605
[ "Hydrology", "Water supply", "Environmental engineering" ]
10,701,883
https://en.wikipedia.org/wiki/Reassignment%20method
The method of reassignment is a technique for sharpening a time-frequency representation (e.g. spectrogram or the short-time Fourier transform) by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. The method has been independently introduced by several parties under various names, including method of reassignment, remapping, time-frequency reassignment, and modified moving-window method. The method of reassignment sharpens blurry time-frequency data by relocating the data according to local estimates of instantaneous frequency and group delay. This mapping to reassigned time-frequency coordinates is very precise for signals that are separable in time and frequency with respect to the analysis window. Introduction Many signals of interest have a distribution of energy that varies in time and frequency. For example, any sound signal having a beginning or an end has an energy distribution that varies in time, and most sounds exhibit considerable variation in both time and frequency over their duration. Time-frequency representations are commonly used to analyze or characterize such signals. They map the one-dimensional time-domain signal into a two-dimensional function of time and frequency. A time-frequency representation describes the variation of spectral energy distribution over time, much as a musical score describes the variation of musical pitch over time. In audio signal analysis, the spectrogram is the most commonly used time-frequency representation, probably because it is well understood, and immune to so-called "cross-terms" that sometimes make other time-frequency representations difficult to interpret. But the windowing operation required in spectrogram computation introduces an unsavory tradeoff between time resolution and frequency resolution, so spectrograms provide a time-frequency representation that is blurred in time, in frequency, or in both dimensions. The method of time-frequency reassignment is a technique for refocussing time-frequency data in a blurred representation like the spectrogram by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. The spectrogram as a time-frequency representation One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. Though the short-time phase spectrum is known to contain important temporal information about the signal, this information is difficult to interpret, so typically, only the short-time magnitude spectrum is considered in short-time spectral analysis. As a time-frequency representation, the spectrogram has relatively poor resolution. Time and frequency resolution are governed by the choice of analysis window and greater concentration in one domain is accompanied by greater smearing in the other. A time-frequency representation having improved resolution, relative to the spectrogram, is the Wigner–Ville distribution, which may be interpreted as a short-time Fourier transform with a window function that is perfectly matched to the signal. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear and non-local. Consequently, this distribution is very sensitive to noise, and generates cross-components that often mask the components of interest, making it difficult to extract useful information concerning the distribution of energy in multi-component signals. 
Cohen's class of bilinear time-frequency representations is a class of "smoothed" Wigner–Ville distributions, employing a smoothing kernel that can reduce sensitivity of the distribution to noise and suppresses cross-components, at the expense of smearing the distribution in time and frequency. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy. The spectrogram is a member of Cohen's class. It is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class. In the case of the reassigned spectrogram, the short-time phase spectrum is used to correct the nominal time and frequency coordinates of the spectral data, and map it back nearer to the true regions of support of the analyzed signal. The method of reassignment Pioneering work on the method of reassignment was published by Kodera, Gendrin, and de Villedary under the name of Modified Moving Window Method. Their technique enhances the resolution in time and frequency of the classical Moving Window Method (equivalent to the spectrogram) by assigning to each data point a new time-frequency coordinate that better-reflects the distribution of energy in the analyzed signal. In the classical moving window method, a time-domain signal, is decomposed into a set of coefficients, , based on a set of elementary signals, , defined where is a (real-valued) lowpass kernel function, like the window function in the short-time Fourier transform. The coefficients in this decomposition are defined where is the magnitude, and the phase, of , the Fourier transform of the signal shifted in time by and windowed by . can be reconstructed from the moving window coefficients by For signals having magnitude spectra, , whose time variation is slow relative to the phase variation, the maximum contribution to the reconstruction integral comes from the vicinity of the point satisfying the phase stationarity condition or equivalently, around the point defined by This phenomenon is known in such fields as optics as the principle of stationary phase, which states that for periodic or quasi-periodic signals, the variation of the Fourier phase spectrum not attributable to periodic oscillation is slow with respect to time in the vicinity of the frequency of oscillation, and in surrounding regions the variation is relatively rapid. Analogously, for impulsive signals, that are concentrated in time, the variation of the phase spectrum is slow with respect to frequency near the time of the impulse, and in surrounding regions the variation is relatively rapid. In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference, in frequency regions of rapid phase variation. Only regions of slow phase variation (stationary phase) will contribute significantly to the reconstruction, and the maximum contribution (center of gravity) occurs at the point where the phase is changing most slowly with respect to time and frequency. 
The time-frequency coordinates thus computed are equal to the local group delay, and local instantaneous frequency, and are computed from the phase of the short-time Fourier transform, which is normally ignored when constructing the spectrogram. These quantities are local in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis. The modified moving window method, or method of reassignment, changes (reassigns) the point of attribution of to this point of maximum contribution , rather than to the point at which it is computed. This point is sometimes called the center of gravity of the distribution, by way of analogy to a mass distribution. This analogy is a useful reminder that the attribution of spectral energy to the center of gravity of its distribution only makes sense when there is energy to attribute, so the method of reassignment has no meaning at points where the spectrogram is zero-valued. Efficient computation of reassigned times and frequencies In digital signal processing, it is most common to sample the time and frequency domains. The discrete Fourier transform is used to compute samples of the Fourier transform from samples of a time domain signal. The reassignment operations proposed by Kodera et al. cannot be applied directly to the discrete short-time Fourier transform data, because partial derivatives cannot be computed directly on data that is discrete in time and frequency, and it has been suggested that this difficulty has been the primary barrier to wider use of the method of reassignment. It is possible to approximate the partial derivatives using finite differences. For example, the phase spectrum can be evaluated at two nearby times, and the partial derivative with respect to time be approximated as the difference between the two values divided by the time difference, as in For sufficiently small values of and and provided that the phase difference is appropriately "unwrapped", this finite-difference method yields good approximations to the partial derivatives of phase, because in regions of the spectrum in which the evolution of the phase is dominated by rotation due to sinusoidal oscillation of a single, nearby component, the phase is a linear function. Independently of Kodera et al., Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum. It is easily shown that Nelson's cross spectral surfaces compute an approximation of the derivatives that is equivalent to the finite differences method. Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of Cohen's class of time-frequency representations by generalizing the reassignment operations to where is the Wigner–Ville distribution of , and is the kernel function that defines the distribution. They further described an efficient method for computing the times and frequencies for the reassigned spectrogram efficiently and accurately without explicitly computing the partial derivatives of phase. 
In the case of the spectrogram, the reassignment operations can be computed by where is the short-time Fourier transform computed using an analysis window is the short-time Fourier transform computed using a time-weighted analysis window and is the short-time Fourier transform computed using a time-derivative analysis window . Using the auxiliary window functions and , the reassignment operations can be computed at any time-frequency coordinate from an algebraic combination of three Fourier transforms evaluated at . Since these algorithms operate only on short-time spectral data evaluated at a single time and frequency, and do not explicitly compute any derivatives, this gives an efficient method of computing the reassigned discrete short-time Fourier transform. One constraint in this method of computation is that the must be non-zero. This is not much of a restriction, since the reassignment operation itself implies that there is some energy to reassign, and has no meaning when the distribution is zero-valued. Separability The short-time Fourier transform can often be used to estimate the amplitudes and phases of the individual components in a multi-component signal, such as a quasi-harmonic musical instrument tone. Moreover, the time and frequency reassignment operations can be used to sharpen the representation by attributing the spectral energy reported by the short-time Fourier transform to the point that is the local center of gravity of the complex energy distribution. For a signal consisting of a single component, the instantaneous frequency can be estimated from the partial derivatives of phase of any short-time Fourier transform channel that passes the component. If the signal is to be decomposed into many components, and the instantaneous frequency of each component is defined as the derivative of its phase with respect to time, that is, then the instantaneous frequency of each individual component can be computed from the phase of the response of a filter that passes that component, provided that no more than one component lies in the passband of the filter. This is the property, in the frequency domain, that Nelson called separability and is required of all signals so analyzed. If this property is not met, then the desired multi-component decomposition cannot be achieved, because the parameters of individual components cannot be estimated from the short-time Fourier transform. In such cases, a different analysis window must be chosen so that the separability criterion is satisfied. If the components of a signal are separable in frequency with respect to a particular short-time spectral analysis window, then the output of each short-time Fourier transform filter is a filtered version of, at most, a single dominant (having significant energy) component, and so the derivative, with respect to time, of the phase of the is equal to the derivative with respect to time, of the phase of the dominant component at Therefore, if a component, having instantaneous frequency is the dominant component in the vicinity of then the instantaneous frequency of that component can be computed from the phase of the short-time Fourier transform evaluated at That is, Just as each bandpass filter in the short-time Fourier transform filterbank may pass at most a single complex exponential component, two temporal events must be sufficiently separated in time that they do not lie in the same windowed segment of the input signal. 
This is the property of separability in the time domain, and is equivalent to requiring that the time between two events be greater than the length of the impulse response of the short-time Fourier transform filters, the span of non-zero samples in In general, there is an infinite number of equally valid decompositions for a multi-component signal. The separability property must be considered in the context of the desired decomposition. For example, in the analysis of a speech signal, an analysis window that is long relative to the time between glottal pulses is sufficient to separate harmonics, but the individual glottal pulses will be smeared, because many pulses are covered by each window (that is, the individual pulses are not separable, in time, by the chosen analysis window). An analysis window that is much shorter than the time between glottal pulses may resolve the glottal pulses, because no window spans more than one pulse, but the harmonic frequencies are smeared together, because the main lobe of the analysis window spectrum is wider than the spacing between the harmonics (that is, the harmonics are not separable, in frequency, by the chosen analysis window). Extensions Consensus complex reassignment Gardner and Magnasco (2006) argues that the auditory nerves may use a form of the reassignment method to process sounds. These nerves are known for preserving timing (phase) information better than they do for magnitudes. The authors come up with a variation of reassignment with complex values (i.e. both phase and magnitude) and show that it produces sparse outputs like auditory nerves do. By running this reassignment with windows of different bandwidths (see discussion in the section above), a "consensus" that captures multiple kinds of signals is found, again like the auditory system. They argue that the algorithm is simple enough for neurons to implement. Synchrosqueezing transform References Further reading S. A. Fulop and K. Fitz, A spectrogram for the twenty-first century, Acoustics Today, vol. 2, no. 3, pp. 26–33, 2006. S. A. Fulop and K. Fitz, Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications, Journal of the Acoustical Society of America, vol. 119, pp. 360 – 371, Jan 2006. External links TFTB — Time-Frequency ToolBox SPEAR - Sinusoidal Partial Editing Analysis and Resynthesis Loris - Open-source software for sound modeling and morphing SRA - A web-based research tool for spectral and roughness analysis of sound signals (supported by a Northwest Academic Computing Consortium grant to J. Middleton, Eastern Washington University) Time–frequency analysis Transforms Data compression
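As a concrete illustration of the three-transform computation described earlier in this article, the following NumPy sketch computes a spectrogram together with reassigned time and frequency coordinates. The function name, the window handling, the finite-difference window derivative, and the sign conventions are assumptions made here for illustration and may need adjustment to match a particular STFT convention.

```python
import numpy as np

def reassigned_spectrogram(x, fs, win, hop):
    """Spectrogram plus reassigned time/frequency coordinates (Auger-Flandrin style sketch)."""
    n = len(win)
    t_local = (np.arange(n) - (n - 1) / 2) / fs    # window-centred time axis, in seconds
    th = t_local * win                             # time-weighted analysis window
    dh = np.gradient(win) * fs                     # rough derivative of the analysis window
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    S, T, F = [], [], []
    for start in range(0, len(x) - n + 1, hop):
        seg = x[start:start + n]
        Xh, Xth, Xdh = (np.fft.rfft(seg * w) for w in (win, th, dh))
        mag2 = np.abs(Xh) ** 2
        ok = mag2 > 0                              # reassignment is undefined where the spectrogram is zero
        t_hat = np.full(freqs.shape, np.nan)
        f_hat = np.full(freqs.shape, np.nan)
        centre = (start + (n - 1) / 2) / fs
        t_hat[ok] = centre + np.real(Xth[ok] * np.conj(Xh[ok])) / mag2[ok]
        f_hat[ok] = freqs[ok] - np.imag(Xdh[ok] * np.conj(Xh[ok])) / (2 * np.pi * mag2[ok])
        S.append(mag2); T.append(t_hat); F.append(f_hat)
    return np.array(S), np.array(T), np.array(F)
```

For a single pure tone, the reassigned frequencies collapse onto the tone's frequency regardless of the FFT bin spacing, which is the sharpening effect the method is named for.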
Reassignment method
[ "Physics", "Mathematics" ]
3,155
[ "Functions and mappings", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis", "Mathematical objects", "Mathematical relations", "Transforms" ]
10,702,027
https://en.wikipedia.org/wiki/Harmonic%20wavelet%20transform
In the mathematics of signal processing, the harmonic wavelet transform, introduced by David Edward Newland in 1993, is a wavelet-based linear transformation of a given function into a time-frequency representation. It combines advantages of the short-time Fourier transform and the continuous wavelet transform. It can be expressed in terms of repeated Fourier transforms, and its discrete analogue can be computed efficiently using a fast Fourier transform algorithm. Harmonic wavelets The transform uses a family of "harmonic" wavelets indexed by two integers j (the "level" or "order") and k (the "translation"), given by , where These functions are orthogonal, and their Fourier transforms are a square window function (constant in a certain octave band and zero elsewhere). In particular, they satisfy: where "*" denotes complex conjugation and is Kronecker's delta. As the order j increases, these wavelets become more localized in Fourier space (frequency) and in higher frequency bands, and conversely become less localized in time (t). Hence, when they are used as a basis for expanding an arbitrary function, they represent behaviors of the function on different timescales (and at different time offsets for different k). However, it is possible to combine all of the negative orders (j < 0) together into a single family of "scaling" functions where The function φ is orthogonal to itself for different k and is also orthogonal to the wavelet functions for non-negative j: In the harmonic wavelet transform, therefore, an arbitrary real- or complex-valued function (in L2) is expanded in the basis of the harmonic wavelets (for all integers j) and their complex conjugates: or alternatively in the basis of the wavelets for non-negative j supplemented by the scaling functions φ: The expansion coefficients can then, in principle, be computed using the orthogonality relationships: For a real-valued function f(t), and so one can cut the number of independent expansion coefficients in half. This expansion has the property, analogous to Parseval's theorem, that: Rather than computing the expansion coefficients directly from the orthogonality relationships, however, it is possible to do so using a sequence of Fourier transforms. This is much more efficient in the discrete analogue of this transform (discrete t), where it can exploit fast Fourier transform algorithms. References Time–frequency analysis Transforms Wavelets
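The FFT-based computation mentioned above can be sketched as follows. The normalization and the handling of the lowest and highest bands are simplified assumptions here and may differ from Newland's exact formulation; the test signal is illustrative.

```python
import numpy as np

def harmonic_wavelet_coefficients(x):
    """Group the FFT of x into octave bands and inverse-FFT each band (simplified sketch)."""
    N = len(x)                                     # assumed to be a power of two
    X = np.fft.fft(x) / N
    coeffs = {}
    j = 0
    while 2 ** (j + 1) <= N // 2:
        band = X[2 ** j : 2 ** (j + 1)]            # Fourier coefficients in the octave [2^j, 2^(j+1))
        coeffs[j] = np.fft.ifft(band) * len(band)  # 2^j time-localised coefficients at level j
        j += 1
    return coeffs

# Example: a short chirp-like test signal of length 256.
t = np.arange(256) / 256.0
x = np.sin(2 * np.pi * (10 + 40 * t) * t)
levels = harmonic_wavelet_coefficients(x)
print({j: c.shape for j, c in levels.items()})
```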
Harmonic wavelet transform
[ "Physics", "Mathematics" ]
493
[ "Functions and mappings", "Spectrum (physical sciences)", "Time–frequency analysis", "Frequency-domain analysis", "Mathematical objects", "Mathematical relations", "Transforms" ]
10,702,410
https://en.wikipedia.org/wiki/Phi%20bond
In chemistry, phi bonds (φ bonds) are usually covalent chemical bonds, where six lobes of one involved atomic orbital overlap six lobes of the other involved atomic orbital. This overlap leads to the formation of a bonding molecular orbital with three nodal planes which contain the internuclear axis and go through both atoms. The Greek letter φ in their name refers to f orbitals, since the orbital symmetry of the φ bond is the same as that of the usual (6-lobed) type of f orbital when seen down the bond axis. There was one possible candidate known in 2005 of a molecule with phi bonding (a U−U bond, in the molecule U2). However, later studies that accounted for spin orbit interactions found that the bonding was only of fourth order. Experimental evidence for phi bonding between a thorium atom and cyclooctatetraene in thorocene has been supported by computational analysis, though this mixed-orbital bond has strong ionic character and is not a traditional phi bond. References Chemical bonding Hypothetical processes
Phi bond
[ "Physics", "Chemistry", "Materials_science" ]
210
[ "nan", "Chemical bonding", "Condensed matter physics", "Hypotheses in chemistry" ]
10,702,544
https://en.wikipedia.org/wiki/Russian%20floating%20nuclear%20power%20station
Floating nuclear power stations () are vessels designed by Rosatom, the Russian state-owned nuclear energy corporation. They are self-contained, low-capacity, floating nuclear power plants. Rosatom plans to mass-produce the stations at shipbuilding facilities and then tow them to ports near locations that require electricity. The work on such a concept dates back to the MH-1A in the United States, which was built in the 1960s into the hull of a World War II Liberty Ship, which was followed on much later in 2022 when the United States Department of Energy funded a three-year research study of offshore floating nuclear power generation. The Rosatom project is the first floating nuclear power plant intended for mass production. The initial plan was to manufacture at least seven of the vessels by 2015. On 14 September 2019, Russia’s first-floating nuclear power plant, Akademik Lomonosov, arrived to its permanent location in the Chukotka region. It started operation on 19 December 2019. History The project for a floating nuclear power station began in 2000, when the Ministry for Atomic Energy of the Russian Federation (Rosatom) chose Severodvinsk in Arkhangelsk Oblast as the construction site, Sevmash was appointed as general contractor. Construction of the first power station, the Akademik Lomonosov, started on 15 April 2007 at the Sevmash Submarine-Building Plant in Severodvinsk. In August 2008 construction works were transferred to the Baltic Shipyard in Saint Petersburg, which is also responsible for the construction of future vessels. Akademik Lomonosov was launched on 1 July 2010, at an estimated cost of 6 billion rubles (232 m$). In 2015 construction of a second vessel starting in 2019 was announced by Russia's state nuclear corporation Rosatom. On 27 July 2021 Rosatom signed an agreement with GDK Baimskaya LLC for energy delivery for Baimskaya copper mining operations. Rosatom suggests delivering up to three new floating power plants (with fourth being in reserve), all using the latest RITM-200M 55 MWe reactors, currently serving on Project 22220 icebreakers. These are to be docked at Cape Nagloynyn, Chaunskaya Bay port and connected to the Baimskaya mine by 400 km long 110 kV line through Bilibino. According to Rosatom, production of the first new reactors by Atomenergomash has already started. In August 2022, construction of the first hull started in China, planned to be delivered to Russia in 2023 for installation of reactors and equipment. On 31 December 2021 Rosatom announced that these four new floating plants will carry a new, slightly improved version of RITM-200 cores, named RITM-200S, currently in development. TVEL has been charged with development of new fuel assemblies for its improved core. Each barge will produce 106 MWe of power. Technical characteristics The floating nuclear power station is a non-self propelled vessel. It has length of , width of , height of , and draught of . The vessel has a displacement of 21,500 tonnes and a crew of 69 people. Each vessel of this type has two modified KLT-40 naval propulsion reactors together providing up to 70 MW of electricity or 300 MW of heat, or cogeneration of electricity and heat for district heating, enough for a city with a population of 200,000 people. Because of its ability to float and be assembled in extreme weather conditions, it can provide heat and power to areas that do not have easy access to these amenities because of their geographic location. 
It could also be modified as a desalination plant producing 240,000 cubic meters of fresh water a day. Smaller modification of the plant can be fitted with two ABV-6M reactors with the electrical power around 18 MWe (megawatts of electricity). The much larger VBER-300 917 MW thermal or 325 MWe and the slightly larger RITM-200 55 MWe reactors have both been considered as a potential energy source for these floating nuclear power stations. The station also incorporates a floating unit (FPU), waterworks, guaranteeing solid establishment, separation FPU and transmitting created power and heat on the banks, inland offices for accepting and transmitting the produced power to outside systems for circulation to purchasers. Objectives The primary goal of the venture is to give increasing energy needs of the area, effective energy investigation and advancement of gold and rest of the different fields in Chaun-Bilibino energy arrangement of the industrial group, guaranteeing adjustment of taxes for electric and heat energy for the populace and modern customers, and the making of a solid energy base for monetary and social improvement of the locale. Contractors The hull and sections of vessels are built by the Baltic Shipyard in Saint Petersburg and Wison (Nantong) Heavy Industry in China. Reactors are designed by OKBM Afrikantov and assembled by Nizhniy Novgorod Research and Development Institute Atomenergoproekt (both part of Atomenergoprom). The reactor vessels are produced by Izhorskiye Zavody. Kaluga Turbine Plant supplies the turbo-generators. Fueling The floating power stations need to be refueled every three years while saving up to 200,000 metric tons of coal and 100,000 tons of fuel oil a year. The reactors are supposed to have a lifespan of 40 years. Every 12 years, the whole plant will be towed home and overhauled at the wharf where it was constructed. The manufacturer will arrange for the disposal of the nuclear waste and maintenance is provided by the infrastructure of the Russian nuclear industry. Thus, virtually no radiation traces are expected at the place where the power station produced its energy. Safety The safety systems of the KLT-40S are designed according to the reactor design itself, physical successive systems of protection and containment, self-activating active and passive safety systems, self-diagnostic automatic systems, reliable diagnostics relating to equipment and systems status, and provisioned methods regarding accident control. Additionally, the safety systems on board operate independently of the plant’s power supply. Environmental groups and citizens are concerned that floating plants will be more vulnerable to accidents, natural disasters specific to oceans, and terrorism than land-based stations. They point to a history of naval and nuclear accidents in Russia and the former Soviet Union, including the Chernobyl disaster of 1986. Russia does have 50 years of experience operating a fleet of nuclear-powered icebreakers that are also used for scientific and Arctic tourism expeditions. However earlier incidents (Lenin, 1957, and Taymyr, 2011) involving radioactive leaking from such vessels also contribute to safety concerns for FNPPs. Commercialization of floating nuclear power plants in the United States have failed due to high costs and safety concerns. Environmental concerns around the health and safety of the project have arisen. Radioactive steam may be produced, negatively impacting people living nearby. 
Earthquake activity is common in the area, and there are fears that a tsunami wave could damage the facility and release radioactive substances and waste. Being on the water exposes it to natural forces, according to environmental groups. Environmental impacts Both coastal and floating nuclear power plants may result in similar consequences for ocean environments. Although the surrounding seawall could provide an artificial reef that is an advantageous environment for some marine life forms, there are potential negative effects on animal and plant life near-shore (for coastal plants) or further offshore (with deep-water floating plants). Intrusion of marine organisms into power station systems during water entrainment could reduce species variety and the number of individual organisms. The thermal impact of water discharge from stations may permanently change the area's marine ecosystem, with, for example, cooler-water species unable to maintain populations and non-local, warmer-water species colonizing the vicinity. While power plants may instigate such environmental transformations, the thermal plumes caused by the warmed-water discharge are narrow, so their effect is geographically restricted. Winter shutdown of the plant may result in fish kills from the thermal shock. However, this can be mitigated in stations with multiple units by avoiding simultaneous shutdowns. By sequentially turning off only one unit at a time, the water temperature variation is minimized. These problems are shared by all thermal power plants. The breakwater will constitute an artificial island of appreciable size. Locations Floating nuclear power stations are planned to be used mainly in the Russian Arctic. Five of these are planned to be used by Gazprom for offshore oil and gas field development and for operations on the Kola and Yamal peninsulas. Other locations include Dudinka on the Taymyr Peninsula, Vilyuchinsk on the Kamchatka Peninsula and Pevek on the Chukchi Peninsula. In 2007, Rosatom signed an agreement with the Sakha Republic to build a floating plant for its northern parts, using smaller ABV reactors. According to Rosatom, 15 countries, including China, Indonesia, Malaysia, Algeria, Sudan, Namibia, Cape Verde, and Argentina, have shown interest in hiring such plants. It has been estimated that 75% of the world's population live within 100 miles of a port city. See also Atlantic Nuclear Power Plant Nuclear marine propulsion Offshore Power Systems Soviet naval reactors References Further reading Vladimir Kuznetsov et al. (2004), Floating Nuclear Power Plants in Russia: A Threat to the Arctic, World Oceans and Non-Proliferation. Green Cross Russia Akademik Lomonosov Floating Nuclear Co-generation Plant, Russian Federation Floating nuclear power stations Nuclear power stations in Russia Nuclear power stations using pressurized water reactors Nuclear-powered ships Nuclear technology Russian inventions Science and technology in Russia Service vessels of Russia
Russian floating nuclear power station
[ "Physics" ]
1,987
[ "Nuclear technology", "Nuclear physics" ]
11,680,057
https://en.wikipedia.org/wiki/Radiotrophic%20fungus
Radiotrophic fungi are fungi that can perform the hypothetical biological process called radiosynthesis, which means using ionizing radiation as an energy source to drive metabolism. It has been claimed that radiotrophic fungi have been found in extreme environments such as the Chernobyl Nuclear Power Plant. Most radiotrophic fungi use melanin in some capacity to survive. The process of using radiation and melanin for energy has been termed radiosynthesis, and is thought to be analogous to anaerobic respiration. However, it is not known if multi-step processes such as photosynthesis or chemosynthesis are used in radiosynthesis, or even if radiosynthesis exists in living organisms. Discovery Many fungi have been isolated from the area around the destroyed Chernobyl Nuclear Power Plant, some of which have been observed directing the growth of their hyphae toward radioactive graphite from the disaster, a phenomenon called “radiotropism”. Research has ruled out the presence of carbon as the resource attracting the fungal colonies and has in fact concluded that some fungi preferentially grow toward sources of beta and gamma ionizing radiation, although the biological mechanism behind this effect has not been identified. Other melanin-rich fungi have also been discovered in the cooling water of some operating nuclear reactors. The light-absorbing compound in the fungus cell membranes had the effect of turning the water black. While there are many cases of extremophiles (organisms that can live in severe conditions such as those of the radioactive power plant), a hypothetical radiotrophic fungus would grow because of the radiation, rather than in spite of it. Further research conducted at the Albert Einstein College of Medicine showed that three melanin-containing fungi—Cladosporium sphaerospermum, Wangiella dermatitidis, and Cryptococcus neoformans—increased in biomass and accumulated acetate faster in an environment in which the radiation level was 500 times higher than in the normal environment. C. sphaerospermum in particular was chosen because this species had been found in the reactor at Chernobyl. Exposure of C. neoformans cells to these radiation levels rapidly (within 20–40 minutes of exposure) altered the chemical properties of its melanin, and increased melanin-mediated rates of electron transfer (measured as reduction of ferricyanide by NADH) three- to four-fold compared with unexposed cells. However, each culture was performed with at least limited nutrients provided to each fungus. The increase in biomass and other effects could be caused either by the cells directly deriving energy from ionizing radiation, or by the radiation allowing the cells to utilize traditional nutrients either more efficiently or more rapidly. Outside of the fungal studies, similar effects on melanin electron-transport capability were observed by the authors after exposure to non-ionizing radiation. The authors did not conclude whether light or heat radiation would have a similar effect on living fungal cells. Role of melanin Melanins are a family of dark-colored, naturally occurring pigments with radiation-shielding properties. These pigments can absorb electromagnetic radiation due to their dark color and high molecular weights; this quality suggests that melanin could help protect radiotrophic fungi from ionizing radiation. It has been suggested that melanin's radiation-shielding properties are due to its ability to trap free radicals formed during radiolysis of water.
Melanin production is also advantageous to the fungus in that it can aid survival in many extreme environments. Examples of these environments include the Chernobyl Nuclear Power Plant, the International Space Station, and the Transantarctic Mountains. Melanin may also be able to help the fungus metabolize radiation, but more evidence and research are still needed. Comparisons with non-melanized fungi Melanization may come at some metabolic cost to the fungal cells. In the absence of radiation, some non-melanized fungi (that had been mutated in the melanin pathway) grew faster than their melanized counterparts. Limited uptake of nutrients due to the melanin molecules in the fungal cell wall, or toxic intermediates formed in melanin biosynthesis, have been suggested to contribute to this phenomenon. This is consistent with the observation that, despite being capable of producing melanin, many fungi do not synthesize melanin constitutively (i.e., all the time), but often only in response to external stimuli or at different stages of their development. The exact biochemical processes in the suggested melanin-based synthesis of organic compounds or other metabolites for fungal growth, including the chemical intermediates (such as native electron donor and acceptor molecules) in the fungal cell and the location and chemical products of this process, are unknown. Use in human spaceflight It is hypothesized that radiotrophic fungi could potentially be used as a shield to protect against radiation, specifically for astronauts in space or in other extraterrestrial environments. An experiment conducted aboard the International Space Station from December 2018 through January 2019 tested whether radiotrophic fungi could provide protection from ionizing radiation in space, as part of research efforts preceding a possible trip to Mars. This experiment used the radiotrophic strain of the fungus Cladosporium sphaerospermum. The growth of this fungus and its ability to attenuate ionizing radiation were studied for 30 days aboard the International Space Station. The trial yielded promising results. The amount of radiation blocked was found to correlate directly with the amount of fungus. There was no difference in the reduction of ionizing radiation between the experimental and control groups within the first 24-hour period; however, once the fungus had matured adequately, and over a 180° protection arc, ionizing radiation was significantly reduced compared with the control. With a 1.7 mm thick layer of melanized radiotrophic Cladosporium sphaerospermum, radiation measurements near the end of the trial were 2.42% lower, an attenuating effect about five times that of the control. If the fungus fully surrounded an object, radiation levels were estimated to be reduced by 4.34 ± 0.7%. Estimates indicate that a layer approximately 21 cm thick could largely negate the annual amount of radiation received on the surface of Mars. Limitations of a fungus-based shield include the added mass it would impose on missions. However, to reduce overall mass on potential Mars missions, an equimolar mixture of Martian regolith and melanin, in a layer roughly 9 cm thick, could be used instead.
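The shielding figures above imply an effective attenuation coefficient that can be estimated with a simple exponential-absorption model. The sketch below is a rough, illustrative calculation only: it assumes the roughly 2.4% reduction over 1.7 mm quoted in the text applies to a single effective photon energy and that attenuation follows a plain Beer–Lambert law, which ignores scattering geometry and the mixed radiation field on the ISS.

```python
import math

# Assumed inputs taken from the figures quoted in the text (illustrative only).
thickness_m = 1.7e-3                 # fungal layer thickness in the ISS experiment (m)
fraction_transmitted = 1 - 0.0242    # ~2.42% reduction reported near the end of the trial

# Effective linear attenuation coefficient under a simple Beer-Lambert model:
# I = I0 * exp(-mu * x)  =>  mu = -ln(I/I0) / x
mu = -math.log(fraction_transmitted) / thickness_m
print(f"Effective attenuation coefficient: {mu:.1f} per metre")

# Extrapolate to the ~21 cm layer mentioned for Mars (same crude assumptions).
x_mars = 0.21
reduction = 1 - math.exp(-mu * x_mars)
print(f"Predicted reduction through 21 cm: {reduction:.1%}")
```

With these inputs the model gives an effective coefficient of roughly 14 per metre and predicts that a 21 cm layer would remove on the order of 95% of the incident radiation, broadly consistent with the claim that such a layer could largely negate the annual Martian surface dose.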
See also Nylon-eating bacteria Plastivore References External links Einstein College of Medicine on radiotrophic fungi The blacker the better… especially in Chernobyl at Earthling Nature. Fungi by adaptation Evolution by taxon Radiation effects Radiobiology Gamma rays Extremophiles
Radiotrophic fungus
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology", "Environmental_science" ]
1,452
[ "Physical phenomena", "Fungi", "Spectrum (physical sciences)", "Fungi by adaptation", "Radiobiology", "Electromagnetic spectrum", "Materials science", "Organisms by adaptation", "Extremophiles", "Radiation", "Gamma rays", "Condensed matter physics", "Bacteria", "Radiation effects", "Envi...
5,408,457
https://en.wikipedia.org/wiki/Quantum%20critical%20point
A quantum critical point is a point in the phase diagram of a material where a continuous phase transition takes place at absolute zero. A quantum critical point is typically achieved by continuous suppression of a nonzero-temperature phase transition to zero temperature by the application of pressure, a magnetic field, or doping. Conventional phase transitions occur at nonzero temperature when the growth of random thermal fluctuations leads to a change in the physical state of a system. Condensed matter physics research over the past few decades has revealed a new class of phase transitions called quantum phase transitions, which take place at absolute zero. In the absence of the thermal fluctuations which trigger conventional phase transitions, quantum phase transitions are driven by the zero point quantum fluctuations associated with Heisenberg's uncertainty principle. Overview Within the class of phase transitions, there are two main categories: at a first-order phase transition, the properties shift discontinuously, as in the melting of a solid, whereas at a second-order phase transition, the state of the system changes in a continuous fashion. Second-order phase transitions are marked by the growth of fluctuations on ever-longer length-scales. These fluctuations are called "critical fluctuations". At the critical point where a second-order transition occurs, the critical fluctuations are scale invariant and extend over the entire system. At a nonzero-temperature phase transition, the fluctuations that develop at a critical point are governed by classical physics, because the characteristic energy of quantum fluctuations is always smaller than the characteristic Boltzmann thermal energy kBT. At a quantum critical point, the critical fluctuations are quantum mechanical in nature, exhibiting scale invariance in both space and time. Unlike classical critical points, where the critical fluctuations are limited to a narrow region around the phase transition, the influence of a quantum critical point is felt over a wide range of temperatures above the quantum critical point, so the effect of quantum criticality is felt without ever reaching absolute zero. Quantum criticality was first observed in ferroelectrics, in which the ferroelectric transition temperature is suppressed to zero. A wide variety of metallic ferromagnets and antiferromagnets have been observed to develop quantum critical behavior when their magnetic transition temperature is driven to zero through the application of pressure, chemical doping or magnetic fields. In these cases, the properties of the metal are radically transformed by the critical fluctuations, departing qualitatively from the standard Fermi liquid behavior, to form a metallic state sometimes called a non-Fermi liquid or a "strange metal". There is particular interest in these unusual metallic states, which are believed to exhibit a marked preponderance towards the development of superconductivity. Quantum critical fluctuations have also been shown to drive the formation of exotic magnetic phases in the vicinity of quantum critical points. Quantum critical endpoints Quantum critical points arise when a susceptibility diverges at zero temperature. There are a number of materials (such as CeNi2Ge2) where this occurs serendipitously. More frequently a material has to be tuned to a quantum critical point.
Most commonly this is done by taking a system with a second-order phase transition which occurs at nonzero temperature and tuning it—for example by applying pressure or a magnetic field, or by changing its chemical composition. CePd2Si2 is such an example, where the antiferromagnetic transition which occurs at about 10 K under ambient pressure can be tuned to zero temperature by applying a pressure of 28,000 atmospheres. Less commonly, a first-order transition can be made quantum critical. First-order transitions do not normally show critical fluctuations, as the material moves discontinuously from one phase into another. However, if the first-order phase transition does not involve a change of symmetry, then the phase diagram can contain a critical endpoint where the first-order phase transition terminates. Such an endpoint has a divergent susceptibility. The transition between the liquid and gas phases is an example of a first-order transition without a change of symmetry, and the critical endpoint is characterized by critical fluctuations known as critical opalescence. A quantum critical endpoint arises when a nonzero-temperature critical point is tuned to zero temperature. One of the best studied examples occurs in the layered ruthenate metal Sr3Ru2O7 in a magnetic field. This material shows metamagnetism, with a low-temperature first-order metamagnetic transition where the magnetization jumps when a magnetic field is applied in the plane of the layers. The first-order jump terminates in a critical endpoint at about 1 kelvin. By switching the direction of the magnetic field so that it points almost perpendicular to the layers, the critical endpoint is tuned to zero temperature at a field of about 8 teslas. The resulting quantum critical fluctuations dominate the physical properties of this material at nonzero temperatures and away from the critical field. The resistivity shows a non-Fermi liquid response, the effective mass of the electrons grows, and the magnetothermal expansion of the material is modified, all in response to the quantum critical fluctuations. Notes References Quantum phases Condensed matter physics
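The energy-scale argument in the Overview above can be restated compactly. The notation below is standard textbook usage rather than anything specific to the references for this article:

```latex
% Critical fluctuations with characteristic frequency \omega are effectively
% classical when their quantum energy scale is small compared with the
% thermal energy, and quantum mechanical when it is comparable or larger:
\hbar\omega \;\ll\; k_\mathrm{B}T \quad \text{(classical fluctuations)},
\qquad
\hbar\omega \;\gtrsim\; k_\mathrm{B}T \quad \text{(quantum fluctuations)}.
% Near a continuous transition the characteristic frequency \omega vanishes
% as T \to T_c (critical slowing down), so for any nonzero T_c the
% fluctuations ultimately become classical; only when T_c is driven to zero,
% at the quantum critical point, does the quantum regime govern the transition.
```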
Quantum critical point
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,035
[ "Quantum phases", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Matter" ]
2,944,387
https://en.wikipedia.org/wiki/Eurocodes
The Eurocodes are the ten European standards (EN; harmonised technical rules) specifying how structural design should be conducted within the European Union (EU). They were developed by the European Committee for Standardization upon the request of the European Commission. The purpose of the Eurocodes is to provide: a means to prove compliance with the requirements for mechanical strength and stability and safety in case of fire established by European Union law; a basis for construction and engineering contract specifications; and a framework for creating harmonized technical specifications for building products (CE mark). Since March 2010, the Eurocodes have been mandatory for the specification of European public works and are intended to become the de facto standard for the private sector. The Eurocodes therefore replace the existing national building codes published by national standard bodies (e.g. BS 5950), although many countries had a period of co-existence. Additionally, each country is expected to issue a National Annex to the Eurocodes, which must be referenced when designing in that country (e.g. the UK National Annex). At present, take-up of the Eurocodes is slow on private sector projects, and existing national codes are still widely used by engineers. The motto of the Eurocodes is "Building the future". The second generation of the Eurocodes (2G Eurocodes) is being prepared. History In 1975, the Commission of the European Community (presently the European Commission) decided on an action programme in the field of construction, based on article 95 of the Treaty. The objective of the programme was the elimination of technical obstacles to trade and the harmonisation of technical specifications. Within this action programme, the Commission took the initiative to establish a set of harmonised technical rules for the design of construction works which, in a first stage, would serve as an alternative to the national rules in force in the member states of the European Union (EU) and, ultimately, would replace them. For fifteen years, the Commission, with the help of a steering committee with representatives of the member states, conducted the development of the Eurocodes programme, which led to the first generation of European codes in the 1980s. In 1989, the Commission and the member states of the EU and the European Free Trade Association (EFTA) decided, on the basis of an agreement between the Commission and CEN, to transfer the preparation and the publication of the Eurocodes to the European Committee for Standardization (CEN) through a series of mandates, in order to provide them with the future status of European Standard (EN). This de facto links the Eurocodes with the provisions of all the Council's Directives and/or Commission's Decisions dealing with European standards (e.g. Regulation (EU) No. 305/2011 on the marketing of construction products and Directive 2014/24/EU on government procurement in the European Union). List The Eurocodes are published as separate European Standards, each having a number of parts.
By 2002, ten sections had been developed and published:
Eurocode 0: Basis of structural design (EN 1990)
Eurocode 1: Actions on structures (EN 1991)
  Part 1-1: Densities, self-weight, imposed loads for buildings (EN 1991-1-1)
  Part 1-2: Actions on structures exposed to fire (EN 1991-1-2)
  Part 1-3: General actions - Snow loads (EN 1991-1-3)
  Part 1-4: General actions - Wind actions (EN 1991-1-4)
  Part 1-5: General actions - Thermal actions (EN 1991-1-5)
  Part 1-6: General actions - Actions during execution (EN 1991-1-6)
  Part 1-7: General actions - Accidental Actions (EN 1991-1-7)
  Part 2: Traffic loads on bridges (EN 1991-2)
  Part 3: Actions induced by cranes and machinery (EN 1991-3)
  Part 4: Silos and tanks (EN 1991-4)
Eurocode 2: Design of concrete structures (EN 1992)
  Part 1-1: General rules, and rules for buildings (EN 1992-1-1)
  Part 1-2: Structural fire design (EN 1992-1-2)
  Part 1-3: Precast Concrete Elements and Structures (EN 1992-1-3)
  Part 1-4: Lightweight aggregate concrete with closed structure (EN 1992-1-4)
  Part 1-5: Structures with unbonded and external prestressing tendons (EN 1992-1-5)
  Part 1-6: Plain concrete structures (EN 1992-1-6)
  Part 2: Reinforced and prestressed concrete bridges (EN 1992-2)
  Part 3: Liquid retaining and containing structures (EN 1992-3)
  Part 4: Design of fastenings for use in concrete (EN 1992-4)
Eurocode 3: Design of steel structures (EN 1993)
  Part 1-1: General rules and rules for buildings (EN 1993-1-1)
  Part 1-2: General rules - Structural fire design (EN 1993-1-2)
  Part 1-3: General rules - Supplementary rules for cold-formed members and sheeting (EN 1993-1-3)
  Part 1-4: General rules - Supplementary rules for stainless steels (EN 1993-1-4)
  Part 1-5: Plated structural elements (EN 1993-1-5)
  Part 1-6: Strength and Stability of Shell Structures (EN 1993-1-6)
  Part 1-7: General Rules - Supplementary rules for planar plated structural elements with out of plane loading (EN 1993-1-7)
  Part 1-8: Design of joints (EN 1993-1-8)
  Part 1-9: Fatigue (EN 1993-1-9)
  Part 1-10: Material Toughness and through-thickness properties (EN 1993-1-10)
  Part 1-11: Design of Structures with tension components (EN 1993-1-11)
  Part 1-12: High Strength steels (EN 1993-1-12)
  Part 2: Steel Bridges (EN 1993-2)
  Part 3-1: Towers, masts and chimneys (EN 1993-3-1)
  Part 3-2: Towers, masts and chimneys - Chimneys (EN 1993-3-2)
  Part 4-1: Silos (EN 1993-4-1)
  Part 4-2: Tanks (EN 1993-4-2)
  Part 4-3: Pipelines (EN 1993-4-3)
  Part 5: Piling (EN 1993-5)
  Part 6: Crane supporting structures (EN 1993-6)
Eurocode 4: Design of composite steel and concrete structures (EN 1994)
  Part 1-1: General rules and rules for buildings (EN 1994-1-1)
  Part 1-2: Structural fire design (EN 1994-1-2)
  Part 2: General rules and rules for bridges (EN 1994-2)
Eurocode 5: Design of timber structures (EN 1995)
  Part 1-1: General – Common rules and rules for buildings (EN 1995-1-1)
  Part 1-2: General – Structural fire design (EN 1995-1-2)
  Part 2: Bridges (EN 1995-2)
Eurocode 6: Design of masonry structures (EN 1996)
  Part 1-1: General – Rules for reinforced and unreinforced masonry structures (EN 1996-1-1)
  Part 1-2: General rules – Structural fire design (EN 1996-1-2)
  Part 2: Design, selection of materials and execution of masonry (EN 1996-2)
  Part 3: Simplified calculation methods for unreinforced masonry structures (EN 1996-3)
Eurocode 7: Geotechnical design (EN 1997)
  Part 1: General rules (EN 1997-1)
  Part 2: Ground investigation and testing (EN 1997-2)
  Part 3: Design assisted by field testing (EN 1997-3)
Eurocode 8: Design of structures for earthquake resistance (EN 1998)
  Part 1: General rules, seismic actions and rules for buildings (EN 1998-1)
  Part 2: Bridges (EN 1998-2)
  Part 3: Assessment and retrofitting of buildings (EN 1998-3)
  Part 4: Silos, tanks and pipelines (EN 1998-4)
  Part 5: Foundations, retaining structures and geotechnical aspects (EN 1998-5)
  Part 6: Towers, masts and chimneys (EN 1998-6)
Eurocode 9: Design of aluminium structures (EN 1999)
  Part 1-1: General structural rules (EN 1999-1-1)
  Part 1-2: Structural fire design (EN 1999-1-2)
  Part 1-3: Structures susceptible to fatigue (EN 1999-1-3)
  Part 1-4: Cold-formed structural sheeting (EN 1999-1-4)
  Part 1-5: Shell structures (EN 1999-1-5)
Each of the codes (except EN 1990) is divided into a number of Parts covering specific aspects of the subject. In total there are 58 EN Eurocode parts distributed among the ten Eurocodes (EN 1990–1999). All of the EN Eurocodes relating to materials have a Part 1-1, which covers the design of buildings and other civil engineering structures, and a Part 1-2 for fire design. The codes for concrete, steel, composite steel and concrete, and timber structures and earthquake resistance have a Part 2 covering the design of bridges. These Parts 2 should be used in combination with the appropriate general Parts (Parts 1). See also Geotechnical Engineering Limit state design (Load and Resistance Factor Design) List of EN standards Structural Engineering Structural robustness Previous national standards BS 5950: British Standard on steel design, replaced by Eurocode 3 in March 2010. BS 8110: British Standard on concrete design, replaced by Eurocode 2 in March 2010. BS 6399: British Standard on loading for buildings, replaced by Eurocode 1 in March 2010. References External links Eurocodes: Building the Future - European Commission Eurocodes available in PDF and HTML format, without national annexes 'National Annexes & Eurocodes', European standards institutes and links to download national annexes. Building codes Civil engineering EN standards
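The combination rule described above (a material Part 1-1 for buildings or a Part 2 for bridges, paired with the corresponding Part 1-2 where fire design is required) lends itself to a simple lookup. The sketch below is purely illustrative: the mapping is hand-coded from the list above, covers only the simplest cases, and is not an official index of the standards.

```python
# Illustrative lookup of which Eurocode parts are typically combined for a
# simple design task. The mapping is a hand-made summary of the list above,
# not an official index of the standards.
MATERIAL_CODES = {
    "concrete": "EN 1992",
    "steel": "EN 1993",
    "composite": "EN 1994",
    "timber": "EN 1995",
    "masonry": "EN 1996",
    "aluminium": "EN 1999",
}

def parts_for_design(material: str, bridge: bool = False, fire: bool = False) -> list[str]:
    """Return the Eurocode parts usually combined for a basic design case."""
    code = MATERIAL_CODES[material]
    parts = ["EN 1990", "EN 1991-1-1"]            # basis of design + imposed loads
    parts.append(f"{code}-2" if bridge else f"{code}-1-1")
    if fire:
        parts.append(f"{code}-1-2")               # structural fire design part
    return parts

print(parts_for_design("steel", bridge=True))     # ['EN 1990', 'EN 1991-1-1', 'EN 1993-2']
print(parts_for_design("concrete", fire=True))    # ['EN 1990', 'EN 1991-1-1', 'EN 1992-1-1', 'EN 1992-1-2']
```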
Eurocodes
[ "Engineering" ]
2,086
[ "Construction", "Civil engineering", "Building codes", "Building engineering" ]
2,944,872
https://en.wikipedia.org/wiki/Lipogenesis
In biochemistry, lipogenesis is the conversion of fatty acids and glycerol into fats, or a metabolic process through which acetyl-CoA is converted to triglyceride for storage in fat. Lipogenesis encompasses both fatty acid and triglyceride synthesis, with the latter being the process by which fatty acids are esterified to glycerol before being packaged into very-low-density lipoprotein (VLDL). Fatty acids are produced in the cytoplasm of cells by repeatedly adding two-carbon units to acetyl-CoA. Triacylglycerol synthesis, on the other hand, occurs in the endoplasmic reticulum membrane of cells by bonding three fatty acid molecules to a glycerol molecule. Both processes take place mainly in liver and adipose tissue. Nevertheless, lipogenesis also occurs to some extent in other tissues such as the gut and kidney. A review on lipogenesis in the brain was published in 2008 by Lopez and Vidal-Puig. After being packaged into VLDL in the liver, the resulting lipoprotein is secreted directly into the blood for delivery to peripheral tissues. Fatty acid synthesis Fatty acid synthesis starts with acetyl-CoA and builds up by the addition of two-carbon units. Fatty acid synthesis occurs in the cytoplasm of cells, while oxidative degradation occurs in the mitochondria. Many of the enzymes for fatty acid synthesis are organized into a multienzyme complex called fatty acid synthase. The major sites of fatty acid synthesis are adipose tissue and the liver. Triglyceride synthesis Triglycerides are synthesized by esterification of fatty acids to glycerol. Fatty acid esterification takes place in the endoplasmic reticulum of cells by metabolic pathways in which acyl groups in fatty acyl-CoAs are transferred to the hydroxyl groups of glycerol-3-phosphate and diacylglycerol. Three fatty acid chains are bonded to each glycerol molecule. Each of the three -OH groups of the glycerol reacts with the carboxyl end of a fatty acid chain (-COOH). Water is eliminated and the remaining carbon atoms are linked by an -O- bond through dehydration synthesis. Both the adipose tissue and the liver can synthesize triglycerides. Those produced by the liver are secreted from it in the form of very-low-density lipoproteins (VLDL). VLDL particles are secreted directly into blood, where they function to deliver the endogenously derived lipids to peripheral tissues. Hormonal regulation Insulin is a peptide hormone that is critical for managing the body's metabolism. Insulin is released by the pancreas when blood sugar levels rise, and it has many effects that broadly promote the absorption and storage of sugars, including lipogenesis. Insulin stimulates lipogenesis primarily by activating two enzymatic pathways. Pyruvate dehydrogenase (PDH) converts pyruvate into acetyl-CoA. Acetyl-CoA carboxylase (ACC) converts acetyl-CoA produced by PDH into malonyl-CoA. Malonyl-CoA provides the two-carbon building blocks that are used to create larger fatty acids. Insulin stimulation of lipogenesis also occurs through the promotion of glucose uptake by adipose tissue. The increase in the uptake of glucose can occur through the use of glucose transporters directed to the plasma membrane or through the activation of lipogenic and glycolytic enzymes via covalent modification. The hormone has also been found to have long-term effects on lipogenic gene expression. It is hypothesized that this effect occurs through the transcription factor SREBP-1, where the association of insulin and SREBP-1 leads to the gene expression of glucokinase.
The interaction of glucose and lipogenic gene expression is assumed to be managed by the increasing concentration of an unknown glucose metabolite produced through the activity of glucokinase. Another hormone that may affect lipogenesis through the SREBP-1 pathway is leptin. It is involved in the process by limiting fat storage through inhibition of glucose intake and interference with other adipose metabolic pathways. The inhibition of lipogenesis occurs through the downregulation of fatty acid and triglyceride gene expression. Through the promotion of fatty acid oxidation and the inhibition of lipogenesis, leptin was found to control the release of stored glucose from adipose tissues. Growth hormone (GH) also prevents the stimulation of lipogenesis in adipose cells; it results in loss of fat but stimulates muscle gain. One proposed mechanism is that growth hormone affects insulin signaling, thereby decreasing insulin sensitivity and in turn downregulating fatty acid synthase expression. Another proposed mechanism suggests that growth hormone may act through phosphorylation of STAT5A and STAT5B, transcription factors that are part of the Signal Transducer and Activator of Transcription (STAT) family. There is also evidence suggesting that acylation stimulating protein (ASP) promotes the accumulation of triglycerides in adipose cells. This accumulation of triglycerides occurs through increased triglyceride synthesis. PDH dephosphorylation Insulin stimulates the activity of pyruvate dehydrogenase phosphatase. The phosphatase removes the phosphate from pyruvate dehydrogenase, activating it and allowing conversion of pyruvate to acetyl-CoA. This mechanism leads to an increased rate of catalysis by this enzyme, and so increases the level of acetyl-CoA. Increased levels of acetyl-CoA will increase the flux not only through the fat synthesis pathway but also through the citric acid cycle. Acetyl-CoA carboxylase Insulin affects ACC in a similar way to PDH. Insulin leads to ACC dephosphorylation via activation of the phosphatase PP2A, which activates the enzyme. Glucagon has an antagonistic effect and increases phosphorylation and deactivation, thereby inhibiting ACC and slowing fat synthesis. Affecting ACC affects the rate of acetyl-CoA conversion to malonyl-CoA. An increased malonyl-CoA level pushes the equilibrium toward increased production of fatty acids through biosynthesis. Long-chain fatty acids are negative allosteric regulators of ACC, so when the cell has sufficient long-chain fatty acids, they will eventually inhibit ACC activity and stop fatty acid synthesis. The AMP and ATP concentrations of the cell act as a measure of its ATP needs. When ATP is depleted, there is a rise in 5′-AMP. This rise activates AMP-activated protein kinase, which phosphorylates ACC and thereby inhibits fat synthesis. This is a useful way to ensure that glucose is not diverted down a storage pathway at times when energy levels are low. ACC is also activated by citrate. When there is abundant acetyl-CoA in the cell cytoplasm, fat synthesis proceeds at an appropriate rate. Transcriptional regulation SREBPs have been found to play a role in the nutritional and hormonal effects on lipogenic gene expression. Overexpression of SREBP-1a or SREBP-1c in mouse liver cells results in the build-up of hepatic triglycerides and higher expression levels of lipogenic genes.
Lipogenic gene expression in the liver in response to glucose and insulin is mediated by SREBP-1. The effect of glucose and insulin on the transcription factor can occur through various pathways; there is evidence suggesting that insulin promotes SREBP-1 mRNA expression in adipocytes and hepatocytes. It has also been suggested that the hormone increases transcriptional activation by SREBP-1 through MAP-kinase-dependent phosphorylation, regardless of changes in the mRNA levels. Along with insulin, glucose has also been shown to promote SREBP-1 activity and mRNA expression. References Lipid metabolism
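The esterification step described in the Triglyceride synthesis section above can be summarized by the overall condensation reaction below. It is written in the generic textbook form (R denotes a fatty-acid hydrocarbon chain); as noted above, in cells the acyl groups are actually transferred from fatty acyl-CoAs via glycerol-3-phosphate rather than from free fatty acids.

```latex
% Net esterification of glycerol with three fatty acids (dehydration synthesis):
\mathrm{C_3H_5(OH)_3} \;+\; 3\,\mathrm{R{-}COOH}
\;\longrightarrow\;
\mathrm{C_3H_5(OOC{-}R)_3} \;+\; 3\,\mathrm{H_2O}
```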
Lipogenesis
[ "Chemistry" ]
1,703
[ "Lipid biochemistry", "Lipid metabolism", "Metabolism" ]
2,945,005
https://en.wikipedia.org/wiki/Network-to-network%20interface
In telecommunications, a network-to-network interface (NNI) is an interface that specifies signaling and management functions between two networks. An NNI circuit can be used for interconnection of signalling (e.g., SS7), Internet Protocol (IP) (e.g., MPLS) or ATM networks. In networks based on MPLS or GMPLS, an NNI is used for the interconnection of core provider routers (class 4 or higher). In the case of GMPLS, the type of interconnection can vary across back-to-back, eBGP or mixed NNI connection scenarios, depending on the type of VRF exchange used for interconnection. In the case of back-to-back VRF interconnection, it is necessary to create VLANs and, subsequently, sub-interfaces (VLAN headers for Ethernet and DLCI headers for Frame Relay network packets) on each interface used for the NNI circuit. In the case of eBGP NNI interconnection, the IP routers exchange VRF records dynamically, without VLAN creation. An NNI can also be used for the interconnection of two VoIP nodes. In mixed or full-mesh scenarios, other NNI types are possible. NNI interconnection is encapsulation independent, but Ethernet and Frame Relay are commonly used. See also User–network interface Asynchronous Transfer Mode References Network management
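The back-to-back VRF arrangement described above amounts to allocating one VLAN (or Frame Relay DLCI) and one sub-interface per VRF on each side of the NNI link. The sketch below only illustrates that bookkeeping; the VRF names and VLAN numbers are invented for the example, and the output is a generic, vendor-neutral description rather than actual router configuration.

```python
# Illustrative bookkeeping for a back-to-back VRF NNI: one VLAN-tagged
# sub-interface per VRF on the shared physical link. All names and IDs here
# are hypothetical examples, not taken from any real deployment.
from dataclasses import dataclass

@dataclass
class NniSubInterface:
    vrf: str
    vlan_id: int
    parent_interface: str

    @property
    def name(self) -> str:
        # Common convention: <physical-interface>.<vlan-id>
        return f"{self.parent_interface}.{self.vlan_id}"

def build_nni(parent_interface: str, vrf_to_vlan: dict[str, int]) -> list[NniSubInterface]:
    """Create one sub-interface per VRF carried across the NNI."""
    return [NniSubInterface(vrf, vlan, parent_interface)
            for vrf, vlan in sorted(vrf_to_vlan.items())]

for sub in build_nni("GigabitEthernet0/1", {"CUSTOMER_A": 101, "CUSTOMER_B": 102}):
    print(f"{sub.name}: VLAN {sub.vlan_id} -> VRF {sub.vrf}")
```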
Network-to-network interface
[ "Technology", "Engineering" ]
308
[ "Computing stubs", "Computer networks engineering", "Network management", "Computer network stubs" ]
2,945,180
https://en.wikipedia.org/wiki/Superplasticizer
Superplasticizers (SPs), also known as high range water reducers, are additives used for making high-strength concrete or for placing self-compacting concrete. Plasticizers are chemical compounds enabling the production of concrete with approximately 15% less water content. Superplasticizers allow a reduction in water content of 30% or more. These additives are employed at the level of a few weight percent. Plasticizers and superplasticizers also retard the setting and hardening of concrete. According to their dispersing functionality and mode of action, two classes of superplasticizers are distinguished: Ionic interactions (electrostatic repulsion): lignosulfonates (the first generation of early water reducers) and sulfonated synthetic polymers (naphthalene or melamine formaldehyde condensates) (second generation); and Steric effects: polycarboxylate-ether (PCE) synthetic polymers bearing lateral chains (third generation). Superplasticizers are used when well-dispersed cement particle suspensions are required to improve the flow characteristics (rheology) of concrete. Their addition allows the water-to-cement ratio of concrete or mortar to be decreased without negatively affecting the workability of the mixture. This enables the production of self-consolidating concrete and high-performance concrete. The water–cement ratio is the main factor determining concrete strength and durability. Superplasticizers greatly improve the fluidity and rheology of fresh concrete. Concrete strength increases when the water-to-cement ratio decreases, because avoiding water added in excess solely to maintain the workability of fresh concrete results in lower porosity of the hardened concrete, and therefore better resistance to compression. The addition of SP in the truck during transit is a fairly modern development within the industry. Admixtures added in transit through automated slump management systems allow the slump of fresh concrete to be maintained until discharge, without reducing concrete quality. Working mechanism Traditional plasticizers are lignosulphonates, used as their sodium salts. Superplasticizers are synthetic polymers. Compounds used as superplasticizers include (1) sulfonated naphthalene formaldehyde condensate, sulfonated melamine formaldehyde condensate and acetone formaldehyde condensate, and (2) polycarboxylate ethers. Cross-linked melamine- or naphthalene-sulfonates, referred to as PMS (polymelamine sulfonate) and PNS (polynaphthalene sulfonate), respectively, are illustrative. They are prepared by cross-linking of the sulfonated monomers using formaldehyde or by sulfonating the corresponding cross-linked polymer. The polymers used as plasticizers exhibit surfactant properties. They are often ionomers bearing negatively charged groups (sulfonates, carboxylates, or phosphonates). They function as dispersants to minimize particle segregation in fresh concrete (separation of the cement slurry and water from the coarse and fine aggregates, such as gravel and sand respectively). The negatively charged polymer backbone adsorbs onto the positively charged colloidal particles of unreacted cement, especially onto the tricalcium aluminate (C3A) mineral phase of cement.
Melaminesulfonate (PMS) and naphthalenesulfonate (PNS) superplasticizers act mainly through electrostatic interactions with cement particles, favoring their electrostatic repulsion, while polycarboxylate-ether (PCE) superplasticizers adsorb onto and coat large agglomerates of cement particles and, thanks to their lateral chains, sterically favor the dispersion of large cement agglomerates into smaller ones. However, as their working mechanisms are not fully understood, cement–superplasticizer incompatibilities can be observed in certain cases. Common superplasticizer types include sodium lignosulfonate, sulfonated naphthalene formaldehyde, and polycarboxylate superplasticizers. Polycarboxylate superplasticizer (PCE), the third generation of high-performance superplasticizer, follows the development of ordinary plasticizers and earlier superplasticizers. It significantly reduces water content while enhancing the workability, strength, and durability of concrete, and its overall performance has made it the dominant choice among modern concrete admixtures. See also Particle aggregation (inverse process of) Peptization Plasticizer Polycarboxylates Rheology Surfactant Suspension (chemistry) References Further reading External links Cement Concrete Chemistry Colloidal chemistry Heterogeneous chemical mixtures Concrete admixtures
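The link between water reduction and strength described earlier can be illustrated with Abrams' empirical law, a standard rule of thumb relating compressive strength to the water-cement ratio; it is not taken from this article, and the coefficients used below are illustrative placeholders rather than calibrated values for any real mix.

```python
# Illustration of why lowering the water-cement ratio raises strength, using
# Abrams' empirical law f = A / B**(w/c). Coefficients A and B are illustrative
# placeholders only; real values depend on cement, aggregates, and curing.
A = 96.0   # MPa, illustrative
B = 8.2    # dimensionless, illustrative

def strength(w_over_c: float) -> float:
    return A / B ** w_over_c

baseline_wc = 0.55                   # plain mix at the workability required for placement
plasticized_wc = 0.55 * (1 - 0.15)   # ~15% water reduction with a plasticizer
superplast_wc = 0.55 * (1 - 0.30)    # ~30% water reduction with a superplasticizer

for label, wc in [("plain", baseline_wc),
                  ("plasticizer", plasticized_wc),
                  ("superplasticizer", superplast_wc)]:
    print(f"{label:>16}: w/c = {wc:.2f}, estimated strength ~ {strength(wc):.0f} MPa")
```

With these placeholder coefficients the estimated strength rises from roughly 30 MPa for the plain mix to roughly 43 MPa with a 30% water reduction, at the same workability.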
Superplasticizer
[ "Chemistry", "Engineering" ]
999
[ "Structural engineering", "Colloidal chemistry", "Surface science", "Colloids", "Chemical mixtures", "Concrete", "Heterogeneous chemical mixtures" ]
2,945,390
https://en.wikipedia.org/wiki/Scalar%20boson
A scalar boson is a boson whose spin equals zero. A boson is a particle whose wave function is symmetric under particle exchange and therefore follows Bose–Einstein statistics. The spin–statistics theorem implies that all bosons have an integer-valued spin. Scalar bosons are the subset of bosons with zero-valued spin. The name scalar boson arises from quantum field theory, which demands that fields of spin-zero particles transform like a scalar under Lorentz transformation (i.e. are Lorentz invariant). A pseudoscalar boson is a scalar boson that has odd parity, whereas "regular" scalar bosons have even parity. Examples Scalar The only fundamental scalar boson in the Standard Model of particle physics is the Higgs boson, the existence of which was confirmed on 14 March 2013 at the Large Hadron Collider by CMS and ATLAS. As a result of this confirmation, the 2013 Nobel Prize in Physics was awarded to Peter Higgs and François Englert. Various known composite particles are scalar bosons, e.g. the alpha particle and scalar mesons. The φ4-theory or quartic interaction is a popular "toy model" quantum field theory that uses scalar bosonic fields, used in many introductory quantum textbooks to introduce basic concepts in field theory. Pseudoscalar There are no fundamental pseudoscalars in the Standard Model, but there are pseudoscalar mesons, like the pion. See also Scalar field theory Klein–Gordon equation Vector boson Higgs boson References Bosons Quantum field theory
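As a concrete example of a scalar bosonic field theory, the quartic interaction mentioned above is usually written with the following Lagrangian density. This is the standard textbook form (the normalization of the coupling and the metric signature vary between texts), not a formula specific to the sources cited for this article.

```latex
% \phi^4 (quartic-interaction) Lagrangian density for a real scalar field \phi
% with mass m and dimensionless coupling \lambda (metric signature +---):
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \phi\,\partial^\mu \phi
\;-\; \tfrac{1}{2}\, m^2 \phi^2
\;-\; \frac{\lambda}{4!}\,\phi^4
```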
Scalar boson
[ "Physics" ]
333
[ "Quantum field theory", "Matter", "Quantum mechanics", "Bosons", "Subatomic particles" ]
2,946,366
https://en.wikipedia.org/wiki/Saturated%20measure
In mathematics, a measure is said to be saturated if every locally measurable set is also measurable. A set E, not necessarily measurable, is said to be locally measurable if for every measurable set A of finite measure, E ∩ A is measurable. σ-finite measures and measures arising as the restriction of outer measures are saturated. References Measures (measure theory)
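A short argument for the σ-finite case stated above, written out here for concreteness using standard measure-theory reasoning rather than quoted from a particular reference:

```latex
% Why a \sigma-finite measure is saturated:
% write X = \bigcup_{n=1}^{\infty} X_n with each \mu(X_n) < \infty.
% If E is locally measurable, then each E \cap X_n is measurable
% (since X_n is a measurable set of finite measure), and therefore
E \;=\; E \cap X \;=\; \bigcup_{n=1}^{\infty} \bigl(E \cap X_n\bigr)
% is a countable union of measurable sets, hence measurable.
```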
Saturated measure
[ "Physics", "Mathematics" ]
71
[ "Mathematical analysis", "Physical quantities", "Mathematical analysis stubs", "Measures (measure theory)", "Quantity", "Size" ]
2,948,178
https://en.wikipedia.org/wiki/Biomedical%20text%20mining
Biomedical text mining (including biomedical natural language processing or BioNLP) refers to the methods and study of how text mining may be applied to texts and literature of the biomedical domain. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies in this field have been applied to the biomedical literature available through services such as PubMed. In recent years, the scientific literature has shifted to electronic publishing but the volume of information available can be overwhelming. This revolution of publishing has caused a high demand for text mining techniques. Text mining offers information retrieval (IR) and entity recognition (ER). IR allows the retrieval of relevant papers according to the topic of interest, e.g. through PubMed. ER is practiced when certain biological terms are recognized (e.g. proteins or genes) for further processing. Considerations Applying text mining approaches to biomedical text requires specific considerations common to the domain. Availability of annotated text data Large annotated corpora used in the development and training of general purpose text mining methods (e.g., sets of movie dialogue, product reviews, or Wikipedia article text) are not specific for biomedical language. While they may provide evidence of general text properties such as parts of speech, they rarely contain concepts of interest to biologists or clinicians. Development of new methods to identify features specific to biomedical documents therefore requires assembly of specialized corpora. Resources designed to aid in building new biomedical text mining methods have been developed through the Informatics for Integrating Biology and the Bedside (i2b2) challenges and biomedical informatics researchers. Text mining researchers frequently combine these corpora with the controlled vocabularies and ontologies available through the National Library of Medicine's Unified Medical Language System (UMLS) and Medical Subject Headings (MeSH). Machine learning-based methods often require very large data sets as training data to build useful models. Manual annotation of large text corpora is not realistically possible. Training data may therefore be products of weak supervision or purely statistical methods. Data structure variation Like other text documents, biomedical documents contain unstructured data. Research publications follow different formats, contain different types of information, and are interspersed with figures, tables, and other non-text content. Both unstructured text and semi-structured document elements, such as tables, may contain important information that should be text mined. Clinical documents may vary in structure and language between departments and locations. Other types of biomedical text, such as drug labels, may follow general structural guidelines but lack further details. Uncertainty Biomedical literature contains statements about observations that may not be statements of fact. This text may express uncertainty or skepticism about claims. Without specific adaptations, text mining approaches designed to identify claims within text may mis-characterize these "hedged" statements as facts. Supporting clinical needs Biomedical text mining applications developed for clinical use should ideally reflect the needs and demands of clinicians. 
This is a concern in environments where clinical decision support is expected to be informative and accurate. A comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases is presented in. Interoperability with clinical systems New text mining systems must work with existing standards, electronic medical records, and databases. Methods for interfacing with clinical systems such as LOINC have been developed but require extensive organizational effort to implement and maintain. Patient privacy Text mining systems operating with private medical data must respect its security and ensure it is rendered anonymous where appropriate. Processes Specific sub tasks are of particular concern when processing biomedical text. Named entity recognition Developments in biomedical text mining have incorporated identification of biological entities with named entity recognition, or NER. Names and identifiers for biomolecules such as proteins and genes, chemical compounds and drugs, and disease names have all been used as entities. Most entity recognition methods are supported by pre-defined linguistic features or vocabularies, though methods incorporating deep learning and word embeddings have also been successful at biomedical NER. Document classification and clustering Biomedical documents may be classified or clustered based on their contents and topics. In classification, document categories are specified manually, while in clustering, documents form algorithm-dependent, distinct groups. These two tasks are representative of supervised and unsupervised methods, respectively, yet the goal of both is to produce subsets of documents based on their distinguishing features. Methods for biomedical document clustering have relied upon k-means clustering. Relationship discovery Biomedical documents describe connections between concepts, whether they are interactions between biomolecules, events occurring subsequently over time (i.e., temporal relationships), or causal relationships. Text mining methods may perform relation discovery to identify these connections, often in concert with named entity recognition. Hedge cue detection The challenge of identifying uncertain or "hedged" statements has been addressed through hedge cue detection in biomedical literature. Claim detection Multiple researchers have developed methods to identify specific scientific claims from literature. In practice, this process involves both isolating phrases and sentences denoting the core arguments made by the authors of a document (a process known as argument mining, employing tools used in fields such as political science) and comparing claims to find potential contradictions between them. Information extraction Information extraction, or IE, is the process of automatically identifying structured information from unstructured or partially structured text. IE processes can involve several or all of the above activities, including named entity recognition, relationship discovery, and document classification, with the overall goal of translating text to a more structured form, such as the contents of a template or knowledge base. In the biomedical domain, IE is used to generate links between concepts described in text, such as gene A inhibits gene B and gene C is involved in disease G. 
Biomedical knowledge bases containing this type of information are generally products of extensive manual curation, so replacement of manual efforts with automated methods remains a compelling area of research. Information retrieval and question answering Biomedical text mining supports applications for identifying documents and concepts matching search queries. Search engines such as PubMed search allow users to query literature databases with words or phrases present in document contents, metadata, or indices such as MeSH. Similar approaches may be used for medical literature retrieval. For more fine-grained results, some applications permit users to search with natural language queries and identify specific biomedical relationships. On 16 March 2020, the National Library of Medicine and others launched the COVID-19 Open Research Dataset (CORD-19) to enable text mining of the current literature on the novel virus. The dataset is hosted by the Semantic Scholar project of the Allen Institute for AI. Other participants include Google, Microsoft Research, the Center for Security and Emerging Technology, and the Chan Zuckerberg Initiative. Resources Corpora The following table lists a selection of biomedical text corpora and their contents. These items include annotated corpora, sources of biomedical research literature, and resources frequently used as vocabulary and/or ontology references, such as MeSH. Items marked "Yes" under "Freely Available" can be downloaded from a publicly accessible location. Word embeddings Several groups have developed sets of biomedical vocabulary mapped to vectors of real numbers, known as word vectors or word embeddings. Sources of pre-trained embeddings specific for biomedical vocabulary are listed in the table below. The majority are results of the word2vec model developed by Mikolov et al., or variants of word2vec. Applications Text mining applications in the biomedical field include computational approaches to assist with studies in protein docking, protein interactions, and protein-disease associations. Text mining techniques have several advantages over traditional manual curation for identifying associations. Text mining algorithms can identify and extract information from a vast amount of literature more efficiently than manual curation. This includes the integration of data from different sources, including literature, databases, and experimental results. These algorithms have transformed the process of identifying and prioritizing novel genes and gene-disease associations that had previously been overlooked. These methods provide a foundation for systematic searches of overlooked scientific and biomedical literature that may contain significant associations between research findings. Combining information can spur new discoveries and hypotheses, especially when datasets are integrated. The quality of a database is as important as its size. Promising text mining methods such as iProLINK (integrated Protein Literature Information and Knowledge) have been developed to curate data sources that can aid text mining research in the areas of bibliography mapping, annotation extraction, protein named entity recognition, and protein ontology development. Curated databases such as UniProt can accelerate the accessibility of targeted information not only for genetic sequences, but also for literature and phylogeny.
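As a minimal sketch of the document-clustering approach mentioned in the Processes section above (TF-IDF features plus k-means), assuming scikit-learn is available and using a handful of toy abstracts in place of a real corpus such as PubMed records:

```python
# Minimal sketch of k-means clustering of biomedical abstracts over TF-IDF
# features. The toy "abstracts" and the choice of two clusters are purely
# illustrative; a real pipeline would use a full corpus and tuned parameters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "BRCA1 mutations increase hereditary breast cancer risk",
    "Tumor suppressor genes and breast cancer susceptibility",
    "Beta-amyloid plaques accumulate in Alzheimer disease brains",
    "Tau pathology and cognitive decline in Alzheimer disease",
]

vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(abstracts)   # sparse TF-IDF matrix

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
for text, label in zip(abstracts, kmeans.labels_):
    print(label, "-", text)
```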
Gene cluster identification Methods have been developed for determining the association of gene clusters obtained by microarray experiments with the biological context provided by the corresponding literature. Protein interactions Automatic extraction of protein interactions and associations of proteins to functional concepts (e.g. gene ontology terms) has been explored. The search engine PIE was developed to identify and return protein-protein interaction mentions from MEDLINE-indexed articles. The extraction of kinetic parameters from text, and of the subcellular location of proteins, has also been addressed by information extraction and text mining technology. Gene-disease associations Computational gene prioritization is an essential step in understanding the genetic basis of diseases, particularly within genetic linkage analysis. Text mining and other computational tools extract relevant information, including gene-disease associations, from numerous data sources, and then apply ranking algorithms to prioritize the genes based on their relevance to a specific disease. Text mining and gene prioritization allow researchers to focus their efforts on the most promising candidates for further research. Computational tools for gene prioritization continue to be developed and analyzed. One group studied the performance of various text-mining techniques for disease gene prioritization. They investigated different domain vocabularies, text representation schemes, and ranking algorithms in order to establish a benchmark for identifying disease-causing genes. Gene-trait associations An agricultural genomics group identified genes related to bovine reproductive traits using text mining, among other approaches. Applications of phrase mining to disease associations A text mining study assembled a collection of 709 core extracellular matrix proteins and associated proteins based on two databases: MatrixDB (matrixdb.univ-lyon1.fr) and UniProt. This set of proteins had a manageable size and a rich body of associated information, making it suitable for the application of text mining tools. The researchers conducted phrase-mining analysis to cross-examine individual extracellular matrix proteins across the biomedical literature concerned with six categories of cardiovascular diseases. They used a phrase-mining pipeline, Context-aware Semantic Online Analytical Processing (CaseOLAP), and semantically scored all 709 proteins according to their Integrity, Popularity, and Distinctiveness using the CaseOLAP pipeline. The text mining study validated existing relationships and informed previously unrecognized biological processes in cardiovascular pathophysiology. Software tools Search engines Search engines designed to retrieve biomedical literature relevant to a user-provided query frequently rely upon text mining approaches. Publicly available tools specific for research literature include PubMed search, Europe PubMed Central search, GeneView, and APSE. Similarly, search engines and indexing systems specific for biomedical data have been developed, including DataMed and OmicsDI. Some search engines, such as Essie, OncoSearch, PubGene, and GoPubMed, were previously public but have since been discontinued, rendered obsolete, or integrated into commercial products. Medical record analysis systems Electronic medical records (EMRs) and electronic health records (EHRs) are collected by clinical staff in the course of diagnosis and treatment.
Though these records generally include structured components with predictable formats and data types, the remainder of the reports are often free-text and difficult to search, leading to challenges with patient care. Numerous complete systems and tools have been developed to analyse these free-text portions. The MedLEE system was originally developed for analysis of chest radiology reports but later extended to other report topics. The clinical Text Analysis and Knowledge Extraction System, or cTAKES, annotates clinical text using a dictionary of concepts. The CLAMP system offers similar functionality with a user-friendly interface. Frameworks Computational frameworks have been developed to rapidly build tools for biomedical text mining tasks. SwellShark is a framework for biomedical NER that requires no human-labeled data but does make use of resources for weak supervision (e.g., UMLS semantic types). The SparkText framework uses Apache Spark data streaming, a NoSQL database, and basic machine learning methods to build predictive models from scientific articles. APIs Some biomedical text mining and natural language processing tools are available through application programming interfaces, or APIs. NOBLE Coder performs concept recognition through an API. Conferences The following academic conferences and workshops host discussions and presentations in biomedical text mining advances. Most publish proceedings. Journals A variety of academic journals publishing manuscripts on biology and medicine include topics in text mining and natural language processing software. Some journals, including the Journal of the American Medical Informatics Association (JAMIA) and the Journal of Biomedical Informatics are popular publications for these topics. References Further reading Biomedical Literature Mining Publications (BLIMP) : A comprehensive and regularly updated index of publications on (bio)medical text mining External links Bio-NLP resources, systems and application database collection The BioNLP mailing list archives Corpora for biomedical text mining The BioCreative evaluations of biomedical text mining technologies Directory of people involved in BioNLP Data mining Bioinformatics Text mining Clinical data management
Biomedical text mining
[ "Engineering", "Biology" ]
2,784
[ "Bioinformatics", "Biological engineering" ]
2,948,381
https://en.wikipedia.org/wiki/BioCreative
BioCreAtIvE (A critical assessment of text mining methods in molecular biology) is a community-wide effort for evaluating information extraction and text mining developments in the biological domain. It was preceded by the Knowledge Discovery and Data Mining (KDD) Challenge Cup for detection of gene mentions. Community Challenges First edition (2004-2005) Three main tasks were posed at the first BioCreAtIvE challenge: the entity extraction task, the gene name normalization task, and the functional annotation of gene products task. The data sets produced by this contest serve as a gold-standard training and test set to evaluate and train Bio-NER tools and annotation extraction tools. Second edition (2006-2007) The second BioCreAtIvE challenge (2006-2007) also had three tasks: detection of gene mentions, extraction of unique identifiers for genes, and extraction of information related to physical protein-protein interactions. It drew the participation of 44 teams from 13 countries. Third edition (2011-2012) The third edition of BioCreative included for the first time the InterActive Task (IAT), designed to evaluate the practical usability of text mining tools in real-world biocuration tasks. Fifth edition (2016) BioCreative V had 5 different tracks, including an interactive task (IAT) for usability of text mining systems and a track using the BioC format for curating information for BioGRID. See also Biocuration References External links BioCreAtIve, 2007-2015 BioCreAtIve 2, 2006-2007 First BioCreAtIvE workshop, 2004 BMC Bioinformatics special issue: BioCreAtIvE First BioCreAtIvE data download request Bioinformatics Information science
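The BioCreAtIvE gene-mention tasks are scored by comparing predicted mentions against gold-standard annotations. The Python sketch below shows, under simplifying assumptions (exact-match character spans, a single document), how precision, recall, and F1 are typically computed for such a task; the example spans are invented for illustration and are not BioCreAtIvE data.

def precision_recall_f1(predicted, gold):
    """Exact-match evaluation of predicted vs. gold mention spans.

    predicted, gold: sets of (start, end) character offsets.
    """
    tp = len(predicted & gold)  # true positives: spans present in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(12, 17), (40, 45), (88, 93)}        # hypothetical annotated gene mentions
predicted = {(12, 17), (40, 46), (88, 93)}   # one boundary error in the prediction
print(precision_recall_f1(predicted, gold))  # approximately (0.667, 0.667, 0.667)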
BioCreative
[ "Chemistry", "Engineering", "Biology" ]
343
[ "Biological engineering", "Bioinformatics stubs", "Biotechnology stubs", "Biochemistry stubs", "Bioinformatics" ]
2,948,757
https://en.wikipedia.org/wiki/TUNEL%20assay
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) is a method for detecting DNA fragmentation by labeling the 3′-hydroxyl termini in the double-strand DNA breaks generated during apoptosis. Method TUNEL is a method for detecting apoptotic DNA fragmentation, widely used to identify and quantify apoptotic cells, or to detect excessive DNA breakage in individual cells. The assay relies on the use of terminal deoxynucleotidyl transferase (TdT), an enzyme that catalyzes attachment of deoxynucleotides, tagged with a fluorochrome or another marker, to 3'-hydroxyl termini of DNA double strand breaks. It may also label cells whose DNA has been damaged by means other than apoptosis. History The fluorochrome-based TUNEL assay applicable for flow cytometry, combining the detection of DNA strand breaks with the cell cycle-phase position, was originally developed by Gorczyca et al. Concurrently, the avidin-peroxidase labeling assay applicable for light-absorption microscopy was described by Gavrieli et al. Since 1992, the TUNEL assay has become one of the main methods for detecting apoptotic programmed cell death. However, for years there has been a debate about its accuracy, due to problems in the original assay which caused necrotic cells to be inappropriately labeled as apoptotic. The method has subsequently been improved dramatically and, if performed correctly, should only identify cells in the last phase of apoptosis. New methods incorporate dUTPs modified by fluorophores or haptens, including biotin or bromine, which can be detected directly in the case of a fluorescently-modified nucleotide (i.e., fluorescein-dUTP), or indirectly with streptavidin or antibodies, if biotin-dUTP or BrdUTP are used, respectively. The most sensitive of these is the method utilizing incorporation of BrdUTP by TdT followed by immunocytochemical detection of BrdU. References External links Biochemistry detection reactions Laboratory techniques Programmed cell death
TUNEL assay
[ "Chemistry", "Biology" ]
461
[ "Biochemistry detection reactions", "Signal transduction", "Biochemical reactions", "Senescence", "Microbiology techniques", "nan", "Programmed cell death" ]
2,949,409
https://en.wikipedia.org/wiki/Mineral-insulated%20copper-clad%20cable
Mineral-insulated copper-clad cable is a variety of electrical cable made from copper conductors inside a copper sheath, insulated by inorganic magnesium oxide powder. The name is often abbreviated to MICC or MI cable, and colloquially known as pyro (because the original manufacturer and vendor for this product in the UK was a company called Pyrotenax). A similar product sheathed with metals other than copper is called mineral-insulated metal-sheathed (MIMS) cable. Construction MI cable is made by placing copper rods inside a circular copper tube and filling the spaces with dry magnesium oxide powder. The overall assembly is then pressed between rollers to reduce its diameter (and increase its length). Up to seven conductors are often found in an MI cable, with up to 19 available from some manufacturers. Since MI cables use no organic material as insulation (except at the ends), they are more resistant to fires than plastic-insulated cables. MI cables are used in critical fire protection applications such as alarm circuits, fire pumps, and smoke control systems. In process industries handling flammable fluids MI cable is used where small fires would otherwise cause damage to control or power cables. MI cable is also highly resistant to ionizing radiation and so finds applications in instrumentation for nuclear reactors and nuclear physics apparatus. MI cables may be covered with a plastic sheath, coloured for identification purposes. The plastic sheath also provides additional corrosion protection for the copper sheath. The metal tube shields the conductors from electromagnetic interference. The metal sheath also physically protects the conductors, most importantly from accidental contact with other energised conductors. History The first patent for MI cable was issued to the Swiss inventor Arnold Francois Borel in 1896. Initially the insulating mineral was described in the patent application as pulverised glass, silicious stones, or asbestos, in powdered form. Much development ensued by the French company Société Alsacienne de Construction Mécanique. Commercial production began in 1932 and much mineral-insulated cable was used on ships such as the Normandie and oil tankers, and in such critical applications as the Louvre museum. In 1937 a British company Pyrotenax, having purchased patent rights to the product from the French company, began production. During the Second World War much of the company's product was used in military equipment. The company floated on the stock exchange in 1954. Around 1947, the British Cable Makers' Association investigated the option of manufacturing a mineral-insulated cable that would compete with the Pyrotenax product. The manufacturers of the products "Bicalmin" and "Glomin" eventually merged with the Pyrotenax company. The Pyrotenax company introduced an aluminum sheathed version of its product in 1964. MI cable is now manufactured in several countries. Pyrotenax is now a brand name under nVent (formerly known as Pentair Thermal Management). Purpose and use MI cables are used for power and control circuits of critical equipment, such as the following examples: Nuclear reactors Exposure to dangerous gasses Air pressurisation systems for stairwells to enable building egress during a fire Hospital operating rooms Fire alarm systems Emergency power systems Emergency lighting systems Temperature measurement devices; RTDs and thermocouples. 
Critical process valves in the petrochemical industry Public buildings such as theatres, cinemas, hotels Transport hubs (railway stations, airports etc.) Mains supply cables within residential apartment blocks Tunnels and mines Electrical equipment in hazardous areas where flammable gases may be present e.g. oil refineries, petrol stations Areas where corrosive chemicals may be present e.g. factories Building plant rooms Hot areas e.g. power stations, foundries, and close to or even inside industrial furnaces, kilns and ovens MI cable fulfills the passive fire protection function called circuit integrity, which is intended to provide operability of critical electrical circuits during a fire. It is subject to strict listing and approval use and compliance. Heating cable A similar-appearing product is mineral-insulated trace heating cable, in which the conductors are made of a high-resistance alloy. A heating cable is used to protect pipes from freezing or to maintain the temperature of process piping and vessels. An MI resistance heating cable may not be repairable if damaged. Most electric stove and oven heating elements are constructed in a similar manner. Typical specifications Properties and comparison with other wiring systems The construction of MI cable makes it mechanically robust and resistant to impact. Copper sheathing is waterproof and resistant to ultraviolet light and many corrosive elements. MI cable is approved for use in areas with hazardous concentrations of flammable substances, being unlikely to initiate an explosion even during circuit fault conditions. MI cable is smokeless, non-toxic, and will not support combustion. The cable meets and exceeds BS 5839-1, with a fire rating surpassing 950 °C for over three hours under simultaneous mechanical stress and water spray without failure. MI cable is primarily used for high-temperature environments or safety-critical signal and power systems; however, it can additionally be used within a tenanted area, carrying electricity supplied and billed to the landlord. For example, for a communal extract system or antenna booster, it provides a supply cable that cannot easily be 'tapped' into to obtain free energy. The finished cable assembly can be bent to follow the shapes of buildings or bent around obstacles, allowing for a neat appearance when exposed. Since the inorganic insulation does not degrade with (moderate) heating, the finished cable assembly can be allowed to rise to higher temperatures than plastic-insulated cables; the limits to temperature rise may be only due to possible contact of the sheath with people or structures or the physical melting point of copper. This may also allow a smaller cross-section cable to be used in particular applications. An additional advantage of MI cable is the ability to use the copper shield as a neutral or earth in particular situations. Due to oxidation, the copper cladding darkens with age. However, where MICC cables with a bare copper sheath are installed in damp locations, particularly where lime mortar has been used, the water and lime combine to create an electrolytic action with the bare copper. Similarly, electrolytic action may also be caused by installing bare-sheath MICC cables on new oak. The reaction causes the copper to be eaten away, making a hole in the sheath of the cable and letting in water, causing a breakdown of the insulation and short circuits.
The copper sheath material is typically resistant to most chemicals but can be severely damaged by ammonia-bearing compounds and urine. A pinhole in the copper sheathing will allow moisture into the insulation and cause eventual failure of the circuit. A PVC over-jacket or sheaths of other metals may be required where such chemical damage is expected. When MI cable is embedded in concrete, as in floor heating cable, it is susceptible to physical damage by concrete workers working the concrete into the pour. If the coating is damaged, pinholes in the copper jacket may develop, causing premature failure of the system. While MI cable is very tough along its length, at some point each run of cabling terminates at a splice or within electrical equipment. These terminations are vulnerable to fire, moisture, and mechanical impact. MICC is not suitable for use where it will be subject to vibration or flexing, as in connections to heavy or movable machinery. Vibration can cause cracking in the cladding and cores, leading to failure. During installation, MI cable must not be bent repeatedly, as this will cause work hardening and cracks in the cladding and cores. A minimum bend radius must be observed, and the cable must be supported at regular intervals. The magnesium oxide insulation is hygroscopic, so MICC cable must be protected from moisture until it has been terminated. Termination requires stripping back the copper cladding and attaching a compression gland fitting. Individual conductors are insulated with plastic sleeves. A sealing tape, insulating putty, or an epoxy resin is then poured into the compression gland fitting to provide a watertight seal. If a termination is faulty due to workmanship or damage, then the magnesium oxide will absorb moisture and lose its insulating properties. Installation of MI cable takes more time than installation of a PVC-sheathed armoured cable of the same conductor size. Installation of MICC is therefore a costly task. MI cable is only manufactured with ratings up to 1000 volts. The magnesium oxide insulation has a high affinity for moisture. Moisture introduced into the cable can cause electrical leakage from the internal conductors to the metal sheath. Moisture absorbed at a cut end of the cable may be driven off by heating the cable. If the MI cable jacket has been damaged, the magnesium oxide will wick moisture into the cable, and it will lose its insulating properties, causing shorts to the copper cladding and thence to earth. It is often necessary to remove a section of the MI cable and splice in a new section to accomplish the repair. Depending on the size and number of conductors, a single termination can be a large undertaking to repair. Alternatives Circuit integrity for conventional plastic-insulated cables requires additional measures to obtain a fire-resistance rating or to lower the flammability and smoke contributions to a minimum degree acceptable for certain types of construction. Sprayed-on coatings or flexible wraps cover the plastic insulation to protect it from flame and reduce its flame-spreading ability. However, since these coatings reduce the heat dissipation of the cables, often they must be rated for less current after application of fire-resistant coatings. This is called current capacity derating. It can be tested through the use of IEEE 848, Standard Procedure for the Determination of the Ampacity Derating of Fire-Protected Cables.
See also Listing and approval use and compliance Passive fire protection Circuit integrity Fireproofing Cable tray Copper wire and cable References Power cables Passive fire protection Electrical wiring
Mineral-insulated copper-clad cable
[ "Physics", "Engineering" ]
2,024
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
2,949,555
https://en.wikipedia.org/wiki/XENON
The XENON dark matter research project, operated at the Italian Gran Sasso National Laboratory, is a deep underground detector facility featuring increasingly ambitious experiments aiming to detect hypothetical dark matter particles. The experiments aim to detect particles in the form of weakly interacting massive particles (WIMPs) by looking for rare nuclear recoil interactions in a liquid xenon target chamber. The current detector consists of a dual phase time projection chamber (TPC). The experiment detects scintillation and ionization signals produced when external particles interact in the liquid xenon volume, to search for an excess of nuclear recoil events against known backgrounds. The detection of such a signal would provide the first direct experimental evidence for dark matter candidate particles. The collaboration is currently led by Italian professor of physics Elena Aprile from Columbia University. Detector principle The XENON experiment operates a dual phase time projection chamber (TPC), which utilizes a liquid xenon target with a gaseous phase on top. Two arrays of photomultiplier tubes (PMTs), one at the top of the detector in the gaseous phase (GXe), and one at the bottom of the liquid layer (LXe), detect scintillation and electroluminescence light produced when charged particles interact in the detector. Electric fields are applied across both the liquid and gaseous phase of the detector. The electric field in the gaseous phase has to be sufficiently large to extract electrons from the liquid phase. Particle interactions in the liquid target produce scintillation and ionization. The prompt scintillation light produces 178 nm ultraviolet photons. This signal is detected by the PMTs, and is referred to as the S1 signal. The applied electric field prevents recombination of all the electrons produced from a charged particle interaction in the TPC. These electrons are drifted to the top of the liquid phase by the electric field. The ionization is then extracted into the gas phase by the stronger electric field in the gaseous phase. The electric field accelerates the electrons to the point that it creates a proportional scintillation signal that is also collected by the PMTs, and is referred to as the S2 signal. This technique has proved sensitive enough to detect S2 signals generated from single electrons. The detector allows for a full 3-D position determination of the particle interaction. Electrons in liquid xenon have a uniform drift velocity. This allows the interaction depth of the event to be determined by measuring the time delay between the S1 and S2 signal. The position of the event in the x-y plane can be determined by looking at the number of photons seen by each of the individual PMTs. The full 3-D position allows for the fiducialization of the detector, in which a low-background region is defined in the inner volume of the TPC. This fiducial volume has a greatly reduced rate of background events as compared to regions of the detector at the edge of the TPC, due to the self-shielding properties of liquid xenon. This allows for a much higher sensitivity when searching for very rare events. Charged particles moving through the detector are expected to either interact with the electrons of the xenon atoms producing electronic recoils, or with the nucleus, producing nuclear recoils. 
For a given amount of energy deposited by a particle interaction in the detector, the ratio of S2/S1 can be used as a discrimination parameter to distinguish electronic and nuclear recoil events. This ratio is expected to be greater for electronic recoils than for nuclear recoils. In this way backgrounds from electronic recoils can be suppressed by more than 99%, while simultaneously retaining 50% of the nuclear recoil events. XENON10 The XENON10 experiment was installed at the underground Gran Sasso laboratory in Italy during March 2006. The underground location of the laboratory provides 3100 m of water-equivalent shielding. The detector was placed within a shield to further reduce the background rate in the TPC. XENON10 was intended as a prototype detector, to prove the efficacy of the XENON design, as well as verify the achievable threshold, background rejection power and sensitivity. The XENON10 detector contained 15 kg of liquid xenon. The sensitive volume of the TPC measures 20 cm in diameter and 15 cm in height. An analysis of 59 live days of data, taken between October 2006 and February 2007, produced no WIMP signatures. The number of events observed in the WIMP search region is statistically consistent with the expected number of events from electronic recoil backgrounds. This result excluded some of the available parameter space in minimal Supersymmetric models, by placing limits on spin independent WIMP-nucleon cross sections down to below for a WIMP mass. Due to nearly half of natural xenon having odd spin states (129Xe has an abundance of 26% and spin-1/2; 131Xe has an abundance of 21% and spin-3/2), the XENON detectors can also be used to provide limits on spin dependent WIMP-nucleon cross sections for coupling of the dark matter candidate particle to both neutrons and protons. XENON10 set the world's most stringent restrictions on pure neutron coupling. XENON100 The second phase detector, XENON100, contains 165 kg of liquid xenon, with 62 kg in the target region and the remaining xenon in an active veto. The TPC of the detector has a diameter of 30 cm and a height of 30 cm. As WIMP interactions are expected to be extremely rare events, a thorough campaign was launched during the construction and commissioning phase of XENON100 to screen all parts of the detector for radioactivity. The screening was performed using high-purity Germanium detectors. In a few cases mass spectrometry was performed on low mass plastic samples. In doing so the design goal of <10−2 events/kg/day/keV was reached, realising the world's lowest background rate dark matter detector. The detector was installed at the Gran Sasso National Laboratory in 2008 in the same shield as the XENON10 detector, and has conducted several science runs. In each science run, no dark matter signal was observed above the expected background, leading to the most stringent limit on the spin independent WIMP-nucleon cross section in 2012, with a minimum at for a WIMP mass. These results constrain interpretations of signals in other experiments as dark matter interactions, and rule out exotic models such as inelastic dark matter, which would resolve this discrepancy. XENON100 has also provided improved limits on the spin dependent WIMP-nucleon cross section. An axion result was published in 2014, setting a new best axion limit. XENON100 operated the then-lowest background experiment, for dark matter searches, with a background of 50 (1 =10−3 events/kg/day/keV). 
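The position reconstruction and S2/S1 discrimination described above can be illustrated with a short Python sketch. The drift velocity and the band boundary used here are placeholder values chosen purely for illustration; they are not XENON calibration constants.

DRIFT_VELOCITY_MM_PER_US = 1.5  # assumed liquid-xenon electron drift velocity, mm per microsecond
S2_S1_BOUNDARY = 100.0          # assumed boundary between electronic and nuclear recoil bands

def interaction_depth_mm(t_s1_us, t_s2_us):
    """Depth below the liquid surface, from the S1-to-S2 drift time."""
    return (t_s2_us - t_s1_us) * DRIFT_VELOCITY_MM_PER_US

def classify_recoil(s1, s2):
    """Crude discrimination: electronic recoils have a larger S2/S1 ratio."""
    return "electronic recoil" if s2 / s1 > S2_S1_BOUNDARY else "nuclear recoil"

event = {"t_s1_us": 0.0, "t_s2_us": 400.0, "s1": 50.0, "s2": 3000.0}
print(interaction_depth_mm(event["t_s1_us"], event["t_s2_us"]))  # 600.0 (mm)
print(classify_recoil(event["s1"], event["s2"]))                 # nuclear recoil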
XENON1T Construction of the next phase, XENON1T, started in Hall B of the Gran Sasso National Laboratory in 2014. The detector contains 3.2 tons of ultra radio-pure liquid xenon, and has a fiducial volume of about 2 tons. The detector is housed in a 10 m water tank that serves as a muon veto. The TPC is 1 m in diameter and 1 m in height. The detector project team, called the XENON Collaboration, is composed of 135 investigators across 22 institutions from Europe, the Middle East, and the United States. The first results from XENON1T were released by the XENON collaboration on May 18, 2017, based on 34 days of data-taking between November 2016 and January 2017. While no WIMPs or dark matter candidate signals were officially detected, the team did announce a record low in the background radioactivity levels being picked up by XENON1T. The exclusion limits exceeded the previous best limits set by the LUX experiment, with an exclusion of cross sections larger than for WIMP masses of . Because some signals that the detector receives might be due to neutrons, reducing the radioactivity increases the sensitivity to WIMPs. In September 2018 the XENON1T experiment published its results from 278.8 days of collected data. A new record limit for WIMP-nucleon spin-independent elastic interactions was set, with a minimum of at a WIMP mass of . In April 2019, based on measurements performed with the XENON1T detector, the XENON Collaboration reported in Nature the first direct observation of two-neutrino double electron capture in xenon-124 nuclei. The measured half-life of this process, which is several orders of magnitude larger than the age of the Universe, demonstrates the capabilities of xenon-based detectors to search for rare events and showcases the broad physics reach of even larger next-generation experiments. This measurement represents a first step in the search for the neutrinoless double electron capture process, the detection of which would provide insight into the nature of the neutrino and allow its absolute mass to be determined. As of 2019, the XENON1T experiment had stopped data-taking to allow for construction of the next phase, XENONnT. The XENON1T detector operated 2016–2018, with detector operations ending at the end of 2018. In June 2020, the XENON1T collaboration reported an excess of electron recoils: 285 events, 53 more than the expected 232, with a statistical significance of 3.5σ. Three explanations were considered: the existence of as-yet-hypothetical solar axions, a surprisingly large magnetic moment for neutrinos, and tritium contamination in the detector. Multiple other explanations were later given by other groups, and in 2021 an interpretation of the results not in terms of dark matter particles but of dark energy particle candidates called chameleons was also discussed. In July 2022 a new analysis by XENONnT discarded the excess.
Even with the problems posed by COVID-19, the project was able to finish construction and move forward into the commissioning phase by mid-2020. Full detector operations commenced in late 2020. In September 2021, XENONnT was taking science data for its first science run, which was ongoing at the time. On 28 July 2023 the XENONnT collaboration published the first results of its search for WIMPs, excluding cross sections above at 28 GeV with 90% confidence level; on the same date the LZ experiment also published its first results, excluding cross sections above at 36 GeV with 90% confidence level. References Further reading External links The XENON Experiment XENON home page at the University of Chicago XENON home page at Columbia University XENON home page at the University of Zurich XENON home page at Rice University XENON home page at Brown University Katsuhi Arisaka, XENON at University of California, Los Angeles Dark matter limit plotter with the latest results from XENON and other experiments Enlightening the dark, CERN Courier, Sep 27, 2013 Experiments for dark matter search
XENON
[ "Physics" ]
2,429
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
311,440
https://en.wikipedia.org/wiki/Mammary%20gland
A mammary gland is an exocrine gland in humans and other mammals that produces milk to feed young offspring. Mammals get their name from the Latin word mamma, "breast". The mammary glands are arranged in organs such as the breasts in primates (for example, humans and chimpanzees), the udder in ruminants (for example, cows, goats, sheep, and deer), and the dugs of other animals (for example, dogs and cats). Lactorrhea, the occasional production of milk by the glands, can occur in any mammal, but in most mammals, lactation, the production of enough milk for nursing, occurs only in phenotypic females who have gestated in recent months or years. It is directed by hormonal guidance from sex steroids. In a few mammalian species, male lactation can occur. With humans, male lactation can occur only under specific circumstances. Mammals are divided into 3 groups: prototherians, metatherians, and eutherians. In the case of prototherians, both males and females have functional mammary glands, but their mammary glands are without nipples. These mammary glands are modified sebaceous glands. Concerning most metatherians and eutherians, only females have functional mammary glands, with the exception of some bat species. Their mammary glands can be termed as breasts or udders. In the case of breasts, each mammary gland has its own nipple (e.g., human mammary glands). In the case of udders, pairs of mammary glands comprise a single mass, with more than one nipple (or teat) hanging from it. For instance, cows and buffalo udders have two pairs of mammary glands and four teats, whereas sheep and goat udders have one pair of mammary glands with two teats protruding from the udder. Each gland produces milk for a single teat. These mammary glands are evolutionarily derived from sweat glands. Structure The basic components of a mature mammary gland are the alveoli (hollow cavities, a few millimeters large), which are lined with milk-secreting cuboidal cells and surrounded by myoepithelial cells. These alveoli join to form groups known as lobules. Each lobule has a lactiferous duct that drains into openings in the nipple. The myoepithelial cells contract under the stimulation of oxytocin, excreting the milk secreted by alveolar units into the lobule lumen toward the nipple. As the infant begins to suck, the oxytocin-mediated "let down reflex" ensues, and the mother's milk is secreted—not sucked—from the gland into the infant's mouth. All the milk-secreting tissue leading to a single lactiferous duct is collectively called a "simple mammary gland"; in a "complex mammary gland", all the simple mammary glands serve one nipple. Humans normally have two complex mammary glands, one in each breast, and each complex mammary gland consists of 10–20 simple glands. The opening of each simple gland on the surface of the nipple is called a "pore." The presence of more than two nipples is known as polythelia and the presence of more than two complex mammary glands as polymastia. Maintaining the correct polarized morphology of the lactiferous duct tree requires another essential component – mammary epithelial cells extracellular matrix (ECM) which, together with adipocytes, fibroblast, inflammatory cells, and others, constitute mammary stroma. Mammary epithelial ECM mainly contains myoepithelial basement membrane and the connective tissue. 
They not only help to support mammary basic structure, but also serve as a communicating bridge between mammary epithelia and their local and global environment throughout this organ's development. Histology A mammary gland is a specific type of apocrine gland specialized for manufacture of colostrum (first milk) when giving birth. Mammary glands can be identified as apocrine because they exhibit striking "decapitation" secretion. Many sources assert that mammary glands are modified sweat glands. Development Mammary glands develop during different growth cycles. They exist in both sexes during the embryonic stage, forming only a rudimentary duct tree at birth. In this stage, mammary gland development depends on systemic (and maternal) hormones, but is also under the (local) regulation of paracrine communication between neighboring epithelial and mesenchymal cells by parathyroid hormone-related protein (PTHrP). This locally secreted factor gives rise to a series of outside-in and inside-out positive feedback between these two types of cells, so that mammary bud epithelial cells can proliferate and sprout down into the mesenchymal layer until they reach the fat pad to begin the first round of branching. At the same time, the embryonic mesenchymal cells around the epithelial bud receive secreting factors activated by PTHrP, such as BMP4. These mesenchymal cells can transform into a dense, mammary-specific mesenchyme, which later develop into connective tissue with fibrous threads, forming blood vessels and the lymph system. A basement membrane, mainly containing laminin and collagen, formed afterward by differentiated myoepithelial cells, keeps the polarity of this primary duct tree. These components of the extracellular matrix are strong determinants of duct morphogenesis. Biochemistry Estrogen and growth hormone (GH) are essential for the ductal component of mammary gland development, and act synergistically to mediate it. Neither estrogen nor GH are capable of inducing ductal development without the other. The role of GH in ductal development has been found to be mostly mediated by its induction of the secretion of insulin-like growth factor 1 (IGF-1), which occurs both systemically (mainly originating from the liver) and locally in the mammary fat pad through activation of the growth hormone receptor (GHR). However, GH itself also acts independently of IGF-1 to stimulate ductal development by upregulating estrogen receptor (ER) expression in mammary gland tissue, which is a downstream effect of mammary gland GHR activation. In any case, unlike IGF-1, GH itself is not essential for mammary gland development, and IGF-1 in conjunction with estrogen can induce normal mammary gland development without the presence of GH. In addition to IGF-1, other paracrine growth factors such as epidermal growth factor (EGF), transforming growth factor beta (TGF-β), amphiregulin, fibroblast growth factor (FGF), and hepatocyte growth factor (HGF) are involved in breast development as mediators downstream to sex hormones and GH/IGF-1. During embryonic development, IGF-1 levels are low, and gradually increase from birth to puberty. At puberty, the levels of GH and IGF-1 reach their highest levels in life and estrogen begins to be secreted in high amounts in females, which is when ductal development mostly takes place. Under the influence of estrogen, stromal and fat tissue surrounding the ductal system in the mammary glands also grows. 
After puberty, GH and IGF-1 levels progressively decrease, which limits further development until pregnancy, if it occurs. During pregnancy, progesterone and prolactin are essential for mediating lobuloalveolar development in estrogen-primed mammary gland tissue, which occurs in preparation for lactation and nursing. Androgens such as testosterone inhibit estrogen-mediated mammary gland development (e.g., by reducing local ER expression) through activation of androgen receptors expressed in mammary gland tissue, and in conjunction with relatively low estrogen levels, are the cause of the lack of developed mammary glands in males. Timeline Before birth Mammary gland development is characterized by the unique process by which the epithelium invades the stroma. The development of the mammary gland occurs mainly after birth. During puberty, tubule formation is coupled with branching morphogenesis which establishes the basic arboreal network of ducts emanating from the nipple. Developmentally, mammary gland epithelium is constantly produced and maintained by rare epithelial cells, dubbed mammary progenitors, which are ultimately thought to be derived from tissue-resident stem cells. Embryonic mammary gland development can be divided into a series of specific stages. Initially, the formation of the milk lines that run between the fore and hind limbs bilaterally on each side of the midline occurs around embryonic day 10.5 (E10.5). The second stage occurs at E11.5 when placode formation begins along the mammary milk line. This will eventually give rise to the nipple. Lastly, the third stage occurs at E12.5 and involves the invagination of cells within the placode into the mesenchyme, leading to a mammary anlage. Primitive (stem) cells are detected in the embryo, and their numbers increase steadily during development. Growth Postnatally, the mammary ducts elongate into the mammary fat pad. Then, starting around four weeks of age, mammary ductal growth increases significantly with the ducts invading towards the lymph node. Terminal end buds, the highly proliferative structures found at the tips of the invading ducts, expand and increase greatly during this stage. This developmental period is characterized by the emergence of the terminal end buds and lasts until an age of about 7–8 weeks. By the pubertal stage, the mammary ducts have invaded to the end of the mammary fat pad. At this point, the terminal end buds become less proliferative and decrease in size. Side branches form from the primary ducts and begin to fill the mammary fat pad. Ductal development decreases with the arrival of sexual maturity and undergoes estrous cycles (proestrus, estrus, metestrus, and diestrus). As a result of estrous cycling, the mammary gland undergoes dynamic changes where cells proliferate and then regress in an ordered fashion. Pregnancy During pregnancy, the ductal systems undergo rapid proliferation and form alveolar structures within the branches to be used for milk production. After delivery, lactation occurs within the mammary gland; lactation involves the secretion of milk by the luminal cells in the alveoli. Contraction of the myoepithelial cells surrounding the alveoli will cause the milk to be ejected through the ducts and into the nipple for the nursing infant. Upon weaning of the infant, lactation stops and the mammary gland turns in on itself, a process called involution.
This process involves the controlled collapse of mammary epithelial cells, in which cells begin apoptosis in a controlled manner, reverting the mammary gland back to a pubertal state. Postmenopausal During postmenopause, due to much lower levels of estrogen, and due to lower levels of GH and IGF-1, which decrease with age, mammary gland tissue atrophies and the mammary glands become smaller. Physiology Hormonal control Lactiferous duct development occurs in females in response to circulating hormones. First development is frequently seen during pre- and postnatal stages, and later during puberty. Estrogen promotes branching differentiation, whereas in males testosterone inhibits it. A mature duct tree reaching the limit of the fat pad of the mammary gland comes into being by bifurcation of duct terminal end buds (TEB), secondary branches sprouting from primary ducts, and proper duct lumen formation. These processes are tightly modulated by components of mammary epithelial ECM interacting with systemic hormones and local secreting factors. However, for each mechanism the epithelial cells' "niche" can be delicately unique, with different membrane receptor profiles and basement membrane thickness from one branching area to another, so as to regulate cell growth or differentiation sub-locally. Important players include beta-1 integrin, epidermal growth factor receptor (EGFR), laminin-1/5, collagen-IV, matrix metalloproteinases (MMPs), heparan sulfate proteoglycans, and others. Elevated circulating levels of growth hormone and estrogen reach multipotent cap cells on TEB tips through a thin, leaky layer of basement membrane. These hormones promote specific gene expression. Hence cap cells can differentiate into myoepithelial and luminal (duct) epithelial cells, and the increased amount of activated MMPs can degrade surrounding ECM, helping duct buds to reach further into the fat pad. On the other hand, basement membrane along the mature mammary ducts is thicker, with strong adhesion to epithelial cells via binding to integrin and non-integrin receptors. When side branches develop, it is a much more "pushing-forward" working process, including extending through myoepithelial cells, degrading basement membrane, and then invading into a periductal layer of fibrous stromal tissue. Degraded basement membrane fragments (laminin-5) serve to lead the way for mammary epithelial cell migration. In contrast, laminin-1, interacting with the non-integrin receptor dystroglycan, negatively regulates this side branching process in the case of cancer. These complex "Yin-yang" balancing crosstalks between mammary ECM and epithelial cells "instruct" healthy mammary gland development until adulthood. There is preliminary evidence that soybean intake mildly stimulates the breast glands in pre- and postmenopausal women. Pregnancy Secretory alveoli develop mainly in pregnancy, when rising levels of prolactin, estrogen, and progesterone cause further branching, together with an increase in adipose tissue and a richer blood flow. In gestation, serum progesterone remains at a stably high concentration, so signaling through its receptor is continuously activated. Among the transcribed genes, Wnts secreted from mammary epithelial cells act in a paracrine fashion to induce branching of neighboring cells. When the lactiferous duct tree is almost ready, alveoli (the "leaves") are differentiated from luminal epithelial cells and added at the end of each branch. In late pregnancy and for the first few days after giving birth, colostrum is secreted.
Milk secretion (lactation) begins a few days later due to a reduction in circulating progesterone and the presence of another important hormone, prolactin, which mediates further alveologenesis and milk protein production, and regulates osmotic balance and tight junction function. The interaction of laminin and collagen in the myoepithelial basement membrane with beta-1 integrin on the epithelial surface is again essential in this process. Their binding ensures correct placement of prolactin receptors on the basolateral side of alveolar cells and directional secretion of milk into lactiferous ducts. Suckling of the baby causes release of the hormone oxytocin, which stimulates contraction of the myoepithelial cells. Under this combined control from ECM and systemic hormones, milk secretion can be reciprocally amplified so as to provide enough nutrition for the baby. Weaning During weaning, decreased prolactin, missing mechanical stimulation (baby suckling), and changes in osmotic balance caused by milk stasis and leaking of tight junctions cause cessation of milk production. It is the (passive) process of a child or animal ceasing to be dependent on the mother for nourishment. In some species there is complete or partial involution of alveolar structures after weaning; in humans there is only partial involution, and the level of involution appears to be highly individual. The glands in the breast also secrete fluid in nonlactating women. In some other species (such as cows), all alveoli and secretory duct structures collapse by programmed cell death (apoptosis) and autophagy for lack of growth promoting factors either from the ECM or circulating hormones. At the same time, apoptosis of blood capillary endothelial cells speeds up the regression of lactation ductal beds. Shrinkage of the mammary duct tree and ECM remodeling by various proteinases is under the control of somatostatin and other growth inhibiting hormones and local factors. After this major structural change, loose fat tissue fills the empty space. A functional lactiferous duct tree can, however, form again when a female becomes pregnant. Clinical significance Tumorigenesis in mammary glands can be induced biochemically by abnormal expression levels of circulating hormones or local ECM components, or from a mechanical change in the tension of mammary stroma. Under either of the two circumstances, mammary epithelial cells would grow out of control and eventually result in cancer. Almost all instances of breast cancer originate in the lobules or ducts of the mammary glands. Other mammals General The breasts of female humans differ from those of most other mammals, which tend to have less conspicuous mammary glands. The number and positioning of mammary glands varies widely in different mammals. The protruding teats and accompanying glands can be located anywhere along the two milk lines. In general most mammals develop mammary glands in pairs along these lines, with a number approximating the number of young typically birthed at a time. The number of teats varies from 2 (in most primates) to 18 (in pigs). The Virginia opossum has 13, one of the few mammals with an odd number. The following table lists the number and position of teats and glands found in a range of mammals: Male mammals typically have rudimentary mammary glands and nipples, with a few exceptions: male mice do not have nipples, male marsupials do not have mammary glands, and male horses lack nipples. The male Dayak fruit bat has lactating mammary glands.
Male lactation occurs infrequently in some species. Mammary glands are true protein factories, and several labs have constructed transgenic animals, mainly goats and cows, to produce proteins for pharmaceutical use. Complex glycoproteins such as monoclonal antibodies or antithrombin cannot be produced by genetically engineered bacteria, and the production in live mammals is much cheaper than the use of mammalian cell cultures. Evolution There are many theories on how mammary glands evolved. For example, it is thought that the mammary gland is a transformed sweat gland, more closely related to apocrine sweat glands. Because mammary glands do not fossilize well, supporting such theories with fossil evidence is difficult. Many of the current theories are based on comparisons between lines of living mammals—monotremes, marsupials, and eutherians. One theory proposes that mammary glands evolved from glands that were used to keep the eggs of early mammals moist and free from infection (monotremes still lay eggs). Other theories suggest that early secretions were used directly by hatched young, or that the secretions were used by young to help them orient to their mothers. Lactation is thought to have developed long before the evolution of the mammary gland and mammals; see evolution of lactation. Additional images See also Breastfeeding Mammary tumor Mammaglobin Gynecomastia Hypothalamic–pituitary–prolactin axis Udder Witch's milk Milk line List of glands of the human body#Skin List of distinct cell types in the adult human body References Bibliography Moore, Keith L. et al. (2010) Clinically Oriented Anatomy 6th Ed External links Comparative Mammary Gland Anatomy by W. L. Hurley On the anatomy of the breast by Sir Astley Paston Cooper (1840). Numerous drawings, in the public domain. Breast anatomy Exocrine system Glands Mammal anatomy Human female endocrine system
Mammary gland
[ "Biology" ]
4,340
[ "Exocrine system", "Organ systems" ]
311,507
https://en.wikipedia.org/wiki/Opodeldoc
Opodeldoc is a medical plaster or liniment invented, or at least named, by the German Renaissance physician Paracelsus in the 1500s. In modern form opodeldoc is a mixture of soap in alcohol, to which camphor and sometimes a number of herbal essences, most notably wormwood, are added. Origins In his Bertheonea Sive Chirurgia Minor published in 1603, Paracelsus mentioned "oppodeltoch" twice, but with uncertain ingredients. As to the origin of the name, Kurt Peters speculated that it was coined by Paracelsus from syllables from the words "opoponax, bdellium, and aristolochia." Opoponax is a variety of myrrh; bdellium is Commiphora wightii, which produces a similar resin; and Aristolochia is a widely distributed genus which includes A. pfeiferi, A. rugosa and A. trilobata that are used in folk medicine to cure snakebites. The name suggests that these aromatic plants may have figured in Paracelsus's recipe. In his Medicina Militaris of 1620, German military physician Raymund Minderer ("Mindererus"; 1570-1621) praised the Paracelsus compound as a plaster, good for wounds. Minderer compared it to his own variant, which set more like sealing wax. Opodeldoc and Paracelsus were acknowledged in English no later than 1646, in Sir Thomas Browne's popular and influential Pseudodoxia Epidemica. Paracelsus's recipe is completely unrelated to later preparations of the same name. By the second printing of the Edinburgh Pharmacopoeia in 1722 the name applied to a soap-based liniment. Such a liniment in patent form, sold by John Newbery's company in Great Britain "ever since A.D. 1786", was called "Dr. Steer's Opodeldoc". Produced for decades, the "Dr. Steer" preparation had been successfully imported into the U.S., and was common enough there to rank as one of the eight patent medicines to be analyzed (although not condemned) by the Philadelphia College of Pharmacy in 1824. The name Old Opodeldoc was formerly used as a standard name for a stock character who was a physician, especially when played as a comic figure. Edgar Allan Poe used "Oppodeldoc" as a pseudonym for a character in the short story "The Literary Life of Thingum Bob, Esq." Modern usage The Pharmacopoeia of the United States (U.S.P.) gives a recipe for opodeldoc that contains: Powdered soap, 60 grams; Camphor, 45 grams; Oil of rosemary, 10 milliliters; Alcohol, 700 milliliters; Water, enough to make 1000 milliliters As late as the early 1990s 'Epideldoc' (sic) was compounded on request by several pharmacists in the Northwest of England. References Ointments
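The U.S.P. formula quoted above is specified per 1000 milliliters, so scaling it to a different batch size is simple proportional arithmetic. The Python sketch below does only that rescaling; it is an arithmetic illustration, not compounding guidance.

# Quantities per 1000 mL, taken from the U.S.P. recipe quoted above.
USP_OPODELDOC_PER_1000_ML = {
    "powdered soap (g)": 60,
    "camphor (g)": 45,
    "oil of rosemary (mL)": 10,
    "alcohol (mL)": 700,
    # water is added "to make" the final volume, so it is not a fixed quantity
}

def scale_formula(batch_ml):
    """Rescale every listed quantity to the requested batch volume."""
    factor = batch_ml / 1000.0
    return {name: qty * factor for name, qty in USP_OPODELDOC_PER_1000_ML.items()}

print(scale_formula(250))
# {'powdered soap (g)': 15.0, 'camphor (g)': 11.25, 'oil of rosemary (mL)': 2.5, 'alcohol (mL)': 175.0}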
Opodeldoc
[ "Chemistry" ]
634
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
311,596
https://en.wikipedia.org/wiki/Allicin
Allicin is an organosulfur compound obtained from garlic and leeks. When fresh garlic is chopped or crushed, the enzyme alliinase converts alliin into allicin, which is responsible for the aroma of fresh garlic. Allicin is unstable and quickly changes into a series of other sulfur-containing compounds such as diallyl disulfide. Allicin is an antifeedant, i.e. the defense mechanism against attacks by pests on the garlic plant. Allicin is an oily, slightly yellow liquid that gives garlic its distinctive odor. It is a thioester of sulfenic acid. It is also known as allyl thiosulfinate. Its biological activity can be attributed to both its antioxidant activity and its reaction with thiol-containing proteins. Structure and occurrence Allicin features the thiosulfinate functional group, R-S-(O)-S-R. The compound is not present in garlic unless tissue damage occurs, and is formed by the action of the enzyme alliinase on alliin. Allicin is chiral but occurs naturally only as a racemate. The racemic form can also be generated by oxidation of diallyl disulfide: (SCH2CH=CH2)2 + 2 RCO3H + H2O → 2 CH2=CHCH2SOH + 2 RCO2H 2 CH2=CHCH2SOH → CH2=CHCH2S(O)SCH2CH=CH2 + H2O Alliinase is irreversibly deactivated below pH 3; as such, allicin is generally not produced in the body from the consumption of fresh or powdered garlic. Furthermore, allicin can be unstable, breaking down within 16 hours at 23 °C. Biosynthesis The biosynthesis of allicin commences with the conversion of cysteine into S-allyl-L-cysteine. Oxidation of this thioether gives the sulfoxide (alliin). The enzyme alliinase, which contains pyridoxal phosphate (PLP), cleaves alliin, generating allylsulfenic acid (CH2=CHCH2SOH), pyruvate, and ammonium ions. At room temperature, two molecules of allylsulfenic acid condense to form allicin. Research Allicin has been studied for its potential to treat various kinds of multiple drug resistance bacterial infections, as well as viral and fungal infections in vitro, but as of 2016, the safety and efficacy of allicin to treat infections in people was unclear. A Cochrane review found there to be insufficient clinical evidence regarding the effects of allicin in preventing or treating common cold. History It was first isolated and studied in the laboratory by Chester J. Cavallito and John Hays Bailey in 1944. Allicin was discovered as part of efforts to create thiamine derivatives in the 1940s, mainly in Japan. Allicin became a model for medicinal chemistry efforts to create other thiamine disulfides. The results included sulbutiamine, fursultiamine (thiamine tetrahydrofurfuryl disulfide) and benfothiamine. These compounds are hydrophobic, easily pass from the intestines to the bloodstream, and are reduced to thiamine by cysteine or glutathione. See also Allyl isothiocyanate, the active piquant chemical in mustard, radishes, horseradish and wasabi syn-Propanethial-S-oxide, the lachrymatory chemical found in onions List of phytochemicals in food References Thiosulfinates Anti-inflammatory agents Antibiotics Dietary antioxidants Pungent flavors Allium Garlic Antifungals Allyl compounds Transient receptor potential channel modulators
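The condensation step described above (two molecules of allylsulfenic acid giving one molecule of allicin plus water) can be sanity-checked with an element balance. The molecular formulas used below (allylsulfenic acid C3H6OS, allicin C6H10OS2) are standard values supplied for illustration; they are not stated in the text above.

from collections import Counter

# Element counts for the species in the condensation: 2 CH2=CHCH2SOH -> allicin + H2O
allyl_sulfenic_acid = Counter({"C": 3, "H": 6, "O": 1, "S": 1})
allicin = Counter({"C": 6, "H": 10, "O": 1, "S": 2})
water = Counter({"H": 2, "O": 1})

def times(species, coefficient):
    """Multiply every element count by a stoichiometric coefficient."""
    return Counter({element: coefficient * count for element, count in species.items()})

left_side = times(allyl_sulfenic_acid, 2)
right_side = allicin + water
print(left_side == right_side)  # True: the condensation is element-balanced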
Allicin
[ "Chemistry", "Biology" ]
820
[ "Biotechnology products", "Functional groups", "Antibiotics", "Biocides", "Thiosulfinates" ]
312,152
https://en.wikipedia.org/wiki/Spontaneous%20process
In thermodynamics, a spontaneous process is a process which occurs without any external input to the system. A more technical definition is the time-evolution of a system in which it releases free energy and it moves to a lower, more thermodynamically stable energy state (closer to thermodynamic equilibrium). The sign convention for free energy change follows the general convention for thermodynamic measurements, in which a release of free energy from the system corresponds to a negative change in the free energy of the system and a positive change in the free energy of the surroundings. Depending on the nature of the process, the free energy is determined differently. For example, the Gibbs free energy change is used when considering processes that occur under constant pressure and temperature conditions, whereas the Helmholtz free energy change is used when considering processes that occur under constant volume and temperature conditions. The value and even the sign of both free energy changes can depend upon the temperature and pressure or volume. Because spontaneous processes are characterized by a decrease in the system's free energy, they do not need to be driven by an outside source of energy. For cases involving an isolated system where no energy is exchanged with the surroundings, spontaneous processes are characterized by an increase in entropy. A spontaneous reaction is a chemical reaction which is a spontaneous process under the conditions of interest. Overview In general, the spontaneity of a process only determines whether or not a process can occur and makes no indication as to whether or not the process will occur at an observable rate. In other words, spontaneity is a necessary, but not sufficient, condition for a process to actually occur. Furthermore, spontaneity makes no implication as to the speed at which the spontaneous process may occur - just because a process is spontaneous does not mean it will happen quickly (or at all). As an example, the conversion of a diamond into graphite is a spontaneous process at room temperature and pressure. Despite being spontaneous, this process is not observed, since the energy required to break the strong carbon-carbon bonds is larger than the release in free energy. Another way to explain this would be that even though the conversion of diamond into graphite is thermodynamically feasible and spontaneous even at room temperature, the high activation energy of this reaction renders it too slow to observe. Using free energy to determine spontaneity For a process that occurs at constant temperature and pressure, spontaneity can be determined using the change in Gibbs free energy, which is given by ΔG = ΔH − TΔS, where the sign of ΔG depends on the signs of the changes in enthalpy (ΔH) and entropy (ΔS). If these two signs are the same (both positive or both negative), then the sign of ΔG will change from positive to negative (or vice versa) at the temperature T = ΔH/ΔS. In cases where ΔG is: negative, the process is spontaneous and may proceed in the forward direction as written. positive, the process is non-spontaneous as written, but it may proceed spontaneously in the reverse direction. zero, the process is at equilibrium, with no net change taking place over time. This set of rules can be used to determine four distinct cases by examining the signs of ΔS and ΔH. When ΔS > 0 and ΔH < 0, the process is always spontaneous as written. When ΔS < 0 and ΔH > 0, the process is never spontaneous, but the reverse process is always spontaneous.
When ΔS > 0 and ΔH > 0, the process will be spontaneous at high temperatures and non-spontaneous at low temperatures. When ΔS < 0 and ΔH < 0, the process will be spontaneous at low temperatures and non-spontaneous at high temperatures. For the latter two cases, the temperature at which the spontaneity changes will be determined by the relative magnitudes of ΔS and ΔH. Using entropy to determine spontaneity When using the entropy change of a process to assess spontaneity, it is important to carefully consider the definition of the system and surroundings. The second law of thermodynamics states that a process involving an isolated system will be spontaneous if the entropy of the system increases over time. For open or closed systems, however, the statement must be modified to say that the total entropy of the combined system and surroundings must increase, or ΔS(total) = ΔS(system) + ΔS(surroundings) > 0. This criterion can then be used to explain how it is possible for the entropy of an open or closed system to decrease during a spontaneous process. A decrease in system entropy can only occur spontaneously if the entropy change of the surroundings is both positive in sign and has a larger magnitude than the entropy change of the system: ΔS(system) < 0 and ΔS(surroundings) > |ΔS(system)|. In many processes, the increase in entropy of the surroundings is accomplished via heat transfer from the system to the surroundings (i.e. an exothermic process). See also Endergonic reaction – reactions which are not spontaneous at standard temperature, pressure, and concentrations. Diffusion – a spontaneous phenomenon that minimizes Gibbs free energy. References Thermodynamics Chemical thermodynamics Chemical processes
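The Gibbs free energy criterion described above lends itself to a small worked example. The Python sketch below evaluates ΔG = ΔH − TΔS and classifies the result, reproducing the temperature-dependent case in which both ΔH and ΔS are positive; the numerical inputs are arbitrary illustrative values.

def gibbs_free_energy_change(delta_h, delta_s, temperature):
    """ΔG = ΔH - T*ΔS, with ΔH in J/mol, ΔS in J/(mol*K), and T in K."""
    return delta_h - temperature * delta_s

def classify(delta_g, tolerance=1e-9):
    if delta_g < -tolerance:
        return "spontaneous as written"
    if delta_g > tolerance:
        return "non-spontaneous as written (reverse direction is spontaneous)"
    return "at equilibrium"

# With ΔH > 0 and ΔS > 0, spontaneity depends on temperature, and the sign of
# ΔG changes at T = ΔH/ΔS = 400 K for these illustrative numbers.
delta_h = 40_000.0  # J/mol
delta_s = 100.0     # J/(mol*K)
for temperature in (300.0, 400.0, 500.0):
    dg = gibbs_free_energy_change(delta_h, delta_s, temperature)
    print(temperature, dg, classify(dg))
# 300.0 K -> ΔG = +10000 J/mol, non-spontaneous
# 400.0 K -> ΔG = 0 J/mol, at equilibrium
# 500.0 K -> ΔG = -10000 J/mol, spontaneous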
Spontaneous process
[ "Physics", "Chemistry", "Mathematics" ]
1,037
[ "Chemical thermodynamics", "Chemical processes", "Thermodynamics", "nan", "Chemical process engineering", "Dynamical systems" ]
312,229
https://en.wikipedia.org/wiki/Chiral%20anomaly
In theoretical physics, a chiral anomaly is the anomalous nonconservation of a chiral current. In everyday terms, it is equivalent to a sealed box that contained equal numbers of left and right-handed bolts, but when opened was found to have more left than right, or vice versa. Such events are expected to be prohibited according to classical conservation laws, but it is known there must be ways they can be broken, because we have evidence of charge–parity non-conservation ("CP violation"). It is possible that other imbalances have been caused by breaking of a chiral law of this kind. Many physicists suspect that the fact that the observable universe contains more matter than antimatter is caused by a chiral anomaly. Research into chiral symmetry breaking laws is a major endeavor in particle physics research at this time. Informal introduction The chiral anomaly originally referred to the anomalous decay rate of the neutral pion, as computed in the current algebra of the chiral model. These calculations suggested that the decay of the pion was suppressed, clearly contradicting experimental results. The nature of the anomalous calculations was first explained in 1969 by Stephen L. Adler and John Stewart Bell & Roman Jackiw. This is now termed the Adler–Bell–Jackiw anomaly of quantum electrodynamics. This is a symmetry of classical electrodynamics that is violated by quantum corrections. The Adler–Bell–Jackiw anomaly arises in the following way. If one considers the classical (non-quantized) theory of electromagnetism coupled to massless fermions (electrically charged Dirac spinors solving the Dirac equation), one expects to have not just one but two conserved currents: the ordinary electrical current (the vector current), described by the Dirac field as well as an axial current When moving from the classical theory to the quantum theory, one may compute the quantum corrections to these currents; to first order, these are the one-loop Feynman diagrams. These are famously divergent, and require a regularization to be applied, to obtain the renormalized amplitudes. In order for the renormalization to be meaningful, coherent and consistent, the regularized diagrams must obey the same symmetries as the zero-loop (classical) amplitudes. This is the case for the vector current, but not the axial current: it cannot be regularized in such a way as to preserve the axial symmetry. The axial symmetry of classical electrodynamics is broken by quantum corrections. Formally, the Ward–Takahashi identities of the quantum theory follow from the gauge symmetry of the electromagnetic field; the corresponding identities for the axial current are broken. At the time that the Adler–Bell–Jackiw anomaly was being explored in physics, there were related developments in differential geometry that appeared to involve the same kinds of expressions. These were not in any way related to quantum corrections of any sort, but rather were the exploration of the global structure of fiber bundles, and specifically, of the Dirac operators on spin structures having curvature forms resembling that of the electromagnetic tensor, both in four and three dimensions (the Chern–Simons theory). After considerable back and forth, it became clear that the structure of the anomaly could be described with bundles with a non-trivial homotopy group, or, in physics lingo, in terms of instantons. 
Instantons are a form of topological soliton; they are a solution to the classical field theory, having the property that they are stable and cannot decay (into plane waves, for example). Put differently: conventional field theory is built on the idea of a vacuum – roughly speaking, a flat empty space. Classically, this is the "trivial" solution; all fields vanish. However, one can also arrange the (classical) fields in such a way that they have a non-trivial global configuration. These non-trivial configurations are also candidates for the vacuum, for empty space; yet they are no longer flat or trivial; they contain a twist, the instanton. The quantum theory is able to interact with these configurations; when it does so, it manifests as the chiral anomaly. In mathematics, non-trivial configurations are found during the study of Dirac operators in their fully generalized setting, namely, on Riemannian manifolds in arbitrary dimensions. Mathematical tasks include finding and classifying structures and configurations. Famous results include the Atiyah–Singer index theorem for Dirac operators. Roughly speaking, the symmetries of Minkowski spacetime, Lorentz invariance, Laplacians, Dirac operators and the U(1)xSU(2)xSU(3) fiber bundles can be taken to be a special case of a far more general setting in differential geometry; the exploration of the various possibilities accounts for much of the excitement in theories such as string theory; the richness of possibilities accounts for a certain perception of lack of progress. The Adler–Bell–Jackiw anomaly is seen experimentally, in the sense that it describes the decay of the neutral pion, and specifically, the width of the decay of the neutral pion into two photons. The neutral pion itself was discovered in the 1940s; its decay rate (width) was correctly estimated by J. Steinberger in 1949. The correct form of the anomalous divergence of the axial current is obtained by Schwinger in 1951 in a 2D model of electromagnetism and massless fermions. That the decay of the neutral pion is suppressed in the current algebra analysis of the chiral model is obtained by Sutherland and Veltman in 1967. An analysis and resolution of this anomalous result is provided by Adler and Bell & Jackiw in 1969. A general structure of the anomalies is discussed by Bardeen in 1969. The quark model of the pion indicates it is a bound state of a quark and an anti-quark. However, the quantum numbers, including parity and angular momentum, taken to be conserved, prohibit the decay of the pion, at least in the zero-loop calculations (quite simply, the amplitudes vanish.) If the quarks are assumed to be massive, not massless, then a chirality-violating decay is allowed; however, it is not of the correct size. (Chirality is not a constant of motion of massive spinors; they will change handedness as they propagate, and so mass is itself a chiral symmetry-breaking term. The contribution of the mass is given by the Sutherland and Veltman result; it is termed "PCAC", the partially conserved axial current.) The Adler–Bell–Jackiw analysis provided in 1969 (as well as the earlier forms by Steinberger and Schwinger), do provide the correct decay width for the neutral pion. Besides explaining the decay of the pion, it has a second very important role. The one loop amplitude includes a factor that counts the grand total number of leptons that can circulate in the loop. In order to get the correct decay width, one must have exactly three generations of quarks, and not four or more. 
In this way, it plays an important role in constraining the Standard model. It provides a direct physical prediction of the number of quarks that can exist in nature. Current day research is focused on similar phenomena in different settings, including non-trivial topological configurations of the electroweak theory, that is, the sphalerons. Other applications include the hypothetical non-conservation of baryon number in GUTs and other theories. General discussion In some theories of fermions with chiral symmetry, the quantization may lead to the breaking of this (global) chiral symmetry. In that case, the charge associated with the chiral symmetry is not conserved. The non-conservation happens in a process of tunneling from one vacuum to another. Such a process is called an instanton. In the case of a symmetry related to the conservation of a fermionic particle number, one may understand the creation of such particles as follows. The definition of a particle is different in the two vacuum states between which the tunneling occurs; therefore a state of no particles in one vacuum corresponds to a state with some particles in the other vacuum. In particular, there is a Dirac sea of fermions and, when such a tunneling happens, it causes the energy levels of the sea fermions to gradually shift upwards for the particles and downwards for the anti-particles, or vice versa. This means particles which once belonged to the Dirac sea become real (positive energy) particles and particle creation happens. Technically, in the path integral formulation, an anomalous symmetry is a symmetry of the action , but not of the measure and therefore not of the generating functional of the quantized theory ( is Planck's action-quantum divided by 2). The measure consists of a part depending on the fermion field and a part depending on its complex conjugate . The transformations of both parts under a chiral symmetry do not cancel in general. Note that if is a Dirac fermion, then the chiral symmetry can be written as where is the chiral gamma matrix acting on . From the formula for one also sees explicitly that in the classical limit, anomalies don't come into play, since in this limit only the extrema of remain relevant. The anomaly is proportional to the instanton number of a gauge field to which the fermions are coupled. (Note that the gauge symmetry is always non-anomalous and is exactly respected, as is required for the theory to be consistent.) Calculation The chiral anomaly can be calculated exactly by one-loop Feynman diagrams, e.g. Steinberger's "triangle diagram", contributing to the pion decays, and . The amplitude for this process can be calculated directly from the change in the measure of the fermionic fields under the chiral transformation. Wess and Zumino developed a set of conditions on how the partition function ought to behave under gauge transformations called the Wess–Zumino consistency condition. Fujikawa derived this anomaly using the correspondence between functional determinants and the partition function using the Atiyah–Singer index theorem. See Fujikawa's method. An example: baryon number non-conservation The Standard Model of electroweak interactions has all the necessary ingredients for successful baryogenesis, although these interactions have never been observed and may be insufficient to explain the total baryon number of the observed universe if the initial baryon number of the universe at the time of the Big Bang is zero. 
Beyond the violation of charge conjugation and CP violation (charge+parity), baryonic charge violation appears through the Adler–Bell–Jackiw anomaly of the group. Baryons are not conserved by the usual electroweak interactions due to quantum chiral anomaly. The classic electroweak Lagrangian conserves baryonic charge. Quarks always enter in bilinear combinations , so that a quark can disappear only in collision with an antiquark. In other words, the classical baryonic current is conserved: However, quantum corrections known as the sphaleron destroy this conservation law: instead of zero in the right hand side of this equation, there is a non-vanishing quantum term, where is a numerical constant vanishing for ℏ =0, and the gauge field strength is given by the expression Electroweak sphalerons can only change the baryon and/or lepton number by 3 or multiples of 3 (collision of three baryons into three leptons/antileptons and vice versa). An important fact is that the anomalous current non-conservation is proportional to the total derivative of a vector operator, (this is non-vanishing due to instanton configurations of the gauge field, which are pure gauge at the infinity), where the anomalous current is which is the Hodge dual of the Chern–Simons 3-form. Geometric form In the language of differential forms, to any self-dual curvature form we may assign the abelian 4-form . Chern–Weil theory shows that this 4-form is locally but not globally exact, with potential given by the Chern–Simons 3-form locally: . Again, this is true only on a single chart, and is false for the global form unless the instanton number vanishes. To proceed further, we attach a "point at infinity" onto to yield , and use the clutching construction to chart principal A-bundles, with one chart on the neighborhood of and a second on . The thickening around , where these charts intersect, is trivial, so their intersection is essentially . Thus instantons are classified by the third homotopy group , which for is simply the third 3-sphere group . The divergence of the baryon number current is (ignoring numerical constants) , and the instanton number is . See also Anomaly (physics) Chiral magnetic effect Global anomaly Gravitational anomaly Strong CP problem References Further reading Published articles Textbooks Preprints Anomalies (physics) Quantum chromodynamics Standard Model Conservation laws
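For reference, the anomalous divergence of the axial current discussed above can be written out explicitly. The display below is a sketch for a single massless Dirac fermion of charge e coupled to the electromagnetic field, in one common normalization; the overall numerical factor and sign depend on conventions, and this particular form is supplied here for orientation rather than taken from the article.

```latex
% Classical currents built from the Dirac field (vector and axial):
%   j^\mu   = \bar\psi \gamma^\mu \psi
%   j^\mu_5 = \bar\psi \gamma^\mu \gamma^5 \psi
% After quantization the vector current remains conserved, while the axial
% current acquires the Adler–Bell–Jackiw anomaly:
\partial_\mu j^{\mu}_{5}
   = \frac{e^{2}}{16\pi^{2}}\,\epsilon^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}
   = \frac{e^{2}}{8\pi^{2}}\,F_{\mu\nu}\tilde{F}^{\mu\nu},
\qquad
\tilde{F}^{\mu\nu} \equiv \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}.
```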
Chiral anomaly
[ "Physics" ]
2,731
[ "Standard Model", "Equations of physics", "Conservation laws", "Particle physics", "Symmetry", "Physics theorems" ]
312,250
https://en.wikipedia.org/wiki/Partition%20function%20%28number%20theory%29
In number theory, the partition function p(n) represents the number of possible partitions of a non-negative integer n. For instance, p(4) = 5 because the integer 4 has the five partitions 1 + 1 + 1 + 1, 1 + 1 + 2, 1 + 3, 2 + 2, and 4. No closed-form expression for the partition function is known, but it has both asymptotic expansions that accurately approximate it and recurrence relations by which it can be calculated exactly. It grows as an exponential function of the square root of its argument. The multiplicative inverse of its generating function is the Euler function; by Euler's pentagonal number theorem this function is an alternating sum of pentagonal number powers of its argument. Srinivasa Ramanujan first discovered that the partition function has nontrivial patterns in modular arithmetic, now known as Ramanujan's congruences. For instance, whenever the decimal representation of n ends in the digit 4 or 9, the number of partitions of n will be divisible by 5. Definition and examples For a positive integer n, p(n) is the number of distinct ways of representing n as a sum of positive integers. For the purposes of this definition, the order of the terms in the sum is irrelevant: two sums with the same terms in a different order are not considered to be distinct. By convention p(0) = 1, as there is one way (the empty sum) of representing zero as a sum of positive integers. Furthermore, p(n) = 0 when n is negative. The first few values of the partition function, starting with p(0) = 1, are: 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42, ... Some exact values of p(n) for larger values of n include p(100) = 190,569,292. Generating function The generating function for p(n) is given by The equality between the products on the first and second lines of this formula is obtained by expanding each factor into the geometric series To see that the expanded product equals the sum on the first line, apply the distributive law to the product. This expands the product into a sum of monomials of the form for some sequence of coefficients , only finitely many of which can be non-zero. The exponent of the term is , and this sum can be interpreted as a representation of as a partition into copies of each number . Therefore, the number of terms of the product that have exponent is exactly , the same as the coefficient of in the sum on the left. Therefore, the sum equals the product. The function that appears in the denominator in the third and fourth lines of the formula is the Euler function. The equality between the product on the first line and the formulas in the third and fourth lines is Euler's pentagonal number theorem. The exponents of in these lines are the pentagonal numbers for (generalized somewhat from the usual pentagonal numbers, which come from the same formula for the positive values The pattern of positive and negative signs in the third line comes from the term in the fourth line: even choices of produce positive terms, and odd choices produce negative terms. More generally, the generating function for the partitions of into numbers selected from a set of positive integers can be found by taking only those terms in the first product for which . This result is due to Leonhard Euler. The formulation of Euler's generating function is a special case of a q-Pochhammer symbol and is similar to the product formulation of many modular forms, and specifically the Dedekind eta function. Recurrence relations The same sequence of pentagonal numbers appears in a recurrence relation for the partition function: As base cases, p(0) is taken to equal 1, and p(k) is taken to be zero for negative k. 
Although the sum on the right side appears infinite, it has only finitely many nonzero terms, coming from the nonzero values of in the range The recurrence relation can also be written in the equivalent form Another recurrence relation for can be given in terms of the sum of divisors function : If denotes the number of partitions of with no repeated parts then it follows by splitting each partition into its even parts and odd parts, and dividing the even parts by two, that Congruences Srinivasa Ramanujan is credited with discovering that the partition function has nontrivial patterns in modular arithmetic. For instance the number of partitions is divisible by five whenever the decimal representation of ends in the digit 4 or 9, as expressed by the congruence For instance, the number of partitions for the integer 4 is 5. For the integer 9, the number of partitions is 30; for 14 there are 135 partitions. This congruence is implied by the more general identity also by Ramanujan, where the notation denotes the product defined by A short proof of this result can be obtained from the partition function generating function. Ramanujan also discovered congruences modulo 7 and 11: The first one comes from Ramanujan's identity Since 5, 7, and 11 are consecutive primes, one might think that there would be an analogous congruence for the next prime 13, for some . However, there is no congruence of the form for any prime b other than 5, 7, or 11. Instead, to obtain a congruence, the argument of should take the form for some . In the 1960s, A. O. L. Atkin of the University of Illinois at Chicago discovered additional congruences of this form for small prime moduli. For example: proved that there are such congruences for every prime modulus greater than 3. Later, showed there are partition congruences modulo every integer coprime to 6. Approximation formulas Approximation formulas exist that are faster to calculate than the exact formula given above. An asymptotic expression for p(n) is given by as . This asymptotic formula was first obtained by G. H. Hardy and Ramanujan in 1918 and independently by J. V. Uspensky in 1920. Considering , the asymptotic formula gives about , reasonably close to the exact answer given above (1.415% larger than the true value). Hardy and Ramanujan obtained an asymptotic expansion with this approximation as the first term: where Here, the notation means that the sum is taken only over the values of that are relatively prime to . The function is a Dedekind sum. The error after terms is of the order of the next term, and may be taken to be of the order of . As an example, Hardy and Ramanujan showed that is the nearest integer to the sum of the first terms of the series. In 1937, Hans Rademacher was able to improve on Hardy and Ramanujan's results by providing a convergent series expression for . It is The proof of Rademacher's formula involves Ford circles, Farey sequences, modular symmetry and the Dedekind eta function. It may be shown that the th term of Rademacher's series is of the order so that the first term gives the Hardy–Ramanujan asymptotic approximation. published an elementary proof of the asymptotic formula for . Techniques for implementing the Hardy–Ramanujan–Rademacher formula efficiently on a computer are discussed by , who shows that can be computed in time for any . This is near-optimal in that it matches the number of digits of the result. The largest value of the partition function computed exactly is , which has slightly more than 11 billion digits. 
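Both the pentagonal-number recurrence and Ramanujan's congruence modulo 5 described above are easy to check numerically. The sketch below is illustrative code, not code from the article; it implements the recurrence with the base cases p(0) = 1 and p(n) = 0 for negative n.

```python
from functools import lru_cache

def generalized_pentagonal(k):
    """g_k = k(3k - 1)/2, evaluated for k = 1, -1, 2, -2, ..."""
    return k * (3 * k - 1) // 2

@lru_cache(maxsize=None)
def p(n):
    """Number of partitions of n, via Euler's pentagonal-number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total = 0
    k = 1
    while generalized_pentagonal(k) <= n:
        for kk in (k, -k):                      # both signs share the same sign factor
            g = generalized_pentagonal(kk)
            if g <= n:
                total += (-1) ** (k + 1) * p(n - g)
        k += 1
    return total

print([p(n) for n in range(11)])   # 1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42
print(p(100))                      # 190569292

# Ramanujan's congruence: p(5k + 4) is divisible by 5.
assert all(p(5 * k + 4) % 5 == 0 for k in range(40))
```

With memoization, each value only needs on the order of the square root of n terms of the recurrence, which is why this is a practical way to tabulate p(n) for moderate n.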
Strict partition function Definition and properties A partition in which no part occurs more than once is called strict, or is said to be a partition into distinct parts. The function q(n) gives the number of these strict partitions of the given sum n. For example, q(3) = 2 because the partitions 3 and 1 + 2 are strict, while the third partition 1 + 1 + 1 of 3 has repeated parts. The number q(n) is also equal to the number of partitions of n in which only odd summands are permitted. Generating function The generating function for the numbers q(n) is given by a simple infinite product: where the notation represents the Pochhammer symbol From this formula, one may easily obtain the first few terms : This series may also be written in terms of theta functions as where and In comparison, the generating function of the regular partition numbers p(n) has this identity with respect to the theta function: Identities about strict partition numbers Following identity is valid for the Pochhammer products: From this identity follows that formula: Therefore those two formulas are valid for the synthesis of the number sequence p(n): In the following, two examples are accurately executed: Restricted partition function More generally, it is possible to consider partitions restricted to only elements of a subset A of the natural numbers (for example a restriction on the maximum value of the parts), or with a restriction on the number of parts or the maximum difference between parts. Each particular restriction gives rise to an associated partition function with specific properties. Some common examples are given below. Euler and Glaisher's theorem Two important examples are the partitions restricted to only odd integer parts or only even integer parts, with the corresponding partition functions often denoted and . A theorem from Euler shows that the number of strict partitions is equal to the number of partitions with only odd parts: for all n, . This is generalized as Glaisher's theorem, which states that the number of partitions with no more than d-1 repetitions of any part is equal to the number of partitions with no part divisible by d. Gaussian binomial coefficient If we denote the number of partitions of n in at most M parts, with each part smaller or equal to N, then the generating function of is the following Gaussian binomial coefficient: Asymptotics Some general results on the asymptotic properties of restricted partition functions are known. If pA(n) is the partition function of partitions restricted to only elements of a subset A of the natural numbers, then: If A possesses positive natural density α then , with and conversely if this asymptotic property holds for pA(n) then A has natural density α. This result was stated, with a sketch of proof, by Erdős in 1942. If A is a finite set, this analysis does not apply (the density of a finite set is zero). If A has k elements whose greatest common divisor is 1, then References External links First 4096 values of the partition function Arithmetic functions Integer sequences Integer partitions
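Euler's theorem quoted above, that the number of strict partitions q(n) equals the number of partitions of n into odd parts, can likewise be verified with a small dynamic-programming count. The helper functions below are an illustrative sketch, not code from the article.

```python
def count_partitions(n, parts):
    """Number of partitions of n using unlimited copies of the given parts."""
    ways = [1] + [0] * n
    for part in parts:
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

def count_strict_partitions(n):
    """Number of partitions of n into distinct parts (each part used at most once)."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(n, part - 1, -1):   # reverse order -> each part at most once
            ways[total] += ways[total - part]
    return ways[n]

for n in range(1, 30):
    q_n = count_strict_partitions(n)
    p_odd_n = count_partitions(n, range(1, n + 1, 2))
    assert q_n == p_odd_n        # Euler: distinct parts <-> odd parts

print(count_strict_partitions(3))   # 2, matching q(3) = 2 in the text
```

Iterating the running total downwards in the strict count is what enforces "at most one copy of each part"; iterating upwards, as in the unrestricted count, allows repeats.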
Partition function (number theory)
[ "Mathematics" ]
2,152
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Arithmetic functions", "Mathematical objects", "Combinatorics", "Integer partitions", "Numbers", "Number theory" ]
312,301
https://en.wikipedia.org/wiki/Liouville%27s%20theorem%20%28Hamiltonian%29
In physics, Liouville's theorem, named after the French mathematician Joseph Liouville, is a key theorem in classical statistical and Hamiltonian mechanics. It asserts that the phase-space distribution function is constant along the trajectories of the system—that is that the density of system points in the vicinity of a given system point traveling through phase-space is constant with time. This time-independent density is in statistical mechanics known as the classical a priori probability. Liouville's theorem applies to conservative systems, that is, systems in which the effects of friction are absent or can be ignored. The general mathematical formulation for such systems is the measure-preserving dynamical system. Liouville's theorem applies when there are degrees of freedom that can be interpreted as positions and momenta; not all measure-preserving dynamical systems have these, but Hamiltonian systems do. The general setting for conjugate position and momentum coordinates is available in the mathematical setting of symplectic geometry. Liouville's theorem ignores the possibility of chemical reactions, where the total number of particles may change over time, or where energy may be transferred to internal degrees of freedom. There are extensions of Liouville's theorem to cover these various generalized settings, including stochastic systems. Liouville equation The Liouville equation describes the time evolution of the phase space distribution function. Although the equation is usually referred to as the "Liouville equation", Josiah Willard Gibbs was the first to recognize the importance of this equation as the fundamental equation of statistical mechanics. It is referred to as the Liouville equation because its derivation for non-canonical systems utilises an identity first derived by Liouville in 1838. Consider a Hamiltonian dynamical system with canonical coordinates and conjugate momenta , where . Then the phase space distribution determines the probability that the system will be found in the infinitesimal phase space volume . The Liouville equation governs the evolution of in time : Time derivatives are denoted by dots, and are evaluated according to Hamilton's equations for the system. This equation demonstrates the conservation of density in phase space (which was Gibbs's name for the theorem). Liouville's theorem states that The distribution function is constant along any trajectory in phase space. A proof of Liouville's theorem uses the n-dimensional divergence theorem. This proof is based on the fact that the evolution of obeys an 2n-dimensional version of the continuity equation: That is, the 3-tuple is a conserved current. Notice that the difference between this and Liouville's equation are the terms where is the Hamiltonian, and where the derivatives and have been evaluated using Hamilton's equations of motion. That is, viewing the motion through phase space as a 'fluid flow' of system points, the theorem that the convective derivative of the density, , is zero follows from the equation of continuity by noting that the 'velocity field' in phase space has zero divergence (which follows from Hamilton's relations). Other formulations Poisson bracket The theorem above is often restated in terms of the Poisson bracket as or, in terms of the linear Liouville operator or Liouvillian, as Ergodic theory In ergodic theory and dynamical systems, motivated by the physical considerations given so far, there is a corresponding result also referred to as Liouville's theorem. 
In Hamiltonian mechanics, the phase space is a smooth manifold that comes naturally equipped with a smooth measure (locally, this measure is the 6n-dimensional Lebesgue measure). The theorem says this smooth measure is invariant under the Hamiltonian flow. More generally, one can describe the necessary and sufficient condition under which a smooth measure is invariant under a flow. The Hamiltonian case then becomes a corollary. Symplectic geometry We can also formulate Liouville's Theorem in terms of symplectic geometry. For a given system, we can consider the phase space of a particular Hamiltonian as a manifold endowed with a symplectic 2-form The volume form of our manifold is the top exterior power of the symplectic 2-form, and is just another representation of the measure on the phase space described above. On our phase space symplectic manifold we can define a Hamiltonian vector field generated by a function as Specifically, when the generating function is the Hamiltonian itself, , we get where we utilized Hamilton's equations of motion and the definition of the chain rule. In this formalism, Liouville's Theorem states that the Lie derivative of the volume form is zero along the flow generated by . That is, for a 2n-dimensional symplectic manifold, In fact, the symplectic structure itself is preserved, not only its top exterior power. That is, Liouville's Theorem also gives Quantum Liouville equation The analog of Liouville equation in quantum mechanics describes the time evolution of a mixed state. Canonical quantization yields a quantum-mechanical version of this theorem, the von Neumann equation. This procedure, often used to devise quantum analogues of classical systems, involves describing a classical system using Hamiltonian mechanics. Classical variables are then re-interpreted as quantum operators, while Poisson brackets are replaced by commutators. In this case, the resulting equation is where ρ is the density matrix. When applied to the expectation value of an observable, the corresponding equation is given by Ehrenfest's theorem, and takes the form where is an observable. Note the sign difference, which follows from the assumption that the operator is stationary and the state is time-dependent. In the phase-space formulation of quantum mechanics, substituting the Moyal brackets for Poisson brackets in the phase-space analog of the von Neumann equation results in compressibility of the probability fluid, and thus violations of Liouville's theorem incompressibility. This, then, leads to concomitant difficulties in defining meaningful quantum trajectories. Examples SHO phase-space volume Consider an -particle system in three dimensions, and focus on only the evolution of particles. Within phase space, these particles occupy an infinitesimal volume given by We want to remain the same throughout time, so that is constant along the trajectories of the system. If we allow our particles to evolve by an infinitesimal time step , we see that each particle phase space location changes as where and denote and respectively, and we have only kept terms linear in . Extending this to our infinitesimal hypercube , the side lengths change as To find the new infinitesimal phase-space volume , we need the product of the above quantities. To first order in , we get the following: So far, we have yet to make any specifications about our system. Let us now specialize to the case of -dimensional isotropic harmonic oscillators. 
That is, each particle in our ensemble can be treated as a simple harmonic oscillator. The Hamiltonian for this system is given by By using Hamilton's equations with the above Hamiltonian we find that the term in parentheses above is identically zero, thus yielding From this we can find the infinitesimal volume of phase space: Thus we have ultimately found that the infinitesimal phase-space volume is unchanged, yielding demonstrating that Liouville's theorem holds for this system. The question remains of how the phase-space volume actually evolves in time. Above we have shown that the total volume is conserved, but said nothing about what it looks like. For a single particle we can see that its trajectory in phase space is given by the ellipse of constant . Explicitly, one can solve Hamilton's equations for the system and find where and denote the initial position and momentum of the -th particle. For a system of multiple particles, each one will have a phase-space trajectory that traces out an ellipse corresponding to the particle's energy. The frequency at which the ellipse is traced is given by the in the Hamiltonian, independent of any differences in energy. As a result, a region of phase space will simply rotate about the point with frequency dependent on . This can be seen in the animation above. Damped harmonic oscillator To see an example where Liouville's theorem does not apply, we can modify the equations of motion for the simple harmonic oscillator to account for the effects of friction or damping. Consider again the system of particles each in a -dimensional isotropic harmonic potential, the Hamiltonian for which is given in the previous example. This time, we add the condition that each particle experiences a frictional force , where is a positive constant dictating the amount of friction. As this is a non-conservative force, we need to extend Hamilton's equations as Unlike the equations of motion for the simple harmonic oscillator, these modified equations do not take the form of Hamilton's equations, and therefore we do not expect Liouville's theorem to hold. Instead, as depicted in the animation in this section, a generic phase space volume will shrink as it evolves under these equations of motion. To see this violation of Liouville's theorem explicitly, we can follow a very similar procedure to the undamped harmonic oscillator case, and we arrive again at Plugging in our modified Hamilton's equations, we find Calculating our new infinitesimal phase space volume, and keeping only first order in we find the following result: We have found that the infinitesimal phase-space volume is no longer constant, and thus the phase-space density is not conserved. As can be seen from the equation as time increases, we expect our phase-space volume to decrease to zero as friction affects the system. As for how the phase-space volume evolves in time, we will still have the constant rotation as in the undamped case. However, the damping will introduce a steady decrease in the radii of each ellipse. Again we can solve for the trajectories explicitly using Hamilton's equations, taking care to use the modified ones above. Letting for convenience, we find where the values and denote the initial position and momentum of the -th particle. As the system evolves the total phase-space volume will spiral in to the origin. This can be seen in the figure above. Remarks The Liouville equation is valid for both equilibrium and nonequilibrium systems. 
It is a fundamental equation of non-equilibrium statistical mechanics. The Liouville equation is integral to the proof of the fluctuation theorem from which the second law of thermodynamics can be derived. It is also the key component of the derivation of Green–Kubo relations for linear transport coefficients such as shear viscosity, thermal conductivity or electrical conductivity. Virtually any textbook on Hamiltonian mechanics, advanced statistical mechanics, or symplectic geometry will derive the Liouville theorem. In plasma physics, the Vlasov equation can be interpreted as Liouville's theorem, which reduces the task of solving the Vlasov equation to that of single particle motion. By using Liouville's theorem in this way with energy or magnetic moment conservation, for example, one can determine unknown fields using known particle distribution functions, or vice versa. This method is known as Liouville mapping. See also Boltzmann transport equation Reversible reference system propagation algorithm (r-RESPA) References Further reading External links Eponymous theorems of physics Hamiltonian mechanics Theorems in dynamical systems Statistical mechanics theorems
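The two worked examples above, the undamped and the damped harmonic oscillator, can be reproduced numerically. The sketch below is not from the article: it advects the corners of a small phase-space square with a fourth-order Runge-Kutta step and measures the enclosed area with the shoelace formula; the integrator, the friction coefficient and the other parameter values are illustrative choices.

```python
import numpy as np

def rk4_step(state, dt, deriv):
    """One classical 4th-order Runge-Kutta step for d(state)/dt = deriv(state)."""
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def polygon_area(points):
    """Shoelace formula for the (q, p) polygon spanned by the advected corners."""
    q, p = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(q, np.roll(p, -1)) - np.dot(p, np.roll(q, -1)))

m = omega = 1.0
gamma = 0.3   # friction coefficient for the damped case (illustrative value)

def sho(state):          # Hamiltonian flow: q' = p/m, p' = -m*omega^2*q
    q, p = state[:, 0], state[:, 1]
    return np.column_stack([p / m, -m * omega**2 * q])

def damped_sho(state):   # same flow with an added friction force -gamma*p
    q, p = state[:, 0], state[:, 1]
    return np.column_stack([p / m, -m * omega**2 * q - gamma * p])

# A small square of initial conditions around (q, p) = (1, 0), area 0.01.
corners0 = np.array([[1.0, 0.0], [1.1, 0.0], [1.1, 0.1], [1.0, 0.1]])

for name, deriv in [("undamped", sho), ("damped", damped_sho)]:
    corners = corners0.copy()
    for _ in range(2000):                 # integrate to t = 20 with dt = 0.01
        corners = rk4_step(corners, 0.01, deriv)
    print(name, "area:", polygon_area(corners))
# Expected: the undamped area stays ~0.01; the damped area shrinks
# (analytically by exp(-gamma * t), i.e. by about exp(-6) here).
```

For the Hamiltonian flow the divergence of the phase-space velocity field vanishes and the area is conserved, while the added friction term contributes a constant negative divergence, so the area decays exponentially, in line with the discussion of the two examples above.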
Liouville's theorem (Hamiltonian)
[ "Physics", "Mathematics" ]
2,390
[ "Theorems in dynamical systems", "Mathematical theorems", "Equations of physics", "Theoretical physics", "Classical mechanics", "Statistical mechanics theorems", "Eponymous theorems of physics", "Hamiltonian mechanics", "Theorems in mathematical physics", "Dynamical systems", "Statistical mechan...
312,304
https://en.wikipedia.org/wiki/Symplectomorphism
In mathematics, a symplectomorphism or symplectic map is an isomorphism in the category of symplectic manifolds. In classical mechanics, a symplectomorphism represents a transformation of phase space that is volume-preserving and preserves the symplectic structure of phase space, and is called a canonical transformation. Formal definition A diffeomorphism between two symplectic manifolds is called a symplectomorphism if where is the pullback of . The symplectic diffeomorphisms from to are a (pseudo-)group, called the symplectomorphism group (see below). The infinitesimal version of symplectomorphisms gives the symplectic vector fields. A vector field is called symplectic if Also, is symplectic if the flow of is a symplectomorphism for every . These vector fields build a Lie subalgebra of . Here, is the set of smooth vector fields on , and is the Lie derivative along the vector field Examples of symplectomorphisms include the canonical transformations of classical mechanics and theoretical physics, the flow associated to any Hamiltonian function, the map on cotangent bundles induced by any diffeomorphism of manifolds, and the coadjoint action of an element of a Lie group on a coadjoint orbit. Flows Any smooth function on a symplectic manifold gives rise, by definition, to a Hamiltonian vector field and the set of all such vector fields form a subalgebra of the Lie algebra of symplectic vector fields. The integration of the flow of a symplectic vector field is a symplectomorphism. Since symplectomorphisms preserve the symplectic 2-form and hence the symplectic volume form, Liouville's theorem in Hamiltonian mechanics follows. Symplectomorphisms that arise from Hamiltonian vector fields are known as Hamiltonian symplectomorphisms. Since the flow of a Hamiltonian vector field also preserves . In physics this is interpreted as the law of conservation of energy. If the first Betti number of a connected symplectic manifold is zero, symplectic and Hamiltonian vector fields coincide, so the notions of Hamiltonian isotopy and symplectic isotopy of symplectomorphisms coincide. It can be shown that the equations for a geodesic may be formulated as a Hamiltonian flow, see Geodesics as Hamiltonian flows. The group of (Hamiltonian) symplectomorphisms The symplectomorphisms from a manifold back onto itself form an infinite-dimensional pseudogroup. The corresponding Lie algebra consists of symplectic vector fields. The Hamiltonian symplectomorphisms form a subgroup, whose Lie algebra is given by the Hamiltonian vector fields. The latter is isomorphic to the Lie algebra of smooth functions on the manifold with respect to the Poisson bracket, modulo the constants. The group of Hamiltonian symplectomorphisms of usually denoted as . Groups of Hamiltonian diffeomorphisms are simple, by a theorem of Banyaga. They have natural geometry given by the Hofer norm. The homotopy type of the symplectomorphism group for certain simple symplectic four-manifolds, such as the product of spheres, can be computed using Gromov's theory of pseudoholomorphic curves. Comparison with Riemannian geometry Unlike Riemannian manifolds, symplectic manifolds are not very rigid: Darboux's theorem shows that all symplectic manifolds of the same dimension are locally isomorphic. In contrast, isometries in Riemannian geometry must preserve the Riemann curvature tensor, which is thus a local invariant of the Riemannian manifold. 
Moreover, every function H on a symplectic manifold defines a Hamiltonian vector field XH, which exponentiates to a one-parameter group of Hamiltonian diffeomorphisms. It follows that the group of symplectomorphisms is always very large, and in particular, infinite-dimensional. On the other hand, the group of isometries of a Riemannian manifold is always a (finite-dimensional) Lie group. Moreover, Riemannian manifolds with large symmetry groups are very special, and a generic Riemannian manifold has no nontrivial symmetries. Quantizations Representations of finite-dimensional subgroups of the group of symplectomorphisms (after ħ-deformations, in general) on Hilbert spaces are called quantizations. When the Lie group is the one defined by a Hamiltonian, it is called a "quantization by energy". The corresponding operator from the Lie algebra to the Lie algebra of continuous linear operators is also sometimes called the quantization; this is a more common way of looking at it in physics. Arnold conjecture A celebrated conjecture of Vladimir Arnold relates the minimum number of fixed points for a Hamiltonian symplectomorphism , in case is a compact symplectic manifold, to Morse theory (see ). More precisely, the conjecture states that has at least as many fixed points as the number of critical points that a smooth function on must have. Certain weaker version of this conjecture has been proved: when is "nondegenerate", the number of fixed points is bounded from below by the sum of Betti numbers of (see,). The most important development in symplectic geometry triggered by this famous conjecture is the birth of Floer homology (see ), named after Andreas Floer. In popular culture "Symplectomorphism" is a word in a crossword puzzle in episode 1 of the anime Spy × Family. See also References General . . See section 3.2. Symplectomorphism groups . . Symplectic topology Hamiltonian mechanics
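On the standard symplectic vector space (R^2n with the form built from the dq_i ∧ dp_i), a linear map is a symplectomorphism exactly when its matrix M satisfies M^T J M = J, where J is the matrix of the standard form. The snippet below is an illustrative check of this condition, not code from the article, applied to the time-t flow map of a one-degree-of-freedom harmonic oscillator, which is a Hamiltonian symplectomorphism as discussed above.

```python
import numpy as np

def standard_symplectic_form(n):
    """Matrix J of the standard form on R^(2n) in coordinates (q_1..q_n, p_1..p_n)."""
    J = np.zeros((2 * n, 2 * n))
    J[:n, n:] = np.eye(n)
    J[n:, :n] = -np.eye(n)
    return J

def is_linear_symplectomorphism(M, tol=1e-10):
    """A linear map M preserves the standard form exactly when M^T J M = J."""
    n = M.shape[0] // 2
    J = standard_symplectic_form(n)
    return np.allclose(M.T @ J @ M, J, atol=tol)

# Time-t flow of the 1-DOF harmonic oscillator (m = omega = 1): a rotation of
# the (q, p) plane, i.e. a Hamiltonian symplectomorphism.
t = 0.7
flow = np.array([[np.cos(t),  np.sin(t)],
                 [-np.sin(t), np.cos(t)]])
print(is_linear_symplectomorphism(flow))                    # True

# A map that rescales q but not p changes areas, so it is not symplectic.
print(is_linear_symplectomorphism(np.diag([2.0, 1.0])))     # False
```

In two dimensions the condition reduces to det M = 1, which is why the rotation passes and the anisotropic scaling in the last line fails.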
Symplectomorphism
[ "Physics", "Mathematics" ]
1,240
[ "Hamiltonian mechanics", "Theoretical physics", "Classical mechanics", "Dynamical systems" ]
313,267
https://en.wikipedia.org/wiki/Neutron%20diffraction
Neutron diffraction or elastic neutron scattering is the application of neutron scattering to the determination of the atomic and/or magnetic structure of a material. A sample to be examined is placed in a beam of thermal or cold neutrons to obtain a diffraction pattern that provides information of the structure of the material. The technique is similar to X-ray diffraction but due to their different scattering properties, neutrons and X-rays provide complementary information: X-Rays are suited for superficial analysis, strong x-rays from synchrotron radiation are suited for shallow depths or thin specimens, while neutrons having high penetration depth are suited for bulk samples. Instrumental and sample requirements The technique requires a source of neutrons. Neutrons are usually produced in a nuclear reactor or spallation source. At a research reactor, other components are needed, including a crystal monochromator (in the case of thermal neutrons), as well as filters to select the desired neutron wavelength. Some parts of the setup may also be movable. For the long-wavelength neutrons, crystals cannot be used and gratings are used instead as diffractive optical components. At a spallation source, the time of flight technique is used to sort the energies of the incident neutrons (higher energy neutrons are faster), so no monochromator is needed, but rather a series of aperture elements synchronized to filter neutron pulses with the desired wavelength. The technique is most commonly performed as powder diffraction, which only requires a polycrystalline powder. Single crystal work is also possible, but the crystals must be much larger than those that are used in single-crystal X-ray crystallography. It is common to use crystals that are about 1 mm3. The technique also requires a device that can detect the neutrons after they have been scattered. Summarizing, the main disadvantage to neutron diffraction is the requirement for a nuclear reactor. For single crystal work, the technique requires relatively large crystals, which are usually challenging to grow. The advantages to the technique are many - sensitivity to light atoms, ability to distinguish isotopes, absence of radiation damage, as well as a penetration depth of several cm Nuclear scattering Like all quantum particles, neutrons can exhibit wave phenomena typically associated with light or sound. Diffraction is one of these phenomena; it occurs when waves encounter obstacles whose size is comparable with the wavelength. If the wavelength of a quantum particle is short enough, atoms or their nuclei can serve as diffraction obstacles. When a beam of neutrons emanating from a reactor is slowed and selected properly by their speed, their wavelength lies near one angstrom (0.1 nanometer), the typical separation between atoms in a solid material. Such a beam can then be used to perform a diffraction experiment. Impinging on a crystalline sample, it will scatter under a limited number of well-defined angles, according to the same Bragg's law that describes X-ray diffraction. Neutrons and X-rays interact with matter differently. X-rays interact primarily with the electron cloud surrounding each atom. The contribution to the diffracted x-ray intensity is therefore larger for atoms with larger atomic number (Z). On the other hand, neutrons interact directly with the nucleus of the atom, and the contribution to the diffracted intensity depends on each isotope; for example, regular hydrogen and deuterium contribute differently. 
It is also often the case that light (low Z) atoms contribute strongly to the diffracted intensity, even in the presence of large Z atoms. The scattering length varies from isotope to isotope rather than linearly with the atomic number. An element like vanadium strongly scatters X-rays, but its nuclei hardly scatters neutrons, which is why it is often used as a container material. Non-magnetic neutron diffraction is directly sensitive to the positions of the nuclei of the atoms. The nuclei of atoms, from which neutrons scatter, are tiny. Furthermore, there is no need for an atomic form factor to describe the shape of the electron cloud of the atom and the scattering power of an atom does not fall off with the scattering angle as it does for X-rays. Diffractograms therefore can show strong, well-defined diffraction peaks even at high angles, particularly if the experiment is done at low temperatures. Many neutron sources are equipped with liquid helium cooling systems that allow data collection at temperatures down to 4.2 K. The superb high angle (i.e. high resolution) information means that the atomic positions in the structure can be determined with high precision. On the other hand, Fourier maps (and to a lesser extent difference Fourier maps) derived from neutron data suffer from series termination errors, sometimes so much that the results are meaningless. Magnetic scattering Although neutrons are uncharged, they carry a magnetic moment, and therefore interact with magnetic moments, including those arising from the electron cloud around an atom. Neutron diffraction can therefore reveal the microscopic magnetic structure of a material. Magnetic scattering does require an atomic form factor as it is caused by the much larger electron cloud around the tiny nucleus. The intensity of the magnetic contribution to the diffraction peaks will therefore decrease towards higher angles. Uses Neutron diffraction can be used to determine the static structure factor of gases, liquids or amorphous solids. Most experiments, however, aim at the structure of crystalline solids, making neutron diffraction an important tool of crystallography. Neutron diffraction is closely related to X-ray powder diffraction. In fact, the single crystal version of the technique is less commonly used because currently available neutron sources require relatively large samples and large single crystals are hard or impossible to come by for most materials. Future developments, however, may well change this picture. Because the data is typically a 1D powder diffractogram they are usually processed using Rietveld refinement. In fact the latter found its origin in neutron diffraction (at Petten in the Netherlands) and was later extended for use in X-ray diffraction. One practical application of elastic neutron scattering/diffraction is that the lattice constant of metals and other crystalline materials can be very accurately measured. Together with an accurately aligned micropositioner a map of the lattice constant through the metal can be derived. This can easily be converted to the stress field experienced by the material. This has been used to analyse stresses in aerospace and automotive components to give just two examples. The high penetration depth permits measuring residual stresses in bulk components as crankshafts, pistons, rails, gears. This technique has led to the development of dedicated stress diffractometers, such as the ENGIN-X instrument at the ISIS neutron source. 
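As a rough illustration of the residual-stress application just described, the snippet below uses Bragg's law nλ = 2d sin θ to locate a diffraction peak and converts a small shift of the measured lattice spacing into an elastic strain ε = (d − d0)/d0. The wavelength and spacings are illustrative numbers rather than values from the article, and converting the strain to stress would additionally require the material's elastic constants.

```python
import math

def bragg_angle_deg(wavelength, d_spacing, order=1):
    """Scattering angle theta (degrees) from Bragg's law n*lambda = 2*d*sin(theta)."""
    s = order * wavelength / (2.0 * d_spacing)
    if not 0 < s <= 1:
        raise ValueError("no diffraction peak for these parameters")
    return math.degrees(math.asin(s))

def lattice_strain(d_measured, d_reference):
    """Elastic strain inferred from the shift of the measured lattice spacing."""
    return (d_measured - d_reference) / d_reference

wavelength = 1.8e-10       # m, a typical thermal-neutron wavelength (~1.8 angstrom)
d0 = 2.03e-10              # m, illustrative unstrained lattice spacing
d_loaded = 2.0315e-10      # m, spacing measured in a stressed region of the part

print("Bragg angle (unstrained):", round(bragg_angle_deg(wavelength, d0), 3), "deg")
print("Bragg angle (loaded):    ", round(bragg_angle_deg(wavelength, d_loaded), 3), "deg")
print("strain:", lattice_strain(d_loaded, d0))   # ~7.4e-4, i.e. roughly 740 microstrain
```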
Neutron diffraction can also be employed to give insight into the 3D structure any material that diffracts. Another use is for the determination of the solvation number of ion pairs in electrolytes solutions. The magnetic scattering effect has been used since the establishment of the neutron diffraction technique to quantify magnetic moments in materials, and study the magnetic dipole orientation and structure. One of the earliest applications of neutron diffraction was in the study of magnetic dipole orientations in antiferromagnetic transition metal oxides such as manganese, iron, nickel, and cobalt oxides. These experiments, first performed by Clifford Shull, were the first to show the existence of the antiferromagnetic arrangement of magnetic dipoles in a material structure. Now, neutron diffraction continues to be used to characterize newly developed magnetic materials. Hydrogen, null-scattering and contrast variation Neutron diffraction can be used to establish the structure of low atomic number materials like proteins and surfactants much more easily with lower flux than at a synchrotron radiation source. This is because some low atomic number materials have a higher cross section for neutron interaction than higher atomic weight materials. One major advantage of neutron diffraction over X-ray diffraction is that the latter is rather insensitive to the presence of hydrogen (H) in a structure, whereas the nuclei 1H and 2H (i.e. Deuterium, D) are strong scatterers for neutrons. The greater scattering power of protons and deuterons means that the position of hydrogen in a crystal and its thermal motions can be determined with greater precision by neutron diffraction. The structures of metal hydride complexes, e.g., Mg2FeH6 have been assessed by neutron diffraction. The neutron scattering lengths bH = −3.7406(11) fm and bD = 6.671(4) fm, for H and D respectively, have opposite sign, which allows the technique to distinguish them. In fact there is a particular isotope ratio for which the contribution of the element would cancel, this is called null-scattering. It is undesirable to work with the relatively high concentration of H in a sample. The scattering intensity by H-nuclei has a large inelastic component, which creates a large continuous background that is more or less independent of scattering angle. The elastic pattern typically consists of sharp Bragg reflections if the sample is crystalline. They tend to drown in the inelastic background. This is even more serious when the technique is used for the study of liquid structure. Nevertheless, by preparing samples with different isotope ratios, it is possible to vary the scattering contrast enough to highlight one element in an otherwise complicated structure. The variation of other elements is possible but usually rather expensive. Hydrogen is inexpensive and particularly interesting, because it plays an exceptionally large role in biochemical structures and is difficult to study structurally in other ways. History Neutron was discovered around early 1930s, and diffraction was first observed in 1936 by two groups, von Halban and Preiswerk and by Mitchell and Powers. In 1944, Ernest O. Wollan, with a background in X-ray scattering from his PhD work under Arthur Compton, recognized the potential for applying thermal neutrons from the newly operational X-10 nuclear reactor to crystallography. Joined by Clifford G. Shull they developed neutron diffraction throughout the 1940s. 
The first neutron diffraction experiments were carried out in 1945 by Ernest O. Wollan using the Graphite Reactor at Oak Ridge. He was joined shortly thereafter (June 1946) by Clifford Shull, and together they established the basic principles of the technique, and applied it successfully to many different materials, addressing problems like the structure of ice and the microscopic arrangements of magnetic moments in materials. For this achievement, Shull was awarded one half of the 1994 Nobel Prize in Physics. (Wollan died in 1984). (The other half of the 1994 Nobel Prize for Physics went to Bert Brockhouse for development of the inelastic scattering technique at the Chalk River facility of AECL. This also involved the invention of the triple axis spectrometer). The delay between the achieved work (1946) and the Nobel Prize awarded to Brockhouse and Shull (1994) brings them close to the delay between the invention by Ernst Ruska of the electron microscope (1933) - also in the field of particle optics - and his own Nobel prize (1986). This in turn is near to the record of 55 years between the discoveries of Peyton Rous and his award of the Nobel Prize in 1966. See also Crystallography Crystallographic database Electron diffraction Grazing incidence diffraction Inelastic neutron scattering X-ray diffraction computed tomography References Further reading External links National Institute of Standards and Technology Center for Neutron Research From Bragg’s law to neutron diffraction Integrated Infrastructure Initiative for Neutron Scattering and Muon Spectroscopy (NMI3) - a European consortium of 18 partner organisations from 12 countries, including all major facilities in the fields of neutron scattering and muon spectroscopy Frank Laboratory of Neutron Physics of Joint Institute for Nuclear Research (JINR) IAEA neutron beam instrument database Diffraction Neutron scattering
Neutron diffraction
[ "Physics", "Chemistry", "Materials_science" ]
2,458
[ "Spectrum (physical sciences)", "Neutron scattering", "Crystallography", "Diffraction", "Scattering", "Spectroscopy" ]
313,416
https://en.wikipedia.org/wiki/Spectrum%20analyzer
A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. The primary use is to measure the power of the spectrum of known and unknown signals. The input signal that most common spectrum analyzers measure is electrical; however, spectral compositions of other signals, such as acoustic pressure waves and optical light waves, can be considered through the use of an appropriate transducer. Spectrum analyzers for other types of signals also exist, such as optical spectrum analyzers which use direct optical techniques such as a monochromator to make measurements. By analyzing the spectra of electrical signals, dominant frequency, power, distortion, harmonics, bandwidth, and other spectral components of a signal can be observed that are not easily detectable in time domain waveforms. These parameters are useful in the characterization of electronic devices, such as wireless transmitters. The display of a spectrum analyzer has frequency displayed on the horizontal axis and the amplitude on the vertical axis. To the casual observer, a spectrum analyzer looks like an oscilloscope, which plots amplitude on the vertical axis but time on the horizontal axis. In fact, some lab instruments can function either as an oscilloscope or a spectrum analyzer. History The first spectrum analyzers, in the 1960s, were swept-tuned instruments. Following the discovery of the fast Fourier transform (FFT) in 1965, the first FFT-based analyzers were introduced in 1967. Today, there are three basic types of analyzer: the swept-tuned spectrum analyzer, the vector signal analyzer, and the real-time spectrum analyzer. Types Spectrum analyzer types are distinguished by the methods used to obtain the spectrum of a signal. There are swept-tuned and fast Fourier transform (FFT) based spectrum analyzers: A swept-tuned analyzer uses a superheterodyne receiver to down-convert a portion of the input signal spectrum to the center frequency of a narrow band-pass filter, whose instantaneous output power is recorded or displayed as a function of time. By sweeping the receiver's center-frequency (using a voltage-controlled oscillator) through a range of frequencies, the output is also a function of frequency. But while the sweep centers on any particular frequency, it may be missing short-duration events at other frequencies. An FFT analyzer computes a time-sequence of periodograms. FFT refers to a particular mathematical algorithm used in the process. This is commonly used in conjunction with a receiver and analog-to-digital converter. As above, the receiver reduces the center-frequency of a portion of the input signal spectrum, but the portion is not swept. The purpose of the receiver is to reduce the sampling rate that the analyzer must contend with. With a sufficiently low sample-rate, FFT analyzers can process all the samples (100% duty-cycle), and are therefore able to avoid missing short-duration events. Form factor Spectrum analyzers tend to fall into four form factors: benchtop, portable, handheld and networked. Benchtop This form factor is useful for applications where the spectrum analyzer can be plugged into AC power, which generally means in a lab environment or production/manufacturing area. Bench top spectrum analyzers have historically offered better performance and specifications than the portable or handheld form factor. Bench top spectrum analyzers normally have multiple fans (with associated vents) to dissipate heat produced by the processor. 
Due to their architecture, bench top spectrum analyzers are typically heavier than the other form factors. Some bench top spectrum analyzers offer optional battery packs, allowing them to be used away from AC power. This type of analyzer is often referred to as a "portable" spectrum analyzer. Portable This form factor is useful for any applications where the spectrum analyzer needs to be taken outside to make measurements or simply carried while in use. Attributes that contribute to a useful portable spectrum analyzer include: Optional battery-powered operation to allow the user to move freely outside. Clearly viewable display to allow the screen to be read in bright sunlight, darkness or dusty conditions. Light weight. Handheld This form factor is useful for any application where the spectrum analyzer needs to be very light and small. Handheld analyzers usually offer a limited capability relative to larger systems. Attributes that contribute to a useful handheld spectrum analyzer include: Very low power consumption. Battery-powered operation while in the field to allow the user to move freely outside. Very small size. Light weight. Networked This form factor does not include a display and these devices are designed to enable a new class of geographically-distributed spectrum monitoring and analysis applications. The key attribute is the ability to connect the analyzer to a network and monitor such devices across a network. While many spectrum analyzers have an Ethernet port for control, they typically lack efficient data transfer mechanisms and are too bulky or expensive to be deployed in such a distributed manner. Key applications for such devices include RF intrusion detection systems for secure facilities where wireless signaling is prohibited. Cellular operators also use such analyzers to remotely monitor interference in licensed spectral bands. The distributed nature of such devices enables geo-location of transmitters, spectrum monitoring for dynamic spectrum access and many other such applications. Key attributes of such devices include: Network-efficient data transfer Low power consumption The ability to synchronize data captures across a network of analyzers Low cost to enable mass deployment. Theory of operation Swept-tuned As discussed above in types, a swept-tuned spectrum analyzer down-converts a portion of the input signal spectrum to the center frequency of a band-pass filter by sweeping the voltage-controlled oscillator through a range of frequencies, enabling the consideration of the full frequency range of the instrument. The bandwidth of the band-pass filter dictates the resolution bandwidth, which is related to the minimum bandwidth detectable by the instrument. As demonstrated by the animation to the right, the smaller the bandwidth, the more spectral resolution. However, there is a trade-off between how quickly the display can update the full frequency span under consideration and the frequency resolution, which is relevant for distinguishing frequency components that are close together. For a swept-tuned architecture, this relation for sweep time is useful: ST = k · Span / RBW², where ST is the sweep time in seconds, k is a proportionality constant, Span is the frequency range under consideration in hertz, and RBW is the resolution bandwidth in hertz. Sweeping too fast, however, causes a drop in displayed amplitude and a shift in the displayed frequency.
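The sweep-time relation can be illustrated numerically. In the minimal sketch below, the proportionality constant k = 2.5 is only an assumed, instrument-dependent value; real analyzers apply further corrections.

```python
# Sweep-time estimate for a swept-tuned spectrum analyzer: ST = k * Span / RBW^2.
# The proportionality constant k (2.5 here) is an assumed, instrument-dependent value.

def sweep_time(span_hz: float, rbw_hz: float, k: float = 2.5) -> float:
    """Return the approximate sweep time in seconds."""
    return k * span_hz / (rbw_hz ** 2)

print(sweep_time(1e9, 1e4))   # 1 GHz span, 10 kHz RBW -> 25.0 s
print(sweep_time(1e9, 1e3))   # narrowing the RBW tenfold -> 2500.0 s
```

The quadratic dependence on RBW is why narrowing the resolution bandwidth for finer spectral detail makes swept measurements dramatically slower.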
The animation also contains both up- and down-converted spectra, which is due to a frequency mixer producing both sum and difference frequencies. The local oscillator feedthrough is due to the imperfect isolation from the IF signal path in the mixer. For very weak signals, a pre-amplifier is used, although harmonic and intermodulation distortion may lead to the creation of new frequency components that were not present in the original signal. FFT-based With an FFT-based spectrum analyzer, the frequency resolution is Δf = 1/T, the inverse of the time T over which the waveform is measured and Fourier transformed. With Fourier transform analysis in a digital spectrum analyzer, it is necessary to sample the input signal with a sampling frequency that is at least twice the bandwidth of the signal, due to the Nyquist limit. A Fourier transform will then produce a spectrum containing all frequencies from zero to half the sampling frequency. This can place considerable demands on the required analog-to-digital converter and processing power for the Fourier transform, making FFT-based spectrum analyzers limited in frequency range. Hybrid superheterodyne-FFT Since FFT-based analyzers are only capable of considering narrow bands, one technique is to combine swept and FFT analysis for consideration of wide and narrow spans. This technique allows for faster sweep time. This method is made possible by first down converting the signal, then digitizing the intermediate frequency and using superheterodyne or FFT techniques to acquire the spectrum. One benefit of digitizing the intermediate frequency is the ability to use digital filters, which have a range of advantages over analog filters such as near perfect shape factors and improved filter settling time. Also, for consideration of narrow spans, the FFT can be used to increase sweep speed without distorting the displayed spectrum. Realtime FFT A realtime spectrum analyser does not have any blind time—up to some maximum span, often called the "realtime bandwidth". The analyser is able to sample the incoming RF spectrum in the time domain and convert the information to the frequency domain using the FFT process. FFTs are processed in parallel, gapless and overlapped so there are no gaps in the calculated RF spectrum and no information is missed. Online realtime and offline realtime In a sense, any spectrum analyzer that has vector signal analyzer capability is a realtime analyzer. It samples data fast enough to satisfy the Nyquist sampling theorem and stores the data in memory for later processing. This kind of analyser is only realtime for the amount of data (capture time) it can store in memory, and it still produces gaps in the spectrum and in the results during processing time. FFT overlapping Minimizing distortion of information is important in all spectrum analyzers. The FFT process applies windowing techniques to improve the output spectrum by producing fewer side lobes. The effect of windowing may also reduce the level of a signal where it is captured on the boundary between one FFT and the next. For this reason FFTs in a realtime spectrum analyzer are overlapped. Overlapping rate is approximately 80%. An analyzer that utilises a 1024-point FFT process will re-use approximately 819 samples from the previous FFT process. Minimum signal detection time This is related to the sampling rate of the analyser and the FFT rate. It is also important for the realtime spectrum analyzer to give good level accuracy.
Example: an analyser's realtime bandwidth (the maximum RF span that can be processed in realtime) determines the complex sample rate that is needed. If an FFT calculation is produced every 4 μs while each full spectrum (one FFT frame) spans approximately 20 μs of samples, successive FFTs overlap, and this gives the overlap rate: (20 μs − 4 μs) / 20 μs = 80%. Persistence Realtime spectrum analyzers are able to produce much more information for users to examine the frequency spectrum in more detail. A normal swept spectrum analyzer would produce max peak, min peak displays for example, but a realtime spectrum analyzer is able to plot all calculated FFTs over a given period of time with the added colour-coding which represents how often a signal appears. For example, this image shows the difference between how a spectrum is displayed in a normal swept spectrum view and using a "Persistence" view on a realtime spectrum analyzer. Hidden signals Realtime spectrum analyzers are able to see signals hidden behind other signals. This is possible because no information is missed and the display to the user is the output of FFT calculations. An example of this can be seen on the right. Typical functionality Center frequency and span In a typical spectrum analyzer there are options to set the start, stop, and center frequency. The frequency halfway between the stop and start frequencies on a spectrum analyzer display is known as the center frequency. This is the frequency that is in the middle of the display's frequency axis. Span specifies the range between the start and stop frequencies. These two parameters allow for adjustment of the display within the frequency range of the instrument to enhance visibility of the spectrum measured. Resolution bandwidth As discussed in the operation section, the resolution bandwidth filter or RBW filter is the bandpass filter in the IF path. It is the bandwidth of the RF chain before the detector (power measurement device). It determines the RF noise floor and how close two signals can be and still be resolved by the analyzer into two separate peaks. Adjusting the bandwidth of this filter allows for the discrimination of signals with closely spaced frequency components, while also changing the measured noise floor. Decreasing the bandwidth of an RBW filter decreases the measured noise floor and vice versa. This is due to higher RBW filters passing more frequency components through to the envelope detector than lower-bandwidth RBW filters; therefore, a higher RBW causes a higher measured noise floor. Video bandwidth The video bandwidth filter or VBW filter is the low-pass filter directly after the envelope detector. It is the bandwidth of the signal chain after the detector. Averaging or peak detection then refers to how the digital storage portion of the device records samples—it takes several samples per time step and stores only one sample, either the average of the samples or the highest one. The video bandwidth determines the capability to discriminate between two different power levels. This is because a narrower VBW will remove noise in the detector output. This filter is used to "smooth" the display by removing noise from the envelope. Similar to the RBW, the VBW affects the sweep time of the display if the VBW is less than the RBW.
If VBW is less than RBW, this relation for sweep time is useful: tsweep = k(f2 − f1) / (RBW × VBW). Here tsweep is the sweep time, k is a dimensionless proportionality constant, f2 − f1 is the frequency range of the sweep, RBW is the resolution bandwidth, and VBW is the video bandwidth. Detector With the advent of digitally based displays, some modern spectrum analyzers use analog-to-digital converters to sample spectrum amplitude after the VBW filter. Since displays have a discrete number of points, the frequency span measured is also digitised. Detectors are used in an attempt to adequately map the correct signal power to the appropriate frequency point on the display. There are in general three types of detectors: sample, peak, and average. Sample detection – sample detection simply uses the midpoint of a given interval as the display point value. While this method does represent random noise well, it does not always capture all sinusoidal signals. Peak detection – peak detection uses the maximum measured point within a given interval as the display point value. This ensures that the maximum sinusoid is measured within the interval; however, smaller sinusoids within the interval may not be measured. Also, peak detection does not give a good representation of random noise. Average detection – average detection uses all of the data points within the interval to consider the display point value. This is done by power (rms) averaging, voltage averaging, or log-power averaging. Displayed average noise level The Displayed Average Noise Level (DANL) is just what it says it is—the average noise level displayed on the analyzer. This can either be with a specific resolution bandwidth (e.g. −120 dBm @ 1 kHz RBW), or normalized to 1 Hz (usually in dBm/Hz), e.g. −150 dBm/Hz. This is also called the sensitivity of the spectrum analyzer. If a signal with a level equal to the average noise level is fed to the input, it will be displayed about 3 dB above the noise floor. To increase the sensitivity of the spectrum analyzer, a preamplifier with a lower noise figure may be connected at the input of the spectrum analyzer. Radio-frequency uses Spectrum analyzers are widely used to measure the frequency response, noise and distortion characteristics of all kinds of radio-frequency (RF) circuitry, by comparing the input and output spectra. For example, in RF mixers, a spectrum analyzer is used to find the levels of third-order intermodulation products and conversion loss. In RF oscillators, a spectrum analyzer is used to find the levels of different harmonics. In telecommunications, spectrum analyzers are used to determine occupied bandwidth and track interference sources. For example, cell planners use this equipment to determine interference sources in the GSM frequency bands and UMTS frequency bands. In EMC testing, a spectrum analyzer is used for basic precompliance testing; however, it cannot be used for full testing and certification. Instead, an EMI receiver is used. A spectrum analyzer is used to determine whether a wireless transmitter is working according to defined standards for purity of emissions. Output signals at frequencies other than the intended communications frequency appear as vertical lines (pips) on the display. A spectrum analyzer is also used to determine, by direct observation, the bandwidth of a digital or analog signal. A spectrum analyzer interface is a device that connects to a wireless receiver or a personal computer to allow visual detection and analysis of electromagnetic signals over a defined band of frequencies.
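The three detector types described above can be sketched in code. The minimal example below uses synthetic, noise-like power samples (not tied to any particular instrument) and shows how one displayed point per bucket of measurements might be formed.

```python
import numpy as np

# Minimal sketch of the sample, peak and average detectors.
# Each display point summarizes one "bucket" of measured power samples
# (linear power in milliwatts; the data here are synthetic).

def detect(bucket: np.ndarray, mode: str) -> float:
    if mode == "sample":            # one representative (middle) sample
        return float(bucket[len(bucket) // 2])
    if mode == "peak":              # largest sample in the bucket
        return float(bucket.max())
    if mode == "average":           # power average of all samples
        return float(bucket.mean())
    raise ValueError(f"unknown detector mode: {mode}")

rng = np.random.default_rng(0)
bucket = rng.exponential(scale=1.0, size=100)   # noise-like power samples
for mode in ("sample", "peak", "average"):
    print(mode, 10 * np.log10(detect(bucket, mode)), "dBm")
```

For noise-like signals the peak detector reads several dB higher than the average detector, which is one reason the choice of detector matters when comparing measurements.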
Monitoring a defined band of frequencies with such an interface is called panoramic reception, and it is used to determine the frequencies of sources of interference to wireless networking equipment, such as Wi-Fi and wireless routers. Spectrum analyzers can also be used to assess RF shielding. RF shielding is of particular importance for the siting of a magnetic resonance imaging machine since stray RF fields would result in artifacts in an MR image. Audio-frequency uses Spectrum analysis can be used at audio frequencies to analyse the harmonics of an audio signal. A typical application is to measure the distortion of a nominally sinewave signal; a very-low-distortion sinewave is used as the input to equipment under test, and a spectrum analyser can examine the output, which will have added distortion products, and determine the percentage distortion at each harmonic of the fundamental. Such analysers were at one time described as "wave analysers". Analysis can be carried out by a general-purpose digital computer with a sound card selected for suitable performance and appropriate software. Instead of using a low-distortion sinewave, the input can be subtracted from the output, attenuated and phase-corrected, to give only the added distortion and noise, which can be analysed. An alternative technique, total harmonic distortion measurement, cancels out the fundamental with a notch filter and measures the total remaining signal, which is total harmonic distortion plus noise; it does not give the harmonic-by-harmonic detail of an analyser. Spectrum analyzers are also used by audio engineers to assess their work. In these applications, the spectrum analyzer will show volume levels of frequency bands across the typical range of human hearing, rather than displaying a wave. In live sound applications, engineers can use them to pinpoint feedback. Optical spectrum analyzer An optical spectrum analyzer uses reflective or refractive techniques to separate out the wavelengths of light. An electro-optical detector is used to measure the intensity of the light, which is then normally displayed on a screen in a similar manner to a radio- or audio-frequency spectrum analyzer. The input to an optical spectrum analyzer may be simply via an aperture in the instrument's case, an optical fiber or an optical connector to which a fiber-optic cable can be attached. Different techniques exist for separating out the wavelengths. One method is to use a monochromator, for example a Czerny–Turner design, with an optical detector placed at the output slit. As the grating in the monochromator moves, bands of different frequencies (colors) are 'seen' by the detector, and the resulting signal can then be plotted on a display. More precise measurements (down to the MHz level in the optical spectrum) can be made with a scanning Fabry–Pérot interferometer along with analog or digital control electronics, which sweep the resonant frequency of an optically resonant cavity using a voltage ramp applied to a piezoelectric motor that varies the distance between two highly reflective mirrors. A sensitive photodiode embedded in the cavity provides an intensity signal, which is plotted against the ramp voltage to produce a visual representation of the optical power spectrum. The frequency response of optical spectrum analyzers tends to be relatively limited, e.g. to the near-infrared, depending on the intended purpose, although (somewhat) wider-bandwidth general purpose instruments are available.
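The audio-frequency harmonic analysis described above can be sketched with an FFT on a general-purpose computer. In the minimal example below, the 1 kHz test tone and its harmonic levels are invented purely for illustration.

```python
import numpy as np

# Synthesize a nominally 1 kHz sine with small 2nd and 3rd harmonics (invented
# levels), then read the per-harmonic levels from an FFT, as an audio spectrum
# analyser would.
fs = 48_000                                   # sample rate in Hz
t = np.arange(fs) / fs                        # one second of signal
x = (np.sin(2 * np.pi * 1000 * t)
     + 0.010 * np.sin(2 * np.pi * 2000 * t)   # second harmonic
     + 0.003 * np.sin(2 * np.pi * 3000 * t))  # third harmonic

spectrum = np.abs(np.fft.rfft(x)) / (len(x) / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

for harmonic in (1, 2, 3):
    idx = np.argmin(np.abs(freqs - 1000 * harmonic))
    print(f"{1000 * harmonic} Hz: {20 * np.log10(spectrum[idx]):.1f} dB")
```

Because the tone frequency divides the sample rate exactly, each harmonic falls on a single FFT bin and no window is needed; a practical measurement would apply a window and account for spectral leakage.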
Vibration spectrum analyzer A vibration spectrum analyzer allows analysis of vibration amplitudes at various component frequencies. In this way, vibration occurring at specific frequencies can be identified and tracked. Since particular machinery problems generate vibration at specific frequencies, machinery faults can be detected or diagnosed. Vibration spectrum analyzers use the signal from different types of sensors, such as accelerometers, velocity transducers and proximity sensors. The use of a vibration spectrum analyzer in machine condition monitoring allows detection and identification of machine faults such as rotor imbalance, shaft misalignment, mechanical looseness and bearing defects, among others. Vibration analysis can also be used in structures to identify structural resonances or to perform modal analysis. See also Electrical measurements Electromagnetic spectrum Measuring receiver Radio-frequency sweep Spectral leakage Spectral music Radio spectrum scope Stationary-wave integrated Fourier-transform spectrometry References Footnotes External links Sri Welaratna, "", Sound and Vibration (January 1997, 30th anniversary issue). A historical review of hardware spectrum-analyzer devices. Electronic test equipment Laboratory equipment Radio technology Signal processing Spectroscopy Scattering Acoustics Spectrum (physical sciences)
Spectrum analyzer
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
4,312
[ "Physical phenomena", "Computer engineering", "Radio technology", "Measuring instruments", "Spectroscopy", "Instrumental analysis", "Scattering", "Particle physics", "Nuclear physics", "Information and communications technology", "Telecommunications engineering", "Molecular physics", "Spectr...
313,845
https://en.wikipedia.org/wiki/Formal%20concept%20analysis
In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s. Formal concept analysis finds practical application in fields including data mining, text mining, machine learning, knowledge management, semantic web, software development, chemistry and biology. Overview and history The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that the extent A consists of all objects that share the attributes in B, and dually the intent B consists of all attributes shared by the objects in A. In this way, formal concept analysis formalizes the semantic notions of extension and intension. The formal concepts of any formal context can—as explained below—be ordered in a hierarchy called more formally the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which then may be helpful for understanding the data. Often however these lattices get too large for visualization. Then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure which is easier to interpret. The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Other previous approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic. Motivation and philosophical background In his article "Restructuring Lattice Theory" (1982), initiating formal concept analysis as a mathematical discipline, Wille starts from a discontent with the current lattice theory and pure mathematics in general: The production of theoretical results—often achieved by "elaborate mental gymnastics"—was impressive, but the connections between neighboring domains, even between parts of a theory, were getting weaker.
This aim traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for restructuring sciences in view of better teaching and in order to make sciences mutually available and more generally (i.e. also without specialized knowledge) open to critique. Hence, by its origins formal concept analysis aims at interdisciplinarity and democratic control of research. It corrects the starting point of lattice theory during the development of formal logic in the 19th century. Then—and later in model theory—a concept as a unary predicate had been reduced to its extent. Now again, the philosophy of concepts should become less abstract by considering the intent. Hence, formal concept analysis is oriented towards the categories extension and intension of linguistics and classical conceptual logic. Formal concept analysis aims at the clarity of concepts according to Charles S. Peirce's pragmatic maxim by unfolding observable, elementary properties of the subsumed objects. In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, by the triad of concept, judgement and conclusion. Mathematics is an abstraction of logic, develops patterns of possible realities and therefore may support rational communication. On this background, Wille formulates his definition of formal concept analysis. Example The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes. For the purpose here it has been simplified. The data table represents a formal context; the line diagram next to it shows its concept lattice. Formal definitions follow below. The above line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The lines allow the subconcept-superconcept hierarchy to be read off. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute. In the diagram shown, e.g. the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural. The original formal context can be reconstructed from the labelled diagram, as well as the formal concepts. The extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept. The intent consists of those attributes to which there is an ascending path from that concept circle (in the diagram). In this diagram the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea. Formal contexts and concepts A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation operators as follows: A′ = {m ∈ M | (g, m) ∈ I for all g ∈ A}, i.e., the set of all attributes shared by all objects from A, and dually B′ = {g ∈ G | (g, m) ∈ I for all m ∈ B}, i.e., the set of all objects sharing all attributes from B. Applying either derivation operator and then the other constitutes two closure operators: A ↦ A″ = (A′)′ for A ⊆ G (extent closure), and B ↦ B″ = (B′)′ for B ⊆ M (intent closure). The derivation operators define a Galois connection between sets of objects and of attributes.
This is why in French a concept lattice is sometimes called a treillis de Galois (Galois lattice). With these derivation operators, Wille gave an elegant definition of a formal concept: a pair (A, B) is a formal concept of a context provided that: A ⊆ G, B ⊆ M, A′ = B, and B′ = A. Equivalently and more intuitively, (A, B) is a formal concept precisely when: every object in A has every attribute in B, for every object in G that is not in A, there is some attribute in B that the object does not have, for every attribute in M that is not in B, there is some object in A that does not have that attribute. For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry ki,j equals 1 if "object i has attribute j." In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose elements equal 1. It is however misleading to consider a formal context as Boolean, because the negated incidence ("object g does not have attribute m") is not concept forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence. Concept lattice of a formal context The concepts (Ai, Bi) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ⊆ A2. Equivalently, (A1, B1) ≤ (A2, B2) whenever B1 ⊇ B2. In this order, every set of formal concepts has a greatest common subconcept, or meet. Its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes which all objects of that set of concepts have. These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism). Attribute values and negation Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context. The method is called conceptual scaling. The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., with (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation. But pairs of attributes which are negations of each other often naturally occur, for example in contexts derived from conceptual scaling. For possible negations of formal concepts see the section concept algebras below. Implications An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G, M, I) is a formal context and A, B are subsets of the set M of attributes (i.e., A, B ⊆ M), then the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis, an irredundant set of implications from which all valid implications can be derived by the natural inference (Armstrong rules).
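A minimal Python sketch of the derivation operators and a brute-force enumeration of formal concepts is shown below. The tiny context (three objects, with attribute assignments simplified and partly invented from the bodies-of-water example) is for illustration only; real implementations use the dedicated algorithms discussed later.

```python
from itertools import combinations

# Tiny illustrative formal context: object -> set of attributes (incidence I).
context = {
    "puddle": {"temporary", "stagnant", "natural"},
    "lake":   {"stagnant", "natural", "constant"},
    "canal":  {"running", "constant"},
}
objects = set(context)
attributes = set().union(*context.values())

def intent(A):   # A' : attributes shared by all objects in A
    return set.intersection(*(context[g] for g in A)) if A else set(attributes)

def extent(B):   # B' : objects having all attributes in B
    return {g for g in objects if B <= context[g]}

# (A, B) is a formal concept iff A' = B and B' = A; every concept arises as
# (A'', A') for some subset A of objects, so enumerating subsets suffices here.
concepts = set()
for r in range(len(objects) + 1):
    for A in combinations(sorted(objects), r):
        B = intent(set(A))
        concepts.add((frozenset(extent(B)), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), sorted(B))
```

Each printed pair is a formal concept, with the extent growing as the intent shrinks, mirroring the ordering of the concept lattice described above.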
The canonical basis is used in attribute exploration, a knowledge acquisition method based on implications. Arrow relations Formal concept analysis has elaborate mathematical foundations, making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined as follows: for an object g ∈ G and an attribute m ∈ M with (g, m) ∉ I, an up arrow relation g ↗ m and, dually, a down arrow relation g ↙ m are defined. Since only non-incident object-attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice. Extensions of the theory Triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence (g, m, b) then expresses that the object g has the attribute m under the condition b. Although triadic concepts can be defined in analogy to the formal concepts above, the theory of the trilattices formed by them is much less developed than that of concept lattices, and seems to be difficult. Voutsadakis has studied the n-ary case. Fuzzy concept analysis: Extensive work has been done on a fuzzy version of formal concept analysis. Concept algebras: Modelling negation of formal concepts is somewhat problematic because the complement of a formal concept (A, B) is in general not a concept. However, since the concept lattice is complete one can consider the join (A, B)Δ of all concepts (C, D) that satisfy C ∩ A = ∅; or dually the meet (A, B)𝛁 of all concepts satisfying D ∩ B = ∅. These two operations are known as weak negation and weak opposition, respectively. This can be expressed in terms of the derivation operators. Weak negation can be written as (A, B)Δ = ((G \ A)″, (G \ A)′), and weak opposition can be written as (A, B)𝛁 = ((M \ B)′, (M \ B)″). The concept lattice equipped with the two additional operations Δ and 𝛁 is known as the concept algebra of a context. Concept algebras generalize power sets. Weak negation on a concept lattice L is a weak complementation, i.e. an order-reversing map which satisfies certain additional axioms. Weak opposition is a dual weak complementation. A (bounded) lattice such as a concept algebra, which is equipped with a weak complementation and a dual weak complementation, is called a weakly dicomplemented lattice. Weakly dicomplemented lattices generalize distributive orthocomplemented lattices, i.e. Boolean algebras. Temporal concept analysis Temporal concept analysis (TCA) is an extension of Formal Concept Analysis (FCA) aiming at a conceptual description of temporal phenomena. It provides animations in concept lattices obtained from data about changing objects. It offers a general way of understanding change of concrete or abstract objects in continuous, discrete or hybrid space and time. TCA applies conceptual scaling to temporal databases. In the simplest case TCA considers objects that change in time like a particle in physics, which, at each time, is at exactly one place. That happens in those temporal data where the attributes 'temporal object' and 'time' together form a key of the database. Then the state (of a temporal object at a time in a view) is formalized as a certain object concept of the formal context describing the chosen view. In this simple case, a typical visualization of a temporal system is a line diagram of the concept lattice of the view into which trajectories of temporal objects are embedded.
TCA generalizes the above-mentioned case by considering temporal databases with an arbitrary key. That leads to the notion of distributed objects which are at any given time at possibly many places, as for example, a high pressure zone on a weather map. The notions of 'temporal objects', 'time' and 'place' are represented as formal concepts in scales. A state is formalized as a set of object concepts. That leads to a conceptual interpretation of the ideas of particles and waves in physics. Algorithms and tools There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov or the book by Ganter and Obiedkov, where some pseudo-code can also be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms usually is given with respect to the output size. Concept lattices with a few million elements can be handled without problems. Many FCA software applications are available today. The main purpose of these tools varies from formal context creation to formal concept mining and generating the concept lattice of a given formal context and the corresponding implications and association rules. Most of these tools are academic open-source applications, such as: ConExp ToscanaJ Lattice Miner Coron FcaBedrock GALACTIC Related analytical techniques Bicliques A formal context can naturally be interpreted as a bipartite graph. The formal concepts then correspond to the maximal bicliques in that graph. The mathematical and algorithmic results of formal concept analysis may thus be used for the theory of maximal bicliques. The notion of bipartite dimension (of the complemented bipartite graph) translates to that of Ferrers dimension (of the formal context) and of order dimension (of the concept lattice) and has applications e.g. for Boolean matrix factorization. Biclustering and multidimensional clustering Given an object-attribute numerical data-table, the goal of biclustering is to group together some objects having similar values of some attributes. For example, in gene expression data, it is known that genes (objects) may share a common behavior for a subset of biological situations (attributes) only: one should accordingly produce local patterns to characterize biological processes, the latter should possibly overlap, since a gene may be involved in several processes. The same remark applies to recommender systems where one is interested in local patterns characterizing groups of users that strongly share almost the same tastes for a subset of items. A bicluster in a binary object-attribute data-table is a pair (A,B) consisting of an inclusion-maximal set of objects A and an inclusion-maximal set of attributes B such that almost all objects from A have almost all attributes from B and vice versa. Of course, formal concepts can be considered as "rigid" biclusters where all objects have all attributes and vice versa. Hence, it is not surprising that some bicluster definitions coming from practice are just definitions of a formal concept. Relaxed FCA-based versions of biclustering and triclustering include OA-biclustering and OAC-triclustering (here O stands for object, A for attribute, C for condition); to generate patterns, these methods apply the prime operators only once, to a single entity (e.g. an object) or to a pair of entities (e.g. an attribute-condition pair), respectively.
A bicluster of similar values in a numerical object-attribute data-table is usually defined as a pair consisting of an inclusion-maximal set of objects and an inclusion-maximal set of attributes having similar values for the objects. Such a pair can be represented as an inclusion-maximal rectangle in the numerical table, modulo rows and columns permutations. It has been shown that biclusters of similar values correspond to triconcepts of a triadic context where the third dimension is given by a scale that represents numerical attribute values by binary attributes. This fact can be generalized to the n-dimensional case, where n-dimensional clusters of similar values in n-dimensional data are represented by (n+1)-dimensional concepts. This reduction allows one to use standard definitions and algorithms from multidimensional concept analysis for computing multidimensional clusters. Knowledge spaces In the theory of knowledge spaces it is assumed that in any knowledge space the family of knowledge states is union-closed. The complements of knowledge states therefore form a closure system and may be represented as the extents of some formal context. Hands-on experience with formal concept analysis Formal concept analysis can be used as a qualitative method for data analysis. Since the beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), including the fields of medicine and cell biology, genetics, ecology, software engineering, ontology, information and library sciences, office administration, law, linguistics, and political science. Many more examples are described, e.g., in Formal Concept Analysis. Foundations and Applications, and in conference papers at regular conferences such as: International Conference on Formal Concept Analysis (ICFCA), Concept Lattices and their Applications (CLA), or International Conference on Conceptual Structures (ICCS). See also Association rule learning Cluster analysis Commonsense reasoning Conceptual analysis Conceptual clustering Conceptual space Concept learning Correspondence analysis Description logic Factor analysis Formal semantics (natural language) General Concept Lattice Graphical model Grounded theory Inductive logic programming Pattern theory Statistical relational learning Schema (genetic algorithms) Notes References External links A Formal Concept Analysis Homepage Demo Formal Concept Analysis. ICFCA International Conference Proceedings 2007 5th 2008 6th 2009 7th 2010 8th 2011 9th 2012 10th 2013 11th 2014 12th 2015 13th 2017 14th 2019 15th 2021 16th Machine learning Lattice theory Data mining Formal semantics (natural language) Ontology (information science) Semantic relations
Formal concept analysis
[ "Mathematics", "Engineering" ]
4,130
[ "Lattice theory", "Machine learning", "Fields of abstract algebra", "Artificial intelligence engineering", "Order theory" ]
314,204
https://en.wikipedia.org/wiki/Chemometrics
Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science, in order to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics. Background Chemometrics is applied to solve both descriptive and predictive problems in experimental natural sciences, especially in chemistry. In descriptive applications, properties of chemical systems are modeled with the intent of learning the underlying relationships and structure of the system (i.e., model understanding and identification). In predictive applications, properties of chemical systems are modeled with the intent of predicting new properties or behavior of interest. In both cases, the datasets can be small but are often large and complex, involving hundreds to thousands of variables, and hundreds to thousands of cases or observations. Chemometric techniques are particularly heavily used in analytical chemistry and metabolomics, and the development of improved chemometric methods of analysis also continues to advance the state of the art in analytical instrumentation and methodology. It is an application-driven discipline, and thus while the standard chemometric methodologies are very widely used industrially, academic groups are dedicated to the continued development of chemometric theory, method and application development. Origins Although one could argue that even the earliest analytical experiments in chemistry involved a form of chemometrics, the field is generally recognized to have emerged in the 1970s as computers became increasingly exploited for scientific investigation. The term 'chemometrics' was coined by Svante Wold in a 1971 grant application, and the International Chemometrics Society was formed shortly thereafter by Svante Wold and Bruce Kowalski, two pioneers in the field. Wold was a professor of organic chemistry at Umeå University, Sweden, and Kowalski was a professor of analytical chemistry at University of Washington, Seattle. Many early applications involved multivariate classification, numerous quantitative predictive applications followed, and by the late 1970s and early 1980s a wide variety of data- and computer-driven chemical analyses were occurring. Multivariate analysis was a critical facet even in the earliest applications of chemometrics. Data from infrared and UV/visible spectroscopy are often counted in thousands of measurements per sample. Mass spectrometry, nuclear magnetic resonance, atomic emission/absorption and chromatography experiments are also all by nature highly multivariate. The structure of these data was found to be conducive to using techniques such as principal components analysis (PCA), partial least-squares (PLS), orthogonal partial least-squares (OPLS), and two-way orthogonal partial least squares (O2PLS). This is primarily because, while the datasets may be highly multivariate there is strong and often linear low-rank structure present. 
PCA and PLS have been shown over time to be very effective at empirically modeling the more chemically interesting low-rank structure, exploiting the interrelationships or 'latent variables' in the data, and providing alternative compact coordinate systems for further numerical analysis such as regression, clustering, and pattern recognition. Partial least squares in particular was heavily used in chemometric applications for many years before it began to find regular use in other fields. Through the 1980s three dedicated journals appeared in the field: Journal of Chemometrics, Chemometrics and Intelligent Laboratory Systems, and Journal of Chemical Information and Modeling. These journals continue to cover both fundamental and methodological research in chemometrics. At present, most routine applications of existing chemometric methods are commonly published in application-oriented journals (e.g., Applied Spectroscopy, Analytical Chemistry, Analytica Chimica Acta, Talanta). Several important books/monographs on chemometrics were also first published in the 1980s, including the first edition of Malinowski's Factor Analysis in Chemistry, Sharaf, Illman and Kowalski's Chemometrics, Massart et al. Chemometrics: a textbook, and Multivariate Calibration by Martens and Naes. Some large chemometric application areas have gone on to represent new domains, such as molecular modeling and QSAR, cheminformatics, the '-omics' fields of genomics, proteomics, metabonomics and metabolomics, process modeling and process analytical technology. An account of the early history of chemometrics was published as a series of interviews by Geladi and Esbensen. Techniques Multivariate calibration Many chemical problems and applications of chemometrics involve calibration. The objective is to develop models which can be used to predict properties of interest based on measured properties of the chemical system, such as pressure, flow, temperature, infrared, Raman, NMR spectra and mass spectra. Examples include the development of multivariate models relating 1) multi-wavelength spectral response to analyte concentration, 2) molecular descriptors to biological activity, 3) multivariate process conditions/states to final product attributes. The process requires a calibration or training data set, which includes reference values for the properties of interest for prediction, and the measured attributes believed to correspond to these properties. For case 1), for example, one can assemble data from a number of samples, including concentrations for an analyte of interest for each sample (the reference) and the corresponding infrared spectrum of that sample. Multivariate calibration techniques such as partial-least squares regression, or principal component regression (and near countless other methods) are then used to construct a mathematical model that relates the multivariate response (spectrum) to the concentration of the analyte of interest, and such a model can be used to efficiently predict the concentrations of new samples. Techniques in multivariate calibration are often broadly categorized as classical or inverse methods. The principal difference between these approaches is that in classical calibration the models are solved such that they are optimal in describing the measured analytical responses (e.g., spectra) and can therefore be considered optimal descriptors, whereas in inverse methods the models are solved to be optimal in predicting the properties of interest (e.g., concentrations, optimal predictors).
Inverse methods usually require less physical knowledge of the chemical system, and at least in theory provide superior predictions in the mean-squared error sense, and hence inverse approaches tend to be more frequently applied in contemporary multivariate calibration. The main advantage of the use of multivariate calibration techniques is that fast, cheap, or non-destructive analytical measurements (such as optical spectroscopy) can be used to estimate sample properties which would otherwise require time-consuming, expensive or destructive testing (such as LC-MS). Equally important is that multivariate calibration allows for accurate quantitative analysis in the presence of heavy interference by other analytes. The selectivity of the analytical method is provided as much by the mathematical calibration as by the analytical measurement modality. For example, near-infrared spectra, which are extremely broad and non-selective compared to other analytical techniques (such as infrared or Raman spectra), can often be used successfully in conjunction with carefully developed multivariate calibration methods to predict concentrations of analytes in very complex matrices. Classification, pattern recognition, clustering Supervised multivariate classification techniques are closely related to multivariate calibration techniques in that a calibration or training set is used to develop a mathematical model capable of classifying future samples. The techniques employed in chemometrics are similar to those used in other fields – multivariate discriminant analysis, logistic regression, neural networks, regression/classification trees. The use of rank reduction techniques in conjunction with these conventional classification methods is routine in chemometrics, for example discriminant analysis on principal components or partial least squares scores. A family of techniques, referred to as class-modelling or one-class classifiers, are able to build models for an individual class of interest. Such methods are particularly useful in the case of quality control and authenticity verification of products. Unsupervised classification (also termed cluster analysis) is also commonly used to discover patterns in complex data sets, and again many of the core techniques used in chemometrics are common to other fields such as machine learning and statistical learning. Multivariate curve resolution In chemometric parlance, multivariate curve resolution seeks to deconstruct data sets with limited or absent reference information and system knowledge. Some of the earliest work on these techniques was done by Lawton and Sylvestre in the early 1970s. These approaches are also called self-modeling mixture analysis, blind source/signal separation, and spectral unmixing. For example, from a data set comprising fluorescence spectra from a series of samples each containing multiple fluorophores, multivariate curve resolution methods can be used to extract the fluorescence spectra of the individual fluorophores, along with their relative concentrations in each of the samples, essentially unmixing the total fluorescence spectrum into the contributions from the individual components. The problem is usually ill-determined due to rotational ambiguity (many possible solutions can equivalently represent the measured data), so the application of additional constraints is common, such as non-negativity, unimodality, or known interrelationships between the individual components (e.g., kinetic or mass-balance constraints).
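The inverse-calibration workflow described in the multivariate calibration section can be sketched with an off-the-shelf PLS implementation. In the minimal example below, the spectra, the two Gaussian "pure component" bands and the noise level are all simulated and invented for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Simulated inverse multivariate calibration: predict an analyte concentration
# from a multi-wavelength "spectrum" in the presence of an unknown interferent.
rng = np.random.default_rng(0)
wavelengths = np.linspace(0, 1, 200)

def band(center):
    return np.exp(-((wavelengths - center) ** 2) / 0.005)

pure_analyte, pure_interferent = band(0.3), band(0.6)   # invented pure spectra

n = 50
conc = rng.uniform(0, 1, n)               # reference analyte concentrations
interf = rng.uniform(0, 1, n)             # interfering species (not modeled)
X = (np.outer(conc, pure_analyte) + np.outer(interf, pure_interferent)
     + 0.01 * rng.standard_normal((n, 200)))   # mixture spectra plus noise

model = PLSRegression(n_components=2).fit(X, conc)
pred = model.predict(X).ravel()
rmse = np.sqrt(np.mean((pred - conc) ** 2))
print(f"Calibration RMSE: {rmse:.4f}")
```

Despite the unmodeled interferent, the two-component PLS model recovers the analyte concentrations closely, which is the practical appeal of inverse calibration; a real study would of course validate on samples held out from the calibration set.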
Other techniques Experimental design remains a core area of study in chemometrics and several monographs are specifically devoted to experimental design in chemical applications. Sound principles of experimental design have been widely adopted within the chemometrics community, although many complex experiments are purely observational, and there can be little control over the properties and interrelationships of the samples and sample properties. Signal processing is also a critical component of almost all chemometric applications, particularly the use of signal pretreatments to condition data prior to calibration or classification. The techniques employed commonly in chemometrics are often closely related to those used in related fields. Signal pre-processing may affect the way in which outcomes of the final data processing can be interpreted. Performance characterization, and figures of merit Like most arenas in the physical sciences, chemometrics is quantitatively oriented, so considerable emphasis is placed on performance characterization, model selection, verification & validation, and figures of merit. The performance of quantitative models is usually specified by root mean squared error in predicting the attribute of interest, and the performance of classifiers as a true-positive rate/false-positive rate pairs (or a full ROC curve). A recent report by Olivieri et al. provides a comprehensive overview of figures of merit and uncertainty estimation in multivariate calibration, including multivariate definitions of selectivity, sensitivity, SNR and prediction interval estimation. Chemometric model selection usually involves the use of tools such as resampling (including bootstrap, permutation, cross-validation). Multivariate statistical process control (MSPC), modeling and optimization accounts for a substantial amount of historical chemometric development. Spectroscopy has been used successfully for online monitoring of manufacturing processes for 30–40 years, and this process data is highly amenable to chemometric modeling. Specifically in terms of MSPC, multiway modeling of batch and continuous processes is increasingly common in industry and remains an active area of research in chemometrics and chemical engineering. Process analytical chemistry as it was originally termed, or the newer term process analytical technology continues to draw heavily on chemometric methods and MSPC. Multiway methods are heavily used in chemometric applications. These are higher-order extensions of more widely used methods. For example, while the analysis of a table (matrix, or second-order array) of data is routine in several fields, multiway methods are applied to data sets that involve 3rd, 4th, or higher-orders. Data of this type is very common in chemistry, for example a liquid-chromatography / mass spectrometry (LC-MS) system generates a large matrix of data (elution time versus m/z) for each sample analyzed. The data across multiple samples thus comprises a data cube. Batch process modeling involves data sets that have time vs. process variables vs. batch number. The multiway mathematical methods applied to these sorts of problems include PARAFAC, trilinear decomposition, and multiway PLS and PCA. 
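One of the MSPC ideas mentioned above, monitoring Hotelling's T² on the scores of a PCA model built from in-control data, can be sketched as follows. The simulated process variables and the control limit used here are illustrative assumptions only, not values from any real process.

```python
import numpy as np

# Sketch of PCA-based multivariate statistical process control: fit PCA on
# in-control data, then flag observations whose Hotelling's T^2 on the
# retained scores is unusually large. All data are simulated.
rng = np.random.default_rng(1)
cov = [[1.0, 0.8, 0.2], [0.8, 1.0, 0.3], [0.2, 0.3, 1.0]]
normal = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=200)

mean = normal.mean(axis=0)
Xc = normal - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                     # retain two principal components
var = s[:k] ** 2 / (len(Xc) - 1)          # variance captured by each PC

def t_squared(x):
    scores = (x - mean) @ Vt[:k].T
    return float(np.sum(scores ** 2 / var))

limit = 10.0                              # illustrative control limit
for x in ([0.1, 0.2, 0.0], [3.0, -3.0, 0.5]):
    t2 = t_squared(np.array(x))
    print(x, round(t2, 2), "alarm" if t2 > limit else "ok")
```

The second point is flagged because it violates the correlation structure learned from the in-control data, even though each individual variable is within a plausible univariate range; this is the main advantage of multivariate over univariate control charts.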
References Further reading External links An Introduction to Chemometrics (archived website) IUPAC Glossary for Chemometrics Homepage of Chemometrics, Sweden Homepage of Chemometrics (a starting point) Chemometric Analysis for Spectroscopy General resource on advanced chemometric methods and recent developments Computational chemistry Metrics Analytical chemistry Cheminformatics
Chemometrics
[ "Chemistry", "Mathematics" ]
2,666
[ "Metrics", "Quantity", "Theoretical chemistry", "Computational chemistry", "nan", "Cheminformatics" ]
4,052,453
https://en.wikipedia.org/wiki/Behavioral%20modeling
The behavioral approach to systems theory and control theory was initiated in the late 1970s by J. C. Willems as a result of resolving inconsistencies present in classical approaches based on state-space, transfer function, and convolution representations. This approach is also motivated by the aim of obtaining a general framework for system analysis and control that respects the underlying physics. The main object in the behavioral setting is the behavior – the set of all signals compatible with the system. An important feature of the behavioral approach is that it does not distinguish, a priori, between input and output variables. Apart from putting system theory and control on a rigorous basis, the behavioral approach unified the existing approaches and brought new results on controllability for nD systems, control via interconnection, and system identification. Dynamical system as a set of signals In the behavioral setting, a dynamical system is a triple Σ = (T, W, B), where T is the "time set" – the time instances over which the system evolves, W is the "signal space" – the set in which the variables whose time evolution is modeled take on their values, and B ⊆ W^T is the "behavior" – the set of signals that are compatible with the laws of the system (W^T denotes the set of all signals, i.e., functions from T into W). w ∈ B means that w is a trajectory of the system, while w ∉ B means that the laws of the system forbid the trajectory w to happen. Before the phenomenon is modeled, every signal in W^T is deemed possible, while after modeling, only the outcomes in B remain as possibilities. Special cases: T = ℝ – continuous-time systems; T = ℤ – discrete-time systems; W = ℝ^q – most physical systems; W a finite set – discrete event systems. Linear time-invariant differential systems System properties are defined in terms of the behavior. The system is said to be "linear" if W is a vector space and B is a linear subspace of W^T, and "time-invariant" if the time set consists of the real or natural numbers and σ^t B ⊆ B for all t in T, where σ^t denotes the t-shift, defined by σ^t(w)(t′) := w(t′ + t). In these definitions linearity articulates the superposition law, while time-invariance articulates that the time-shift of a legal trajectory is in its turn a legal trajectory. A "linear time-invariant differential system" is a dynamical system whose behavior B is the solution set of a system of constant coefficient linear ordinary differential equations R(d/dt) w = 0, where R is a matrix of polynomials with real coefficients. The coefficients of R are the parameters of the model. In order to define the corresponding behavior, we need to specify when we consider a signal w : ℝ → ℝ^q to be a solution of R(d/dt) w = 0. For ease of exposition, often infinitely differentiable solutions are considered. There are other possibilities, such as taking distributional solutions, or locally integrable solutions, with the ordinary differential equations interpreted in the sense of distributions. The behavior defined is B = {w ∈ C^∞(ℝ, ℝ^q) | R(d/dt) w = 0}. This particular way of representing the system is called a "kernel representation" of the corresponding dynamical system. There are many other useful representations of the same behavior, including transfer function, state space, and convolution. For accessible sources regarding the behavioral approach, see the references below. Observability of latent variables A key question of the behavioral approach is whether a quantity w1 can be deduced given an observed quantity w2 and a model. If w1 can be deduced given w2 and the model, w1 is said to be observable (from w2).
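A minimal numerical sketch of these ideas is given below. The particular law w1 = d w2 / dt, i.e. the kernel representation R(d/dt) w = 0 with R(ξ) = (−1, ξ) acting on w = (w1, w2), and the sinusoidal trajectory are invented purely for illustration.

```python
import numpy as np

# Sketch: check that a signal pair lies in the behavior of the kernel
# representation R(d/dt) w = 0 with the (invented) law w1 = d w2 / dt,
# and deduce w1 from an observed w2.
t = np.linspace(0, 2 * np.pi, 2001)
w2 = np.sin(t)                        # the observed (manifest) variable
w1 = np.cos(t)                        # the to-be-deduced variable

residual = np.gradient(w2, t) - w1    # R(d/dt) w evaluated numerically
print(np.max(np.abs(residual)))       # small: w is (numerically) a trajectory

# Since the law determines w1 from w2, w1 can be deduced from the observation,
# i.e. w1 is observable from w2 for this behavior.
w1_deduced = np.gradient(w2, t)
print(np.max(np.abs(w1_deduced - w1)))
```

The residuals are limited only by the finite-difference approximation of the derivative; a trajectory violating the law would give a large residual instead.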
In terms of mathematical modeling, the to-be-deduced quantity or variable is often referred to as the latent variable and the observed variable is the manifest variable. Such a system is then called an observable (latent variable) system. References Additional sources Paolo Rapisarda and Jan C.Willems, 2006. Recent Developments in Behavioral System Theory, July 24–28, 2006, MTNS 2006, Kyoto, Japan J.C. Willems. Terminals and ports. IEEE Circuits and Systems Magazine Volume 10, issue 4, pages 8–16, December 2010 J.C. Willems and H.L. Trentelman. On quadratic differential forms. SIAM Journal on Control and Optimization Volume 36, pages 1702-1749, 1998 J.C. Willems. Paradigms and puzzles in the theory of dynamical systems. IEEE Transactions on Automatic Control Volume 36, pages 259-294, 1991 J.C. Willems. Models for dynamics. Dynamics Reported Volume 2, pages 171-269, 1989 Systems theory Dynamical systems
Behavioral modeling
[ "Physics", "Mathematics" ]
883
[ "Mechanics", "Dynamical systems" ]
4,055,584
https://en.wikipedia.org/wiki/Rotary%20vane%20pump
A rotary vane pump is a type of positive-displacement pump that consists of vanes mounted to a rotor that rotates inside a cavity. In some cases, these vanes can have variable length and/or be tensioned to maintain contact with the walls as the pump rotates. This type of pump is considered less suitable than other vacuum pumps for high-viscosity and high-pressure fluids. They can endure short periods of dry operation, and are considered good for low-viscosity fluids. Types The simplest vane pump has a circular rotor rotating inside a larger circular cavity. The centers of these two circles are offset, causing eccentricity. Vanes are mounted in slots cut into the rotor. The vanes are allowed a certain limited range of movement within these slots such that they can maintain contact with the wall of the cavity as the rotor rotates. The vanes may be encouraged to maintain such contact through means such as springs, gravity, or centrifugal force. A small amount of oil may be present within the mechanism to help create a better seal between the tips of the vanes and the cavity's wall. The contact between the vanes and the cavity wall divides up the cavity into "vane chambers" that do the pumping work. On the suction side of the pump, the vane chambers increase in volume and are thus filled with fluid forced in by the inlet pressure, which is the pressure from the system being pumped, sometimes just the atmosphere. On the discharge side of the pump, the vane chambers decrease in volume, compressing the fluid and thus forcing it out of the outlet. The action of the vanes pulls through the same volume of fluid with each rotation. Multi-stage rotary-vane vacuum pumps, which force the fluid through a series of two or more rotary-vane pump mechanisms to improve the attainable vacuum, can reach pressures as low as 10⁻⁶ bar (0.1 Pa). Uses Vane pumps are commonly used as high-pressure hydraulic pumps and in automobiles, including supercharging, power-steering, air conditioning, and automatic-transmission pumps. Pumps for mid-range pressures include applications such as carbonators for fountain soft-drink dispensers and espresso coffee machines. Furthermore, vane pumps can be used in low-pressure gas applications such as secondary air injection for auto exhaust emission control, or in low-pressure chemical vapor deposition systems. Rotary-vane pumps are also a common type of vacuum pump, with two-stage pumps able to reach pressures well below 10⁻⁶ bar. These are found in such applications as providing braking assistance in large trucks and diesel-powered passenger cars (whose engines do not generate intake vacuum) through a braking booster, in most light aircraft to drive gyroscopic flight instruments, in evacuating refrigerant lines during installation of air conditioners, in laboratory freeze dryers, and vacuum experiments in physics. In the vane pump, the pumped gas and the oil are mixed within the pump, and so they must be separated externally. Therefore, the inlet and the outlet have a large chamber, perhaps with swirl, where the oil drops fall out of the gas. Sometimes the inlet has louvers cooled by the room air (the pump is usually 40 K hotter) to condense cracked pumping oil and water, and let it drop back into the inlet. When these pumps are used in high-vacuum systems (where the inflow of gas into the pump becomes very low), a significant concern is contamination of the entire system by molecular oil back-streaming. 
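As a rough quantitative sketch of the positive-displacement behaviour described above (my own illustration; the displacement value, shaft speed and volumetric efficiency below are hypothetical, not taken from the article):

```python
def vane_pump_flow(displacement_cm3_per_rev, speed_rpm, volumetric_efficiency=0.9):
    """Estimate delivered flow of a positive-displacement vane pump in L/min.

    Ideal flow is displacement times shaft speed; the volumetric efficiency
    factor lumps together internal leakage past the vanes and end faces.
    """
    ideal_l_per_min = displacement_cm3_per_rev * speed_rpm / 1000.0
    return ideal_l_per_min * volumetric_efficiency

# Hypothetical 12 cm^3/rev pump running at 1500 rpm
print(f"{vane_pump_flow(12, 1500):.1f} L/min")  # about 16.2 L/min
```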
History Like many simple mechanisms, it is unclear when the rotary vane pump was invented. Agostino Ramelli's 1588 book Le diverse et artificiose machine del capitano Agostino Ramelli ("The Various and Ingenious Machines of Captain Agostino Ramelli") contains a description and an engraving of a rotary vane pump along with other types of rotary pumps, which suggests that the design was known at the time. In more recent times, vane pumps also show up in 19th-century patent records. In 1858, a US patent was granted to one W. Pierce for "a new and useful Improvement in Rotary Pumps", which acknowledged as prior art sliding blades "used in connection with an eccentric inner surface". In 1874, a Canadian patent was granted to Charles C. Barnes of Sackville, New Brunswick. There have been various improvements since, including a variable vane pump for gases (1909). Variable-displacement vane pump One of the major advantages of the vane pump is that the design readily lends itself to become a variable-displacement pump, rather than a fixed-displacement pump such as a spur-gear or a gerotor pump. The centerline distance from the rotor to the eccentric ring is used to determine the pump's displacement. By allowing the eccentric ring to pivot or translate relative to the rotor, the displacement can be varied. It is even possible for a vane pump to pump in reverse if the eccentric ring moves far enough. However, performance cannot be optimized to pump in both directions. This can make for a very interesting hydraulic-control oil pump. A variable-displacement vane pump is used as an energy-saving device and has been used in many applications, including automotive transmissions, for over 30 years. Materials Externals (head, casing) – cast iron, ductile iron, steel, brass, plastic, and stainless steel Vane, pushrods – carbon graphite, PEEK End plates – carbon graphite Shaft seal – component mechanical seals, industry-standard cartridge mechanical seals, and magnetically driven pumps Packing – available from some vendors, but not usually recommended for thin liquid service See also Guided-rotor compressor Powerplus supercharger Dry rotary vane pump diagram References External links H. Eugene Bassett's articulated displacer compressor Vane Pump Animation Pumps Canadian inventions
Rotary vane pump
[ "Physics", "Chemistry" ]
1,201
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
4,055,903
https://en.wikipedia.org/wiki/Glutamate%20transporter
Glutamate transporters are a family of neurotransmitter transporter proteins that move glutamate – the principal excitatory neurotransmitter – across a membrane. The family of glutamate transporters is composed of two primary subclasses: the excitatory amino acid transporter (EAAT) family and vesicular glutamate transporter (VGLUT) family. In the brain, EAATs remove glutamate from the synaptic cleft and extrasynaptic sites via glutamate reuptake into glial cells and neurons, while VGLUTs move glutamate from the cell cytoplasm into synaptic vesicles. Glutamate transporters also transport aspartate and are present in virtually all peripheral tissues, including the heart, liver, testes, and bone. They exhibit stereoselectivity for L-glutamate but transport both L-aspartate and D-aspartate. The EAATs are membrane-bound secondary transporters that superficially resemble ion channels. These transporters play the important role of regulating concentrations of glutamate in the extracellular space by transporting it along with other ions across cellular membranes. After glutamate is released as the result of an action potential, glutamate transporters quickly remove it from the extracellular space to keep its levels low, thereby terminating the synaptic transmission. Without the activity of glutamate transporters, glutamate would build up and kill cells in a process called excitotoxicity, in which excessive amounts of glutamate act as a toxin to neurons by triggering a number of biochemical cascades. The activity of glutamate transporters also allows glutamate to be recycled for repeated release. Classes There are two general classes of glutamate transporters, those that are dependent on an electrochemical gradient of sodium ions (the EAATs) and those that are not (VGLUTs and xCT). The cystine-glutamate antiporter (xCT) is localised to the plasma membrane of cells whilst vesicular glutamate transporters (VGLUTs) are found in the membrane of glutamate-containing synaptic vesicles. Na+-dependent EAATs are also dependent on transmembrane K+ and H+ concentration gradients, and so are also known as 'sodium and potassium coupled glutamate transporters'. Na+-dependent transporters have also been called 'high-affinity glutamate transporters', though their glutamate affinity actually varies widely. EAATs carry one molecule of glutamate into the cell along with three Na+ and one H+, while exporting one K+. EAATs are transmembrane integral proteins which traverse the plasmalemma 8 times. Mitochondria also possess mechanisms for taking up glutamate that are quite distinct from membrane glutamate transporters. EAATs In humans (as well as in rodents), five subtypes have been identified and named EAAT1-5 (SLC1A3, SLC1A2, SLC1A1, SLC1A6, SLC1A7). Subtypes EAAT1-2 are found in membranes of glial cells (astrocytes, microglia, and oligodendrocytes). However, low levels of EAAT2 are also found in the axon-terminals of hippocampal CA3 pyramidal cells. EAAT2 is responsible for over 90% of glutamate reuptake within the central nervous system (CNS). The EAAT3-4 subtypes are exclusively neuronal, and are expressed in axon terminals, cell bodies, and dendrites. Finally, EAAT5 is only found in the retina, where it is principally localized to photoreceptors and bipolar neurons. When glutamate is taken up into glial cells by the EAATs, it is converted to glutamine and subsequently transported back into the presynaptic neuron, converted back into glutamate, and taken up into synaptic vesicles by action of the VGLUTs. 
This process is named the glutamate–glutamine cycle. VGLUTs Three types of vesicular glutamate transporters are known, VGLUTs 1–3 (SLC17A7, SLC17A6, and SLC17A8 respectively), as well as the novel glutamate/aspartate transporter sialin. These transporters pack the neurotransmitter into synaptic vesicles so that it can be released into the synapse. VGLUTs are dependent on the proton gradient that exists in the secretory system (vesicles being more acidic than the cytosol). VGLUTs have only between one hundredth and one thousandth the affinity for glutamate that EAATs have. Also unlike EAATs, they do not appear to transport aspartate. VGluT3 VGluT3 (vesicular glutamate transporter 3), which is encoded by the SLC17A8 gene, is a member of the vesicular glutamate transporter family that transports glutamate into cells. It is involved in neurological and pain diseases. Neurons are able to express VGluT3 when they use a neurotransmitter other than glutamate, for example in the specific case of central 5-HT neurons. The role of this unconventional transporter still remains unknown, but it has been demonstrated that, in the auditory system, VGluT3 is involved in fast excitatory glutamatergic transmission very similar to the other two vesicular glutamate transporters, VGluT1 and VGluT2. There are behavioral and physiological consequences of VGluT3 ablation because it modulates a wide range of neuronal and physiological processes such as anxiety, mood regulation, impulsivity, aggressive behavior, pain perception, the sleep–wake cycle, appetite, body temperature and sexual behavior. Notably, no significant change was found in aggression and depression-like behaviors, but in contrast, the loss of VGluT3 resulted in a specific anxiety-related phenotype. Sensory nerve fibers differ in their sensory modalities and conduction velocities, but it is still unknown which fiber types are related to the different forms of inflammatory and neuropathic pain hypersensitivity. Vesicular glutamate transporter 3 (VGluT3) has been implicated in mechanical hypersensitivity after inflammation, but its role in neuropathic pain still remains under debate. VGluT3 shows extensive somatic expression throughout development, which could be involved in non-synaptic modulation by glutamate in the developing retina, and could influence trophic and extra-synaptic neuronal signaling by glutamate in the inner retina. Molecular Structure of EAATs Like all glutamate transporters, EAATs are trimers, with each protomer consisting of two domains: the central scaffold domain and the peripheral transport domain. The transport conformational path is as follows. First, the outward-facing conformation occurs (OF, open), which allows the glutamate to bind. Then the HP2 region closes after uptake (OF, closed) and an elevator-like movement carries the substrate to the intracellular side of the membrane. It is worth noting that this elevator motion consists of several conformational changes that are yet to be fully categorized and identified. After the elevator motion brings the substrate to the intracellular (IC) side of the membrane, the EAAT adopts the inward-facing (IF, closed) state, in which the transport domain is lowered but the HP2 gate is still closed with the glutamate still bound to the transporter. Lastly, the HP2 gate opens and the glutamate diffuses into the cytoplasm of the cell. 
Pathology Overactivity of glutamate transporters may result in inadequate synaptic glutamate and may be involved in schizophrenia and other mental illnesses. During injury processes such as ischemia and traumatic brain injury, the action of glutamate transporters may fail, leading to toxic buildup of glutamate. In fact, their activity may also actually be reversed due to inadequate amounts of adenosine triphosphate to power ATPase pumps, resulting in the loss of the electrochemical ion gradient. Since the direction of glutamate transport depends on the ion gradient, these transporters release glutamate instead of removing it, which results in neurotoxicity due to overactivation of glutamate receptors. Loss of the Na+-dependent glutamate transporter EAAT2 is suspected to be associated with neurodegenerative diseases such as Alzheimer's disease, Huntington's disease, and ALS–parkinsonism dementia complex. Also, degeneration of motor neurons in the disease amyotrophic lateral sclerosis has been linked to loss of EAAT2 from patients' brains and spinal cords. Addiction to certain addictive drugs (e.g., cocaine, heroin, alcohol, and nicotine) is correlated with a persistent reduction in the expression of EAAT2 in the nucleus accumbens (NAcc); the reduced expression of EAAT2 in this region is implicated in addictive drug-seeking behavior. In particular, the long-term dysregulation of glutamate neurotransmission in the NAcc of addicts is associated with an increase in vulnerability to relapse after re-exposure to the addictive drug or its associated drug cues. Drugs which help to normalize the expression of EAAT2 in this region, such as N-acetylcysteine, have been proposed as an adjunct therapy for the treatment of addiction to cocaine, nicotine, alcohol, and other drugs. See also Dopamine transporters Norepinephrine transporters Serotonin transporters NMDA receptors AMPA receptors Kainate receptors Metabotropic glutamate receptors References External links Amphetamine Membrane proteins Neurotransmitter transporters Solute carrier family Glutamate (neurotransmitter)
Glutamate transporter
[ "Biology" ]
2,215
[ "Protein classification", "Membrane proteins" ]
14,340,077
https://en.wikipedia.org/wiki/Ultrahyperbolic%20equation
In the mathematical field of differential equations, the ultrahyperbolic equation is a partial differential equation (PDE) for an unknown scalar function u of 2n variables x1, ..., xn, y1, ..., yn of the form ∂²u/∂x1² + ⋯ + ∂²u/∂xn² = ∂²u/∂y1² + ⋯ + ∂²u/∂yn². More generally, if Q is any quadratic form in 2n variables z1, ..., z2n with signature (n, n), then any PDE whose principal part is Q(∂/∂z1, ..., ∂/∂z2n)u is said to be ultrahyperbolic. Any such equation can be put in the form above by means of a change of variables. The ultrahyperbolic equation has been studied from a number of viewpoints. On the one hand, it resembles the classical wave equation. This has led to a number of developments concerning its characteristics, one of which is due to Fritz John: the John equation. In 2008, Walter Craig and Steven Weinstein proved that under a nonlocal constraint, the initial value problem is well-posed for initial data given on a codimension-one hypersurface. Later, in 2022, a research team at the University of Michigan extended the conditions for solving ultrahyperbolic wave equations to complex-time (kime), demonstrated space-kime dynamics, and showed data science applications using tensor-based linear modeling of functional magnetic resonance imaging data. The equation has also been studied from the point of view of symmetric spaces, and elliptic differential operators. In particular, the ultrahyperbolic equation satisfies an analog of the mean value theorem for harmonic functions. Notes References Differential operators
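For readability, the same equation can be rendered compactly (my own rendering, not part of the article); note that for n = 1 it reduces to the one-dimensional wave equation:

\[
\sum_{i=1}^{n} \frac{\partial^2 u}{\partial x_i^2} \;=\; \sum_{j=1}^{n} \frac{\partial^2 u}{\partial y_j^2},
\qquad n = 1: \quad \frac{\partial^2 u}{\partial x_1^2} = \frac{\partial^2 u}{\partial y_1^2}.
\]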
Ultrahyperbolic equation
[ "Mathematics" ]
287
[ "Mathematical analysis", "Differential operators", "Mathematical analysis stubs" ]
14,341,419
https://en.wikipedia.org/wiki/Electrohydrogenesis
Electrohydrogenesis or biocatalyzed electrolysis is the name given to a process for generating hydrogen gas from organic matter being decomposed by bacteria. This process uses a modified fuel cell to contain the organic matter and water. A small voltage of 0.2–0.8 V is applied; the original article reports that an overall energy efficiency of 288% can be achieved (this figure is computed relative to the amount of electricity used; waste heat lowers the overall efficiency). This work was reported by Cheng and Logan. See also Biohydrogen Electrochemical reduction of carbon dioxide Electromethanogenesis Fermentative hydrogen production Microbial fuel cell References External links Biocatalyzed electrolysis Hydrogen production Environmental engineering Biotechnology
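To make the efficiency convention concrete, here is a minimal sketch (my own, with illustrative numbers; only the ~286 kJ/mol heating value of hydrogen is a standard figure) of an efficiency computed relative to electrical input alone:

```python
HHV_H2_KJ_PER_MOL = 286.0  # higher heating value of hydrogen, approx. kJ/mol

def electrical_efficiency(mol_h2_produced, electrical_energy_kj):
    """Energy content of the hydrogen produced divided by the electricity supplied.

    Values above 100% are possible because most of the input energy comes
    from the organic matter, not from the applied electricity.
    """
    return mol_h2_produced * HHV_H2_KJ_PER_MOL / electrical_energy_kj

# Hypothetical example: 1 mol of H2 produced while 100 kJ of electricity was used
print(f"{electrical_efficiency(1.0, 100.0):.0%}")  # 286%
```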
Electrohydrogenesis
[ "Chemistry", "Engineering", "Biology" ]
150
[ "Chemical engineering", "Biotechnology", "Civil engineering", "nan", "Environmental engineering" ]
14,343,009
https://en.wikipedia.org/wiki/TBX21
T-box transcription factor TBX21, also called T-bet (T-box expressed in T cells), is a protein that in humans is encoded by the TBX21 gene. Though long thought of only as a master regulator of the type 1 immune response, T-bet has recently been shown to be implicated in the development of various immune cell subsets and the maintenance of mucosal homeostasis. Function This gene is a member of a phylogenetically conserved family of genes that share a common DNA-binding domain, the T-box. T-box genes encode transcription factors involved in the regulation of developmental processes. This gene is the human ortholog of the mouse Tbx21/Tbet gene. Studies in mice show that Tbx21 protein is a Th1 cell-specific transcription factor that controls the expression of the hallmark Th1 cytokine, interferon-gamma (IFNg). Expression of the human ortholog also correlates with IFNg expression in Th1 and natural killer cells, suggesting a role for this gene in initiating Th1 lineage development from naive Th precursor cells. The function of T-bet is best known in T helper cells (Th cells). In naïve Th cells the gene is not constitutively expressed, but can be induced via two independent signalling pathways, the IFNg-STAT1 and IL-12-STAT4 pathways. Both need to cooperate to reach a stable Th1 phenotype. The Th1 phenotype is also stabilised by repression of regulators of other Th cell phenotypes (Th2 and Th17). In a typical scenario it is thought that IFNg and T cell receptor (TCR) signalling initiates the expression of T-bet; once TCR signalling stops, signalling via the IL-12 receptor can come into play, having previously been blocked because TCR signalling represses the expression of one of the receptor subunits (IL12Rb2). IL-2 signalling enhances the expression of IL-12R. The two-step expression of T-bet can be viewed as a safety mechanism of sorts, which ensures that cells commit to the Th1 phenotype only when desired. T-bet controls transcription of many genes, for example proinflammatory cytokines like lymphotoxin-a, tumour necrosis factor and ifng, which is a hallmark cytokine of type one immunity. Certain chemokines are also regulated by T-bet, namely xcl1, ccl3, ccl4 and the chemokine receptors cxcr3, ccr5. The expression of T-bet-controlled genes is facilitated by two distinct mechanisms: chromatin remodelling via enzyme recruitment, and direct binding to enhancer sequences that promotes transcription or a 3D gene structure supporting transcription. T-bet also recruits other transcription factors like HLX, RUNX1 and RUNX3, which aid it in setting the Th1 transcription profile. Apart from promoting the type 1 immune response (Th1), T-bet also suppresses the other types of immune response. The type 2 immune response (Th2) phenotype is repressed by sequestering its master regulator, GATA3, away from its target genes. Gata3 expression is further silenced by promotion of silencing epigenetic changes in its region. In addition, the Th2-specific cytokines are also silenced by binding of T-bet and RUNX3 to the il4 silencer region. The type 17 immune response (Th17) phenotype is suppressed by RUNX1 recruitment, which prevents RUNX1 from mediating Th17-specific genes such as rorc, a Th17 master regulator. Rorc is also silenced by epigenetic changes promoted by T-bet and STAT4. T-bet also performs functions in cytotoxic T cells and B cells. In cytotoxic T cells it promotes IFNg and granzyme B expression and, in cooperation with another transcription factor, EOMES, their maturation. 
The role of T-bet in B cells seems to be to direct the cell towards a type 1 immune response expression profile, which involves secretion of the antibodies IgG1 and IgG3 and is usually elevated during viral infections. These populations of B cells differ from standard ones by their lack of the receptors CD21 and CD27; given that these cells have also undergone antibody class switching, they are regarded as memory B cells. These cells have been shown to secrete IFNg and, in vitro, to polarise naïve T helper cells towards the Th1 phenotype. Populations of T-bet-positive B cells were also identified in various autoimmune diseases like systemic lupus erythematosus, Crohn's disease, multiple sclerosis and rheumatoid arthritis. Role in mucosal homeostasis T-bet has been found to contribute to the maintenance of mucosal homeostasis and the mucosal immune response. Mice lacking adaptive immune cells and T-bet (RAG -/-, T-bet -/-) developed disease similar to human ulcerative colitis (hence the name TRUC), which was later attributed to the outgrowth of Gram-negative bacteria, namely Helicobacter typhlonius. The dysbiosis appears to be a consequence of multiple factors: firstly, the group 1 innate lymphoid cell (ILC1) population and a subset of ILC3s are missing, because the expression of T-bet is needed for their maturation. Secondly, T-bet ablation causes increased levels of TNF, as its expression is not repressed in dendritic cells and the immune system is more biased away from Th1. Role in disease Atherosclerosis Atherosclerosis is an autoimmune disease caused by inflammation and the associated infiltration of immune cells into fatty deposits in arteries called atherosclerotic plaques. Th1 cells are responsible for production of proinflammatory cytokines contributing to the progression of the disease by promoting expression of adhesion (e.g., ICAM1) and homing molecules (mainly CCR5) needed for cellular migration. Experimental vaccination of patients with peptides derived from apolipoprotein B, part of low-density lipoprotein, which is deposited on arterial walls, has shown increased T regulatory cells (TREGs) and cytotoxic T cells. The vaccination has shown reduced Th1 differentiation, though the mechanism behind it remains unresolved. Currently it is hypothesised that the decrease in Th1 differentiation is caused by the destruction of dendritic cells presenting autoantigens by cytotoxic T cells and by increased differentiation of TREGs suppressing the immune response. Taken together, T-bet might serve as a potential target in the treatment of atherosclerosis. Asthma The transcription factor encoded by TBX21 is T-bet, which regulates the development of naive T lymphocytes. Asthma is a disease of chronic inflammation, and it is known that genetically modified mice born without TBX21 spontaneously develop abnormal lung function consistent with asthma. It is thought that TBX21, therefore, may play a role in the development of asthma in humans as well. Experimental autoimmune encephalomyelitis Initially it was thought that experimental autoimmune encephalomyelitis (EAE) is caused by autoreactive Th1 cells. T-bet-deficient mice were resistant to EAE. However, later research has discovered that not only Th1 but also Th17 and ThGM-CSF cells are the cause of immunopathology. Interestingly, IFNg, a main product of T-bet, has shown a bidirectional effect in EAE. 
Injection of IFNg during the acute stage worsens the course of the disease, presumably by strengthening the Th1 response; however, injection of IFNg in the chronic stage has shown a suppressive effect on EAE symptoms. Currently it is thought that IFNg stops T helper cells from committing, for example, to the Th17 phenotype, stimulates indoleamine 2,3-dioxygenase transcription (the kynurenine, or kyn, pathway) in certain dendritic cells, stimulates cytotoxic T cells, downregulates T cell trafficking and limits their survival. T-bet and its controlled genes remain a possible target in the treatment of neurological autoimmune diseases. References Further reading External links Transcription factors
TBX21
[ "Chemistry", "Biology" ]
1,760
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,346,088
https://en.wikipedia.org/wiki/Adduct%20purification
Adduct purification is a technique for preparing extremely pure simple organometallic compounds, which are generally unstable and hard to handle, by purifying a stable adduct with a Lewis acid and then obtaining the desired product from the pure adduct by thermal decomposition. Epichem Limited is the licensee of the major patents in this field, and uses the trademark EpiPure to refer to adduct-purified materials; Professor Anthony Jones at Liverpool University is the initiator of the field and author of many of the important papers. The choice of Lewis acid and of reaction medium is important; the desired organometallics are almost always air- and water-sensitive. Initial work was done in ether, but this led to oxygen impurities, and so more recent work involves tertiary amines or nitrogen-substituted crown ethers. References Professor Anthony C. Jones Purification of dialkylzinc precursors using tertiary amine ligands Chemical reactions Separation processes
Adduct purification
[ "Chemistry" ]
200
[ "Chemical reaction stubs", "nan", "Separation processes" ]
15,853,493
https://en.wikipedia.org/wiki/Sectional%20density
Sectional density (often abbreviated SD) is the ratio of an object's mass to its cross sectional area with respect to a given axis. It conveys how well an object's mass is distributed (by its shape) to overcome resistance along that axis. Sectional density is used in gun ballistics. In this context, it is the ratio of a projectile's weight (often in either kilograms, grams, pounds or grains) to its transverse section (often in either square centimeters, square millimeters or square inches), with respect to the axis of motion. For illustration, a nail can penetrate a target medium with its pointed end first with less force than a coin of the same mass lying flat on the target medium. During World War II, bunker-busting Röchling shells were developed by German engineer August Coenders, based on the theory of increasing sectional density to improve penetration. Röchling shells were tested in 1942 and 1943 against the Belgian Fort d'Aubin-Neufchâteau and saw very limited use during World War II. Formula In a general physics context, sectional density is defined as SD = M/A, where: SD is the sectional density M is the mass of the projectile A is the cross-sectional area The SI derived unit for sectional density is kilograms per square meter (kg/m2). The general formula with units then becomes SDkg/m2 = mkg/Am2, where: SDkg/m2 is the sectional density in kilograms per square meter mkg is the mass of the object in kilograms Am2 is the cross sectional area of the object in square meters Units conversion table 1 g/mm2 equals exactly 1000 kg/m2. 1 kg/cm2 equals exactly 10000 kg/m2. With the pound and inch legally defined as 0.45359237 kg and 0.0254 m respectively, it follows that the (mass) pound per square inch is approximately: 1 lb/in2 = 0.45359237 kg/(0.0254 m × 0.0254 m) ≈ 703.07 kg/m2 Use in ballistics The sectional density of a projectile can be employed in two areas of ballistics. Within external ballistics, when the sectional density of a projectile is divided by its coefficient of form (form factor in commercial small arms jargon), it yields the projectile's ballistic coefficient. Sectional density has the same (implied) units as the ballistic coefficient. Within terminal ballistics, the sectional density of a projectile is one of the determining factors for projectile penetration. The interaction between projectile (fragments) and target media is however a complex subject. A study regarding hunting bullets shows that besides sectional density several other parameters determine bullet penetration. If all other factors are equal, the projectile with the greatest sectional density will penetrate the deepest. Metric units When working with ballistics using SI units, it is common to use either grams per square millimeter or kilograms per square centimeter. Their relationship to the base unit kilograms per square meter is shown in the conversion table above. 
Grams per square millimeter Using grams per square millimeter (g/mm2), the formula then becomes SDg/mm2 = 4·mg/(π·dmm²), where: SDg/mm2 is the sectional density in grams per square millimeter mg is the mass of the projectile in grams dmm is the diameter of the projectile in millimeters For example, a small arms bullet with a mass of 10.4 g and a diameter of 6.7 mm has a sectional density of: 4 · 10.4 / (π·6.7²) = 0.295 g/mm2 Kilograms per square centimeter Using kilograms per square centimeter (kg/cm2), the formula then becomes SDkg/cm2 = 4·mkg/(π·dcm²), where: SDkg/cm2 is the sectional density in kilograms per square centimeter mkg is the mass of the projectile in kilograms dcm is the diameter of the projectile in centimeters For example, an M107 projectile with a mass of 43.2 kg and a body diameter of 15.471 cm has a sectional density of: 4 · 43.2 / (π·15.471²) = 0.230 kg/cm2 English units In older ballistics literature from English speaking countries, and still to this day, the most commonly used unit for sectional density of circular cross-sections is (mass) pounds per square inch (lbm/in2). The formula then becomes SDlbm/in2 = 4·mlb/(π·din²) = 4·mgr/(7000·π·din²), where: SDlbm/in2 is the sectional density in (mass) pounds per square inch the mass of the projectile is: mlb in pounds mgr in grains din is the diameter of the projectile in inches The sectional density defined this way is usually presented without units. In Europe the derivative unit g/cm2 is also used in literature regarding small arms projectiles to get a number in front of the decimal separator. As an example, a bullet with a mass of 160 grains (10.4 g) and a diameter of 0.264 in (6.7 mm) has a sectional density (SD) of: 4·(160/7000) / (π·0.264²) = 0.418 lbm/in2 As another example, the M107 projectile mentioned above with a mass of 95.24 lb (43.2 kg) and a body diameter of 6.0909 in (154.71 mm) has a sectional density of: 4 · 95.24 / (π·6.0909²) = 3.268 lbm/in2 See also Ballistic coefficient References Projectiles Aerodynamics Ballistics
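A short script (my own, not part of the article) reproducing the worked examples above from the formulas given:

```python
import math

def sd_g_per_mm2(mass_g, diameter_mm):
    """Sectional density of a circular-section projectile in g/mm^2."""
    return 4.0 * mass_g / (math.pi * diameter_mm ** 2)

def sd_lbm_per_in2(mass_grains, diameter_in):
    """Sectional density in (mass) pounds per square inch; 7000 grains = 1 lb."""
    return 4.0 * (mass_grains / 7000.0) / (math.pi * diameter_in ** 2)

print(round(sd_g_per_mm2(10.4, 6.7), 3))      # 0.295, the small-arms bullet example
print(round(sd_lbm_per_in2(160, 0.264), 3))   # 0.418, the 160-grain bullet example
```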
Sectional density
[ "Physics", "Chemistry", "Engineering" ]
1,087
[ "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Ballistics", "Fluid dynamics" ]
15,855,253
https://en.wikipedia.org/wiki/Quantification%20of%20margins%20and%20uncertainties
Quantification of Margins and Uncertainty (QMU) is a decision support methodology for complex technical decisions. QMU focuses on the identification, characterization, and analysis of performance thresholds and their associated margins for engineering systems that are evaluated under conditions of uncertainty, particularly when portions of those results are generated using computational modeling and simulation. QMU has traditionally been applied to complex systems where comprehensive experimental test data is not readily available and cannot be easily generated for either end-to-end system execution or for specific subsystems of interest. Examples of systems where QMU has been applied include nuclear weapons performance, qualification, and stockpile assessment. QMU focuses on characterizing in detail the various sources of uncertainty that exist in a model, thus allowing the uncertainty in the system response output variables to be well quantified. These sources are frequently described in terms of probability distributions to account for the stochastic nature of complex engineering systems. The characterization of uncertainty supports comparisons of design margins for key system performance metrics to the uncertainty associated with their calculation by the model. QMU supports risk-informed decision-making processes where computational simulation results provide one of several inputs to the decision-making authority. There is currently no standardized methodology across the simulation community for conducting QMU; the term is applied to a variety of different modeling and simulation techniques that focus on rigorously quantifying model uncertainty in order to support comparison to design margins. History The fundamental concepts of QMU were originally developed concurrently at several national laboratories supporting nuclear weapons programs in the late 1990s, including Lawrence Livermore National Laboratory, Sandia National Laboratory, and Los Alamos National Laboratory. The original focus of the methodology was to support nuclear stockpile decision-making, an area where full experimental test data could no longer be generated for validation due to bans on nuclear weapons testing. The methodology has since been applied in other applications where safety or mission critical decisions for complex projects must be made using results based on modeling and simulation. Examples outside of the nuclear weapons field include applications at NASA for interplanetary spacecraft and rover development, missile six-degree-of-freedom (6DOF) simulation results, and characterization of material properties in terminal ballistic encounters. Overview QMU focuses on quantification of the ratio of design margin to model output uncertainty. The process begins with the identification of the key performance thresholds for the system, which can frequently be found in the systems requirements documents. These thresholds (also referred to as performance gates) can specify an upper bound of performance, a lower bound of performance, or both in the case where the metric must remain within the specified range. For each of these performance thresholds, the associated performance margin must be identified. The margin represents the targeted range the system is being designed to operate in to safely avoid the upper and lower performance bounds. These margins account for aspects such as the design safety factor the system is being developed to as well as the confidence level in that safety factor. 
QMU focuses on determining the quantified uncertainty of the simulation results as they relate to the performance threshold margins. This total uncertainty includes all forms of uncertainty related to the computational model as well as the uncertainty in the threshold and margin values. The identification and characterization of these values allows the ratios of margin-to-uncertainty (M/U) to be calculated for the system. These M/U values can serve as quantified inputs that can help authorities make risk-informed decisions regarding how to interpret and act upon results based on simulations. QMU recognizes that there are multiple types of uncertainty that propagate through a model of a complex system. The simulation in the QMU process produces output results for the key performance thresholds of interest, known as the Best Estimate Plus Uncertainty (BE+U). The best estimate component of BE+U represents the core information that is known and understood about the model response variables. The basis that allows high confidence in these estimates is usually ample experimental test data regarding the process of interest which allows the simulation model to be thoroughly validated. The types of uncertainty that contribute to the value of the BE+U can be broken down into several categories: Aleatory uncertainty: This type of uncertainty is naturally present in the system being modeled and is sometimes known as “irreducible uncertainty” and “stochastic variability.” Examples include processes that are naturally stochastic such as wind gust parameters and manufacturing tolerances. Epistemic uncertainty: This type of uncertainty is due to a lack of knowledge about the system being modeled and is also known as “reducible uncertainty.” Epistemic uncertainty can result from uncertainty about the correct underlying equations of the model, incomplete knowledge of the full set of scenarios to be encountered, and lack of experimental test data defining the key model input parameters. The system may also suffer from requirements uncertainty related to the specified thresholds and margins associated with the system requirements. QMU acknowledges that in some situations, the system designer may have high confidence in what the correct value for a specific metric may be, while at other times, the selected value may itself suffer from uncertainty due to lack of experience operating in this particular regime. QMU attempts to separate these uncertainty values and quantify each of them as part of the overall inputs to the process. QMU can also factor in human error in the ability to identify the unknown unknowns that can affect a system. These errors can be quantified to some degree by looking at the limited experimental data that may be available for previous system tests and identifying what percentage of tests resulted in system thresholds being exceeded in an unexpected manner. This approach attempts to predict future events based on the past occurrences of unexpected outcomes. The underlying parameters that serve as inputs to the models are frequently modeled as samples from a probability distribution. The input parameter model distributions as well as the model propagation equations determine the distribution of the output parameter values. The distribution of a specific output value must be considered when determining what is an acceptable M/U ratio for that performance variable. 
If the uncertainty limit for U includes a finite upper bound due to the particular distribution of that variable, a lower M/U ratio may be acceptable. However, if U is modeled as a normal or exponential distribution which can potentially include outliers from the far tails of the distribution, a larger value may be required in order to reduce system risk to an acceptable level. Ratios of acceptable M/U for safety critical systems can vary from application to application. Studies have cited acceptable M/U ratios as being in the 2:1 to 10:1 range for nuclear weapons stockpile decision-making. Intuitively, the larger the value of M/U, the less of the available performance margin is being consumed by uncertainty in the simulation outputs. A ratio of 1:1 could result in a simulation run where the simulated performance threshold is not exceeded when in actuality the entire design margin may have been consumed. It is important to note that rigorous QMU does not ensure that the system itself is capable of meeting its performance margin; rather, it serves to ensure that the decision-making authority can make judgments based on accurately characterized results. The underlying objective of QMU is to present information to decision-makers that fully characterizes the results in light of the uncertainty as understood by the model developers. This presentation of results allows decision makers an opportunity to make informed decisions while understanding what sensitivities exist in the results due to the current understanding of uncertainty. Advocates of QMU recognize that decisions for complex systems cannot be made strictly based on the quantified M/U metrics. Subject matter expert (SME) judgment and other external factors such as stakeholder opinions and regulatory issues must also be considered by the decision-making authority before a final outcome is decided. Verification and validation Verification and validation (V & V) of a model is closely interrelated with QMU. Verification is broadly acknowledged as the process of determining if a model was built correctly; validation activities focus on determining if the correct model was built. V&V against available experimental test data is an important aspect of accurately characterizing the overall uncertainty of the system response variables. V&V seeks to make maximum use of component and subsystem-level experimental test data to accurately characterize model input parameters and the physics-based models associated with particular sub-elements of the system. The use of QMU in the simulation process helps to ensure that the stochastic nature of the input variables (due to both aleatory and epistemic uncertainties) as well as the underlying uncertainty in the model are properly accounted for when determining the simulation runs required to establish model credibility prior to accreditation. Advantages and disadvantages QMU has the potential to support improved decision-making for programs that must rely heavily on modeling and simulation. Modeling and simulation results are being used more often during the acquisition, development, design, and testing of complex engineering systems. One of the major challenges of developing simulations is to know how much fidelity should be built into each element of the model. The pursuit of higher fidelity can significantly increase development time and total cost of the simulation development effort. 
QMU provides a formal method for describing the required fidelity relative to the design threshold margins for key performance variables. This information can also be used to prioritize areas of future investment for the simulation. Analysis of the various M/U ratios for the key performance variables can help identify model components that are in need of fidelity upgrades in order to increase simulation effectiveness. A variety of potential issues related to the use of QMU have also been identified. QMU can lead to longer development schedules and increased development costs relative to traditional simulation projects due to the additional rigor being applied. Proponents of QMU state that the level of uncertainty quantification required is driven by certification requirements for the intended application of the simulation. Simulations used for capability planning or system trade analyses must generally model the overall performance trends of the systems and components being analyzed. However, for safety-critical systems where experimental test data is lacking, simulation results provide a critical input to the decision-making process. Another potential risk related to the use of QMU is a false sense of confidence regarding protection from unknown risks. The use of quantified results for key simulation parameters can lead decision makers to believe all possible risks have been fully accounted for, which is particularly challenging for complex systems. Proponents of QMU advocate for a risk-informed decision-making process to counter this risk; in this paradigm, M/U results as well as SME judgment and other external factors are always factored into the final decision. See also Uncertainty quantification Sandia National Laboratory Los Alamos National Laboratory Lawrence Livermore National Laboratory Verification and Validation References Nuclear stockpile stewardship Numerical analysis Decision-making
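As a minimal sketch of the margin-to-uncertainty ratio described above (my own illustration; the threshold, the output distribution and the choice of two standard deviations as the uncertainty measure are all assumptions, not prescriptions from the QMU literature):

```python
import random
import statistics

def margin_to_uncertainty(samples, upper_threshold):
    """Illustrative M/U: design margin divided by a simple uncertainty measure.

    Margin M is the distance from the best estimate (sample mean) to an upper
    performance threshold; U is taken here as twice the sample standard
    deviation of the simulated outputs.
    """
    best_estimate = statistics.mean(samples)
    margin = upper_threshold - best_estimate
    uncertainty = 2.0 * statistics.stdev(samples)
    return margin / uncertainty

# Hypothetical Monte Carlo outputs for a metric that must stay below 100
outputs = [random.gauss(80.0, 3.0) for _ in range(10_000)]
print(round(margin_to_uncertainty(outputs, 100.0), 2))  # roughly 3.3
```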
Quantification of margins and uncertainties
[ "Mathematics" ]
2,166
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
15,857,311
https://en.wikipedia.org/wiki/Mining%20feasibility%20study
A mining feasibility study is an evaluation of a proposed mining project to determine whether the mineral resource can be mined economically. There are three types of feasibility study used in mining, order of magnitude, preliminary feasibility and detailed feasibility. Order of magnitude Order of magnitude feasibility studies (sometimes referred to as "scoping studies") are an initial financial appraisal of an inferred mineral resource. Depending on the size of the project, an order of magnitude study may be carried out by a single individual. It will involve a preliminary mine plan, and is the basis for determining whether to proceed with an exploration program, and more detailed engineering work. Order-of-magnitude studies are developed by copying plans and factoring known costs from existing projects completed elsewhere and are accurate to within 40–50%. Preliminary feasibility Preliminary feasibility studies or "pre-feasibility studies" are more detailed than order of magnitude studies. A preliminary feasibility study is used in due diligence work, determining whether to proceed with a detailed feasibility study and as a "reality check" to determine areas within the project that require more attention. Preliminary feasibility studies are done by factoring known unit costs and by estimating gross dimensions or quantities once conceptual or preliminary engineering and mine design has been completed. Preliminary feasibility studies are completed by a small group of multi-disciplined technical individuals and have an accuracy within 20-30%. Detailed feasibility Detailed feasibility studies are the most detailed and will determine definitively whether to proceed with the project. A detailed feasibility study will be the basis for capital appropriation, and will provide the budget figures for the project. Detailed feasibility studies require a significant amount of formal engineering work, are accurate to within 10-15% and can cost between ½-1½% of the total estimated project cost. Footnotes Mining engineering Feasibility study
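As a simple illustration of what the accuracy figures above imply (my own sketch; the $500 million project cost is hypothetical):

```python
def estimate_range(base_cost, accuracy_fraction):
    """Cost range implied by a study accuracy such as +/-25% (illustrative only)."""
    return base_cost * (1.0 - accuracy_fraction), base_cost * (1.0 + accuracy_fraction)

base = 500e6  # hypothetical total project cost in dollars
low, high = estimate_range(base, 0.25)  # within the 20-30% pre-feasibility band
print(f"Pre-feasibility range: ${low/1e6:.0f}M to ${high/1e6:.0f}M")
print(f"Detailed study cost: ${base*0.005/1e6:.1f}M to ${base*0.015/1e6:.1f}M")  # 0.5-1.5% of project cost
```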
Mining feasibility study
[ "Engineering" ]
364
[ "Mining engineering" ]
15,858,460
https://en.wikipedia.org/wiki/Ei-ichi%20Negishi
Ei-ichi Negishi was a Japanese chemist who was best known for his discovery of the Negishi coupling. He spent most of his career at Purdue University in the United States, where he was the Herbert C. Brown Distinguished Professor and the director of the Negishi-Brown Institute. He was awarded the 2010 Nobel Prize in Chemistry "for palladium catalyzed cross couplings in organic synthesis" jointly with Richard F. Heck and Akira Suzuki. Early life and education Negishi was born in Xinjing (today known as Changchun), the capital of Manchukuo, in July 1935. Following the transfer in 1936 of his father, who worked at the South Manchuria Railway, he moved to Harbin, and lived there for eight years. In 1943, when he was nine, the Negishi family moved to Incheon, and a year later to Kyongsong Prefecture (now Seoul), both in Japanese-occupied Korea. In November 1945, three months after World War II ended, they moved to Japan. Since he excelled as a student, a year ahead of what would have been his graduation from grammar school, he was admitted to an elite secondary school, Shonan High School. At the age of 17, he gained admission to the University of Tokyo. After graduation from the University of Tokyo in 1958, Negishi did his internship at Teijin, where he conducted research on polymer chemistry. Later, he continued his studies in the United States after having won a Fulbright Scholarship and obtained his Ph.D. from the University of Pennsylvania in 1963, under the supervision of professor Allan R. Day. Career After obtaining his Ph.D., Negishi decided to become an academic researcher. Although he was hoping to work at a Japanese university, he could not find a position. In 1966 he resigned from Teijin, and became a postdoctoral associate at Purdue University, working under future Nobel laureate Herbert C. Brown. From 1968 to 1972 he was an instructor at Purdue. In 1972, he became an assistant professor at Syracuse University, where he began his lifelong study of transition metal–catalyzed reactions, and was promoted to associate professor in 1979. He returned to Purdue University as a full professor in the same year. He discovered the Negishi coupling, a process which couples organozinc compounds and organic halides under a palladium or nickel catalyst to obtain a C-C bonded product. For this achievement, he was awarded the Nobel Prize in Chemistry in 2010. Negishi also reported that organoaluminum compounds and organozirconium compounds can be used for cross-coupling. He did not seek a patent for this coupling technology and explained his reasoning as follows: "If we did not obtain a patent, we thought that everyone could use our results easily." In addition, the low-valent zirconocene species obtained by reducing zirconocene dichloride is also called the Negishi reagent, and can be used in oxidative cyclisation reactions. The technique he developed is estimated to be used in a quarter of all reactions in the pharmaceutical industry. By the time Negishi retired in 2019, he had published more than 400 academic papers. He was committed to instilling rigorous practices in his lab, emphasizing the need to keep organized and comprehensive records. Before any separations, he asked his students to evaluate crude reaction mixtures in order to minimize loss of any useful scientific information. Recognition Awards 1996 – A. R. Day Award (ACS Philadelphia Section award) 1997 – Chemical Society of Japan Award 1998 – Herbert N. 
McCoy Award 1998 – American Chemical Society Award for Organometallic Chemistry 1998–2000 – Alexander von Humboldt Senior Researcher Award 2003 – Sigma Xi Award, Purdue University 2007 – Yamada–Koga Prize 2007 – Gold Medal of Charles University, Prague, Czech Republic 2010 – Nobel Prize in Chemistry 2010 – ACS Award for Creative Work in Synthetic Organic Chemistry 2015 – Fray International Sustainability Award, SIPS 2015 Honors 1960–61 – Fulbright–Smith–Mundt Fellowship 1962–63 – Harrison Fellowship at University of Pennsylvania 1986 – Guggenheim Fellowship 2000 – Sir Edward Frankland Prize Lectureship 2009 – Invited Lectureship, 4th Mitsui International Catalysis Symposium (MICS-4), Kisarazu, Japan 2010 – Order of Culture 2010 – Person of Cultural Merit 2011 – Sagamore of the Wabash 2011 – Order of the Griffin, Purdue University 2011 – Fellow, American Academy of Arts & Sciences 2011 – Honorary doctor of science, University of Pennsylvania. 2012 – Honorary Fellow of the Royal Society of Chemistry (RSC) 2014 – Foreign Associate of the National Academy of Sciences Personal life and death Negishi began dating Sumire Suzuki in his freshman year and they announced their engagement to their parents in March 1958. They had met in a choir of which they were both members at university. They married the next year and together they had two daughters. Negishi loved playing the piano and conducting. During the "Pacifichem" 2015 conference's closing ceremony, he conducted an orchestra. Disappearance On the evening of March 12, 2018, both Negishi and his wife were reported missing by family members. Police determined that, based on a purchase made earlier in the day, the couple had left their home in West Lafayette, Indiana, and headed north. At about 5 a.m. the next day, officers in Ogle County, Illinois, received a call to check on the welfare of an elderly man who was walking on a rural road south of Rockford. When he was taken to a hospital, officers identified him as Negishi and found that police in Indiana were looking for him and his wife. A short time later, Suzuki's body was found at the Orchard Hills Landfill in Davis Junction, along with the couple's car. According to a statement from the family, the couple was driving to Rockford International Airport for a trip when their car became stuck in a ditch on a road near the landfill. Negishi went looking for help and was said to be suffering from an "acute state of confusion and shock". The Ogle County Sheriff Department said there was no suspicion of foul play in Suzuki's death, although the cause of her death was not immediately released. The family said Suzuki was near the end of her battle with Parkinson's disease. In May 2018, an autopsy concluded that Suzuki died from hypothermia, but Parkinson's disease and hypertension were contributing factors. Death Negishi died in Indianapolis, Indiana, on June 6, 2021. He was 85 years old. No funeral services took place in the United States, but his family planned to lay him to rest in Japan in 2022. See also List of Japanese Nobel laureates Richard F. 
Heck Makoto Kumada Akira Suzuki Kenkichi Sonogashira References External links Ei-ichi Negishi – – Purdue University 1935 births 2021 deaths Japanese organic chemists Japanese Nobel laureates Japanese people from Manchukuo Nobel laureates in Chemistry Syracuse University faculty Purdue University faculty Academic staff of Hokkaido University University of Tokyo alumni University of Pennsylvania alumni Recipients of the Order of Culture People from Changchun 20th-century Japanese chemists Foreign associates of the National Academy of Sciences 21st-century Japanese chemists Chemists from Jilin Educators from Jilin
Ei-ichi Negishi
[ "Chemistry" ]
1,487
[ "Organic chemists", "Japanese organic chemists" ]
13,179,037
https://en.wikipedia.org/wiki/GPER
G protein-coupled estrogen receptor 1 (GPER), also known as G protein-coupled receptor 30 (GPR30), is a protein that in humans is encoded by the GPER gene. GPER binds to and is activated by the female sex hormone estradiol and is responsible for some of the rapid effects that estradiol has on cells. Discovery The classical estrogen receptors first characterized in 1958 are water-soluble proteins located in the interior of cells that are activated by estrogenic hormones such as estradiol and several of its metabolites such as estrone or estriol. These proteins belong to the nuclear hormone receptor class of transcription factors that regulate gene transcription. Since it takes time for genes to be transcribed into RNA and translated into protein, the effects of estrogens binding to these classical estrogen receptors are delayed. However, estrogens are also known to have effects that are too fast to be caused by regulation of gene transcription. In 2005, it was discovered that a member of the G protein-coupled receptor (GPCR) family, GPR30, also binds with high affinity to estradiol and is responsible in part for the rapid non-genomic actions of estradiol. Based on its ability to bind estradiol, GPR30 was renamed as G protein-coupled estrogen receptor (GPER). GPER is localized in the plasma membrane but is predominantly detected in the endoplasmic reticulum. Ligands GPER binds estradiol with high affinity though not other endogenous estrogens, such as estrone or estriol, nor other endogenous steroids, including progesterone, testosterone, and cortisol. Although potentially involved in signaling by aldosterone, GPER does not show any detectable binding towards aldosterone. Niacin and nicotinamide bind to the receptor in vitro with very low affinity. CCL18 has been identified as an endogenous antagonist of the GPER. GPER-selective ligands (that do not bind the classical estrogen receptors) include the agonist G-1 and the antagonists G15 and G36. Agonists 2-Methoxyestradiol 2,2',5'-PCB-4-OH Afimoxifene Aldosterone Atrazine Bisphenol A Daidzein DDT (p,p'-DDT, o',p'-DDE) Diarylpropionitrile (DPN) Equol Estradiol Ethynylestradiol Fulvestrant (ICI-182780) G-1 Genistein GPER-L1 GPER-L2 Hydroxytyrosol Kepone LNS8801 Niacin Nicotinamide Nonylphenol Oleuropein Protocatechuic aldehyde Propylpyrazoletriol (PPT) Quercetin Raloxifene Resveratrol STX Tamoxifen Tectoridin Antagonists CCL18 Estriol G15 G36 MIBE Unknown Diethylstilbestrol Zearalenone Non-ligand 17α-Estradiol Estrone Function This protein is a member of the rhodopsin-like family of G protein-coupled receptors and is a multi-pass membrane protein that localizes to the plasma membrane. The protein binds estradiol, resulting in intracellular calcium mobilization and synthesis of phosphatidylinositol (3,4,5)-trisphosphate in the nucleus. This protein therefore plays a role in the rapid nongenomic signaling events widely observed following stimulation of cells and tissues with estradiol. The distribution of GPER is well established in the rodent, with high expression observed in the hypothalamus, pituitary gland, adrenal medulla, kidney medulla and developing follicles of the ovary. Role in cancer GPER expression has been studied in cancer using immunohistochemical and transcriptomic approaches, and has been detected in colon, lung, melanoma, pancreatic, breast, ovarian, and testicular cancer. 
Many groups have demonstrated that GPER signaling is tumor suppressive in cancers that are not traditionally hormone responsive, including melanoma, pancreatic, lung and colon cancer. Additionally, many groups have demonstrated that GPER activation is also tumor suppressive in cancers that are classically considered sex hormone responsive, including endometrial cancer, ovarian cancer, prostate cancer, and Leydig cell tumors. Although GPER signaling was originally thought to be tumor promoting in some breast cancer models, subsequent reports show that GPER signaling inhibits breast cancer. Consistent with this, recent studies showed that the presence of GPER protein in human breast cancer tissue correlates with longer survival. In summary, many independent groups have demonstrated that GPER activation may be a therapeutically useful mechanism for a wide range of cancer types. Linnaeus Therapeutics is currently running NCI clinical trial (NCT04130516) using GPER agonist, LNS8801, as monotherapy and in combination with the immune checkpoint inhibitor, pembrolizumab, for the treatment of multiple solid tumor malignancies. Activation of GPER with LNS8801 has demonstrated efficacy in humans in cutaneous melanoma, uveal melanoma, lung cancer, neuroendocrine cancer, colorectal cancer, and other PD-1 inhibitor refractory cancers. Role in normal tissues Reproductive tissue Estradiol produces cell proliferation in both normal and malignant breast epithelial tissue. However, GPER knockout mice show no overt mammary phenotype, unlike ERα knockout mice, but similarly to ERβ knockout mice. This indicates that although GPER and ERβ play a modulatory role in breast development, ERα is the main receptor responsible for estrogen-mediated breast tissue growth. GPER is expressed in germ cells and has been found to be essential for male fertility, specifically, in spermatogenesis. GPER has been found to modulate gonadotropin-releasing hormone (GnRH) secretion in the hypothalamic-pituitary-gonadal (HPG) axis. Cardiovascular effects GPER is expressed in the blood vessel endothelium and is responsible for vasodilation and as a result, blood pressure lowering effects of 17β-estradiol. GPER also regulates components of the renin–angiotensin system, which also controls blood pressure, and is required for superoxide-mediated cardiovascular function and aging. Central nervous system activity GPER and ERα, but not ERβ, have been found to mediate the antidepressant-like effects of estradiol. Contrarily, activation of GPER has been found to be anxiogenic in mice, while activation of ERβ has been found to be anxiolytic. There is a high expression of GPER, as well as ERβ, in oxytocin neurons in various parts of the hypothalamus, including the paraventricular nucleus and the supraoptic nucleus. It is speculated that activation of GPER may be the mechanism by which estradiol mediates rapid effects on the oxytocin system, for instance, rapidly increasing oxytocin receptor expression. Estradiol has also been found to increase oxytocin levels and release in the medial preoptic area and medial basal hypothalamus, actions that may be mediated by activation of GPER and/or ERβ. Estradiol, as well as tamoxifen and fulvestrant, have been found to rapidly induce lordosis through activation of GPER in the arcuate nucleus of the hypothalamus of female rats. Metabolic roles Female GPER knockout mice display hyperglycemia and impaired glucose tolerance, reduced body growth, and increased blood pressure. 
Male GPER knockout mice are observed to have increased growth, body fat, insulin resistance and glucose intolerance, dyslipidemia, increased osteoblast function (mineralization), resulting in higher bone mineral density and trabecular bone volume, and persistent growth plate activity resulting in longer bones. The GPER-selective agonist G-1 shows therapeutic efficacy in mouse models of obesity and diabetes. Role in neurological disorders GPER is broadly expressed in the nervous system, and GPER activation promotes beneficial effects in several brain disorders. A study suggests that GPER levels were significantly lower in children with ADHD compared to controls. See also Membrane estrogen receptor Gq-mER ER-X ERx References External links G protein-coupled receptors
GPER
[ "Chemistry" ]
1,814
[ "G protein-coupled receptors", "Signal transduction" ]
13,180,391
https://en.wikipedia.org/wiki/Anisohedral%20tiling
In geometry, a shape is said to be anisohedral if it admits a tiling, but no such tiling is isohedral (tile-transitive); that is, in any tiling by that shape there are two tiles that are not equivalent under any symmetry of the tiling. A tiling by an anisohedral tile is referred to as an anisohedral tiling. Existence The first part of Hilbert's eighteenth problem asked whether there exists an anisohedral polyhedron in Euclidean 3-space; Grünbaum and Shephard suggest that Hilbert was assuming that no such tile existed in the plane. Reinhardt answered Hilbert's problem in 1928 by finding examples of such polyhedra, and asserted that his proof that no such tiles exist in the plane would appear soon. However, Heesch then gave an example of an anisohedral tile in the plane in 1935. Convex tiles Reinhardt had previously considered the question of anisohedral convex polygons, showing that there were no anisohedral convex hexagons but being unable to show there were no such convex pentagons, while finding the five types of convex pentagon tiling the plane isohedrally. Kershner gave three types of anisohedral convex pentagon in 1968; one of these tiles using only direct isometries without reflections or glide reflections, so answering a question of Heesch. Isohedral numbers The problem of anisohedral tiling has been generalised by saying that the isohedral number of a tile is the lowest number of orbits (equivalence classes) of tiles in any tiling by that tile under the action of the symmetry group of that tiling, and that a tile with isohedral number k is k-anisohedral. Berglund asked whether there exist k-anisohedral tiles for all k, giving examples for k ≤ 4 (examples of 2-anisohedral and 3-anisohedral tiles being previously known, while the 4-anisohedral tile given was the first such published tile). Goodman-Strauss considered this in the context of general questions about how complex the behaviour of a given tile or set of tiles can be, noting a 10-anisohedral example of Myers. Grünbaum and Shephard had previously raised a slight variation on the same question. Socolar showed in 2007 that arbitrarily high isohedral numbers can be achieved in two dimensions if the tile is disconnected, or has coloured edges with constraints on what colours can be adjacent, and in three dimensions with a connected tile without colours, noting that in two dimensions for a connected tile without colours the highest known isohedral number is 10. Joseph Myers has produced a collection of tiles with high isohedral numbers, particularly a polyhexagon with isohedral number 10 (occurring in 20 orbits under translation) and another with isohedral number 9 (occurring in 36 orbits under translation). References External links John Berglund, Anisohedral Tilings Page Joseph Myers, Polyomino, polyhex and polyiamond tiling Tessellation
Anisohedral tiling
[ "Physics", "Mathematics" ]
619
[ "Tessellation", "Planes (geometry)", "Euclidean plane geometry", "Symmetry" ]
13,180,501
https://en.wikipedia.org/wiki/Phytomining
Phytomining, sometimes called agromining, is the concept of extracting heavy metals from the soil using plants. Specifically, phytomining refers to extraction carried out for economic gain. The approach exploits the existence of hyperaccumulators, plants that take up and concentrate unusually high levels of certain metal ions in their tissues. These extracted ores are called bio-ores. A 2021 review concluded that the commercial viability of phytomining was "limited" because it is a slow and inefficient process. History Phytomining was first proposed in 1983 by Rufus Chaney, a USDA agronomist. He and Alan Baker, a University of Melbourne professor, first tested it in 1996. They, as well as Jay Scott Angle and Yin-Ming Li, filed a patent on the process in 1995, which expired in 2015. Advantages Phytomining would, in principle, cause minimal environmental effects compared to mining. Phytomining could also remove low-grade heavy metals from mine waste. See also Semisynthesis References Bioremediation Biotechnology Ecological restoration Environmental terminology Phytoremediation plants Soil contamination Sustainable technologies
Phytomining
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
235
[ "Ecological restoration", "Phytoremediation plants", "Environmental chemistry", "Biotechnology", "Biodegradation", "Environmental engineering", "Ecological techniques", "Soil contamination", "nan", "Bioremediation", "Environmental soil science" ]
13,182,827
https://en.wikipedia.org/wiki/Weld%20quality%20assurance
Weld quality assurance is the use of technological methods and actions to test or assure the quality of welds, and secondarily to confirm the presence, location and coverage of welds. In manufacturing, welds are used to join two or more metal surfaces. Because these connections may encounter loads and fatigue during product lifetime, there is a chance they may fail if not created to proper specification. Weld testing and analysis Methods of weld testing and analysis are used to assure the quality and correctness of the weld after it is completed. This term generally refers to testing and analysis focused on the quality and strength of the weld but may refer to technological actions to check for the presence, position, and extent of welds. These are divided into destructive and non-destructive methods. A few examples of destructive testing include macro etch testing, fillet-weld break tests, transverse tension tests, and guided bend tests. Other destructive methods include acid etch testing, back bend testing, tensile strength break testing, nick break testing, and free bend testing. Non-destructive methods include fluorescent penetrant tests, Magnaflux tests, eddy current (electromagnetic) tests, hydrostatic testing, tests using magnetic particles, X-rays and gamma ray-based methods, and acoustic emission techniques. Other methods include ferrite and hardness testing. Imaging-based methods Industrial Radiography X-ray-based weld inspection may be manual, performed by an inspector on X-ray-based images or video, or automated using machine vision. Gamma rays can also be used. Visible light imaging Inspection may be manual, conducted by an inspector using imaging equipment, or automated using machine vision. Since the similarity of materials between weld and workpiece, and between good and defective areas, provides little inherent contrast, the latter usually requires methods other than simple imaging. One (destructive) method involves the microscopic analysis of a weld cross-section. Ultrasonic- and acoustic-based methods Ultrasonic testing uses the principle that a gap in the weld changes the propagation of ultrasonic sound through the metal. One common method uses single-probe ultrasonic testing involving operator interpretation of an oscilloscope-type screen. Another uses a 2D array of ultrasonic sensors. Conventional, phased array and time of flight diffraction (TOFD) methods can be combined into the same piece of test equipment. Acoustic emission methods monitor for the sound created by the loading or flexing of the weld. Peel testing of spot welds This method involves tearing the weld apart and measuring the size of the remaining weld. Weld monitoring Weld monitoring methods ensure the weld's quality and correctness during welding. The term is generally applied to automated monitoring for weld-quality purposes and secondarily for process-control purposes such as vision-based robot guidance. Visual weld monitoring is also performed during the welding process. In vehicular applications, weld monitoring aims to enable improvements in the quality, durability, and safety of vehicles – with cost savings in the avoidance of recalls to fix the large proportion of systemic quality problems that arise from suboptimal welding. Quality monitoring of automatic welding can save production downtime and reduce the need for product reworking and recall. Industrial monitoring systems encourage high production rates and reduce scrap costs.
Inline coherent imaging Inline coherent imaging (ICI) is a recently developed interferometric technique based on optical coherence tomography that is used for quality assurance of keyhole laser beam welding, a welding method that is gaining popularity in a variety of industries. ICI aims a low-powered broadband light source through the same optical path as the primary welding laser. The beam enters the keyhole of the weld and is reflected back into the head optics by the bottom of the keyhole. An interference pattern is produced by combining the reflected light with a separate beam that has traveled through a path of a known distance. This interference pattern is then analyzed to obtain a precise measurement of the depth of the keyhole. Because these measurements are acquired in real-time, ICI can also be used to control the laser penetration depth by using the depth measurement in a feedback loop that modulates the laser's output power. Transient thermal analysis method Transient thermal analysis is used for range of weld optimization tasks. Signature image processing method Signature image processing (SIP) is a technology for analyzing electrical data collected from welding processes. Acceptable welding requires exact conditions; variations in conditions can render a weld unacceptable. SIP allows the identification of welding faults in real time, measures the stability of welding processes, and enables the optimization of welding processes. Development The idea of using electrical data analyzed by algorithms to assess the quality of the welds produced in robotic manufacturing emerged in 1995 from research by Associate Professor Stephen Simpson at the University of Sydney on the complex physical phenomena that occur in welding arcs. Simpson realized that a way of determining the quality of a weld could be developed without a definitive understanding of those phenomena. The development involved: a method for handling sampled data blocks by treating them as phase-space portrait signatures with appropriate image processing. Typically, one second's worth of sampled welding voltage and current data are collected from GMAW pulse or short arc welding processes. The data is converted to a 2D histogram, and signal-processing operations such as image smoothing are performed. a technique for analyzing welding signatures based on statistical methods from the social sciences, such as principal component analysis. The relationship between the welding voltage and the current reflects the state of the welding process, and the signature image includes this information. Comparing signatures quantitatively using principal component analysis allows for the spread of signature images, enabling faults to be detected and identified The system includes algorithms and mathematics appropriate for real-time welding analysis on personal computers, and the multidimensional optimization of fault-detection performance using experimental welding data. Comparing signature images from moment to moment in a weld provides a useful estimate of how stable the welding process is. "Through-the-arc" sensing, by comparing signature images when the physical parameters of the process change, leads to quantitative estimates—for example, of the position of the weld bead. Unlike systems that log information for later study or use X-rays or ultrasound to check samples, SIP technology looks at the electrical signal and detects faults when they occur. 
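As a rough illustration of the signature-image idea described above, the following Python sketch builds a crude voltage–current histogram from one second of synthetic welding data and compares two such signatures with a simple distance measure. It is not the SIP/WeldPrint algorithm itself: the histogram size, the synthetic waveforms, and the plain Euclidean distance are all assumptions standing in for the proprietary smoothing, principal-component analysis, and optimized fault-detection steps.

# Illustrative sketch only: builds a crude "signature image" from sampled
# welding voltage/current and compares two signatures with a simple distance.
import numpy as np

def signature_image(voltage, current, bins=64, v_range=(0, 40), i_range=(0, 400)):
    """Convert ~1 s of sampled V/I data into a normalized 2D histogram."""
    hist, _, _ = np.histogram2d(voltage, current, bins=bins, range=[v_range, i_range])
    total = hist.sum()
    return hist / total if total else hist

def signature_distance(sig_a, sig_b):
    """Simple Euclidean distance between two signature images (0 = identical)."""
    return float(np.linalg.norm(sig_a - sig_b))

# Synthetic example: a "stable" weld versus one with a disturbance in the arc.
rng = np.random.default_rng(0)
t = np.arange(4000)                     # ~1 s of data at 4 kHz, as in the text
v_ref = 24 + 2 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, t.size)
i_ref = 180 + 30 * np.sin(2 * np.pi * t / 50 + 1.0) + rng.normal(0, 5, t.size)
v_fault = v_ref + np.where((t > 1500) & (t < 2000), 6.0, 0.0)   # voltage spike
i_fault = i_ref - np.where((t > 1500) & (t < 2000), 60.0, 0.0)  # current dip

ref_sig = signature_image(v_ref, i_ref)
fault_sig = signature_image(v_fault, i_fault)
print("distance (ref vs ref):  ", signature_distance(ref_sig, ref_sig))
print("distance (ref vs fault):", signature_distance(fault_sig, ref_sig))

In a real system the distance between a live signature and a set of reference signatures, rather than a single pairwise comparison, would drive the fault decision.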
Data blocks of 4,000 points of electrical data are collected four times a second and converted to signature images. After image processing operations, statistical analyses of the signatures provide a quantitative assessment of the welding process, revealing its stability and reproducibility and providing fault detection and process diagnostics. A similar approach, using voltage-current histograms and a simplified statistical measure of distance between signature images, has been evaluated for tungsten inert gas (TIG) welding by researchers from Osaka University. Industrial application SIP provides the basis for the WeldPrint system, which consists of a front-end interface and software based on the SIP engine and relies on electrical signals alone. It is designed to be non-intrusive and sufficiently robust to withstand harsh industrial welding environments. The first major purchaser of the technology, GM Holden provided feedback that allowed the system to be refined in ways that increased its industrial and commercial value. Improvements in the algorithms, including multiple parameter optimization with a server network, have led to an order-of-magnitude improvement in fault-detection performance over the past five years. WeldPrint for arc welding became available in mid-2001. About 70 units have been deployed since 2001, about 90% used on the shop floors of automotive manufacturing companies and their suppliers. Industrial users include Lear (UK), Unidrive, GM Holden, Air International and QTB Automotive (Australia). Units have been leased to Australian companies such as Rheem, Dux, and OneSteel for welding evaluation and process improvement. The WeldPrint software received the Brother business software of the year award (2001); in 2003, the technology received the A$100,000 inaugural Australasian Peter Doherty Prize for Innovation; and WTi, the University of Sydney's original spin-off company, received an AusIndustry Certificate of Achievement in recognition of the development. SIP has opened opportunities for researchers to use it as a measurement tool both in welding and in related disciplines, such as structural engineering. Research opportunities have opened up in the application of biomonitoring of external EEGs, where SIP offers advantages in interpreting the complex signals Weld mapping Weld mapping is the process of assigning information to a weld repair or joint to enable easy identification of weld processes, production (welders, their qualifications, date welded), quality (visual inspection, NDT, standards and specifications) and traceability (tracking weld joints and welded castings, the origin of weld materials). Weld mapping should also incorporate a pictorial identification to represent the weld number on the fabrication drawing or casting repair. Military, nuclear and commercial industries possess unique quality standards (eg., ISO, CEN, ASME, ASTM, AWS, NAVSEA) which direct weld mapping procedures and specifications, both in metal casting in which defects are removed and filled in via GTAW (TIG welding) or SMAW (stick welding) processes, or fabrication of weld joints which primarily involves GMAW (MIG welding). See also Welding defect Industrial radiography Robot welding Pipeline and Hazardous Materials Safety Administration References Further reading ISO 3834-1: "Quality requirements for fusion welding of metallic materials. 
Criteria for the selection of the appropriate level of quality requirements" (2005) ISO 3834-2: "Quality requirements for fusion welding of metallic materials. Comprehensive quality requirements" (2005) ISO 3834-3: "Quality requirements for fusion welding of metallic materials. Standard quality requirements" (2005) ISO 3834-4: "Quality requirements for fusion welding of metallic materials. Elementary quality requirements" (2005) ISO 3834-5: "Quality requirements for fusion welding of metallic materials. Documents with which it is necessary to conform to claim conformity to the quality requirements of ISO 3834-2, ISO 3834-3 or ISO 3834-4" ISO/TR 3834-6: "Quality requirements for fusion welding of metallic materials. Guidelines on implementing ISO 3834" (2007) Welding
Weld quality assurance
[ "Engineering" ]
2,111
[ "Welding", "Mechanical engineering" ]
13,183,981
https://en.wikipedia.org/wiki/Istv%C3%A1n%20F%C3%A1ry
István Fáry (30 June 1922 – 2 November 1984) was a Hungarian-born mathematician known for his work in geometry and algebraic topology. He proved Fáry's theorem that every planar graph has a straight-line embedding in 1948, and the Fáry–Milnor theorem lower-bounding the curvature of a nontrivial knot in 1949. Biography Fáry was born June 30, 1922, in Gyula, Hungary. After studying for a master's degree at the University of Budapest, he moved to the University of Szeged, where he earned a Ph.D. in 1947. He then studied at the Sorbonne before taking a faculty position at the University of Montreal in 1955. He moved to the University of California, Berkeley in 1958 and became a full professor in 1962. He died on November 2, 1984, in El Cerrito, California. References External links Photos from the Oberwolfach Photo Collection 1922 births 1984 deaths 20th-century Hungarian mathematicians University of California, Berkeley College of Letters and Science faculty Geometers Topologists University of Paris alumni Hungarian expatriates in France Hungarian expatriates in Canada Hungarian expatriates in the United States
István Fáry
[ "Mathematics" ]
252
[ "Topologists", "Topology", "Geometers", "Geometry" ]
13,185,596
https://en.wikipedia.org/wiki/Stack%20%28mathematics%29
In mathematics a stack or 2-sheaf is, roughly speaking, a sheaf that takes values in categories rather than sets. Stacks are used to formalise some of the main constructions of descent theory, and to construct fine moduli stacks when fine moduli spaces do not exist. Descent theory is concerned with generalisations of situations where isomorphic, compatible geometrical objects (such as vector bundles on topological spaces) can be "glued together" within a restriction of the topological basis. In a more general set-up the restrictions are replaced with pullbacks; fibred categories then make a good framework to discuss the possibility of such gluing. The intuitive meaning of a stack is that it is a fibred category such that "all possible gluings work". The specification of gluings requires a definition of coverings with regard to which the gluings can be considered. It turns out that the general language for describing these coverings is that of a Grothendieck topology. Thus a stack is formally given as a fibred category over another base category, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain gluings with respect to the Grothendieck topology. Overview Stacks are the underlying structure of algebraic stacks (also called Artin stacks) and Deligne–Mumford stacks, which generalize schemes and algebraic spaces and which are particularly useful in studying moduli spaces. There are inclusions: schemes ⊆ algebraic spaces ⊆ Deligne–Mumford stacks ⊆ algebraic stacks (Artin stacks) ⊆ stacks. and give a brief introductory accounts of stacks, , and give more detailed introductions, and describes the more advanced theory. Motivation and history The concept of stacks has its origin in the definition of effective descent data in . In a 1959 letter to Serre, Grothendieck observed that a fundamental obstruction to constructing good moduli spaces is the existence of automorphisms. A major motivation for stacks is that if a moduli space for some problem does not exist because of the existence of automorphisms, it may still be possible to construct a moduli stack. studied the Picard group of the moduli stack of elliptic curves, before stacks had been defined. Stacks were first defined by , and the term "stack" was introduced by for the original French term "champ" meaning "field". In this paper they also introduced Deligne–Mumford stacks, which they called algebraic stacks, though the term "algebraic stack" now usually refers to the more general Artin stacks introduced by . When defining quotients of schemes by group actions, it is often impossible for the quotient to be a scheme and still satisfy desirable properties for a quotient. For example, if a few points have non-trivial stabilisers, then the categorical quotient will not exist among schemes, but it will exist as a stack. In the same way, moduli spaces of curves, vector bundles, or other geometric objects are often best defined as stacks instead of schemes. Constructions of moduli spaces often proceed by first constructing a larger space parametrizing the objects in question, and then quotienting by group action to account for objects with automorphisms which have been overcounted. Definitions Abstract stacks A category with a functor to a category is called a fibered category over if for any morphism in and any object of with image (under the functor), there is a pullback of by . 
This means a morphism with image such that any morphism with image can be factored as by a unique morphism in such that the functor maps to . The element is called the pullback of along and is unique up to canonical isomorphism. The category c is called a prestack over a category C with a Grothendieck topology if it is fibered over C and for any object U of C and objects x, y of c with image U, the functor from the over category C/U to sets taking F:V→U to Hom(F*x,F*y) is a sheaf. This terminology is not consistent with the terminology for sheaves: prestacks are the analogues of separated presheaves rather than presheaves. Some authors require this as a property of stacks, rather than of prestacks. The category c is called a stack over the category C with a Grothendieck topology if it is a prestack over C and every descent datum is effective. A descent datum consists roughly of a covering of an object V of C by a family Vi, elements xi in the fiber over Vi, and morphisms fji between the restrictions of xi and xj to Vij=Vi×VVj satisfying the compatibility condition fki = fkjfji. The descent datum is called effective if the elements xi are essentially the pullbacks of an element x with image V. A stack is called a stack in groupoids or a (2,1)-sheaf if it is also fibered in groupoids, meaning that its fibers (the inverse images of objects of C) are groupoids. Some authors use the word "stack" to refer to the more restrictive notion of a stack in groupoids. Algebraic stacks An algebraic stack or Artin stack is a stack in groupoids X over the fppf site such that the diagonal map of X is representable and there exists a smooth surjection from (the stack associated to) a scheme to X. A morphism Y X of stacks is representable if, for every morphism S X from (the stack associated to) a scheme to X, the fiber product Y ×X S is isomorphic to (the stack associated to) an algebraic space. The fiber product of stacks is defined using the usual universal property, and changing the requirement that diagrams commute to the requirement that they 2-commute. See also morphism of algebraic stacks for further information. The motivation behind the representability of the diagonal is the following: the diagonal morphism is representable if and only if for any pair of morphisms of algebraic spaces , their fiber product is representable. A Deligne–Mumford stack is an algebraic stack X such that there is an étale surjection from a scheme to X. Roughly speaking, Deligne–Mumford stacks can be thought of as algebraic stacks whose objects have no infinitesimal automorphisms. Local structure of algebraic stacks Since the inception of algebraic stacks it was expected that they are locally quotient stacks of the form where is a linearly reductive algebraic group. This was recently proved to be the case: given a quasi-separated algebraic stack locally of finite type over an algebraically closed field whose stabilizers are affine, and a smooth and closed point with linearly reductive stabilizer group , there exists an etale cover of the GIT quotient , where , such that the diagramis cartesian, and there exists an etale morphisminducing an isomorphism of the stabilizer groups at and . Examples Elementary examples Every sheaf from a category with a Grothendieck topology can canonically be turned into a stack. For an object , instead of a set there is a groupoid whose objects are the elements of and the arrows are the identity morphism. 
More concretely, let be a contravariant functor Then, this functor determines the following category an object is a pair consisting of a scheme in and an element a morphism consists of a morphism in such that . Via the forgetful functor , the category is a category fibered over . For example, if is a scheme in , then it determines the contravariant functor and the corresponding fibered category is the . Stacks (or prestacks) can be constructed as a variant of this construction. In fact, any scheme with a quasi-compact diagonal is an algebraic stack associated to the scheme . Stacks of objects A group-stack. The moduli stack of vector bundles: the category of vector bundles V→S is a stack over the category of topological spaces S. A morphism from V→S to W→T consists of continuous maps from S to T and from V to W (linear on fibers) such that the obvious square commutes. The condition that this is a fibered category follows because one can take pullbacks of vector bundles over continuous maps of topological spaces, and the condition that a descent datum is effective follows because one can construct a vector bundle over a space by gluing together vector bundles on elements of an open cover. The stack of quasi-coherent sheaves on schemes (with respect to the fpqc-topology and weaker topologies) The stack of affine schemes on a base scheme (again with respect to the fpqc topology or a weaker one) Constructions with stacks Stack quotients If is a scheme and is a smooth affine group scheme acting on , then there is a quotient algebraic stack , taking a scheme to the groupoid of -torsors over the -scheme with -equivariant maps to . Explicitly, given a space with a -action, form the stack , which (intuitively speaking) sends a space to the groupoid of pullback diagramswhere is a -equivariant morphism of spaces and is a principal -bundle. The morphisms in this category are just morphisms of diagrams where the arrows on the right-hand side are equal and the arrows on the left-hand side are morphisms of principal -bundles. Classifying stacks A special case of this when X is a point gives the classifying stack BG of a smooth affine group scheme G: It is named so since the category , the fiber over Y, is precisely the category of principal -bundles over . Note that itself can be considered as a stack, the moduli stack of principal G-bundles on Y. An important subexample from this construction is , which is the moduli stack of principal -bundles. Since the data of a principal -bundle is equivalent to the data of a rank vector bundle, this is isomorphic to the moduli stack of rank vector bundles . Moduli stack of line bundles The moduli stack of line bundles is since every line bundle is canonically isomorphic to a principal -bundle. Indeed, given a line bundle over a scheme , the relative specgives a geometric line bundle. By removing the image of the zero section, one obtains a principal -bundle. Conversely, from the representation , the associated line bundle can be reconstructed. Gerbes A gerbe is a stack in groupoids that is locally nonempty, for example the trivial gerbe that assigns to each scheme the groupoid of principal -bundles over the scheme, for some group . Relative spec and proj If A is a quasi-coherent sheaf of algebras in an algebraic stack X over a scheme S, then there is a stack Spec(A) generalizing the construction of the spectrum Spec(A) of a commutative ring A. 
An object of Spec(A) is given by an S-scheme T, an object x of X(T), and a morphism of sheaves of algebras from x*(A) to the coordinate ring O(T) of T. If A is a quasi-coherent sheaf of graded algebras in an algebraic stack X over a scheme S, then there is a stack Proj(A) generalizing the construction of the projective scheme Proj(A) of a graded ring A. Moduli stacks Moduli of curves studied the moduli stack M1,1 of elliptic curves, and showed that its Picard group is cyclic of order 12. For elliptic curves over the complex numbers the corresponding stack is similar to a quotient of the upper half-plane by the action of the modular group. The moduli space of algebraic curves defined as a universal family of smooth curves of given genus does not exist as an algebraic variety because in particular there are curves admitting nontrivial automorphisms. However there is a moduli stack , which is a good substitute for the non-existent fine moduli space of smooth genus curves. More generally there is a moduli stack of genus curves with marked points. In general this is an algebraic stack, and is a Deligne–Mumford stack for or or (in other words when the automorphism groups of the curves are finite). This moduli stack has a completion consisting of the moduli stack of stable curves (for given and ), which is proper over Spec Z. For example, is the classifying stack of the projective general linear group. (There is a subtlety in defining , as one has to use algebraic spaces rather than schemes to construct it.) Kontsevich moduli spaces Another widely studied class of moduli spaces are the Kontsevich moduli spaces parameterizing the space of stable maps between curves of a fixed genus to a fixed space whose image represents a fixed cohomology class. These moduli spaces are denotedand can have wild behavior, such as being reducible stacks whose components are non-equal dimension. For example, the moduli stack has smooth curves parametrized by an open subset . On the boundary of the moduli space, where curves may degenerate to reducible curves, there is a substack parametrizing reducible curves with a genus component and a genus component intersecting at one point, and the map sends the genus curve to a point. Since all such genus curves are parametrized by , and there is an additional dimensional choice of where these curves intersect on the genus curve, the boundary component has dimension . Other moduli stacks A Picard stack generalizes a Picard variety. The moduli stack of formal group laws classifies formal group laws. An ind-scheme such as an infinite projective space and a formal scheme is a stack. A moduli stack of shtukas is used in geometric Langlands program. (See also shtukas.) Geometric stacks Weighted projective stacks Constructing weighted projective spaces involves taking the quotient variety of some by a -action. In particular, the action sends a tupleand the quotient of this action gives the weighted projective space . Since this can instead be taken as a stack quotient, the weighted projective stack pg 30 isTaking the vanishing locus of a weighted polynomial in a line bundle gives a stacky weighted projective variety. Stacky curves Stacky curves, or orbicurves, can be constructed by taking the stack quotient of a morphism of curves by the monodromy group of the cover over the generic points. For example, take a projective morphismwhich is generically etale. 
The stack quotient of the domain by gives a stacky with stacky points that have stabilizer group at the fifth roots of unity in the -chart. This is because these are the points where the cover ramifies. Non-affine stack An example of a non-affine stack is given by the half-line with two stacky origins. This can be constructed as the colimit of two inclusion of . Quasi-coherent sheaves on algebraic stacks On an algebraic stack one can construct a category of quasi-coherent sheaves similar to the category of quasi-coherent sheaves over a scheme. A quasi-coherent sheaf is roughly one that looks locally like the sheaf of a module over a ring. The first problem is to decide what one means by "locally": this involves the choice of a Grothendieck topology, and there are many possible choices for this, all of which have some problems and none of which seem completely satisfactory. The Grothendieck topology should be strong enough so that the stack is locally affine in this topology: schemes are locally affine in the Zariski topology so this is a good choice for schemes as Serre discovered, algebraic spaces and Deligne–Mumford stacks are locally affine in the etale topology so one usually uses the etale topology for these, while algebraic stacks are locally affine in the smooth topology so one can use the smooth topology in this case. For general algebraic stacks the etale topology does not have enough open sets: for example, if G is a smooth connected group then the only etale covers of the classifying stack BG are unions of copies of BG, which are not enough to give the right theory of quasicoherent sheaves. Instead of using the smooth topology for algebraic stacks one often uses a modification of it called the Lis-Et topology (short for Lisse-Etale: lisse is the French term for smooth), which has the same open sets as the smooth topology but the open covers are given by etale rather than smooth maps. This usually seems to lead to an equivalent category of quasi-coherent sheaves, but is easier to use: for example it is easier to compare with the etale topology on algebraic spaces. The Lis-Et topology has a subtle technical problem: a morphism between stacks does not in general give a morphism between the corresponding topoi. (The problem is that while one can construct a pair of adjoint functors f*, f*, as needed for a geometric morphism of topoi, the functor f* is not left exact in general. This problem is notorious for having caused some errors in published papers and books.) This means that constructing the pullback of a quasicoherent sheaf under a morphism of stacks requires some extra effort. It is also possible to use finer topologies. Most reasonable "sufficiently large" Grothendieck topologies seem to lead to equivalent categories of quasi-coherent sheaves, but the larger a topology is the harder it is to handle, so one generally prefers to use smaller topologies as long as they have enough open sets. For example, the big fppf topology leads to essentially the same category of quasi-coherent sheaves as the Lis-Et topology, but has a subtle problem: the natural embedding of quasi-coherent sheaves into OX modules in this topology is not exact (it does not preserve kernels in general). Other types of stack Differentiable stacks and topological stacks are defined in a way similar to algebraic stacks, except that the underlying category of affine schemes is replaced by the category of smooth manifolds or topological spaces. 
More generally one can define the notion of an n-sheaf or n–1 stack, which is roughly a sort of sheaf taking values in n–1 categories. There are several inequivalent ways of doing this. 1-sheaves are the same as sheaves, and 2-sheaves are the same as stacks. They are called higher stacks. A very similar and analogous extension is to develop the stack theory on non-discrete objects (i.e., a space is really a spectrum in algebraic topology). The resulting stacky objects are called derived stacks (or spectral stacks). Jacob Lurie's under-construction book Spectral Algebraic Geometry studies a generalization that he calls a spectral Deligne–Mumford stack. By definition, it is a ringed ∞-topos that is étale-locally the étale spectrum of an E∞-ring (this notion subsumes that of a derived scheme, at least in characteristic zero.) Set-theoretical problems There are some minor set theoretical problems with the usual foundation of the theory of stacks, because stacks are often defined as certain functors to the category of sets and are therefore not sets. There are several ways to deal with this problem: One can work with Grothendieck universes: a stack is then a functor between classes of some fixed Grothendieck universe, so these classes and the stacks are sets in a larger Grothendieck universe. The drawback of this approach is that one has to assume the existence of enough Grothendieck universes, which is essentially a large cardinal axiom. One can define stacks as functors to the set of sets of sufficiently large rank, and keep careful track of the ranks of the various sets one uses. The problem with this is that it involves some additional rather tiresome bookkeeping. One can use reflection principles from set theory stating that one can find set models of any finite fragment of the axioms of ZFC to show that one can automatically find sets that are sufficiently close approximations to the universe of all sets. One can simply ignore the problem. This is the approach taken by many authors. See also Algebraic stack Chow group of a stack Deligne–Mumford stack Glossary of algebraic geometry Pursuing Stacks Quotient space of an algebraic stack Ring of modular forms Simplicial presheaf Stacks Project Toric stack Generalized space Notes References Pedagogical is an expository article describing the basics of stacks with examples. Guides to the literature https://maths-people.anu.edu.au/~alperj/papers/stacks-guide.pdf http://stacks.math.columbia.edu/tag/03B0 References Unfortunately this book uses the incorrect assertion that morphisms of algebraic stacks induce morphisms of lisse-étale topoi. Some of these errors were fixed by . Further reading External links "Good introductory references on algebraic stacks?" Algebraic geometry Category theory
Stack (mathematics)
[ "Mathematics" ]
4,457
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Category theory", "Algebraic geometry" ]
41,245
https://en.wikipedia.org/wiki/Hybrid%20balance
In telecommunications, a hybrid balance is an expression of the degree of electrical symmetry between two impedances connected to two conjugate sides of a hybrid coil or resistance hybrid. It is usually expressed in dB. If the respective impedances of the branches of the hybrid that are connected to the conjugate sides of the hybrid are known, hybrid balance may be computed by the formula for return loss. Telecommunications engineering Electrical parameters
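A minimal sketch of the computation referred to above, assuming the usual return-loss formula 20·log10(|Z1 + Z2| / |Z1 − Z2|); the impedance values in the example are made up for illustration and do not come from the article.

# Hybrid balance computed as return loss between two branch impedances.
import math

def hybrid_balance_db(z1: complex, z2: complex) -> float:
    """Hybrid balance in dB: 20*log10(|Z1 + Z2| / |Z1 - Z2|); larger = better balance."""
    return 20.0 * math.log10(abs(z1 + z2) / abs(z1 - z2))

# Example: a 600-ohm line against a slightly mismatched balancing network (~30 dB).
print(hybrid_balance_db(600 + 0j, 580 - 30j))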
Hybrid balance
[ "Engineering" ]
87
[ "Electrical engineering", "Telecommunications engineering", "Electrical parameters" ]
41,248
https://en.wikipedia.org/wiki/Hydroxyl%20ion%20absorption
Hydroxyl ion absorption is the absorption in optical fibers of electromagnetic radiation, including the near-infrared, due to the presence of trapped hydroxyl ions remaining from water as a contaminant. The hydroxyl (OH−) ion can penetrate glass during or after product fabrication, resulting in significant attenuation of discrete optical wavelengths, e.g., centred at 1.383 μm, used for communications via optical fibres. See also Electromagnetic absorption by water References Fiber optics Glass engineering and science
Hydroxyl ion absorption
[ "Chemistry", "Materials_science", "Engineering" ]
107
[ "Glass engineering and science", "Materials science", "Analytical chemistry stubs" ]
41,256
https://en.wikipedia.org/wiki/Index-matching%20material
In optics, an index-matching material is a substance, usually a liquid, cement (adhesive), or gel, which has an index of refraction that closely approximates that of another object (such as a lens, material, fiber-optic, etc.). When two substances with the same index are in contact, light passes from one to the other with neither reflection nor refraction. As such, they are used for various purposes in science, engineering, and art. For example, in a popular home experiment, a glass rod is made almost invisible by immersing it in an index-matched transparent fluid such as mineral spirits. In microscopy In light microscopy, oil immersion is a technique used to increase the resolution of a microscope. This is achieved by immersing both the objective lens and the specimen in a transparent oil of high refractive index, thereby increasing the numerical aperture of the objective lens. Immersion oils are transparent oils that have specific optical and viscosity characteristics necessary for use in microscopy. Typical oils used have an index of refraction around 1.515. An oil immersion objective is an objective lens specially designed to be used in this way. The index of the oil is typically chosen to match the index of the microscope lens glass, and of the cover slip. For more details, see the main article, oil immersion. Some microscopes also use other index-matching materials besides oil; see water immersion objective and solid immersion lens. In fiber optics In fiber optics and telecommunications, an index-matching material may be used in conjunction with pairs of mated connectors or with mechanical splices to reduce signal reflected in the guided mode (known as return loss) (see Optical fiber connector). Without the use of an index-matching material, Fresnel reflections will occur at the smooth end faces of a fiber unless there is no fiber-air interface or other significant mismatch in refractive index. These reflections may be as high as −14 dB (i.e., 14 dB below the optical power of the incident signal). When the reflected signal returns to the transmitting end, it may be reflected again and return to the receiving end at a level that is 28 dB plus twice the fiber loss below the direct signal. The reflected signal will also be delayed by twice the delay time introduced by the fiber. The twice-reflected, delayed signal superimposed on the direct signal may noticeably degrade an analog baseband intensity-modulated video signal. Conversely, for digital transmission, the reflected signal will often have no practical effect on the detected signal seen at the decision point of the digital optical receiver except in marginal cases where bit-error ratio is significant. However, certain digital transmitters such as those employing a Distributed Feedback Laser may be affected by back reflection and then fall outside specifications such as Side Mode Suppression Ratio, potentially degrading system bit error ratio, so networking standards intended for DFB lasers may specify a back-reflection tolerance such as −10 dB for transmitters so that they remain within specification even without index matching. This back-reflection tolerance might be achieved using an optical isolator or by way of reduced coupling efficiency. For some applications, instead of standard polished connectors (e.g. FC/PC), angle polished connectors (e.g. FC/APC) may be used, whereby the non-perpendicular polish angle greatly reduces the ratio of reflected signal launched into the guided mode even in the case of a fiber-air interface. 
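The −14 dB figure quoted above can be reproduced from the normal-incidence Fresnel reflectance. The short Python sketch below assumes a typical silica-fiber index of about 1.468 and an illustrative matching-gel index of 1.45; exact values vary by fiber and gel.

# Normal-incidence Fresnel reflectance and the corresponding return loss in dB.
import math

def fresnel_reflectance(n1: float, n2: float) -> float:
    """Power reflectance at normal incidence for an n1/n2 interface."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def return_loss_db(reflectance: float) -> float:
    """Return loss in dB: the reflected signal is this many dB below the incident signal."""
    return -10.0 * math.log10(reflectance)

n_fiber = 1.468   # typical silica fiber index (assumed)
print(return_loss_db(fresnel_reflectance(n_fiber, 1.0)))    # fiber/air gap: ~14 dB
print(return_loss_db(fresnel_reflectance(n_fiber, 1.45)))   # index-matching gel: ~44 dB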
In experimental fluid dynamics Index matching is used in liquid-liquid and liquid-solid (Multiphase flow) experimental systems to minimise the distortions that occur in these systems, this is particularly important for systems with many interfaces which become optically inaccessible. Matching the refractive index minimises reflection, refraction, diffraction and rotations that occurs at the interfaces allowing access to regions that would otherwise be inaccessible to optical measurements. This is particularly important for advanced optical measurements like Laser-induced fluorescence, Particle image velocimetry and Particle tracking velocimetry to name a few. In art conservation If a sculpture is broken into several pieces, art conservators may reattach the pieces using an adhesive such as Paraloid B-72 or epoxy. If the sculpture is made of a transparent or semitransparent material (such as glass), the seam where the pieces are attached will usually be much less noticeable if the refractive index of the adhesive matches the refractive index of the surrounding object. Therefore, art conservators may measure the index of objects and then use an index-matched adhesive. Similarly, losses (missing sections) in transparent or semitransparent objects are often filled using an index-matched material. In optical component adhesives Certain optical components, such as a Wollaston prism or Nicol prism, are made of multiple transparent pieces that are directly attached to each other. The adhesive is usually index-matched to the pieces. Historically, Canada balsam was used in this application, but it is now more common to use epoxy or other synthetic adhesives. References Fiber optics Optical materials
Index-matching material
[ "Physics" ]
1,050
[ "Materials", "Optical materials", "Matter" ]
41,281
https://en.wikipedia.org/wiki/Intermediate-field%20region
In antenna theory, intermediate-field region (also known as intermediate field, intermediate zone or transition zone) refers to the transition region lying between the near-field region and the far-field region in which the field strength of an electromagnetic wave is dependent upon the inverse distance, inverse square of the distance, and the inverse cube of the distance from the antenna. For an antenna that is small compared to the wavelength in question, the intermediate-field region is considered to exist at all distances between 0.1 wavelength and 1.0 wavelength from the antenna. References Radio frequency propagation
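As a rough numerical illustration of why the transition zone sits near these distances, the sketch below compares the relative sizes of the 1/r, 1/r² and 1/r³ distance dependences for an electrically small antenna, written in terms of kr = 2πr/λ. Constant prefactors and angular factors are omitted, so only the ratios between the three terms are meaningful; they become comparable near r ≈ 0.16 wavelength, where kr = 1.

# Relative magnitudes of the radiation (1/r), induction (1/r^2) and
# quasi-static (1/r^3) terms versus distance in wavelengths.
import math

def field_terms(r_over_wavelength: float):
    kr = 2 * math.pi * r_over_wavelength
    return 1 / kr, 1 / kr**2, 1 / kr**3

for r in (0.05, 0.1, 0.159, 0.5, 1.0, 2.0):
    rad, ind, static = field_terms(r)
    print(f"r = {r:5.3f} wavelengths: 1/kr = {rad:8.3f}  1/kr^2 = {ind:8.3f}  1/kr^3 = {static:8.3f}")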
Intermediate-field region
[ "Physics" ]
117
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,306
https://en.wikipedia.org/wiki/Lambert%27s%20cosine%20law
In optics, Lambert's cosine law says that the observed radiant intensity or luminous intensity from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle θ between the observer's line of sight and the surface normal; . The law is also known as the cosine emission law or Lambert's emission law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760. A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface has a constant radiance/luminance, regardless of the angle from which it is observed; a single human eye perceives such a surface as having a constant brightness, regardless of the angle from which the eye observes the surface. It has the same radiance because, although the emitted power from a given area element is reduced by the cosine of the emission angle, the solid angle, subtended by surface visible to the viewer, is reduced by the very same amount. Because the ratio between power and solid angle is constant, radiance (power per unit solid angle per unit projected source area) stays the same. Lambertian scatterers and radiators When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or photons /time/area) landing on that area element will be proportional to the cosine of the angle between the illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards the terminator due to the increased angle at which sunlight hit the surface. The fact that it does not diminish illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles than a Lambertian scatterer. The emission of a Lambertian radiator does not depend on the amount of incident radiation, but rather from radiation originating in the emitting body itself. For example, if the sun were a Lambertian radiator, one would expect to see a constant brightness across the entire solar disc. The fact that the sun exhibits limb darkening in the visible region illustrates that it is not a Lambertian radiator. A black body is an example of a Lambertian radiator. Details of equal brightness effect The situation for a Lambertian surface (emitting or scattering) is illustrated in Figures 1 and 2. For conceptual clarity we will think in terms of photons rather than energy or luminous energy. The wedges in the circle each represent an equal angle dΩ, of an arbitrarily chosen size, and for a Lambertian surface, the number of photons per second emitted into each wedge is proportional to the area of the wedge. The length of each wedge is the product of the diameter of the circle and cos(θ). The maximum rate of photon emission per unit solid angle is along the normal, and diminishes to zero for θ = 90°. In mathematical terms, the radiance along the normal is I photons/(s·m2·sr) and the number of photons per second emitted into the vertical wedge is . The number of photons per second emitted into the wedge at angle θ is . Figure 2 represents what an observer sees. 
The observer directly above the area element will be seeing the scene through an aperture of area dA0 and the area element dA will subtend a (solid) angle of dΩ0, which is a portion of the observer's total angular field-of-view of the scene. Since the wedge size dΩ was chosen arbitrarily, for convenience we may assume without loss of generality that it coincides with the solid angle subtended by the aperture when "viewed" from the locus of the emitting area element dA. Thus the normal observer will then be recording the same I·dΩ·dA photons per second derived above and will measure a radiance of I·dΩ·dA/(dΩ0·dA0) photons/(s·m2·sr). The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA0 (still corresponding to a dΩ wedge) and from this oblique vantage the area element dA is foreshortened and will subtend a (solid) angle of dΩ0 cos(θ). This observer will be recording I·cos(θ)·dΩ·dA photons per second, and so will be measuring a radiance of I·cos(θ)·dΩ·dA/(dΩ0·cos(θ)·dA0) = I·dΩ·dA/(dΩ0·dA0) photons/(s·m2·sr), which is the same as the normal observer. Relating peak luminous intensity and luminous flux In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the Lambertian assumption holds, we can calculate the total luminous flux, Ftot, from the peak luminous intensity, Imax, by integrating the cosine law over the hemisphere: Ftot = ∫0..2π ∫0..π/2 Imax cos(θ) sin(θ) dθ dφ = 2π·Imax ∫0..π/2 cos(θ) sin(θ) dθ, and so Ftot = π sr · Imax, where sin(θ) is the determinant of the Jacobian matrix for the unit sphere, and realizing that Imax is the luminous flux per steradian in the normal direction. Similarly, the peak intensity will be 1/π of the total radiated luminous flux. For Lambertian surfaces, the same factor of π relates luminance to luminous emittance, radiant intensity to radiant flux, and radiance to radiant emittance. Radians and steradians are, of course, dimensionless and so "rad" and "sr" are included only for clarity. Example: A surface with a luminance of say 100 cd/m2 (= 100 nits, typical PC monitor) will, if it is a perfect Lambert emitter, have a luminous emittance of 100π lm/m2. If its area is 0.1 m2 (~19" monitor) then the total light emitted, or luminous flux, would thus be 31.4 lm. See also Transmittance Reflectivity Passive solar building design Sun path References Eponymous laws of physics Radiometry Photometry 3D computer graphics Scattering
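A short numerical check of the relations above, assuming an ideal Lambertian emitter: integrating Imax·cos(θ) over the hemisphere gives π·Imax, and a 100 cd/m² emitter of area 0.1 m² radiates roughly 31.4 lm, matching the example.

# Numerical verification of the pi factor and the monitor example.
import math
import numpy as np

# F = integral over phi (0..2*pi) and theta (0..pi/2) of I_max*cos(theta)*sin(theta)
i_max = 1.0
theta = np.linspace(0.0, np.pi / 2, 200000)
dtheta = theta[1] - theta[0]
flux = 2.0 * np.pi * np.sum(i_max * np.cos(theta) * np.sin(theta)) * dtheta
print(flux, np.pi * i_max)                # the two values agree closely

# Monitor example from the text: 100 cd/m^2 Lambertian emitter with 0.1 m^2 area.
luminous_emittance = math.pi * 100.0      # lm/m^2
print(luminous_emittance * 0.1)           # ~31.4 lm total luminous flux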
Lambert's cosine law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,318
[ "Telecommunications engineering", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics", "Radiometry" ]
41,343
https://en.wikipedia.org/wiki/Magneto-optic%20effect
A magneto-optic effect is any one of a number of phenomena in which an electromagnetic wave propagates through a medium that has been altered by the presence of a quasistatic magnetic field. In such a medium, which is also called gyrotropic or gyromagnetic, left- and right-rotating elliptical polarizations can propagate at different speeds, leading to a number of important phenomena. When light is transmitted through a layer of magneto-optic material, the result is called the Faraday effect: the plane of polarization can be rotated, forming a Faraday rotator. The results of reflection from a magneto-optic material are known as the magneto-optic Kerr effect (not to be confused with the nonlinear Kerr effect). In general, magneto-optic effects break time reversal symmetry locally (i.e. when only the propagation of light, and not the source of the magnetic field, is considered) as well as Lorentz reciprocity, which is a necessary condition to construct devices such as optical isolators (through which light passes in one direction but not the other). Two gyrotropic materials with reversed rotation directions of the two principal polarizations, corresponding to complex-conjugate ε tensors for lossless media, are called optical isomers. Gyrotropic permittivity In particular, in a magneto-optic material the presence of a magnetic field (either externally applied or because the material itself is ferromagnetic) can cause a change in the permittivity tensor ε of the material. The ε becomes anisotropic, a 3×3 matrix, with complex off-diagonal components, depending on the frequency ω of incident light. If the absorption losses can be neglected, ε is a Hermitian matrix. The resulting principal axes become complex as well, corresponding to elliptically-polarized light where left- and right-rotating polarizations can travel at different speeds (analogous to birefringence). More specifically, for the case where absorption losses can be neglected, the most general form of Hermitian ε is: ε = [[ε′xx, ε′xy + i gz, ε′xz − i gy], [ε′xy − i gz, ε′yy, ε′yz + i gx], [ε′xz + i gy, ε′yz − i gx, ε′zz]], or equivalently the relationship between the displacement field D and the electric field E is: D = εE = ε′E + i E × g, where ε′ is a real symmetric matrix and g = (gx, gy, gz) is a real pseudovector called the gyration vector, whose magnitude is generally small compared to the eigenvalues of ε′. The direction of g is called the axis of gyration of the material. To first order, g is proportional to the applied magnetic field: g = χ⁽ᵐ⁾H, where χ⁽ᵐ⁾ is the magneto-optical susceptibility (a scalar in isotropic media, but more generally a tensor). If this susceptibility itself depends upon the electric field, one can obtain a nonlinear optical effect of magneto-optical parametric generation (somewhat analogous to a Pockels effect whose strength is controlled by the applied magnetic field). The simplest case to analyze is the one in which g is a principal axis (eigenvector) of ε′, and the other two eigenvalues of ε′ are identical. Then, if we let g lie in the z direction for simplicity, the ε tensor simplifies to the form: ε = [[ε1, i g, 0], [−i g, ε1, 0], [0, 0, ε2]]. Most commonly, one considers light propagating in the z direction (parallel to g). In this case the solutions are elliptically polarized electromagnetic waves with phase velocities 1/√(μ(ε1 ± g)) (where μ is the magnetic permeability). This difference in phase velocities leads to the Faraday effect. For light propagating purely perpendicular to the axis of gyration, the properties are known as the Cotton-Mouton effect and used for a circulator.
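A minimal numerical sketch of the simplified case above (g along z, losses neglected): the two circular polarizations are eigenvectors of the gyrotropic tensor, with effective permittivities ε1 ± g and therefore different phase velocities, which is the origin of Faraday rotation. The values of ε1, ε2, g and μ below are arbitrary illustrative numbers, not material data.

# Gyrotropic permittivity for g along z and the phase velocities of the two
# circular polarizations propagating along z.
import numpy as np

eps1, eps2, g = 2.25, 2.25, 0.01       # illustrative: glass-like index with weak gyrotropy
mu = 1.0                                # relative permeability

epsilon = np.array([[eps1,     1j * g, 0.0],
                    [-1j * g,  eps1,   0.0],
                    [0.0,      0.0,    eps2]])

# Circular polarization unit vectors in the x-y plane (propagation along z):
e_plus  = np.array([1.0,  1j, 0.0]) / np.sqrt(2)
e_minus = np.array([1.0, -1j, 0.0]) / np.sqrt(2)

for name, e in (("+", e_plus), ("-", e_minus)):
    eps_eff = np.real(np.conj(e) @ (epsilon @ e))   # effective permittivity of this mode
    v_rel = 1.0 / np.sqrt(mu * eps_eff)             # phase velocity in units of c
    print(f"polarization {name}: eps_eff = {eps_eff:.4f}, v/c = {v_rel:.6f}")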
Kerr rotation and Kerr ellipticity Kerr rotation and Kerr ellipticity are changes in the polarization of incident light which comes in contact with a gyromagnetic material. Kerr rotation is a rotation in the plane of polarization of transmitted light, and Kerr ellipticity is the ratio of the major to minor axis of the ellipse traced out by elliptically polarized light on the plane through which it propagates. Changes in the orientation of polarized incident light can be quantified using these two properties. According to classical physics, the speed of light varies with the permittivity of a material: where is the velocity of light through the material, is the material permittivity, and is the material permeability. Because the permittivity is anisotropic, polarized light of different orientations will travel at different speeds. This can be better understood if we consider a wave of light that is circularly polarized (seen to the right). If this wave interacts with a material at which the horizontal component (green sinusoid) travels at a different speed than the vertical component (blue sinusoid), the two components will fall out of the 90 degree phase difference (required for circular polarization) changing the Kerr ellipticity. A change in Kerr rotation is most easily recognized in linearly polarized light, which can be separated into two circularly polarized components: Left-handed circular polarized (LHCP) light and right-handed circular polarized (RHCP) light. The anisotropy of the magneto-optic material permittivity causes a difference in the speed of LHCP and RHCP light, which will cause a change in the angle of polarized light. Materials that exhibit this property are known as birefringent. From this rotation, we can calculate the difference in orthogonal velocity components, find the anisotropic permittivity, find the gyration vector, and calculate the applied magnetic field . See also Zeeman effect QMR effect Magneto-optic Kerr effect Faraday effect Voigt Effect Photoelectric effect References Federal Standard 1037C and from MIL-STD-188 Broad band magneto-optical spectroscopy Optical phenomena Electric and magnetic fields in matter de:Magnetooptik#Magnetooptische Effekte
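As a minimal illustration of how a speed difference between the two circular components rotates the plane of polarization, the following Python sketch computes the rotation angle for a transmitted beam; the indices n_plus and n_minus, the path length and the wavelength are assumed example values, and the half-phase-difference relation is the standard circular-birefringence result rather than a formula taken from this article.

# This example is written in Python.
import math

def rotation_angle_rad(n_plus, n_minus, path_length_m, vacuum_wavelength_m):
    # Rotation of a linear polarization is half the accumulated phase
    # difference between its left- and right-circular components.
    delta_phi = 2.0 * math.pi * path_length_m * (n_plus - n_minus) / vacuum_wavelength_m
    return delta_phi / 2.0

theta = rotation_angle_rad(n_plus=1.5001, n_minus=1.4999,
                           path_length_m=1e-3, vacuum_wavelength_m=633e-9)
print(f"rotation of the polarization plane: {math.degrees(theta):.1f} degrees")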
Magneto-optic effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,226
[ "Physical phenomena", "Electric and magnetic fields in matter", "Materials science", "Optical phenomena", "Condensed matter physics", "Magneto-optic effects" ]
41,359
https://en.wikipedia.org/wiki/Maximum%20usable%20frequency
In radio transmission, maximum usable frequency (MUF) is the highest radio frequency that can be used for transmission between two points on Earth by reflection from the ionosphere (skywave or skip) at a specified time, independent of transmitter power. This index is especially useful for shortwave transmissions. In shortwave radio communication, a major mode of long distance propagation is for the radio waves to reflect off the ionized layers of the atmosphere and return diagonally back to Earth. In this way radio waves can travel beyond the horizon, around the curve of the Earth. However, the refractive index of the ionosphere decreases with increasing frequency, so there is an upper limit to the frequency which can be used. Above this frequency the radio waves are not reflected by the ionosphere but are transmitted through it into space. The ionization of the atmosphere varies with time of day and season as well as with solar conditions, so the upper frequency limit for skywave communication varies throughout the day. MUF is a median frequency, defined as the highest frequency at which skywave communication is possible 50% of the days in a month, as opposed to the lowest usable high frequency (LUF), which is the frequency at which communication is possible 90% of the days, and the frequency of optimum transmission (FOT). Typically the MUF is a predicted number. Given the maximum observed frequency (MOF) for a mode on each day of the month at a given hour, the MUF is the highest frequency for which an ionospheric communications path is predicted on 50% of the days of the month. On a given day, communications may or may not succeed at the MUF. Commonly, the optimal operating frequency for a given path is estimated at 80 to 90% of the MUF. As a rule of thumb, the MUF is approximately 3 times the critical frequency; more precisely, for oblique propagation MUF = critical frequency / cos θ, where the critical frequency is the highest frequency reflected for a signal propagating directly upward and θ is the angle of incidence. Optimum Working Frequency Another important parameter used in skywave propagation is the optimum working frequency (OWF), which estimates the highest frequency that should be used for a given critical frequency and angle of incidence. It is the frequency chosen to avoid the irregularities of the atmosphere. See also DX communication E-layer E-skip F-layer Lowest usable high frequency MW DX Near vertical incidence skywave Radio propagation Skip distance TV-FM DX Sources External links MUF Basics Radio frequency propagation
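A minimal Python sketch of the secant-law relation described above; the critical frequency and angle of incidence below are assumed example values, not figures from the article.

# This example is written in Python.
import math

def muf(critical_frequency_hz, incidence_angle_deg):
    # Secant law for oblique ionospheric propagation: MUF = f_c / cos(theta)
    return critical_frequency_hz / math.cos(math.radians(incidence_angle_deg))

f_c = 7e6        # assumed critical frequency, 7 MHz
theta = 70.5     # assumed angle of incidence; sec(70.5 deg) is roughly 3
print(f"MUF  ~ {muf(f_c, theta) / 1e6:.1f} MHz")
print(f"FOT (about 85% of MUF) ~ {0.85 * muf(f_c, theta) / 1e6:.1f} MHz")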
Maximum usable frequency
[ "Physics" ]
502
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,402
https://en.wikipedia.org/wiki/Neper
The neper (symbol: Np) is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit defined in the international standard ISO 80000. It is not part of the International System of Units (SI), but is accepted for use alongside the SI. Definition Like the decibel, the neper is a unit in a logarithmic scale. While the bel uses the decadic (base-10) logarithm to compute ratios, the neper uses the natural logarithm, based on Euler's number (). The level of a ratio of two signal amplitudes or root-power quantities, with the unit neper, is given by where and are the signal amplitudes, and is the natural logarithm. The level of a ratio of two power quantities, with the unit neper, is given by where and are the signal powers. In the International System of Quantities, the neper is defined as . Units The neper is defined in terms of ratios of field quantities — also called root-power quantities — (for example, voltage or current amplitudes in electrical circuits, or pressure in acoustics), whereas the decibel was originally defined in terms of power ratios. A power ratio 10 log r dB is equivalent to a field-quantity ratio 20 log r dB, since power in a linear system is proportional to the square (Joule's laws) of the amplitude. Hence the decibel and the neper have a fixed ratio to each other: and The (voltage) level ratio is Like the decibel, the neper is a dimensionless unit. The International Telecommunication Union (ITU) recognizes both units. Only the neper is coherent with the SI. Applications The neper is a natural linear unit of relative difference, meaning in nepers (logarithmic units) relative differences add rather than multiply. This property is shared with logarithmic units in other bases, such as the bel. The derived units decineper (1 dNp = 0.1 neper) and centineper (1 cNp = 0.01 neper) are also used. The centineper for root-power quantities corresponds to a log point or log percentage, see . See also Nat (unit) Nepers per metre References Works Further reading External links What's a neper? Conversion of level gain and loss: neper, decibel, and bel Calculating transmission line losses Units of level
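As a sketch, the defining relations and the neper-decibel conversion described above can be written in LaTeX as follows; the symbols x1, x2 (root-power quantities) and P1, P2 (powers) are assumed names for the quantities the text refers to.

% LaTeX sketch of the standard neper definitions and the Np-dB conversion
L_F = \ln\frac{x_1}{x_2}\ \mathrm{Np}, \qquad
L_P = \frac{1}{2}\ln\frac{P_1}{P_2}\ \mathrm{Np}, \qquad
1\ \mathrm{Np} = \frac{20}{\ln 10}\ \mathrm{dB} \approx 8.685889638\ \mathrm{dB}, \qquad
1\ \mathrm{dB} = \frac{\ln 10}{20}\ \mathrm{Np} \approx 0.1151293\ \mathrm{Np}.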
Neper
[ "Physics", "Mathematics" ]
566
[ "Physical quantities", "Units of level", "Quantity", "Logarithmic scales of measurement", "Units of measurement" ]
41,432
https://en.wikipedia.org/wiki/Numerical%20aperture
In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating index of refraction in its definition, has the property that it is constant for a beam as it goes from one material to another, provided there is no refractive power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective (and hence its light-gathering ability and resolution), and in fiber optics, in which it describes the range of angles within which light that is incident on the fiber will be transmitted along it. General optics In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by where is the index of refraction of the medium in which the lens is working (1.00 for air, 1.33 for pure water, and typically 1.52 for immersion oil; see also list of refractive indices), and is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. Because the index of refraction is included, the of a pencil of rays is an invariant as a pencil of rays passes from one material to another through a flat surface. This is easily shown by rearranging Snell's law to find that is constant across an interface. In air, the angular aperture of the lens is approximately twice this value (within the paraxial approximation). The is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, generally refers to object-space numerical aperture unless otherwise noted. In microscopy, is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved (the resolution) is proportional to , where is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Assuming quality (diffraction-limited) optics, lenses with larger numerical apertures collect more light and will generally provide a brighter image, but will provide shallower depth of field. Numerical aperture is used to define the "pit size" in optical disc formats. Increasing the magnification and the numerical aperture of the objective reduces the working distance, i.e. the distance between front lens and specimen. Numerical aperture versus f-number Numerical aperture is not typically used in photography. Instead, the angular aperture of a lens (or an imaging mirror) is expressed by the f-number, written , where is the f-number given by the ratio of the focal length to the diameter of the entrance pupil : This ratio is related to the image-space numerical aperture when the lens is focused at infinity. Based on the diagram at the right, the image-space numerical aperture of the lens is: thus , assuming normal use in air (). The approximation holds when the numerical aperture is small, but it turns out that for well-corrected optical systems such as camera lenses, a more detailed analysis shows that is almost exactly equal to even at large numerical apertures. As Rudolf Kingslake explains, "It is a common error to suppose that the ratio [] is actually equal to , and not ... The tangent would, of course, be correct if the principal planes were really plane. 
However, the complete theory of the Abbe sine condition shows that if a lens is corrected for coma and spherical aberration, as all good photographic objectives must be, the second principal plane becomes a portion of a sphere of radius centered about the focal point". In this sense, the traditional thin-lens definition and illustration of f-number is misleading, and defining it in terms of numerical aperture may be more meaningful. Working (effective) f-number The f-number describes the light-gathering ability of the lens in the case where the marginal rays on the object side are parallel to the axis of the lens. This case is commonly encountered in photography, where objects being photographed are often far from the camera. When the object is not distant from the lens, however, the image is no longer formed in the lens's focal plane, and the f-number no longer accurately describes the light-gathering ability of the lens or the image-side numerical aperture. In this case, the numerical aperture is related to what is sometimes called the "working f-number" or "effective f-number". The working f-number is defined by modifying the relation above, taking into account the magnification from object to image: where is the working f-number, is the lens's magnification for an object a particular distance away, is the pupil magnification, and the is defined in terms of the angle of the marginal ray as before. The magnification here is typically negative, and the pupil magnification is most often assumed to be 1 — as Allen R. Greenleaf explains, "Illuminance varies inversely as the square of the distance between the exit pupil of the lens and the position of the plate or film. Because the position of the exit pupil usually is unknown to the user of a lens, the rear conjugate focal distance is used instead; the resultant theoretical error so introduced is insignificant with most types of photographic lenses." In photography, the factor is sometimes written as , where represents the absolute value of the magnification; in either case, the correction factor is 1 or greater. The two equalities in the equation above are each taken by various authors as the definition of working f-number, as the cited sources illustrate. They are not necessarily both exact, but are often treated as if they are. Conversely, the object-side numerical aperture is related to the f-number by way of the magnification (tending to zero for a distant object): Laser physics In laser physics, numerical aperture is defined slightly differently. Laser beams spread out as they propagate, but slowly. Far away from the narrowest part of the beam, the spread is roughly linear with distance—the laser beam forms a cone of light in the "far field". The relation used to define the of the laser beam is the same as that used for an optical system, but is defined differently. Laser beams typically do not have sharp edges like the cone of light that passes through the aperture of a lens does. Instead, the irradiance falls off gradually away from the center of the beam. It is very common for the beam to have a Gaussian profile. Laser physicists typically choose to make the divergence of the beam: the far-field angle between the beam axis and the distance from the axis at which the irradiance drops to times the on-axis irradiance. 
The of a Gaussian laser beam is then related to its minimum spot size ("beam waist") by where is the vacuum wavelength of the light, and is the diameter of the beam at its narrowest spot, measured between the irradiance points ("Full width at maximum of the intensity"). This means that a laser beam that is focused to a small spot will spread out quickly as it moves away from the focus, while a large-diameter laser beam can stay roughly the same size over a very long distance. See also: Gaussian beam width. Fiber optics A multi-mode optical fiber will only propagate light that enters the fiber within a certain range of angles, known as the acceptance cone of the fiber. The half-angle of this cone is called the acceptance angle, . For step-index multimode fiber in a given medium, the acceptance angle is determined only by the indices of refraction of the core, the cladding, and the medium: where is the refractive index of the medium around the fiber, is the refractive index of the fiber core, and is the refractive index of the cladding. While the core will accept light at higher angles, those rays will not totally reflect off the core–cladding interface, and so will not be transmitted to the other end of the fiber. The derivation of this formula is given below. When a light ray is incident from a medium of refractive index to the core of index at the maximum acceptance angle, Snell's law at the medium–core interface gives From the geometry of the above figure we have: where is the critical angle for total internal reflection. Substituting for in Snell's law we get: By squaring both sides Solving, we find the formula stated above: This has the same form as the numerical aperture in other optical systems, so it has become common to define the of any type of fiber to be where is the refractive index along the central axis of the fiber. Note that when this definition is used, the connection between the numerical aperture and the acceptance angle of the fiber becomes only an approximation. In particular, "" defined this way is not relevant for single-mode fiber. One cannot define an acceptance angle for single-mode fiber based on the indices of refraction alone. The number of bound modes, the mode volume, is related to the normalized frequency and thus to the numerical aperture. In multimode fibers, the term equilibrium numerical aperture is sometimes used. This refers to the numerical aperture with respect to the extreme exit angle of a ray emerging from a fiber in which equilibrium mode distribution has been established. See also f-number Launch numerical aperture Guided ray, optic fibre context Acceptance angle (solar concentrator), further context References External links "Microscope Objectives: Numerical Aperture and Resolution" by Mortimer Abramowitz and Michael W. Davidson, Molecular Expressions: Optical Microscopy Primer (website), Florida State University, April 22, 2004. "Basic Concepts and Formulas in Microscopy: Numerical Aperture" by Michael W. Davidson, Nikon MicroscopyU (website). "Numerical aperture", Encyclopedia of Laser Physics and Technology (website). "Numerical Aperture and Resolution", UCLA Brain Research Institute Microscopy Core Facilities (website), 2007. Optics Fiber optics Microscopy Dimensionless numbers of physics
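A minimal Python sketch of the step-index fiber relations and the small-angle NA/f-number relation discussed above; the refractive indices and the NA value used below are assumed example figures, not values from the article.

# This example is written in Python.
import math

def fiber_na(n_core, n_clad):
    # Numerical aperture of a step-index fiber: NA = sqrt(n_core^2 - n_clad^2)
    return math.sqrt(n_core**2 - n_clad**2)

def acceptance_half_angle_deg(n_core, n_clad, n_medium=1.0):
    # Half-angle of the acceptance cone in the surrounding medium
    return math.degrees(math.asin(fiber_na(n_core, n_clad) / n_medium))

def f_number_from_na(na):
    # Small-angle relation N ~ 1 / (2 NA) for a lens focused at infinity in air
    return 1.0 / (2.0 * na)

na = fiber_na(1.480, 1.460)
print(f"NA = {na:.3f}, acceptance half-angle = {acceptance_half_angle_deg(1.480, 1.460):.1f} deg")
print(f"A lens with NA = 0.25 corresponds to roughly f/{f_number_from_na(0.25):.0f}")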
Numerical aperture
[ "Physics", "Chemistry" ]
2,098
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Microscopy", "Atomic", " and optical physics" ]
41,461
https://en.wikipedia.org/wiki/Optical%20path%20length
In optics, optical path length (OPL, denoted Λ in equations), also known as optical length or optical distance, is the length that light needs to travel through a vacuum to create the same phase difference as it would have when traveling through a given medium. It is calculated by taking the product of the geometric length of the optical path followed by light and the refractive index of the homogeneous medium through which the light ray propagates; for inhomogeneous optical media, the product above is generalized as a path integral as part of the ray tracing procedure. A difference in OPL between two paths is often called the optical path difference (OPD). OPL and OPD are important because they determine the phase of the light and govern interference and diffraction of light as it propagates. In a medium of constant refractive index, n, the OPL for a path of geometrical length s is just If the refractive index varies along the path, the OPL is given by a line integral where n is the local refractive index as a function of distance along the path C. An electromagnetic wave propagating along a path C has the phase shift over C as if it was propagating a path in a vacuum, length of which, is equal to the optical path length of C. Thus, if a wave is traveling through several different media, then the optical path length of each medium can be added to find the total optical path length. The optical path difference between the paths taken by two identical waves can then be used to find the phase change. Finally, using the phase change, the interference between the two waves can be calculated. Fermat's principle states that the path light takes between two points is the path that has the minimum optical path length. Optical path difference The OPD corresponds to the phase shift undergone by the light emitted from two previously coherent sources when passed through mediums of different refractive indices. For example, a wave passing through air appears to travel a shorter distance than an identical wave traveling the same distance in glass. This is because a larger number of wavelengths fit in the same distance due to the higher refractive index of the glass. The OPD can be calculated from the following equation: where d1 and d2 are the distances of the ray passing through medium 1 or 2, n1 is the greater refractive index (e.g., glass) and n2 is the smaller refractive index (e.g., air). See also Air mass (astronomy) Lagrangian optics Hamiltonian optics Fermat's principle Optical depth References Geometrical optics Physical optics Optical quantities
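A minimal Python sketch of OPL, OPD and the resulting phase difference as described above; the thickness, refractive indices and wavelength below are assumed example values.

# This example is written in Python.
import math

def opl(n, length_m):
    # Optical path length in a homogeneous medium: OPL = n * s
    return n * length_m

def phase_difference_rad(opd_m, vacuum_wavelength_m):
    # Phase difference produced by an optical path difference
    return 2.0 * math.pi * opd_m / vacuum_wavelength_m

d = 1e-3                                   # 1 mm traversed by both rays
opd = opl(1.52, d) - opl(1.00, d)          # glass path versus an equal length of air
print(f"OPD = {opd * 1e6:.0f} um, phase difference = {phase_difference_rad(opd, 633e-9):.0f} rad")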
Optical path length
[ "Physics", "Mathematics" ]
542
[ "Optical quantities", "Quantity", "Physical quantities" ]
41,494
https://en.wikipedia.org/wiki/Path%20quality%20analysis
Path quality analysis: In a communications path, an analysis that (a) includes the overall evaluation of the component quality measures, the individual link quality measures, and the aggregate path quality measures, and (b) is performed by evaluating communications parameters, such as bit error ratio, signal-plus-noise-plus-distortion to noise-plus-distortion ratio, and spectral distortion. References Radio frequency propagation
Path quality analysis
[ "Physics" ]
82
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
41,496
https://en.wikipedia.org/wiki/Pseudo%20bit%20error%20ratio
Pseudo bit error ratio (PBER) in adaptive high-frequency (HF) radio, is a bit error ratio derived by a majority decoder that processes redundant transmissions. Note: In adaptive HF radio automatic link establishment, PBER is determined by the extent of error correction, such as by using the fraction of non-unanimous votes in the 2-of-3 majority decoder. Engineering ratios Error detection and correction
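As an illustrative sketch (not a standard implementation), the following Python snippet applies a 2-of-3 majority decoder to three redundant copies of a word and reports the fraction of non-unanimous votes as a pseudo bit error ratio estimate; the bit strings are made-up example data.

# This example is written in Python.
def majority_decode(words):
    # words: three equal-length bit strings; returns (decoded word, PBER estimate)
    a, b, c = words
    decoded = []
    non_unanimous = 0
    for bits in zip(a, b, c):
        ones = bits.count("1")
        decoded.append("1" if ones >= 2 else "0")
        if ones not in (0, 3):      # the three copies disagreed on this bit
            non_unanimous += 1
    return "".join(decoded), non_unanimous / len(a)

decoded, pber = majority_decode(["10110100", "10010100", "10110110"])
print(decoded, f"PBER estimate = {pber:.3f}")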
Pseudo bit error ratio
[ "Mathematics", "Engineering" ]
87
[ "Metrics", "Reliability engineering", "Engineering ratios", "Quantity", "Error detection and correction" ]
41,545
https://en.wikipedia.org/wiki/Avogadro%20constant
The Avogadro constant, commonly denoted or , is an SI defining constant with an exact value of (reciprocal moles). It is this defined number of constituent particles (usually molecules, atoms, ions, or ion pairs—in general, entities) per mole (SI unit) and used as a normalization factor in relating the amount of substance, n(X), in a sample of a substance X to the corresponding number of entities, N(X): n(X) = N(X)(1/N), an aggregate of N(X) reciprocal Avogadro constants. By setting N(X) = 1, a reciprocal Avogadro constant is seen to be equal to one entity, which means that n(X) is more easily interpreted as an aggregate of N(X) entities. In the SI dimensional analysis of measurement units, the dimension of the Avogadro constant is the reciprocal of amount of substance, denoted N−1. The Avogadro number, sometimes denoted , is the numeric value of the Avogadro constant (i.e., without a unit), namely the dimensionless number ; the value chosen based on the number of atoms in 12 grams of carbon-12 in alignment with the historical definition of a mole. The constant is named after the Italian physicist and chemist Amedeo Avogadro (1776–1856). The Avogadro constant is also the factor that converts the average mass () of one particle, in grams, to the molar mass () of the substance, in grams per mole (g/mol). That is, . The constant also relates the molar volume (the volume per mole) of a substance to the average volume nominally occupied by one of its particles, when both are expressed in the same units of volume. For example, since the molar volume of water in ordinary conditions is about , the volume occupied by one molecule of water is about , or about (cubic nanometres). For a crystalline substance, relates the volume of a crystal with one mole worth of repeating unit cells, to the volume of a single cell (both in the same units). Definition The Avogadro constant was historically derived from the old definition of the mole as the amount of substance in 12 grams of carbon-12 (12C); or, equivalently, the number of daltons in a gram, where the dalton is defined as of the mass of a 12C atom. By this old definition, the numerical value of the Avogadro constant in mol−1 (the Avogadro number) was a physical constant that had to be determined experimentally. The redefinition of the mole in 2019, as being the amount of substance containing exactly particles, meant that the mass of 1 mole of a substance is now exactly the product of the Avogadro number and the average mass of its particles. The dalton, however, is still defined as of the mass of a 12C atom, which must be determined experimentally and is known only with finite accuracy. The prior experiments that aimed to determine the Avogadro constant are now re-interpreted as measurements of the value in grams of the dalton. By the old definition of mole, the numerical value of one mole of a substance, expressed in grams, was precisely equal to the average mass of one particle in daltons. With the new definition, this numerical equivalence is no longer exact, as it is affected by the uncertainty of the value of the dalton in SI units. However, it is still applicable for all practical purposes. For example, the average mass of one molecule of water is about 18.0153 daltons, and of one mole of water is about 18.0153 grams. Also, the Avogadro number is the approximate number of nucleons (protons and neutrons) in one gram of ordinary matter. 
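A minimal Python sketch of the conversions described above; N_A is the exact SI-defined value, and the water molar mass is the approximate figure quoted in the text.

# This example is written in Python.
N_A = 6.02214076e23        # mol^-1, exact by definition since 2019

def entities_from_amount(n_mol):
    # Number of entities N(X) in n_mol moles of substance X
    return n_mol * N_A

def molar_mass_from_particle_mass(m_particle_g):
    # Molar mass (g/mol) from the average mass of one particle (g)
    return m_particle_g * N_A

m_water_molecule_g = 18.0153 / N_A    # average mass of one H2O molecule
print(f"1 mol of water contains {entities_from_amount(1.0):.4e} molecules")
print(f"one water molecule weighs about {m_water_molecule_g:.3e} g")
print(f"check: molar mass = {molar_mass_from_particle_mass(m_water_molecule_g):.4f} g/mol")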
In older literature, the Avogadro number was also denoted , although that conflicts with the symbol for number of particles in statistical mechanics. History Origin of the concept The Avogadro constant is named after the Italian scientist Amedeo Avogadro (1776–1856), who, in 1811, first proposed that the volume of a gas (at a given pressure and temperature) is proportional to the number of atoms or molecules regardless of the nature of the gas. Avogadro's hypothesis was popularized four years after his death by Stanislao Cannizzaro, who advocated Avogadro's work at the Karlsruhe Congress in 1860. The name Avogadro's number was coined in 1909 by the physicist Jean Perrin, who defined it as the number of molecules in exactly 32 grams of oxygen gas. The goal of this definition was to make the mass of a mole of a substance, in grams, be numerically equal to the mass of one molecule relative to the mass of the hydrogen atom; which, because of the law of definite proportions, was the natural unit of atomic mass, and was assumed to be of the atomic mass of oxygen. First measurements The value of Avogadro's number (not yet known by that name) was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. This value, the number density of particles in an ideal gas, is now called the Loschmidt constant in his honor, and is related to the Avogadro constant, , by where is the pressure, is the gas constant, and is the absolute temperature. Because of this work, the symbol is sometimes used for the Avogadro constant, and, in German literature, that name may be used for both constants, distinguished only by the units of measurement. (However, should not be confused with the entirely different Loschmidt constant in English-language literature.) Perrin himself determined the Avogadro number by several different experimental methods. He was awarded the 1926 Nobel Prize in Physics, largely for this work. The electric charge per mole of electrons is a constant called the Faraday constant and has been known since 1834, when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan with the help of Harvey Fletcher obtained the first measurement of the charge on an electron. Dividing the charge on a mole of electrons by the charge on a single electron provided a more accurate estimate of the Avogadro number. SI definition of 1971 In 1971, in its 14th conference, the International Bureau of Weights and Measures (BIPM) decided to regard the amount of substance as an independent dimension of measurement, with the mole as its base unit in the International System of Units (SI). Specifically, the mole was defined as an amount of a substance that contains as many elementary entities as there are atoms in () of carbon-12 (12C). Thus, in particular, one mole of carbon-12 was exactly of the element. By this definition, one mole of any substance contained exactly as many elementary entities as one mole of any other substance. However, this number was a physical constant that had to be experimentally determined since it depended on the mass (in grams) of one atom of 12C, and therefore, it was known only to a limited number of decimal digits. The common rule of thumb that "one gram of matter contains nucleons" was exact for carbon-12, but slightly inexact for other elements and isotopes. In the same conference, the BIPM also named (the factor that converted moles into number of particles) the "Avogadro constant". 
However, the term "Avogadro number" continued to be used, especially in introductory works. As a consequence of this definition, was not a pure number, but had the metric dimension of reciprocal of amount of substance (mol−1). SI redefinition of 2019 In its 26th Conference, the BIPM adopted a different approach: effective 20 May 2019, it defined the Avogadro constant as the exact value , thus redefining the mole as exactly constituent particles of the substance under consideration. One consequence of this change is that the mass of a mole of 12C atoms is no longer exactly 0.012 kg. On the other hand, the dalton ( universal atomic mass unit) remains unchanged as of the mass of 12C. Thus, the molar mass constant remains very close to but no longer exactly equal to 1 g/mol, although the difference ( in relative terms, as of March 2019) is insignificant for all practical purposes. Connection to other constants The Avogadro constant is related to other physical constants and properties. It relates the molar gas constant and the Boltzmann constant , which in the SI is defined to be exactly :   It relates the Faraday constant and the elementary charge , which in the SI is defined as exactly :   It relates the molar mass constant and the atomic mass constant currently See also CODATA 2018 List of scientists whose names are used in physical constants Mole Day References External links 1996 definition of the Avogadro constant from the IUPAC Compendium of Chemical Terminology ("Gold Book") Some Notes on Avogadro's Number, (historical notes) An Exact Value for Avogadro's Number – American Scientist Avogadro and molar Planck constants for the redefinition of the kilogram Scanned version of "Two hypothesis of Avogadro", 1811 Avogadro's article, on BibNum Amount of substance Fundamental constants Physical constants Units of amount
Avogadro constant
[ "Physics", "Chemistry", "Mathematics" ]
1,944
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Chemical quantities", "Amount of substance", "Physical constants", "Wikipedia categories named after physical quantities", "Fundamental constants" ]
41,548
https://en.wikipedia.org/wiki/Phase-locked%20loop
A phase-locked loop or phase lock loop (PLL) is a control system that generates an output signal whose phase is fixed relative to the phase of an input signal. Keeping the input and output phase in lockstep also implies keeping the input and output frequencies the same, thus a phase-locked loop can also track an input frequency. And by incorporating a frequency divider, a PLL can generate a stable frequency that is a multiple of the input frequency. These properties are used for clock synchronization, demodulation, frequency synthesis, clock multipliers, and signal recovery from a noisy communication channel. Since 1969, a single integrated circuit can provide a complete PLL building block, and nowadays have output frequencies from a fraction of a hertz up to many gigahertz. Thus, PLLs are widely employed in radio, telecommunications, computers (e.g. to distribute precisely timed clock signals in microprocessors), grid-tie inverters (electronic power converters used to integrate DC renewable resources and storage elements such as photovoltaics and batteries with the power grid), and other electronic applications. Simple example A simple analog PLL is an electronic circuit consisting of a variable frequency oscillator and a phase detector in a feedback loop (Figure 1). The oscillator generates a periodic signal with frequency proportional to an applied voltage, hence the term voltage-controlled oscillator (VCO). The phase detector compares the phase of the VCO's output signal with the phase of periodic input reference signal and outputs a voltage (stabilized by the filter) to adjust the oscillator's frequency to match the phase of to the phase of . Clock analogy Phase can be proportional to time, so a phase difference can correspond to a time difference. Left alone, different clocks will mark time at slightly different rates. A mechanical clock, for example, might be fast or slow by a few seconds per hour compared to a reference atomic clock (such as the NIST-F2). That time difference becomes substantial over time. Instead, the owner can synchronize their mechanical clock (with varying degrees of accuracy) by phase-locking it to a reference clock. An inefficient synchronization method involves the owner resetting their clock to that more accurate clock's time every week. But, left alone, their clock will still continue to diverge from the reference clock at the same few seconds per hour rate. A more efficient synchronization method (analogous to the simple PLL in Figure 1) utilizes the fast-slow timing adjust control (analogous to how the VCO's frequency can be adjusted) available on some clocks. Analogously to the phase comparator, the owner could notice their clock's misalignment and turn its timing adjust a small proportional amount to make their clock's frequency a little slower (if their clock was fast) or faster (if their clock was slow). If they don't overcompensate, then their clock will be more accurate than before. Over a series of such weekly adjustments, their clock's notion of a second would agree close enough with the reference clock, so they could be said to be locked both in frequency and phase. An early electromechanical version of a phase-locked loop was used in 1921 in the Shortt-Synchronome clock. History Spontaneous synchronization of weakly coupled pendulum clocks was noted by the Dutch physicist Christiaan Huygens as early as 1673. Around the turn of the 19th century, Lord Rayleigh observed synchronization of weakly coupled organ pipes and tuning forks. In 1919, W. H. 
Eccles and J. H. Vincent found that two electronic oscillators that had been tuned to oscillate at slightly different frequencies but that were coupled to a resonant circuit would soon oscillate at the same frequency. Automatic synchronization of electronic oscillators was described in 1923 by Edward Victor Appleton. In 1925, David Robertson, first professor of electrical engineering at the University of Bristol, introduced phase locking in his clock design to control the striking of the bell Great George in the new Wills Memorial Building. Robertson's clock incorporated an electromechanical device that could vary the rate of oscillation of the pendulum, and derived correction signals from a circuit that compared the pendulum phase with that of an incoming telegraph pulse from Greenwich Observatory every morning at 10:00 GMT. Including equivalents of every element of a modern electronic PLL, Robertson's system was notably ahead of its time in that its phase detector was a relay logic implementation of the transistor circuits for phase/frequency detectors not seen until the 1970s.  Robertson's work predated research towards what was later named the phase-lock loop in 1932, when British researchers developed an alternative to Edwin Armstrong's superheterodyne receiver, the Homodyne or direct-conversion receiver. In the homodyne or synchrodyne system, a local oscillator was tuned to the desired input frequency and multiplied with the input signal. The resulting output signal included the original modulation information. The intent was to develop an alternative receiver circuit that required fewer tuned circuits than the superheterodyne receiver. Since the local oscillator would rapidly drift in frequency, an automatic correction signal was applied to the oscillator, maintaining it in the same phase and frequency of the desired signal. The technique was described in 1932, in a paper by Henri de Bellescize, in the French journal L'Onde Électrique. In analog television receivers since at least the late 1930s, phase-locked-loop horizontal and vertical sweep circuits are locked to synchronization pulses in the broadcast signal. In 1969, Signetics introduced a line of low-cost monolithic integrated circuits like the NE565 using bipolar transistors, that were complete phase-locked loop systems on a chip, and applications for the technique multiplied. A few years later, RCA introduced the CD4046 Micropower Phase-Locked Loop using CMOS, which also became a popular integrated circuit building block. Structure and function Phase-locked loop mechanisms may be implemented as either analog or digital circuits. Both implementations use the same basic structure. Analog PLL circuits include four basic elements: Phase detector Low-pass filter Voltage controlled oscillator Feedback path, which may include a frequency divider Variations There are several variations of PLLs. Some terms that are used are "analog phase-locked loop" (APLL), also referred to as a linear phase-locked loop" (LPLL), "digital phase-locked loop" (DPLL), "all digital phase-locked loop" (ADPLL), and "software phase-locked loop" (SPLL). Analog or linear PLL (APLL)Phase detector is an analog multiplier. Loop filter is active or passive. Uses a voltage-controlled oscillator (VCO). APLL is said to be a type II if its loop filter has transfer function with exactly one pole at the origin (see also Egan's conjecture on the pull-in range of type II APLL). 
Digital PLL (DPLL) An analog PLL with a digital phase detector (such as XOR, edge-triggered JK flip flop, phase frequency detector). May have digital divider in the loop. All digital PLL (ADPLL) Phase detector, filter and oscillator are digital. Uses a numerically controlled oscillator (NCO). Neuronal PLL (NPLL) Phase detector is implemented by neuronal non-linearity, oscillator by rate-controlled oscillating neurons. Software PLL (SPLL) Functional blocks are implemented by software rather than specialized hardware. Charge-pump PLL (CP-PLL)CP-PLL is a modification of phase-locked loops with phase-frequency detector and square waveform signals. See also Gardner's conjecture on CP-PLL. Performance parameters Type and order. Frequency ranges: hold-in range (tracking range), pull-in range (capture range, acquisition range), lock-in range. See also Gardner's problem on the lock-in range, Egan's conjecture on the pull-in range of type II APLL, Viterbi's problem on the PLL ranges coincidence. Loop bandwidth: Defining the speed of the control loop. Transient response: Like overshoot and settling time to a certain accuracy (like 50 ppm). Steady-state errors: Like remaining phase or timing error. Output spectrum purity: Like sidebands generated from a certain VCO tuning voltage ripple. Phase-noise: Defined by noise energy in a certain frequency band (like 10 kHz offset from carrier). Highly dependent on VCO phase-noise, PLL bandwidth, etc. General parameters: Such as power consumption, supply voltage range, output amplitude, etc. Applications Phase-locked loops are widely used for synchronization purposes; in space communications for coherent demodulation and threshold extension, bit synchronization, and symbol synchronization. Phase-locked loops can also be used to demodulate frequency-modulated signals. In radio transmitters, a PLL is used to synthesize new frequencies which are a multiple of a reference frequency, with the same stability as the reference frequency. Other applications include: Demodulation of frequency modulation (FM): If PLL is locked to an FM signal, the VCO tracks the instantaneous frequency of the input signal. The filtered error voltage which controls the VCO and maintains lock with the input signal is demodulated FM output. The VCO transfer characteristics determine the linearity of the demodulated out. Since the VCO used in an integrated-circuit PLL is highly linear, it is possible to realize highly linear FM demodulators. Demodulation of frequency-shift keying (FSK): In digital data communication and computer peripherals, binary data is transmitted by means of a carrier frequency which is shifted between two preset frequencies. Recovery of small signals that otherwise would be lost in noise (lock-in amplifier to track the reference frequency) Recovery of clock timing information from a data stream such as from a disk drive Clock multipliers in microprocessors that allow internal processor elements to run faster than external connections, while maintaining precise timing relationships Demodulation of modems and other tone signals for telecommunications and remote control. 
DSP of video signals; Phase-locked loops are also used to synchronize phase and frequency to the input analog video signal so it can be sampled and digitally processed Atomic force microscopy in frequency modulation mode, to detect changes of the cantilever resonance frequency due to tip–surface interactions DC motor drive Clock recovery Some data streams, especially high-speed serial data streams (such as the raw stream of data from the magnetic head of a disk drive), are sent without an accompanying clock. The receiver generates a clock from an approximate frequency reference, and then uses a PLL to phase-align it to the data stream's signal edges. This process is referred to as clock recovery. For this scheme to work, the data stream must have edges frequently-enough to correct any drift in the PLL's oscillator. Thus a line code with a hard upper bound on the maximum time between edges (e.g. 8b/10b encoding) is typically used to encode the data. Deskewing If a clock is sent in parallel with data, that clock can be used to sample the data. Because the clock must be received and amplified before it can drive the flip-flops which sample the data, there will be a finite, and process-, temperature-, and voltage-dependent delay between the detected clock edge and the received data window. This delay limits the frequency at which data can be sent. One way of eliminating this delay is to include a deskew PLL on the receive side, so that the clock at each data flip-flop is phase-matched to the received clock. In that type of application, a special form of a PLL called a delay-locked loop (DLL) is frequently used. Clock generation Many electronic systems include processors of various sorts that operate at hundreds of megahertz to gigahertz, well above the practical frequencies of crystal oscillators. Typically, the clocks supplied to these processors come from clock generator PLLs, which multiply a lower-frequency reference clock (usually 50 or 100 MHz) up to the operating frequency of the processor. The multiplication factor can be quite large in cases where the operating frequency is multiple gigahertz and the reference crystal is just tens or hundreds of megahertz. Spread spectrum All electronic systems emit some unwanted radio frequency energy. Various regulatory agencies (such as the FCC in the United States) put limits on the emitted energy and any interference caused by it. The emitted noise generally appears at sharp spectral peaks (usually at the operating frequency of the device, and a few harmonics). A system designer can use a spread-spectrum PLL to reduce interference with high-Q receivers by spreading the energy over a larger portion of the spectrum. For example, by changing the operating frequency up and down by a small amount (about 1%), a device running at hundreds of megahertz can spread its interference evenly over a few megahertz of spectrum, which drastically reduces the amount of noise seen on broadcast FM radio channels, which have a bandwidth of several tens of kilohertz. Clock distribution Typically, the reference clock enters the chip and drives a phase locked loop (PLL), which then drives the system's clock distribution. The clock distribution is usually balanced so that the clock arrives at every endpoint simultaneously. One of those endpoints is the PLL's feedback input. 
The function of the PLL is to compare the distributed clock to the incoming reference clock, and vary the phase and frequency of its output until the reference and feedback clocks are phase and frequency matched. PLLs are ubiquitous—they tune clocks in systems several feet across, as well as clocks in small portions of individual chips. Sometimes the reference clock may not actually be a pure clock at all, but rather a data stream with enough transitions that the PLL is able to recover a regular clock from that stream. Sometimes the reference clock is the same frequency as the clock driven through the clock distribution, other times the distributed clock may be some rational multiple of the reference. AM detection A PLL may be used to synchronously demodulate amplitude modulated (AM) signals. The PLL recovers the phase and frequency of the incoming AM signal's carrier. The recovered phase at the VCO differs from the carrier's by 90°, so it is shifted in phase to match, and then fed to a multiplier. The output of the multiplier contains both the sum and the difference frequency signals, and the demodulated output is obtained by low-pass filtering. Since the PLL responds only to the carrier frequencies which are very close to the VCO output, a PLL AM detector exhibits a high degree of selectivity and noise immunity which is not possible with conventional peak type AM demodulators. However, the loop may lose lock where AM signals have 100% modulation depth. Jitter and noise reduction One desirable property of all PLLs is that the reference and feedback clock edges be brought into very close alignment. The average difference in time between the phases of the two signals when the PLL has achieved lock is called the static phase offset (also called the steady-state phase error). The variance between these phases is called tracking jitter. Ideally, the static phase offset should be zero, and the tracking jitter should be as low as possible. Phase noise is another type of jitter observed in PLLs, and is caused by the oscillator itself and by elements used in the oscillator's frequency control circuit. Some technologies are known to perform better than others in this regard. The best digital PLLs are constructed with emitter-coupled logic (ECL) elements, at the expense of high power consumption. To keep phase noise low in PLL circuits, it is best to avoid saturating logic families such as transistor-transistor logic (TTL) or CMOS. Another desirable property of all PLLs is that the phase and frequency of the generated clock be unaffected by rapid changes in the voltages of the power and ground supply lines, as well as the voltage of the substrate on which the PLL circuits are fabricated. This is called substrate and supply noise rejection. The higher the noise rejection, the better. To further improve the phase noise of the output, an injection locked oscillator can be employed following the VCO in the PLL. Frequency synthesis In digital wireless communication systems (GSM, CDMA etc.), PLLs are used to provide the local oscillator up-conversion during transmission and down-conversion during reception. In most cellular handsets this function has been largely integrated into a single integrated circuit to reduce the cost and size of the handset. However, due to the high performance required of base station terminals, the transmission and reception circuits are built with discrete components to achieve the levels of performance required. 
GSM local oscillator modules are typically built with a frequency synthesizer integrated circuit and discrete resonator VCOs. Phase angle reference Grid-tie inverters based on voltage source inverters source or sink real power into the AC electric grid as a function of the phase angle of the voltage they generate relative to the grid's voltage phase angle, which is measured using a PLL. In photovoltaic applications, the more the sine wave produced leads the grid voltage wave, the more power is injected into the grid. For battery applications, the more the sine wave produced lags the grid voltage wave, the more the battery charges from the grid, and the more the sine wave produced leads the grid voltage wave, the more the battery discharges into the grid. Block diagram The block diagram shown in the figure shows an input signal, FI, which is used to generate an output, FO. The input signal is often called the reference signal (also abbreviated FREF). At the input, a phase detector (shown as the Phase frequency detector and Charge pump blocks in the figure) compares two input signals, producing an error signal which is proportional to their phase difference. The error signal is then low-pass filtered and used to drive a VCO which creates an output phase. The output is fed through an optional divider back to the input of the system, producing a negative feedback loop. If the output phase drifts, the error signal will increase, driving the VCO phase in the opposite direction so as to reduce the error. Thus the output phase is locked to the phase of the input. Analog phase locked loops are generally built with an analog phase detector, low-pass filter and VCO placed in a negative feedback configuration. A digital phase locked loop uses a digital phase detector; it may also have a divider in the feedback path or in the reference path, or both, in order to make the PLL's output signal frequency a rational multiple of the reference frequency. A non-integer multiple of the reference frequency can also be created by replacing the simple divide-by-N counter in the feedback path with a programmable pulse swallowing counter. This technique is usually referred to as a fractional-N synthesizer or fractional-N PLL. The oscillator generates a periodic output signal. Assume that initially the oscillator is at nearly the same frequency as the reference signal. If the phase from the oscillator falls behind that of the reference, the phase detector changes the control voltage of the oscillator so that it speeds up. Likewise, if the phase creeps ahead of the reference, the phase detector changes the control voltage to slow down the oscillator. Since initially the oscillator may be far from the reference frequency, practical phase detectors may also respond to frequency differences, so as to increase the lock-in range of allowable inputs. Depending on the application, either the output of the controlled oscillator, or the control signal to the oscillator, provides the useful output of the PLL system. Elements Phase detector A phase detector (PD) generates a voltage, which represents the phase difference between two signals. In a PLL, the two inputs of the phase detector are the reference input and the feedback from the VCO. The PD output voltage is used to control the VCO such that the phase difference between the two inputs is held constant, making it a negative feedback system. Different types of phase detectors have different performance characteristics. 
For instance, the frequency mixer produces harmonics that adds complexity in applications where spectral purity of the VCO signal is important. The resulting unwanted (spurious) sidebands, also called "reference spurs" can dominate the filter requirements and reduce the capture range well below or increase the lock time beyond the requirements. In these applications the more complex digital phase detectors are used which do not have as severe a reference spur component on their output. Also, when in lock, the steady-state phase difference at the inputs using this type of phase detector is near 90 degrees. In PLL applications it is frequently required to know when the loop is out of lock. The more complex digital phase-frequency detectors usually have an output that allows a reliable indication of an out of lock condition. An XOR gate is often used for digital PLLs as an effective yet simple phase detector. It can also be used in an analog sense with only slight modification to the circuitry. Filter The block commonly called the PLL loop filter (usually a low-pass filter) generally has two distinct functions. The primary function is to determine loop dynamics, also called stability. This is how the loop responds to disturbances, such as changes in the reference frequency, changes of the feedback divider, or at startup. Common considerations are the range over which the loop can achieve lock (pull-in range, lock range or capture range), how fast the loop achieves lock (lock time, lock-up time or settling time) and damping behavior. Depending on the application, this may require one or more of the following: a simple proportion (gain or attenuation), an integral (low-pass filter) and/or derivative (high-pass filter). Loop parameters commonly examined for this are the loop's gain margin and phase margin. Common concepts in control theory including the PID controller are used to design this function. The second common consideration is limiting the amount of reference frequency energy (ripple) appearing at the phase detector output that is then applied to the VCO control input. This frequency modulates the VCO and produces FM sidebands commonly called "reference spurs". The design of this block can be dominated by either of these considerations, or can be a complex process juggling the interactions of the two. The typical trade-off of increasing the bandwidth is degraded stability. Conversely, the tradeoff of extra damping for better stability is reduced speed and increased settling time. Often the phase-noise is also affected. Oscillator All phase-locked loops employ an oscillator element with variable frequency capability. This can be an analog VCO either driven by analog circuitry in the case of an APLL or driven digitally through the use of a digital-to-analog converter as is the case for some DPLL designs. Pure digital oscillators such as a numerically controlled oscillator are used in ADPLLs. Feedback path and optional divider PLLs may include a divider between the oscillator and the feedback input to the phase detector to produce a frequency synthesizer. A programmable divider is particularly useful in radio transmitter applications and for computer clocking, since a large number of frequencies can be produced from a single stable, accurate, quartz crystal–controlled reference oscillator (which were expensive before commercial-scale hydrothermal synthesis provided cheap synthetic quartz). 
Some PLLs also include a divider between the reference clock and the reference input to the phase detector. If the divider in the feedback path divides by and the reference input divider divides by , it allows the PLL to multiply the reference frequency by . It might seem simpler to just feed the PLL a lower frequency, but in some cases the reference frequency may be constrained by other issues, and then the reference divider is useful. Frequency multiplication can also be attained by locking the VCO output to the Nth harmonic of the reference signal. Instead of a simple phase detector, the design uses a harmonic mixer (sampling mixer). The harmonic mixer turns the reference signal into an impulse train that is rich in harmonics. The VCO output is coarse tuned to be close to one of those harmonics. Consequently, the desired harmonic mixer output (representing the difference between the N harmonic and the VCO output) falls within the loop filter passband. It should also be noted that the feedback is not limited to a frequency divider. This element can be other elements such as a frequency multiplier, or a mixer. The multiplier will make the VCO output a sub-multiple (rather than a multiple) of the reference frequency. A mixer can translate the VCO frequency by a fixed offset. It may also be a combination of these. For example, a divider following a mixer allows the divider to operate at a much lower frequency than the VCO without a loss in loop gain. Modeling Time domain model of APLL The equations governing a phase-locked loop with an analog multiplier as the phase detector and linear filter may be derived as follows. Let the input to the phase detector be and the output of the VCO is with phases and . The functions and describe waveforms of signals. Then the output of the phase detector is given by The VCO frequency is usually taken as a function of the VCO input as where is the sensitivity of the VCO and is expressed in Hz / V; is a free-running frequency of VCO. The loop filter can be described by a system of linear differential equations where is an input of the filter, is an output of the filter, is -by- matrix, . represents an initial state of the filter. The star symbol is a conjugate transpose. Hence the following system describes PLL where is an initial phase shift. Phase domain model of APLL Consider the input of PLL and VCO output are high frequency signals. Then for any piecewise differentiable -periodic functions and there is a function such that the output of Filter in phase domain is asymptotically equal (the difference is small with respect to the frequencies) to the output of the Filter in time domain model. Here function is a phase detector characteristic. Denote by the phase difference Then the following dynamical system describes PLL behavior Here ; is the frequency of a reference oscillator (we assume that is constant). Example Consider sinusoidal signals and a simple one-pole RC circuit as a filter. The time-domain model takes the form PD characteristics for this signals is equal to Hence the phase domain model takes the form This system of equations is equivalent to the equation of mathematical pendulum Linearized phase domain model Phase locked loops can also be analyzed as control systems by applying the Laplace transform. 
The loop response can be written as where is the output phase in radians, is the input phase in radians, is the phase detector gain in volts per radian, is the VCO gain in radians per volt-second, and is the loop filter transfer function (dimensionless). The loop characteristics can be controlled by inserting different types of loop filters. The simplest filter is a one-pole RC circuit. The loop transfer function in this case is The loop response becomes: This is the form of a classic harmonic oscillator. The denominator can be related to that of a second order system: where is the damping factor and is the natural frequency of the loop. For the one-pole RC filter, The loop natural frequency is a measure of the response time of the loop, and the damping factor is a measure of the overshoot and ringing. Ideally, the natural frequency should be high and the damping factor should be near 0.707 (often loosely called critical damping in this context). With a single pole filter, it is not possible to control the loop frequency and damping factor independently. For the case of critical damping, A slightly more effective filter, the lag-lead filter, includes one pole and one zero. This can be realized with two resistors and one capacitor. The transfer function for this filter is This filter has two time constants Substituting above yields the following natural frequency and damping factor The loop filter components can be calculated independently for a given natural frequency and damping factor Real-world loop filter design can be much more complex, e.g. using higher order filters to reduce various types or sources of phase noise (see the D. Banerjee reference below). Implementing a digital phase-locked loop in software Digital phase-locked loops can be implemented in hardware, using integrated circuits such as a CMOS 4046. However, with microcontrollers becoming faster, it may make sense to implement a phase-locked loop in software for applications that do not require locking onto signals in the MHz range or faster, such as precisely controlling motor speeds. Software implementation has several advantages, including easy customization of the feedback loop, such as changing the multiplication or division ratio between the signal being tracked and the output oscillator. Furthermore, a software implementation is useful for understanding and experimenting with the technique. As an example, a phase-locked loop implemented using a phase-frequency detector is presented below in MATLAB; this type of phase detector is robust and easy to implement.
% This example is written in MATLAB % Initialize variables vcofreq = zeros(1, numiterations); ervec = zeros(1, numiterations); % Keep track of last states of reference, signal, and error signal qsig = 0; qref = 0; lref = 0; lsig = 0; lersig = 0; phs = 0; freq = 0; % Loop filter constants (proportional and derivative) % Currently powers of two to facilitate multiplication by shifts prop = 1 / 128; deriv = 64; for it = 1:numiterations % Simulate a local oscillator using a 16-bit counter phs = mod(phs + floor(freq / 2 ^ 16), 2 ^ 16); ref = phs < 32768; % Get the next digital value (0 or 1) of the signal to track sig = tracksig(it); % Implement the phase-frequency detector rst = ~ (qsig & qref); % Reset the "flip-flop" of the phase-frequency % detector when both signal and reference are high qsig = (qsig | (sig & ~ lsig)) & rst; % Trigger signal flip-flop and leading edge of signal qref = (qref | (ref & ~ lref)) & rst; % Trigger reference flip-flop on leading edge of reference lref = ref; lsig = sig; % Store these values for next iteration (for edge detection) ersig = qref - qsig; % Compute the error signal (whether frequency should increase or decrease) % Error signal is given by one or the other flip flop signal % Implement a pole-zero filter by proportional and derivative input to frequency filtered_ersig = ersig + (ersig - lersig) * deriv; % Keep error signal for proportional output lersig = ersig; % Integrate VCO frequency using the error signal freq = freq - 2 ^ 16 * filtered_ersig * prop; % Frequency is tracked as a fixed-point binary fraction % Store the current VCO frequency vcofreq(1, it) = freq / 2 ^ 16; % Store the error signal to show whether signal or reference is higher frequency ervec(1, it) = ersig; end In this example, an array tracksig is assumed to contain a reference signal to be tracked. The oscillator is implemented by a counter, with the most significant bit of the counter indicating the on/off status of the oscillator. This code simulates the two D-type flip-flops that comprise a phase-frequency comparator. When either the reference or signal has a positive edge, the corresponding flip-flop switches high. Once both reference and signal is high, both flip-flops are reset. Which flip-flop is high determines at that instant whether the reference or signal leads the other. The error signal is the difference between these two flip-flop values. The pole-zero filter is implemented by adding the error signal and its derivative to the filtered error signal. This in turn is integrated to find the oscillator frequency. In practice, one would likely insert other operations into the feedback of this phase-locked loop. For example, if the phase locked loop were to implement a frequency multiplier, the oscillator signal could be divided in frequency before it is compared to the reference signal. See also Frequency-locked loop Charge-pump phase-locked loop Carrier recovery Circle map – A simple mathematical model of the phase-locked loop showing both mode-locking and chaotic behavior. Costas loop Delay-locked loop (DLL) Direct conversion receiver Direct digital synthesizer Kalman filter PLL multibit Shortt–Synchronome clock – Slave pendulum phase-locked to master (ca 1921) Notes References Further reading . . (provides useful Matlab scripts for simulation) . (provides useful Matlab scripts for simulation) . (FM Demodulation) . An article on designing a standard PLL IC for Bluetooth applications. 
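The listing above assumes that numiterations and tracksig already exist in the workspace. A hypothetical test harness such as the following (the sample count and test frequency are arbitrary choices) could be placed before the loop to exercise it on a square-wave input:

% Hypothetical driver for the listing above (not part of the original example).
% Generates a digital square-wave input for the loop to track.
numiterations = 10000;                               % number of simulation steps
testfreq      = 0.01;                                % input frequency, cycles per sample
n             = 0:numiterations - 1;
tracksig      = double(mod(n * testfreq, 1) < 0.5);  % 0/1 square wave

% After running the loop, vcofreq / 2^16 should settle near testfreq;
% e.g. plot(vcofreq / 2^16) to see the acquisition transient.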
External links Phase locked loop primer – Includes embedded video Excel Unusual hosts an animated PLL model and the tutorials to code such a model. Articles with example MATLAB/Octave code Communication circuits Electronic design Electronic oscillators Radio electronics
Phase-locked loop
[ "Engineering" ]
7,126
[ "Radio electronics", "Telecommunications engineering", "Electronic design", "Electronic engineering", "Design", "Communication circuits" ]
41,549
https://en.wikipedia.org/wiki/Phase%20noise
In signal processing, phase noise is the frequency-domain representation of random fluctuations in the phase of a waveform, corresponding to time-domain deviations from perfect periodicity (jitter). Generally speaking, radio-frequency engineers speak of the phase noise of an oscillator, whereas digital-system engineers work with the jitter of a clock. Definitions An ideal oscillator would generate a pure sine wave. In the frequency domain, this would be represented as a single pair of Dirac delta functions (positive and negative conjugates) at the oscillator's frequency; i.e., all the signal's power is at a single frequency. All real oscillators have phase modulated noise components. The phase noise components spread the power of a signal to adjacent frequencies, resulting in noise sidebands. Consider the following noise-free signal: Phase noise is added to this signal by adding a stochastic process represented by to the signal as follows: Different phase noise processes, , possess different power Spectral density (PSD). For example, a white noise PSD follows a trend, a pink noise PSD follows a trend, and a brown noise PSD follows a trend. is the single-sided (f>0) phase noise PSD , given by the Fourier transform of the Autocorrelation of the phase noise. The noise can also be represented at the single-sided (f>0) frequency noise PSD, , or the fractional frequency stability PSD, , which defines the frequency fluctuations in terms of the deviation from the carrier frequency, . The phase noise can also be given as the spectral purity, , the single-sideband power in a 1Hz bandwidth at a frequency offset, f, from the carrier frequency, , referenced to the carrier power. Jitter conversions Phase noise is sometimes also measured and expressed as a power obtained by integrating over a certain range of offset frequencies. For example, the phase noise may be −40 dBc integrated over the range of 1 kHz to 100 kHz. This integrated phase noise (expressed in degrees) can be converted to jitter (expressed in seconds) using the following formula: In the absence of 1/f noise in a region where the phase noise displays a –20dBc/decade slope (Leeson's equation), the RMS cycle jitter can be related to the phase noise by: Likewise: Measurement Phase noise can be measured using a spectrum analyzer if the phase noise of the device under test (DUT) is large with respect to the spectrum analyzer's local oscillator. Care should be taken that observed values are due to the measured signal and not the shape factor of the spectrum analyzer's filters. Spectrum analyzer based measurement can show the phase-noise power over many decades of frequency; e.g., 1 Hz to 10 MHz. The slope with offset frequency in various offset frequency regions can provide clues as to the source of the noise; e.g., low frequency flicker noise decreasing at 30 dB per decade (= 9 dB per octave). Phase noise measurement systems are alternatives to spectrum analyzers. These systems may use internal and external references and allow measurement of both residual (additive) and absolute noise. Additionally, these systems can make low-noise, close-to-the-carrier, measurements. Linewidths The sinusoidal output of an ideal oscillator is a Dirac delta function in the power spectral density centered at the frequency of the sinusoid. Such perfect spectral purity is not achievable in a practical oscillator. Spreading of the spectrum line caused by phase noise is characterized by the fundamental linewidth and the integral linewidth. 
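One commonly used form of the jitter conversion mentioned above takes the integrated single-sideband phase noise A (in dBc) over the offset band of interest and gives the RMS jitter as sqrt(2·10^(A/10))/(2π·f_osc). The following MATLAB sketch applies this to an assumed, purely illustrative L(f) curve; the carrier frequency, offset band and noise levels are arbitrary example values.

% Minimal sketch: RMS jitter from integrated single-sideband phase noise.
% The L(f) curve, offset band and carrier frequency are arbitrary examples.
f0      = 100e6;                           % carrier frequency, Hz
foffset = logspace(3, 5, 400);             % offset band: 1 kHz to 100 kHz
L_dBcHz = -110 - 10*log10(foffset / 1e3);  % assumed phase-noise curve, dBc/Hz

ssb_power = trapz(foffset, 10.^(L_dBcHz / 10));  % integrated SSB power (linear)
A_dBc     = 10*log10(ssb_power);                 % integrated phase noise, dBc

jitter_rms = sqrt(2 * 10^(A_dBc / 10)) / (2*pi*f0);  % factor 2 for both sidebands
fprintf('integrated phase noise %.1f dBc -> RMS jitter %.3g s\n', A_dBc, jitter_rms);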
The fundamental linewidth, also known as the White noise-limited linewidth or the intrinsic linewidth, is the linewidth of an oscillator's PSD in the presence of only white noise sources (noise with a PSD that follows a trend, ie. equivalent across all frequencies). The fundamental linewidth takes Lorentzian spectral line shape. White noise provides a Allan Deviation plot at small averaging times. The integral linewidth, also known as the effective linewidth or the total linewidth, is the linewidth of an oscillator's PSD in the presence of both white noise sources (noise with a PSD that follows a trend) and pink noise sources (noise with a PSD that follows a trend). Pink noise is sometimes called Flicker noise, or simply 1/f noise. The integral linewidth takes Voigt lineshape, a convolution of the white noise-induced Lorentzian lineshape and the pink noise-induced Gaussian lineshape. Pink noise provides a Allan Deviation plot at moderate averaging times. This flat line on the Allan Deviation plot is also known as the flicker floor. Additionally, the oscillator might experience Frequency drift over long periods of time, slowly moving the center frequency of the Voigt lineshape. This drift is a brown noise source (noise with a PSD that follows a trend), and provides a Allan Deviation plot at large averaging times. Limiting System Performance A laser is a common oscillator that is characterized by its noise, and thus its Laser linewidth. The laser noise provides fundamental limitations of the systems that the laser is used in, such as loss of sensitivity in radar and communications systems, lack of definition in imaging systems, and a higher bit error rate in digital systems. Lasers with a near-Infrared center wavelength are used in many atomic, molecular, and optical physics experiments to provide photons that interact with atoms. The requirements for the spectral purity at specific frequency offsets of the lasers used in qubit operation (such as clock transition lasers and state preparation lasers) are highly stringent because the coherence time of the qubit is directly related to the linewidth of the lasers. See also Allan variance Flicker noise Leeson's equation Maximum time interval error Noise spectral density Spectral density Spectral phase Opto-electronic oscillator References Further reading Ulrich L. Rohde, A New and Efficient Method of Designing Low Noise Microwave Oscillators, https://depositonce.tu-berlin.de/bitstream/11303/1306/1/Dokument_16.pdf Ajay Poddar, Ulrich Rohde, Anisha Apte, “ How Low Can They Go, Oscillator Phase noise model, Theoretical, Experimental Validation, and Phase Noise Measurements”, IEEE Microwave Magazine, Vol. 14, No. 6, pp. 50–72, September/October 2013. Ulrich Rohde, Ajay Poddar, Anisha Apte, “Getting Its Measure”, IEEE Microwave Magazine, Vol. 14, No. 6, pp. 73–86, September/October 2013 U. L. Rohde, A. K. Poddar, Anisha Apte, “Phase noise measurement and its limitations”, Microwave Journal, pp. 22–46, May 2013 A. K. Poddar, U.L. Rohde, “Technique to Minimize Phase Noise of Crystal Oscillators”, Microwave Journal, pp. 132–150, May 2013. A. K. Poddar, U. L. Rohde, and E. Rubiola, “Phase noise measurement: Challenges and uncertainty”, 2014 IEEE IMaRC, Bangalore, Dec 2014. Oscillators Frequency-domain analysis Telecommunication theory Noise (electronics)
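As a numerical illustration of the white-noise-limited linewidth described above, a commonly quoted relation (stated here as an assumption rather than derived) is that a one-sided white frequency-noise PSD of h0 Hz²/Hz produces a Lorentzian line with full width at half maximum π·h0; the value of h0 below is an arbitrary example.

% Minimal sketch: fundamental (Lorentzian) linewidth from white frequency noise,
% using the commonly quoted relation FWHM = pi * h0, where h0 is the one-sided
% white frequency-noise PSD in Hz^2/Hz. The value of h0 is an arbitrary example.
h0   = 1e3;                                   % white frequency-noise PSD, Hz^2/Hz
fwhm = pi * h0;                               % fundamental linewidth, Hz

f = linspace(-10*fwhm, 10*fwhm, 1001);        % offset from the carrier, Hz
S = (fwhm / (2*pi)) ./ (f.^2 + (fwhm/2)^2);   % unit-area Lorentzian line shape
fprintf('fundamental linewidth: %.1f Hz\n', fwhm);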
Phase noise
[ "Physics" ]
1,582
[ "Frequency-domain analysis", "Spectrum (physical sciences)" ]
41,586
https://en.wikipedia.org/wiki/Propagation%20constant
The propagation constant of a sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the dimensionless change in magnitude or phase per unit length. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next. The propagation constant's value is expressed logarithmically, almost universally to the base e, rather than base 10 that is used in telecommunications in other situations. The quantity measured, such as voltage, is expressed as a sinusoidal phasor. The phase of the sinusoid varies with distance which results in the propagation constant being a complex number, the imaginary part being caused by the phase change. Alternative names The term "propagation constant" is somewhat of a misnomer as it usually varies strongly with ω. It is probably the most widely used term but there are a large variety of alternative names used by various authors for this quantity. These include transmission parameter, transmission function, propagation parameter, propagation coefficient and transmission constant. If the plural is used, it suggests that α and β are being referenced separately but collectively as in transmission parameters, propagation parameters, etc. In transmission line theory, α and β are counted among the "secondary coefficients", the term secondary being used to contrast to the primary line coefficients. The primary coefficients are the physical properties of the line, namely R,C,L and G, from which the secondary coefficients may be derived using the telegrapher's equation. In the field of transmission lines, the term transmission coefficient has a different meaning despite the similarity of name: it is the companion of the reflection coefficient. Definition The propagation constant, symbol , for a given system is defined by the ratio of the complex amplitude at the source of the wave to the complex amplitude at some distance , such that, Inverting the above equation and isolating results in the quotient of the complex amplitude ratio's natural logarithm and the distance traveled: Since the propagation constant is a complex quantity we can write: where , the real part, is called the attenuation constant , the imaginary part, is called the phase constant more often is used for electrical circuits. That does indeed represent phase can be seen from Euler's formula: which is a sinusoid which varies in phase as varies but does not vary in amplitude because The reason for the use of base is also now made clear. The imaginary phase constant, , can be added directly to the attenuation constant, , to form a single complex number that can be handled in one mathematical operation provided they are to the same base. Angles measured in radians require base , so the attenuation is likewise in base . The propagation constant for conducting lines can be calculated from the primary line coefficients by means of the relationship where the series impedance of the line per unit length and, the shunt admittance of the line per unit length. 
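A minimal MATLAB sketch of the relationship just given, evaluating the propagation constant from assumed primary line constants; the R, L, G and C values and the frequency are arbitrary illustrative numbers, not data for any particular cable.

% Minimal sketch: propagation constant from primary line constants, gamma = sqrt(Z*Y).
% All numerical values are arbitrary examples, not data for a real cable.
R = 0.1;       % series resistance, ohm/m
L = 250e-9;    % series inductance, H/m
G = 1e-9;      % shunt conductance, S/m
C = 100e-12;   % shunt capacitance, F/m
f = 10e6;      % frequency, Hz
w = 2*pi*f;

Z     = R + 1i*w*L;    % series impedance per unit length
Y     = G + 1i*w*C;    % shunt admittance per unit length
gamma = sqrt(Z*Y);     % propagation constant, 1/m

alpha = real(gamma);   % attenuation constant, Np/m
beta  = imag(gamma);   % phase constant, rad/m
fprintf('alpha = %.4g Np/m (%.4g dB/m), beta = %.4g rad/m\n', alpha, 8.686*alpha, beta);
fprintf('phase velocity = %.4g m/s\n', w / beta);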
Plane wave The propagation factor of a plane wave traveling in a linear media in the direction is given by where distance traveled in the direction attenuation constant in the units of nepers/meter phase constant in the units of radians/meter frequency in radians/second conductivity of the media = complex permitivity of the media = complex permeability of the media The sign convention is chosen for consistency with propagation in lossy media. If the attenuation constant is positive, then the wave amplitude decreases as the wave propagates in the direction. Wavelength, phase velocity, and skin depth have simple relationships to the components of the propagation constant: Attenuation constant In telecommunications, the term attenuation constant, also called attenuation parameter or attenuation coefficient, is the attenuation of an electromagnetic wave propagating through a medium per unit distance from the source. It is the real part of the propagation constant and is measured in nepers per metre. A neper is approximately 8.7 dB. Attenuation constant can be defined by the amplitude ratio The propagation constant per unit length is defined as the natural logarithm of the ratio of the sending end current or voltage to the receiving end current or voltage, divided by the distance x involved: Conductive lines The attenuation constant for conductive lines can be calculated from the primary line coefficients as shown above. For a line meeting the distortionless condition, with a conductance G in the insulator, the attenuation constant is given by however, a real line is unlikely to meet this condition without the addition of loading coils and, furthermore, there are some frequency dependent effects operating on the primary "constants" which cause a frequency dependence of the loss. There are two main components to these losses, the metal loss and the dielectric loss. The loss of most transmission lines are dominated by the metal loss, which causes a frequency dependency due to finite conductivity of metals, and the skin effect inside a conductor. The skin effect causes R along the conductor to be approximately dependent on frequency according to Losses in the dielectric depend on the loss tangent (tan δ) of the material divided by the wavelength of the signal. Thus they are directly proportional to the frequency. Optical fiber The attenuation constant for a particular propagation mode in an optical fiber is the real part of the axial propagation constant. Phase constant In electromagnetic theory, the phase constant, also called phase change constant, parameter or coefficient is the imaginary component of the propagation constant for a plane wave. It represents the change in phase per unit length along the path traveled by the wave at any instant and is equal to the real part of the angular wavenumber of the wave. It is represented by the symbol β and is measured in units of radians per unit length. From the definition of (angular) wavenumber for transverse electromagnetic (TEM) waves in lossless media, For a transmission line, the telegrapher's equations tells us that the wavenumber must be proportional to frequency for the transmission of the wave to be undistorted in the time domain. This includes, but is not limited to, the ideal case of a lossless line. The reason for this condition can be seen by considering that a useful signal is composed of many different wavelengths in the frequency domain. 
For there to be no distortion of the waveform, all these waves must travel at the same velocity so that they arrive at the far end of the line at the same time as a group. Since wave phase velocity is given by it is proved that β is required to be proportional to ω. In terms of primary coefficients of the line, this yields from the telegrapher's equation for a distortionless line the condition where L and C are, respectively, the inductance and capacitance per unit length of the line. However, practical lines can only be expected to approximately meet this condition over a limited frequency band. In particular, the phase constant is not always equivalent to the wavenumber . The relation applies to the TEM wave, which travels in free space or TEM-devices such as the coaxial cable and two parallel wires transmission lines. Nevertheless, it does not apply to the TE wave (transverse electric wave) and TM wave (transverse magnetic wave). For example, in a hollow waveguide where the TEM wave cannot exist but TE and TM waves can propagate, Here is the cutoff frequency. In a rectangular waveguide, the cutoff frequency is where are the mode numbers for the rectangle's sides of length and respectively. For TE modes, (but is not allowed), while for TM modes . The phase velocity equals Filters and two-port networks The term propagation constant or propagation function is applied to filters and other two-port networks used for signal processing. In these cases, however, the attenuation and phase coefficients are expressed in terms of nepers and radians per network section rather than per unit length. Some authors make a distinction between per unit length measures (for which "constant" is used) and per section measures (for which "function" is used). The propagation constant is a useful concept in filter design which invariably uses a cascaded section topology. In a cascaded topology, the propagation constant, attenuation constant and phase constant of individual sections may be simply added to find the total propagation constant etc. Cascaded networks The ratio of output to input voltage for each network is given by The terms are impedance scaling terms and their use is explained in the image impedance article. The overall voltage ratio is given by Thus for n cascaded sections all having matching impedances facing each other, the overall propagation constant is given by See also The concept of penetration depth is one of many ways to describe the absorption of electromagnetic waves. For the others, and their interrelationships, see the article: Mathematical descriptions of opacity. Propagation speed Notes References . Matthaei, Young, Jones Microwave Filters, Impedance-Matching Networks, and Coupling Structures McGraw-Hill 1964. External links Free PDF download is available. There is an updated version dated August 6, 2002. Filter theory Physical quantities Telecommunication theory Electromagnetism Electromagnetic radiation Analog circuits Image impedance filters
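As a numerical companion to the hollow-waveguide expressions above, the following sketch evaluates the TE10 cutoff frequency and the phase constant of an air-filled rectangular guide; the guide dimension and operating frequency are arbitrary example values.

% Minimal sketch: TE10 cutoff frequency and phase constant of an air-filled
% rectangular waveguide. The dimension and frequency are arbitrary example values.
c = 299792458;      % speed of light, m/s
a = 22.86e-3;       % broad wall dimension, m
f = 10e9;           % operating frequency, Hz

fc   = c / (2*a);                    % TE10 cutoff frequency (n = 1, m = 0)
k0   = 2*pi*f / c;                   % free-space wavenumber, rad/m
beta = k0 * sqrt(1 - (fc/f)^2);      % phase constant above cutoff, rad/m
vp   = 2*pi*f / beta;                % phase velocity, exceeds c in a hollow guide
fprintf('fc = %.3f GHz, beta = %.1f rad/m, vp = %.3g m/s\n', fc/1e9, beta, vp);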
Propagation constant
[ "Physics", "Mathematics", "Engineering" ]
1,932
[ "Physical phenomena", "Electromagnetism", "Telecommunications engineering", "Physical quantities", "Electromagnetic radiation", "Quantity", "Analog circuits", "Filter theory", "Electronic engineering", "Radiation", "Fundamental interactions", "Physical properties" ]
41,625
https://en.wikipedia.org/wiki/Radiometry
Radiometry is a set of techniques for measuring electromagnetic radiation, including visible light. Radiometric techniques in optics characterize the distribution of the radiation's power in space, as opposed to photometric techniques, which characterize the light's interaction with the human eye. The fundamental difference between radiometry and photometry is that radiometry covers the entire optical radiation spectrum, while photometry is limited to the visible spectrum. Radiometry is distinct from quantum techniques such as photon counting. The use of radiometers to determine the temperature of objects and gases by measuring radiation flux is called pyrometry. Handheld pyrometer devices are often marketed as infrared thermometers. Radiometry is important in astronomy, especially radio astronomy, and plays a significant role in Earth remote sensing. The measurement techniques categorized as radiometry in optics are called photometry in some astronomical applications, contrary to the optics usage of the term. Spectroradiometry is the measurement of absolute radiometric quantities in narrow bands of wavelength. Radiometric quantities Integral and spectral radiometric quantities Integral quantities (like radiant flux) describe the total effect of radiation of all wavelengths or frequencies, while spectral quantities (like spectral power) describe the effect of radiation of a single wavelength or frequency. To each integral quantity there are corresponding spectral quantities, defined as the quotient of the integrated quantity by the range of frequency or wavelength considered. For example, the radiant flux Φe corresponds to the spectral power by wavelength Φe,λ and by frequency Φe,ν. Obtaining an integral quantity's spectral counterpart requires a limit transition, because the probability that a photon has exactly one precisely specified wavelength is zero. Let us show the relation between them using the radiant flux as an example: Integral flux, whose unit is W: Spectral flux by wavelength, whose unit is : where is the radiant flux of the radiation in a small wavelength interval . The area under a plot with wavelength on the horizontal axis equals the total radiant flux. Spectral flux by frequency, whose unit is : where is the radiant flux of the radiation in a small frequency interval . The area under a plot with frequency on the horizontal axis equals the total radiant flux. The spectral quantities by wavelength and frequency are related to each other, since the product of the two variables is the speed of light (): or or The integral quantity can be obtained by integrating the spectral quantity: See also Reflectivity Microwave radiometer Measurement of ionizing radiation Radiometric calibration Radiometric resolution References External links Radiometry and photometry FAQ Professor Jim Palmer's Radiometry FAQ page (The University of Arizona College of Optical Sciences). Measurement Optical metrology Telecommunications engineering Observational astronomy Electromagnetic radiation
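To make the wavelength/frequency relations above concrete, the following MATLAB sketch converts an assumed spectral flux given per unit wavelength into its per-unit-frequency counterpart and checks that both integrate to the same total radiant flux; the Gaussian spectrum is purely illustrative.

% Minimal sketch: converting spectral radiant flux between wavelength and frequency
% representations and integrating to the total flux. The spectrum is an arbitrary example.
c       = 299792458;                            % speed of light, m/s
lambda  = linspace(400e-9, 700e-9, 2000);       % wavelength grid, m
Phi_lam = exp(-((lambda - 550e-9) / 30e-9).^2); % spectral flux by wavelength, W/m

Phi_total_lam = trapz(lambda, Phi_lam);         % total flux from the wavelength form

nu     = c ./ lambda;                           % frequency grid, Hz (descending order)
Phi_nu = Phi_lam .* lambda.^2 / c;              % spectral flux by frequency, W/Hz

Phi_total_nu = trapz(flip(nu), flip(Phi_nu));   % total flux from the frequency form
fprintf('total flux: %.6g W (wavelength form), %.6g W (frequency form)\n', ...
        Phi_total_lam, Phi_total_nu);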
Radiometry
[ "Physics", "Astronomy", "Mathematics", "Engineering" ]
541
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Electromagnetic radiation", "Quantity", "Observational astronomy", "Measurement", "Size", "Radiation", "Electrical engineering", "Astronomical sub-disciplines", "Radiometry" ]
41,644
https://en.wikipedia.org/wiki/Reflectance
The reflectance of the surface of a material is its effectiveness in reflecting radiant energy. It is the fraction of incident electromagnetic power that is reflected at the boundary. Reflectance is a component of the response of the electronic structure of the material to the electromagnetic field of light, and is in general a function of the frequency, or wavelength, of the light, its polarization, and the angle of incidence. The dependence of reflectance on the wavelength is called a reflectance spectrum or spectral reflectance curve. Mathematical definitions Hemispherical reflectance The hemispherical reflectance of a surface, denoted , is defined as where is the radiant flux reflected by that surface and is the radiant flux received by that surface. Spectral hemispherical reflectance The spectral hemispherical reflectance in frequency and spectral hemispherical reflectance in wavelength of a surface, denoted and respectively, are defined as where is the spectral radiant flux in frequency reflected by that surface; is the spectral radiant flux in frequency received by that surface; is the spectral radiant flux in wavelength reflected by that surface; is the spectral radiant flux in wavelength received by that surface. Directional reflectance The directional reflectance of a surface, denoted RΩ, is defined as where is the radiance reflected by that surface; is the radiance received by that surface. This depends on both the reflected direction and the incoming direction. In other words, it has a value for every combination of incoming and outgoing directions. It is related to the bidirectional reflectance distribution function and its upper limit is 1. Another measure of reflectance, depending only on the outgoing direction, is I/F, where I is the radiance reflected in a given direction and F is the incoming radiance averaged over all directions, in other words, the total flux of radiation hitting the surface per unit area, divided by π. This can be greater than 1 for a glossy surface illuminated by a source such as the sun, with the reflectance measured in the direction of maximum radiance (see also Seeliger effect). Spectral directional reflectance The spectral directional reflectance in frequency and spectral directional reflectance in wavelength of a surface, denoted and respectively, are defined as where is the spectral radiance in frequency reflected by that surface; is the spectral radiance received by that surface; is the spectral radiance in wavelength reflected by that surface; is the spectral radiance in wavelength received by that surface. Again, one can also define a value of (see above) for a given wavelength. Reflectivity For homogeneous and semi-infinite (see halfspace) materials, reflectivity is the same as reflectance. Reflectivity is the square of the magnitude of the Fresnel reflection coefficient, which is the ratio of the reflected to incident electric field; as such the reflection coefficient can be expressed as a complex number as determined by the Fresnel equations for a single layer, whereas the reflectance is always a positive real number. For layered and finite media, according to the CIE, reflectivity is distinguished from reflectance by the fact that reflectivity is a value that applies to thick reflecting objects. When reflection occurs from thin layers of material, internal reflection effects can cause the reflectance to vary with surface thickness. 
Reflectivity is the limit value of reflectance as the sample becomes thick; it is the intrinsic reflectance of the surface, and hence independent of other parameters such as the reflectance of the rear surface. Another way to interpret this is that the reflectance is the fraction of electromagnetic power reflected from a specific sample, while reflectivity is a property of the material itself, which would be measured on a perfect machine if the material filled half of all space. Surface type Given that reflectance is a directional property, most surfaces can be divided into those that give specular reflection and those that give diffuse reflection. For specular surfaces, such as glass or polished metal, reflectance is nearly zero at all angles except at the appropriate reflected angle; that is the same angle with respect to the surface normal in the plane of incidence, but on the opposing side. When the radiation is incident normal to the surface, it is reflected back in the same direction. For diffuse surfaces, such as matte white paint, reflectance is uniform; radiation is reflected at all angles equally or near-equally. Such surfaces are said to be Lambertian. Most practical objects exhibit a combination of diffuse and specular reflective properties. Water reflectance Reflection occurs when light moves from a medium with one index of refraction into a second medium with a different index of refraction. Specular reflection from a body of water is calculated by the Fresnel equations (Ottaviani, M., Stamnes, K., Koskulics, J., Eide, H., Long, S.R., Su, W. and Wiscombe, W., 2008: "Light Reflection from Water Waves: Suitable Setup for a Polarimetric Investigation under Controlled Laboratory Conditions", Journal of Atmospheric and Oceanic Technology, 25 (5), 715–728). Fresnel reflection is directional and therefore does not contribute significantly to albedo, which is primarily due to diffuse reflection. A real water surface may be wavy. The reflectance given by the Fresnel equations, which assume a flat surface, can be adjusted to account for waviness. Grating efficiency The generalization of reflectance to a diffraction grating, which disperses light by wavelength, is called diffraction efficiency. Other radiometric coefficients See also Bidirectional reflectance distribution function Colorimetry Emissivity Lambert's cosine law Transmittance Sun path Light Reflectance Value Albedo Reststrahlen effect Lyddane–Sachs–Teller relation References External links Reflectivity of metals . Reflectance Data. Physical quantities Radiometry Dimensionless numbers of physics
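The Fresnel-equation calculation mentioned above for a flat air–water interface can be sketched as follows. The refractive index of water is taken as approximately 1.33, and the unpolarized reflectance is taken as the average of the s- and p-polarized power reflectances; these are standard assumptions rather than values from the text above.

% Minimal sketch: specular reflectance of a flat air-water interface from the
% Fresnel equations, for unpolarized light. n2 = 1.33 is a typical value for water.
n1      = 1.0;                                % refractive index of air
n2      = 1.33;                               % approximate refractive index of water
theta_i = deg2rad(0:5:85);                    % angles of incidence, rad
theta_t = asin(n1 * sin(theta_i) / n2);       % Snell's law for the refracted angle

rs = (n1*cos(theta_i) - n2*cos(theta_t)) ./ (n1*cos(theta_i) + n2*cos(theta_t));
rp = (n2*cos(theta_i) - n1*cos(theta_t)) ./ (n2*cos(theta_i) + n1*cos(theta_t));
R  = (rs.^2 + rp.^2) / 2;                     % unpolarized power reflectance

fprintf('normal incidence: R = %.4f\n', R(1));     % about 0.02 for water
fprintf('85 deg incidence: R = %.4f\n', R(end));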
Reflectance
[ "Physics", "Mathematics", "Engineering" ]
1,222
[ "Physical phenomena", "Telecommunications engineering", "Physical quantities", "Quantity", "Physical properties", "Radiometry" ]
41,660
https://en.wikipedia.org/wiki/Resonance
Resonance is a phenomenon that occurs when an object or system is subjected to an external force or vibration that matches its natural frequency. When this happens, the object or system absorbs energy from the external force and starts vibrating with a larger amplitude. Resonance can occur in various systems, such as mechanical, electrical, or acoustic systems, and it is often desirable in certain applications, such as musical instruments or radio receivers. However, resonance can also be detrimental, leading to excessive vibrations or even structural failure in some cases. All systems, including molecular systems and particles, tend to vibrate at a natural frequency depending upon their structure; this frequency is known as a resonant frequency or resonance frequency. When an oscillating force, an external vibration, is applied at a resonant frequency of a dynamic system, object, or particle, the outside vibration will cause the system to oscillate at a higher amplitude (with more force) than when the same force is applied at other, non-resonant frequencies. The resonant frequencies of a system can be identified when the response to an external vibration creates an amplitude that is a relative maximum within the system. Small periodic forces that are near a resonant frequency of the system have the ability to produce large amplitude oscillations in the system due to the storage of vibrational energy. Resonance phenomena occur with all types of vibrations or waves: there is mechanical resonance, orbital resonance, acoustic resonance, electromagnetic resonance, nuclear magnetic resonance (NMR), electron spin resonance (ESR) and resonance of quantum wave functions. Resonant systems can be used to generate vibrations of a specific frequency (e.g., musical instruments), or pick out specific frequencies from a complex vibration containing many frequencies (e.g., filters). The term resonance (from Latin resonantia, 'echo', from resonare, 'resound') originated from the field of acoustics, particularly the sympathetic resonance observed in musical instruments, e.g., when one string starts to vibrate and produce sound after a different one is struck. Overview Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes (such as kinetic energy and potential energy in the case of a simple pendulum). However, there are some losses from cycle to cycle, called damping. When damping is small, the resonant frequency is approximately equal to the natural frequency of the system, which is a frequency of unforced vibrations. Some systems have multiple and distinct resonant frequencies. Examples A familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing (its resonant frequency) makes the swing go higher and higher (maximum amplitude), while attempts to push the swing at a faster or slower tempo produce smaller arcs. This is because the energy the swing absorbs is maximized when the pushes match the swing's natural oscillations. Resonance occurs widely in nature, and is exploited in many devices. It is the mechanism by which virtually all sinusoidal waves and vibrations are generated. For example, when hard objects like metal, glass, or wood are struck, there are brief resonant vibrations in the object. Light and other short wavelength electromagnetic radiation is produced by resonance on an atomic scale, such as electrons in atoms. 
Other examples of resonance include: Timekeeping mechanisms of modern clocks and watches, e.g., the balance wheel in a mechanical watch and the quartz crystal in a quartz watch Tidal resonance of the Bay of Fundy Acoustic resonances of musical instruments and the human vocal tract Shattering of a crystal wineglass when exposed to a musical tone of the right pitch (its resonant frequency) Friction idiophones, such as making a glass object (glass, bottle, vase) vibrate by rubbing around its rim with a fingertip Electrical resonance of tuned circuits in radios and TVs that allow radio frequencies to be selectively received Creation of coherent light by optical resonance in a laser cavity Orbital resonance as exemplified by some moons of the Solar System's giant planets and resonant groups such as the plutinos Material resonances in atomic scale are the basis of several spectroscopic techniques that are used in condensed matter physics Electron spin resonance Mössbauer effect Nuclear magnetic resonance Linear systems Resonance manifests itself in many linear and nonlinear systems as oscillations around an equilibrium point. When the system is driven by a sinusoidal external input, a measured output of the system may oscillate in response. The ratio of the amplitude of the output's steady-state oscillations to the input's oscillations is called the gain, and the gain can be a function of the frequency of the sinusoidal external input. Peaks in the gain at certain frequencies correspond to resonances, where the amplitude of the measured output's oscillations are disproportionately large. Since many linear and nonlinear systems that oscillate are modeled as harmonic oscillators near their equilibria, a derivation of the resonant frequency for a driven, damped harmonic oscillator is shown. An RLC circuit is used to illustrate connections between resonance and a system's transfer function, frequency response, poles, and zeroes. Building off the RLC circuit example, these connections for higher-order linear systems with multiple inputs and outputs are generalized. The driven, damped harmonic oscillator Consider a damped mass on a spring driven by a sinusoidal, externally applied force. Newton's second law takes the form where m is the mass, x is the displacement of the mass from the equilibrium point, F0 is the driving amplitude, ω is the driving angular frequency, k is the spring constant, and c is the viscous damping coefficient. This can be rewritten in the form where is called the undamped angular frequency of the oscillator or the natural frequency, is called the damping ratio. Many sources also refer to ω0 as the resonant frequency. However, as shown below, when analyzing oscillations of the displacement x(t), the resonant frequency is close to but not the same as ω0. In general the resonant frequency is close to but not necessarily the same as the natural frequency. The RLC circuit example in the next section gives examples of different resonant frequencies for the same system. The general solution of Equation () is the sum of a transient solution that depends on initial conditions and a steady state solution that is independent of initial conditions and depends only on the driving amplitude F0, driving frequency ω, undamped angular frequency ω0, and the damping ratio ζ. The transient solution decays in a relatively short amount of time, so to study resonance it is sufficient to consider the steady state solution. 
It is possible to write the steady-state solution for x(t) as a function proportional to the driving force with an induced phase change φ, where The phase value is usually taken to be between −180° and 0 so it represents a phase lag for both positive and negative values of the arctan argument. Resonance occurs when, at certain driving frequencies, the steady-state amplitude of x(t) is large compared to its amplitude at other driving frequencies. For the mass on a spring, resonance corresponds physically to the mass's oscillations having large displacements from the spring's equilibrium position at certain driving frequencies. Looking at the amplitude of x(t) as a function of the driving frequency ω, the amplitude is maximal at the driving frequency ωr is the resonant frequency for this system. Again, the resonant frequency does not equal the undamped angular frequency ω0 of the oscillator. They are proportional, and if the damping ratio goes to zero they are the same, but for non-zero damping they are not the same frequency. As shown in the figure, resonance may also occur at other frequencies near the resonant frequency, including ω0, but the maximum response is at the resonant frequency. Also, ωr is only real and non-zero if , so this system can only resonate when the harmonic oscillator is significantly underdamped. For systems with a very small damping ratio and a driving frequency near the resonant frequency, the steady state oscillations can become very large. The pendulum For other driven, damped harmonic oscillators whose equations of motion do not look exactly like the mass on a spring example, the resonant frequency remains but the definitions of ω0 and ζ change based on the physics of the system. For a pendulum of length ℓ and small displacement angle θ, Equation () becomes and therefore RLC series circuits Consider a circuit consisting of a resistor with resistance R, an inductor with inductance L, and a capacitor with capacitance C connected in series with current i(t) and driven by a voltage source with voltage vin(t). The voltage drop around the circuit is Rather than analyzing a candidate solution to this equation like in the mass on a spring example above, this section will analyze the frequency response of this circuit. Taking the Laplace transform of Equation (), where I(s) and Vin(s) are the Laplace transform of the current and input voltage, respectively, and s is a complex frequency parameter in the Laplace domain. Rearranging terms, Voltage across the capacitor An RLC circuit in series presents several options for where to measure an output voltage. Suppose the output voltage of interest is the voltage drop across the capacitor. As shown above, in the Laplace domain this voltage is or Define for this circuit a natural frequency and a damping ratio, The ratio of the output voltage to the input voltage becomes H(s) is the transfer function between the input voltage and the output voltage. This transfer function has two poles–roots of the polynomial in the transfer function's denominator–at and no zeros–roots of the polynomial in the transfer function's numerator. Moreover, for , the magnitude of these poles is the natural frequency ω0 and that for , our condition for resonance in the harmonic oscillator example, the poles are closer to the imaginary axis than to the real axis. Evaluating H(s) along the imaginary axis , the transfer function describes the frequency response of this circuit. 
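As a numerical check of the steady-state amplitude and resonant frequency discussed above for the driven, damped oscillator, the following sketch sweeps the driving frequency and locates the peak response. The amplitude expression used, (F0/m)/sqrt((ω0²−ω²)² + (2ζω0ω)²), is the standard result for this system, and the parameter values are arbitrary examples.

% Minimal sketch: steady-state amplitude of the driven, damped harmonic oscillator
% versus driving frequency, using the standard amplitude expression
% |x| = (F0/m) / sqrt((w0^2 - w^2)^2 + (2*zeta*w0*w)^2). Values are arbitrary examples.
m = 1.0;  k = 100;  c = 2;  F0 = 1;          % mass, stiffness, damping, drive amplitude
w0   = sqrt(k/m);                            % undamped natural frequency, rad/s
zeta = c / (2*sqrt(m*k));                    % damping ratio

w   = linspace(0.1, 2*w0, 5000);             % driving frequencies to sweep
amp = (F0/m) ./ sqrt((w0^2 - w.^2).^2 + (2*zeta*w0*w).^2);

[~, idx] = max(amp);
fprintf('peak found at w = %.4f rad/s; predicted wr = %.4f rad/s\n', ...
        w(idx), w0 * sqrt(1 - 2*zeta^2));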
Equivalently, the frequency response can be analyzed by taking the Fourier transform of Equation () instead of the Laplace transform. The transfer function, which is also complex, can be written as a gain and phase, A sinusoidal input voltage at frequency ω results in an output voltage at the same frequency that has been scaled by G(ω) and has a phase shift Φ(ω). The gain and phase can be plotted versus frequency on a Bode plot. For the RLC circuit's capacitor voltage, the gain of the transfer function H(iω) is Note the similarity between the gain here and the amplitude in Equation (). Once again, the gain is maximized at the resonant frequency Here, the resonance corresponds physically to having a relatively large amplitude for the steady state oscillations of the voltage across the capacitor compared to its amplitude at other driving frequencies. Voltage across the inductor The resonant frequency need not always take the form given in the examples above. For the RLC circuit, suppose instead that the output voltage of interest is the voltage across the inductor. As shown above, in the Laplace domain the voltage across the inductor is using the same definitions for ω0 and ζ as in the previous example. The transfer function between Vin(s) and this new Vout(s) across the inductor is This transfer function has the same poles as the transfer function in the previous example, but it also has two zeroes in the numerator at . Evaluating H(s) along the imaginary axis, its gain becomes Compared to the gain in Equation () using the capacitor voltage as the output, this gain has a factor of ω2 in the numerator and will therefore have a different resonant frequency that maximizes the gain. That frequency is So for the same RLC circuit but with the voltage across the inductor as the output, the resonant frequency is now larger than the natural frequency, though it still tends towards the natural frequency as the damping ratio goes to zero. That the same circuit can have different resonant frequencies for different choices of output is not contradictory. As shown in Equation (), the voltage drop across the circuit is divided among the three circuit elements, and each element has different dynamics. The capacitor's voltage grows slowly by integrating the current over time and is therefore more sensitive to lower frequencies, whereas the inductor's voltage grows when the current changes rapidly and is therefore more sensitive to higher frequencies. While the circuit as a whole has a natural frequency where it tends to oscillate, the different dynamics of each circuit element make each element resonate at a slightly different frequency. Voltage across the resistor Suppose that the output voltage of interest is the voltage across the resistor. In the Laplace domain the voltage across the resistor is and using the same natural frequency and damping ratio as in the capacitor example the transfer function is This transfer function also has the same poles as the previous RLC circuit examples, but it only has one zero in the numerator at s = 0. For this transfer function, its gain is The resonant frequency that maximizes this gain is and the gain is one at this frequency, so the voltage across the resistor resonates at the circuit's natural frequency and at this frequency the amplitude of the voltage across the resistor equals the input voltage's amplitude. Antiresonance Some systems exhibit antiresonance that can be analyzed in the same way as resonance. 
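The three different resonant frequencies described above for the same series RLC circuit can be checked numerically. In the sketch below the component values are arbitrary, the gains are evaluated directly from the element impedances, and the predicted peak locations use the expressions discussed in this section (ω0·sqrt(1−2ζ²) for the capacitor, ω0/sqrt(1−2ζ²) for the inductor, and ω0 for the resistor).

% Minimal sketch: the same series RLC circuit peaks at different frequencies
% depending on which element voltage is taken as the output. Values are arbitrary.
R = 10;  L = 1e-3;  C = 1e-6;                % ohm, henry, farad
w0   = 1 / sqrt(L*C);                        % natural frequency, rad/s
zeta = (R/2) * sqrt(C/L);                    % damping ratio

w  = linspace(0.2*w0, 2*w0, 200000);         % frequency sweep, rad/s
s  = 1i*w;
Zs = R + s*L + 1./(s*C);                     % total series impedance
Hc = (1./(s*C)) ./ Zs;                       % capacitor voltage / input voltage
Hl = (s*L)      ./ Zs;                       % inductor voltage  / input voltage
Hr = R          ./ Zs;                       % resistor voltage  / input voltage

[~, ic] = max(abs(Hc));  [~, il] = max(abs(Hl));  [~, ir] = max(abs(Hr));
fprintf('capacitor peak %.0f rad/s (predicted %.0f)\n', w(ic), w0*sqrt(1 - 2*zeta^2));
fprintf('inductor  peak %.0f rad/s (predicted %.0f)\n', w(il), w0/sqrt(1 - 2*zeta^2));
fprintf('resistor  peak %.0f rad/s (predicted %.0f)\n', w(ir), w0);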
For antiresonance, the amplitude of the response of the system at certain frequencies is disproportionately small rather than being disproportionately large. In the RLC circuit example, this phenomenon can be observed by analyzing both the inductor and the capacitor combined. Suppose that the output voltage of interest in the RLC circuit is the voltage across the inductor and the capacitor combined in series. Equation () showed that the sum of the voltages across the three circuit elements sums to the input voltage, so measuring the output voltage as the sum of the inductor and capacitor voltages combined is the same as vin minus the voltage drop across the resistor. The previous example showed that at the natural frequency of the system, the amplitude of the voltage drop across the resistor equals the amplitude of vin, and therefore the voltage across the inductor and capacitor combined has zero amplitude. We can show this with the transfer function. The sum of the inductor and capacitor voltages is Using the same natural frequency and damping ratios as the previous examples, the transfer function is This transfer has the same poles as the previous examples but has zeroes at Evaluating the transfer function along the imaginary axis, its gain is Rather than look for resonance, i.e., peaks of the gain, notice that the gain goes to zero at ω = ω0, which complements our analysis of the resistor's voltage. This is called antiresonance, which has the opposite effect of resonance. Rather than result in outputs that are disproportionately large at this frequency, this circuit with this choice of output has no response at all at this frequency. The frequency that is filtered out corresponds exactly to the zeroes of the transfer function, which were shown in Equation () and were on the imaginary axis. Relationships between resonance and frequency response in the RLC series circuit example These RLC circuit examples illustrate how resonance is related to the frequency response of the system. Specifically, these examples illustrate: How resonant frequencies can be found by looking for peaks in the gain of the transfer function between the input and output of the system, for example in a Bode magnitude plot How the resonant frequency for a single system can be different for different choices of system output The connection between the system's natural frequency, the system's damping ratio, and the system's resonant frequency The connection between the system's natural frequency and the magnitude of the transfer function's poles, pointed out in Equation (), and therefore a connection between the poles and the resonant frequency A connection between the transfer function's zeroes and the shape of the gain as a function of frequency, and therefore a connection between the zeroes and the resonant frequency that maximizes gain A connection between the transfer function's zeroes and antiresonance The next section extends these concepts to resonance in a general linear system. Generalizing resonance and antiresonance for linear systems Next consider an arbitrary linear system with multiple inputs and outputs. For example, in state-space representation a third order linear time-invariant system with three inputs and two outputs might be written as where ui(t) are the inputs, xi(t) are the state variables, yi(t) are the outputs, and A, B, C, and D are matrices describing the dynamics between the variables. 
This system has a transfer function matrix whose elements are the transfer functions between the various inputs and outputs. For example, Each Hij(s) is a scalar transfer function linking one of the inputs to one of the outputs. The RLC circuit examples above had one input voltage and showed four possible output voltages–across the capacitor, across the inductor, across the resistor, and across the capacitor and inductor combined in series–each with its own transfer function. If the RLC circuit were set up to measure all four of these output voltages, that system would have a 4×1 transfer function matrix linking the single input to each of the four outputs. Evaluated along the imaginary axis, each Hij(iω) can be written as a gain and phase shift, Peaks in the gain at certain frequencies correspond to resonances between that transfer function's input and output, assuming the system is stable. Each transfer function Hij(s) can also be written as a fraction whose numerator and denominator are polynomials of s. The complex roots of the numerator are called zeroes, and the complex roots of the denominator are called poles. For a stable system, the positions of these poles and zeroes on the complex plane give some indication of whether the system can resonate or antiresonate and at which frequencies. In particular, any stable or marginally stable, complex conjugate pair of poles with imaginary components can be written in terms of a natural frequency and a damping ratio as as in Equation (). The natural frequency ω0 of that pole is the magnitude of the position of the pole on the complex plane and the damping ratio of that pole determines how quickly that oscillation decays. In general, Complex conjugate pairs of poles near the imaginary axis correspond to a peak or resonance in the frequency response in the vicinity of the pole's natural frequency. If the pair of poles is on the imaginary axis, the gain is infinite at that frequency. Complex conjugate pairs of zeroes near the imaginary axis correspond to a notch or antiresonance in the frequency response in the vicinity of the zero's frequency, i.e., the frequency equal to the magnitude of the zero. If the pair of zeroes is on the imaginary axis, the gain is zero at that frequency. In the RLC circuit example, the first generalization relating poles to resonance is observed in Equation (). The second generalization relating zeroes to antiresonance is observed in Equation (). In the examples of the harmonic oscillator, the RLC circuit capacitor voltage, and the RLC circuit inductor voltage, "poles near the imaginary axis" corresponds to the significantly underdamped condition ζ < 1/. Standing waves A physical system can have as many natural frequencies as it has degrees of freedom and can resonate near each of those natural frequencies. A mass on a spring, which has one degree of freedom, has one natural frequency. A double pendulum, which has two degrees of freedom, can have two natural frequencies. As the number of coupled harmonic oscillators increases, the time it takes to transfer energy from one to the next becomes significant. Systems with very large numbers of degrees of freedom can be thought of as continuous rather than as having discrete oscillators. Energy transfers from one oscillator to the next in the form of waves. For example, the string of a guitar or the surface of water in a bowl can be modeled as a continuum of small coupled oscillators and waves can travel along them. 
In many cases these systems have the potential to resonate at certain frequencies, forming standing waves with large-amplitude oscillations at fixed positions. Resonance in the form of standing waves underlies many familiar phenomena, such as the sound produced by musical instruments, electromagnetic cavities used in lasers and microwave ovens, and energy levels of atoms. Standing waves on a string When a string of fixed length is driven at a particular frequency, a wave propagates along the string at the same frequency. The waves reflect off the ends of the string, and eventually a steady state is reached with waves traveling in both directions. The waveform is the superposition of the waves. At certain frequencies, the steady state waveform does not appear to travel along the string. At fixed positions called nodes, the string is never displaced. Between the nodes the string oscillates and exactly halfway between the nodes–at positions called anti-nodes–the oscillations have their largest amplitude. For a string of length with fixed ends, the displacement of the string perpendicular to the -axis at time is where is the amplitude of the left- and right-traveling waves interfering to form the standing wave, is the wave number, is the frequency. The frequencies that resonate and form standing waves relate to the length of the string as where is the speed of the wave and the integer denotes different modes or harmonics. The standing wave with oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. The possible modes of oscillation form a harmonic series. Resonance in complex networks A generalization to complex networks of coupled harmonic oscillators shows that such systems have a finite number of natural resonant frequencies, related to the topological structure of the network itself. In particular, such frequencies result related to the eigenvalues of the network's Laplacian matrix. Let be the adjacency matrix describing the topological structure of the network and the corresponding Laplacian matrix, where is the diagonal matrix of the degrees of the network's nodes. Then, for a network of classical and identical harmonic oscillators, when a sinusoidal driving force is applied to a specific node, the global resonant frequencies of the network are given by where are the eigenvalues of the Laplacian . Types Mechanical Mechanical resonance is the tendency of a mechanical system to absorb more energy when the frequency of its oscillations matches the system's natural frequency of vibration than it does at other frequencies. It may cause violent swaying motions and even catastrophic failure in improperly constructed structures including bridges, buildings, trains, and aircraft. When designing objects, engineers must ensure the mechanical resonance frequencies of the component parts do not match driving vibrational frequencies of motors or other oscillating parts, a phenomenon known as resonance disaster. Avoiding resonance disasters is a major concern in every building, tower, and bridge construction project. As a countermeasure, shock mounts can be installed to absorb resonant frequencies and thus dissipate the absorbed energy. The Taipei 101 building relies on a —a tuned mass damper—to cancel resonance. Furthermore, the structure is designed to resonate at a frequency that does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. 
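A small numerical illustration of the standing-wave frequencies on a string described above, f_n = n·v/(2L); the string length and wave speed below are arbitrary example values.

% Minimal sketch: resonant (standing-wave) frequencies of a string fixed at both
% ends, f_n = n*v/(2*L). The length and wave speed are arbitrary example values.
Lstr = 0.65;               % string length, m
v    = 143;                % transverse wave speed on the string, m/s
n    = 1:5;                % mode numbers (harmonics)

f_n = n * v / (2*Lstr);    % standing-wave frequencies, Hz
for k = n
    fprintf('mode %d: %.1f Hz (wavelength %.3f m)\n', k, f_n(k), 2*Lstr/k);
end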
In addition, engineers designing objects having engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts. Clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. The cadence of runners has been hypothesized to be energetically favorable due to resonance between the elastic energy stored in the lower limb and the mass of the runner. International Space Station The rocket engines for the International Space Station (ISS) are controlled by an autopilot. Ordinarily, uploaded parameters for controlling the engine control system for the Zvezda module make the rocket engines boost the International Space Station to a higher orbit. The rocket engines are hinge-mounted, and ordinarily the crew does not notice the operation. On January 14, 2009, however, the uploaded parameters made the autopilot swing the rocket engines in larger and larger oscillations, at a frequency of 0.5 Hz. These oscillations were captured on video, and lasted for 142 seconds. Acoustic Acoustic resonance is a branch of mechanical resonance that is concerned with the mechanical vibrations across the frequency range of human hearing, in other words sound. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz), Many objects and materials act as resonators with resonant frequencies within this range, and when struck vibrate mechanically, pushing on the surrounding air to create sound waves. This is the source of many percussive sounds we hear. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of, and tension on, a drum membrane. Like mechanical resonance, acoustic resonance can result in catastrophic failure of the object at resonance. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass, although this is difficult in practice. Electrical Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedance of the circuit is at a minimum in a series circuit or at maximum in a parallel circuit (usually when the transfer function peaks in absolute value). Resonance in circuits are used for both transmitting and receiving wireless communications such as television, cell phones and radio. Optical An optical cavity, also called an optical resonator, is an arrangement of mirrors that forms a standing wave cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times producing standing waves for certain resonant frequencies. The standing wave patterns produced are called "modes". Longitudinal modes differ only in frequency while transverse modes differ for different frequencies and have different intensity patterns across the cross-section of the beam. Ring resonators and whispering galleries are examples of optical resonators that do not form standing waves. Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them; flat mirrors are not often used because of the difficulty of aligning them precisely. 
The geometry (resonator type) must be chosen so the beam remains stable, i.e., the beam size does not continue to grow with each reflection. Resonator types are also designed to meet other criteria such as minimum beam waist or having no focal point (and therefore intense light at that point) inside the cavity. Optical cavities are designed to have a very large Q factor. A beam reflects a large number of times with little attenuation—therefore the frequency line width of the beam is small compared to the frequency of the laser. Additional optical resonances are guided-mode resonances and surface plasmon resonance, which result in anomalous reflection and high evanescent fields at resonance. In this case, the resonant modes are guided modes of a waveguide or surface plasmon modes of a dielectric-metallic interface. These modes are usually excited by a subwavelength grating. Orbital In celestial mechanics, an orbital resonance occurs when two orbiting bodies exert a regular, periodic gravitational influence on each other, usually due to their orbital periods being related by a ratio of two small integers. Orbital resonances greatly enhance the mutual gravitational influence of the bodies. In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be stable and self-correcting, so that the bodies remain in resonance. Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa, and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance (between bodies with similar orbital radii) causes large Solar System bodies to clear the neighborhood around their orbits by ejecting nearly everything else around them; this effect is used in the current definition of a planet. Atomic, particle, and molecular Nuclear magnetic resonance (NMR) is the name given to a physical resonance phenomenon involving the observation of specific quantum mechanical magnetic properties of an atomic nucleus in the presence of an applied, external magnetic field. Many scientific techniques exploit NMR phenomena to study molecular physics, crystals, and non-crystalline materials through NMR spectroscopy. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). All nuclei containing odd numbers of nucleons have an intrinsic magnetic moment and angular momentum. A key feature of NMR is that the resonant frequency of a particular substance is directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonant frequencies of the sample's nuclei depend on where in the field they are located. Therefore, the particle can be located quite precisely by its resonant frequency. Electron paramagnetic resonance, otherwise known as electron spin resonance (ESR), is a spectroscopic technique similar to NMR, but uses unpaired electrons instead. Materials for which this can be applied are much more limited since the material needs to both have an unpaired spin and be paramagnetic. The Mössbauer effect is the resonant and recoil-free emission and absorption of gamma ray photons by atoms bound in a solid form. 
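The proportionality between the NMR resonant frequency and the applied magnetic field described above can be illustrated with a short calculation. The sketch below is not part of the original article; it simply evaluates the Larmor relation f = γ̄·B for hydrogen nuclei, using the commonly quoted value γ̄ ≈ 42.58 MHz/T, and the field strengths listed are typical magnet values chosen for illustration.

```python
# Larmor frequency of protons: the resonant frequency is proportional to the applied field.
GAMMA_BAR_1H = 42.58e6  # Hz per tesla (approximate gyromagnetic ratio of 1H divided by 2*pi)

for b_field in (1.5, 3.0, 7.0):  # representative MRI / NMR magnet strengths in tesla
    f_resonance = GAMMA_BAR_1H * b_field
    print(f"B = {b_field:.1f} T  ->  proton resonance at {f_resonance / 1e6:.1f} MHz")
```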
Resonance in particle physics appears in similar circumstances to classical physics at the level of quantum mechanics and quantum field theory. Resonances can also be thought of as unstable particles, with the formula in the Universal resonance curve section of this article applying if Γ is the particle's decay rate and Ω is the particle's mass M. In that case, the formula comes from the particle's propagator, with its mass replaced by the complex number M + iΓ. The formula is further related to the particle's decay rate by the optical theorem. Disadvantages A column of soldiers marching in regular step on a narrow and structurally flexible bridge can set it into dangerously large amplitude oscillations. On April 12, 1831, the Broughton Suspension Bridge near Salford, England collapsed while a group of British soldiers were marching across. Since then, the British Army has had a standing order for soldiers to break stride when marching across bridges, to avoid resonance from their regular marching pattern affecting the bridge. Vibrations of a motor or engine can induce resonant vibration in its supporting structures if their natural frequency is close to that of the vibrations of the engine. A common example is the rattling sound of a bus body when the engine is left idling. Structural resonance of a suspension bridge induced by winds can lead to its catastrophic collapse. Several early suspension bridges in Europe and United States were destroyed by structural resonance induced by modest winds. The collapse of the Tacoma Narrows Bridge on 7 November 1940 is characterized in physics as a classic example of resonance. It has been argued by Robert H. Scanlan and others that the destruction was instead caused by aeroelastic flutter, a complicated interaction between the bridge and the winds passing through it—an example of a self oscillation, or a kind of "self-sustaining vibration" as referred to in the nonlinear theory of vibrations. Q factor The Q factor or quality factor is a dimensionless parameter that describes how under-damped an oscillator or resonator is, and characterizes the bandwidth of a resonator relative to its center frequency. A high value for Q indicates a lower rate of energy loss relative to the stored energy, i.e., the system is lightly damped. The parameter is defined by the equation: . The higher the Q factor, the greater the amplitude at the resonant frequency, and the smaller the bandwidth, or range of frequencies around resonance occurs. In electrical resonance, a high-Q circuit in a radio receiver is more difficult to tune, but has greater selectivity, and so would be better at filtering out signals from other stations. High Q oscillators are more stable. Examples that normally have a low Q factor include door closers (Q=0.5). Systems with high Q factors include tuning forks (Q=1000), atomic clocks and lasers (Q≈1011). Universal resonance curve The exact response of a resonance, especially for frequencies far from the resonant frequency, depends on the details of the physical system, and is usually not exactly symmetric about the resonant frequency, as illustrated for the simple harmonic oscillator above. 
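The Q-factor figures quoted above imply a simple relationship between resonant frequency and bandwidth, Q = f0/Δf, so the half-power bandwidth is Δf = f0/Q. The short sketch below evaluates it for the example systems mentioned; the Q values come from the text, while the centre frequencies are assumed values added purely for illustration.

```python
# Bandwidth implied by the Q factor: Q = f0 / delta_f, so delta_f = f0 / Q.
examples = {
    "door closer":  (1.0, 0.5),      # (assumed centre frequency in Hz, Q from the text)
    "tuning fork":  (440.0, 1000),   # assumed A4 tuning fork, Q from the text
    "laser cavity": (4.7e14, 1e11),  # assumed visible-light frequency, Q from the text
}

for name, (f0, q) in examples.items():
    bandwidth = f0 / q
    print(f"{name:12s}: f0 = {f0:.3g} Hz, Q = {q:.3g}, bandwidth = {bandwidth:.3g} Hz")
```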
For a lightly damped linear oscillator with a resonance frequency , the intensity of oscillations when the system is driven with a driving frequency is typically approximated by the following formula that is symmetric about the resonance frequency: Where the susceptibility links the amplitude of the oscillator to the driving force in frequency space: The intensity is defined as the square of the amplitude of the oscillations. This is a Lorentzian function, or Cauchy distribution, and this response is found in many physical situations involving resonant systems. is a parameter dependent on the damping of the oscillator, and is known as the linewidth of the resonance. Heavily damped oscillators tend to have broad linewidths, and respond to a wider range of driving frequencies around the resonant frequency. The linewidth is inversely proportional to the Q factor, which is a measure of the sharpness of the resonance. In radio engineering and electronics engineering, this approximate symmetric response is known as the universal resonance curve, a concept introduced by Frederick E. Terman in 1932 to simplify the approximate analysis of radio circuits with a range of center frequencies and Q values. See also Cymatics Driven harmonic motion Earthquake engineering Electric dipole spin resonance Formant Limbic resonance Nonlinear resonance Normal mode Positive feedback Schumann resonance Simple harmonic motion Stochastic resonance Sympathetic string Resonance (chemistry) Fermi resonance Resonance (particle physics) Notes References External links The Feynman Lectures on Physics Vol. I Ch. 23: Resonance Resonance - a chapter from an online textbook Greene, Brian, "Resonance in strings". The Elegant Universe, NOVA (PBS) Hyperphysics section on resonance concepts Resonance versus resonant (usage of terms) Wood and Air Resonance in a Harpsichord Breaking glass with sound , including high-speed footage of glass breaking Antennas (radio) Oscillation
Resonance
[ "Physics", "Chemistry" ]
7,436
[ "Resonance", "Physical phenomena", "Waves", "Scattering", "Mechanics", "Oscillation" ]
41,665
https://en.wikipedia.org/wiki/Return%20loss
In telecommunications, return loss is a measure in relative terms of the power of the signal reflected by a discontinuity in a transmission line or optical fiber. This discontinuity can be caused by a mismatch between the termination or load connected to the line and the characteristic impedance of the line. It is usually expressed as a ratio in decibels (dB); where RL(dB) is the return loss in dB, Pi is the incident power and Pr is the reflected power. Return loss is related to both standing wave ratio (SWR) and reflection coefficient (Γ). Increasing return loss corresponds to lower SWR. Return loss is a measure of how well devices or lines are matched. A match is good if the return loss is high. A high return loss is desirable and results in a lower insertion loss. From a certain perspective 'Return Loss' is a misnomer. The usual function of a transmission line is to convey power from a source to a load with minimal loss. If a transmission line is correctly matched to a load, the reflected power will be zero, no power will be lost due to reflection, and 'Return Loss' will be infinite. Conversely if the line is terminated in an open circuit, the reflected power will be equal to the incident power; all of the incident power will be lost in the sense that none of it will be transferred to a load, and RL will be zero. Thus the numerical values of RL tend in the opposite sense to that expected of a 'loss'. Sign As defined above, RL will always be positive, since Pr can never exceed Pi . However, return loss has historically been expressed as a negative number, and this convention is still widely found in the literature. Strictly speaking, if a negative sign is ascribed to RL, the ratio of reflected to incident power is implied; where RL(dB) is the negative of RL(dB). In practice, the sign ascribed to RL is largely immaterial. If a transmission line includes several discontinuities along its length, the total return loss will be the sum of the RLs caused by each discontinuity, and provided all RLs are given the same sign, no error or ambiguity will result. Whichever convention is used, it will always be understood that Pr can never exceed Pi . Electrical In metallic conductor systems, reflections of a signal traveling down a conductor can occur at a discontinuity or impedance mismatch. The ratio of the amplitude of the reflected wave Vr to the amplitude of the incident wave Vi is known as the reflection coefficient . Return loss is the negative of the magnitude of the reflection coefficient in dB. Since power is proportional to the square of the voltage, return loss is given by, where the vertical bars indicate magnitude. Thus, a large positive return loss indicates the reflected power is small relative to the incident power, which indicates good impedance match between transmission line and load. If the incident power and the reflected power are expressed in 'absolute' decibel units, (e.g., dBm), then the return loss in dB can be calculated as the difference between the incident power Pi (in absolute dBm units) and the reflected power Pr (also in absolute dBm units), Optical In optics (particularly in fiber optics) a loss that takes place at discontinuities of refractive index, especially at an air-glass interface such as a fiber endface. At those interfaces, a fraction of the optical signal is reflected back toward the source. This reflection phenomenon is also called "Fresnel reflection loss," or simply "Fresnel loss'." 
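The decibel relationships described above are straightforward to evaluate directly. The following sketch is illustrative only (the power levels and reflection coefficient are arbitrary examples); it computes return loss both from incident and reflected powers and from the magnitude of the reflection coefficient.

```python
import math

def return_loss_db(p_incident: float, p_reflected: float) -> float:
    """Return loss in dB from incident and reflected power (same units)."""
    return 10.0 * math.log10(p_incident / p_reflected)

def return_loss_from_gamma(gamma_mag: float) -> float:
    """Return loss in dB from the magnitude of the reflection coefficient."""
    return -20.0 * math.log10(gamma_mag)

# Arbitrary example: 1 mW incident, 10 microwatts reflected -> 20 dB return loss.
print(return_loss_db(1e-3, 10e-6))        # 20.0
# The equivalent reflection coefficient magnitude of 0.1 gives the same figure.
print(return_loss_from_gamma(0.1))        # 20.0
```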
Fiber optic transmission systems use lasers to transmit signals over optical fiber, and a low optical return loss (ORL) can cause the laser to stop transmitting correctly. The measurement of ORL is becoming more important in the characterization of optical networks as the use of wavelength-division multiplexing increases. These systems use lasers that have a lower tolerance for ORL, and introduce elements into the network that are located in close proximity to the laser. Optical return loss is defined in the same way as the electrical quantity, ORL(dB) = 10 log10(Pi / Pr), where Pr is the reflected power and Pi is the incident, or input, power. See also Hybrid balance Mismatch loss Signal reflection Time-domain reflectometer Optical time domain reflectometer References Notes Bibliography Federal Standard 1037C and from MIL-STD-188 Optical Return Loss Testing—Ensuring High-Quality Transmission EXFO Application note #044 Wave mechanics Radio electronics Engineering ratios Electrical parameters Fiber optics
Return loss
[ "Physics", "Mathematics", "Engineering" ]
926
[ "Radio electronics", "Physical phenomena", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Waves", "Wave mechanics", "Electrical engineering", "Electrical parameters" ]
41,667
https://en.wikipedia.org/wiki/Ringaround
In telecommunications, the term ringaround has the following meanings: The improper routing of a call back through a switching center already engaged in attempting to complete the same call. In secondary surveillance radar, the presence of false targets declared as a result of transponder interrogation by side lobes of the interrogating antenna. References Telecommunications engineering
Ringaround
[ "Engineering" ]
67
[ "Electrical engineering", "Telecommunications engineering" ]
41,700
https://en.wikipedia.org/wiki/Shot%20noise
Shot noise or Poisson noise is a type of noise which can be modeled by a Poisson process. In electronics shot noise originates from the discrete nature of electric charge. Shot noise also occurs in photon counting in optical devices, where shot noise is associated with the particle nature of light. Origin In a statistical experiment such as tossing a fair coin and counting the occurrences of heads and tails, the numbers of heads and tails after many throws will differ by only a tiny percentage, while after only a few throws outcomes with a significant excess of heads over tails or vice versa are common; if an experiment with a few throws is repeated over and over, the outcomes will fluctuate a lot. From the law of large numbers, one can show that the relative fluctuations reduce as the reciprocal square root of the number of throws, a result valid for all statistical fluctuations, including shot noise. Shot noise exists because phenomena such as light and electric current consist of the movement of discrete (also called "quantized") 'packets'. Consider light—a stream of discrete photons—coming out of a laser pointer and hitting a wall to create a visible spot. The fundamental physical processes that govern light emission are such that these photons are emitted from the laser at random times; but the many billions of photons needed to create a spot are so many that the brightness, the number of photons per unit of time, varies only infinitesimally with time. However, if the laser brightness is reduced until only a handful of photons hit the wall every second, the relative fluctuations in number of photons, i.e., brightness, will be significant, just as when tossing a coin a few times. These fluctuations are shot noise. The concept of shot noise was first introduced in 1918 by Walter Schottky who studied fluctuations of current in vacuum tubes. Shot noise may be dominant when the finite number of particles that carry energy (such as electrons in an electronic circuit or photons in an optical device) is sufficiently small so that uncertainties due to the Poisson distribution, which describes the occurrence of independent random events, are significant. It is important in electronics, telecommunications, optical detection, and fundamental physics. The term can also be used to describe any noise source, even if solely mathematical, of similar origin. For instance, particle simulations may produce a certain amount of "noise", where because of the small number of particles simulated, the simulation exhibits undue statistical fluctuations which don't reflect the real-world system. The magnitude of shot noise increases according to the square root of the expected number of events, such as the electric current or intensity of light. But since the strength of the signal itself increases more rapidly, the relative proportion of shot noise decreases and the signal-to-noise ratio (considering only shot noise) increases anyway. Thus shot noise is most frequently observed with small currents or low light intensities that have been amplified. Signal-to-Noise For large numbers, the Poisson distribution approaches a normal distribution about its mean, and the elementary events (photons, electrons, etc.) are no longer individually observed, typically making shot noise in actual observations indistinguishable from true Gaussian noise. 
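The statement above, that relative fluctuations fall off as the reciprocal square root of the number of counted events, can be demonstrated with a small Monte Carlo sketch. It is purely illustrative and not part of the original article; the mean counts and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many Poisson-distributed "counts" for different mean event numbers and
# compare the observed relative fluctuation with the 1/sqrt(N) prediction.
for mean_events in (10, 1_000, 100_000):
    counts = rng.poisson(mean_events, size=100_000)
    relative_fluct = counts.std() / counts.mean()
    print(f"mean N = {mean_events:>7d}: "
          f"observed {relative_fluct:.4f}, predicted {1 / np.sqrt(mean_events):.4f}")
```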
Since the standard deviation of shot noise is equal to the square root of the average number of events N, the signal-to-noise ratio (SNR) is given by: Thus when N is very large, the signal-to-noise ratio is very large as well, and any relative fluctuations in N due to other sources are more likely to dominate over shot noise. However, when the other noise source is at a fixed level, such as thermal noise, or grows slower than , increasing N (the DC current or light level, etc.) can lead to dominance of shot noise. Properties Electronic devices Shot noise in electronic circuits consists of random fluctuations of DC current, which is due to electric current being the flow of discrete charges (electrons). Because the electron has such a tiny charge, however, shot noise is of relative insignificance in many (but not all) cases of electrical conduction. For instance 1 ampere of current consists of about electrons per second; even though this number will randomly vary by several billion in any given second, such a fluctuation is minuscule compared to the current itself. In addition, shot noise is often less significant as compared with two other noise sources in electronic circuits, flicker noise and Johnson–Nyquist noise. However, shot noise is temperature and frequency independent, in contrast to Johnson–Nyquist noise, which is proportional to temperature, and flicker noise, with the spectral density decreasing with increasing frequency. Therefore, at high frequencies and low temperatures shot noise may become the dominant source of noise. With very small currents and considering shorter time scales (thus wider bandwidths) shot noise can be significant. For instance, a microwave circuit operates on time scales of less than a nanosecond and if we were to have a current of 16 nanoamperes that would amount to only 100 electrons passing every nanosecond. According to Poisson statistics the actual number of electrons in any nanosecond would vary by 10 electrons rms, so that one sixth of the time less than 90 electrons would pass a point and one sixth of the time more than 110 electrons would be counted in a nanosecond. Now with this small current viewed on this time scale, the shot noise amounts to 1/10 of the DC current itself. The result by Schottky, based on the assumption that the statistics of electrons passage is Poissonian, reads for the spectral noise density at the frequency , where is the electron charge, and is the average current of the electron stream. The noise spectral power is frequency independent, which means the noise is white. This can be combined with the Landauer formula, which relates the average current with the transmission eigenvalues of the contact through which the current is measured ( labels transport channels). In the simplest case, these transmission eigenvalues can be taken to be energy independent and so the Landauer formula is where is the applied voltage. This provides for commonly referred to as the Poisson value of shot noise, . This is a classical result in the sense that it does not take into account that electrons obey Fermi–Dirac statistics. The correct result takes into account the quantum statistics of electrons and reads (at zero temperature) It was obtained in the 1990s by Viktor Khlus, Gordey Lesovik (independently the single-channel case), and Markus Büttiker (multi-channel case). This noise is white and is always suppressed with respect to the Poisson value. The degree of suppression, , is known as the Fano factor. 
Noises produced by different transport channels are independent. Fully open () and fully closed () channels produce no noise, since there are no irregularities in the electron stream. At finite temperature, a closed expression for noise can be written as well. It interpolates between shot noise (zero temperature) and Nyquist-Johnson noise (high temperature). Examples Tunnel junction is characterized by low transmission in all transport channels, therefore the electron flow is Poissonian, and the Fano factor equals one. Quantum point contact is characterized by an ideal transmission in all open channels, therefore it does not produce any noise, and the Fano factor equals zero. The exception is the step between plateaus, when one of the channels is partially open and produces noise. A metallic diffusive wire has a Fano factor of 1/3 regardless of the geometry and the details of the material. In 2DEG exhibiting fractional quantum Hall effect electric current is carried by quasiparticles moving at the sample edge whose charge is a rational fraction of the electron charge. The first direct measurement of their charge was through the shot noise in the current. Effects of interactions While this is the result when the electrons contributing to the current occur completely randomly, unaffected by each other, there are important cases in which these natural fluctuations are largely suppressed due to a charge build up. Take the previous example in which an average of 100 electrons go from point A to point B every nanosecond. During the first half of a nanosecond we would expect 50 electrons to arrive at point B on the average, but in a particular half nanosecond there might well be 60 electrons which arrive there. This will create a more negative electric charge at point B than average, and that extra charge will tend to repel the further flow of electrons from leaving point A during the remaining half nanosecond. Thus the net current integrated over a nanosecond will tend more to stay near its average value of 100 electrons rather than exhibiting the expected fluctuations (10 electrons rms) we calculated. This is the case in ordinary metallic wires and in metal film resistors, where shot noise is almost completely cancelled due to this anti-correlation between the motion of individual electrons, acting on each other through the coulomb force. However this reduction in shot noise does not apply when the current results from random events at a potential barrier which all the electrons must overcome due to a random excitation, such as by thermal activation. This is the situation in p-n junctions, for instance. A semiconductor diode is thus commonly used as a noise source by passing a particular DC current through it. In other situations interactions can lead to an enhancement of shot noise, which is the result of a super-poissonian statistics. For example, in a resonant tunneling diode the interplay of electrostatic interaction and of the density of states in the quantum well leads to a strong enhancement of shot noise when the device is biased in the negative differential resistance region of the current-voltage characteristics. Shot noise is distinct from voltage and current fluctuations expected in thermal equilibrium; this occurs without any applied DC voltage or current flowing. These fluctuations are known as Johnson–Nyquist noise or thermal noise and increase in proportion to the Kelvin temperature of any resistive component. 
However both are instances of white noise and thus cannot be distinguished simply by observing them even though their origins are quite dissimilar. Since shot noise is a Poisson process due to the finite charge of an electron, one can compute the root mean square current fluctuations as being of a magnitude where q is the elementary charge of an electron, Δf is the single-sided bandwidth in hertz over which the noise is considered, and I is the DC current flowing. For a current of 100 mA, measuring the current noise over a bandwidth of 1 Hz, we obtain If this noise current is fed through a resistor a noise voltage of would be generated. Coupling this noise through a capacitor, one could supply a noise power of to a matched load. Detectors The flux signal that is incident on a detector is calculated as follows, in units of photons: where c is the speed of light, and h is the Planck constant. Following Poisson statistics, the photon noise is calculated as the square root of the signal: The SNR for a CCD camera can be calculated from the following equation: where: I = photon flux (photons/pixel/second), QE = quantum efficiency, t = integration time (seconds), Nd = dark current (electrons/pixel/sec), Nr = read noise (electrons). Optics In optics, shot noise describes the fluctuations of the number of photons detected (or simply counted in the abstract) because they occur independently of each other. This is therefore another consequence of discretization, in this case of the energy in the electromagnetic field in terms of photons. In the case of photon detection, the relevant process is the random conversion of photons into photo-electrons for instance, thus leading to a larger effective shot noise level when using a detector with a quantum efficiency below unity. Only in an exotic squeezed coherent state can the number of photons measured per unit time have fluctuations smaller than the square root of the expected number of photons counted in that period of time. Of course there are other mechanisms of noise in optical signals which often dwarf the contribution of shot noise. When these are absent, however, optical detection is said to be "photon noise limited" as only the shot noise (also known as "quantum noise" or "photon noise" in this context) remains. Shot noise is easily observable in the case of photomultipliers and avalanche photodiodes used in the Geiger mode, where individual photon detections are observed. However the same noise source is present with higher light intensities measured by any photo detector, and is directly measurable when it dominates the noise of the subsequent electronic amplifier. Just as with other forms of shot noise, the fluctuations in a photo-current due to shot noise scale as the square-root of the average intensity: The shot noise of a coherent optical beam (having no other noise sources) is a fundamental physical phenomenon, reflecting quantum fluctuations in the electromagnetic field. In optical homodyne detection, the shot noise in the photodetector can be attributed to either the zero point fluctuations of the quantised electromagnetic field, or to the discrete nature of the photon absorption process. However, shot noise itself is not a distinctive feature of quantised field and can also be explained through semiclassical theory. What the semiclassical theory does not predict, however, is the squeezing of shot noise. Shot noise also sets a lower bound on the noise introduced by quantum amplifiers which preserve the phase of an optical signal. 
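The numerical example given earlier in this section (a 100 mA current observed in a 1 Hz bandwidth) can be reproduced with the Schottky expression for the rms shot-noise current. In the sketch below the formula and the operating point come from the surrounding text; the 1 kΩ load resistance is a hypothetical value chosen only to show how a noise voltage would follow from the noise current.

```python
import math

Q_E = 1.602176634e-19   # elementary charge in coulombs

def shot_noise_rms_current(i_dc: float, bandwidth_hz: float) -> float:
    """RMS shot-noise current, sigma_i = sqrt(2 * q * I * delta_f)."""
    return math.sqrt(2.0 * Q_E * i_dc * bandwidth_hz)

i_dc = 100e-3            # 100 mA DC current, as in the example above
bw = 1.0                 # 1 Hz measurement bandwidth

sigma_i = shot_noise_rms_current(i_dc, bw)
print(f"rms shot-noise current: {sigma_i:.2e} A")          # about 1.8e-10 A

# Hypothetical 1 kOhm load, used only to illustrate the corresponding noise voltage.
r_load = 1_000.0
print(f"noise voltage across {r_load:.0f} ohm load: {sigma_i * r_load:.2e} V")
```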
See also Johnson–Nyquist noise or thermal noise 1/f noise Burst noise Contact resistance Image noise Quantum efficiency References Electronics concepts Noise (electronics) Electrical parameters Quantum optics Poisson point processes Mesoscopic physics
Shot noise
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
2,805
[ "Point (geometry)", "Electrical parameters", "Quantum optics", "Quantum mechanics", "Point processes", "Condensed matter physics", "Electrical engineering", "Mesoscopic physics", "Poisson point processes" ]
41,706
https://en.wikipedia.org/wiki/Signal-to-noise%20ratio
Signal-to-noise ratio (SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. SNR is defined as the ratio of signal power to noise power, often expressed in decibels. A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. SNR is an important parameter that affects the performance and quality of systems that process or transmit signals, such as communication systems, audio systems, radar systems, imaging systems, and data acquisition systems. A high SNR means that the signal is clear and easy to detect or interpret, while a low SNR means that the signal is corrupted or obscured by noise and may be difficult to distinguish or recover. SNR can be improved by various methods, such as increasing the signal strength, reducing the noise level, filtering out unwanted noise, or using error correction techniques. SNR also determines the maximum possible amount of data that can be transmitted reliably over a given channel, which depends on its bandwidth and SNR. This relationship is described by the Shannon–Hartley theorem, which is a fundamental law of information theory. SNR can be calculated using different formulas depending on how the signal and noise are measured and defined. The most common way to express SNR is in decibels, which is a logarithmic scale that makes it easier to compare large or small values. Other definitions of SNR may use different factors or bases for the logarithm, depending on the context and application. Definition One definition of signal-to-noise ratio is the ratio of the power of a signal (meaningful input) to the power of background noise (meaningless or unwanted input): where is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. The signal-to-noise ratio of a random variable () to random noise is: where E refers to the expected value, which in this case is the mean square of . If the signal is simply a constant value of , this equation simplifies to: If the noise has expected value of zero, as is common, the denominator is its variance, the square of its standard deviation . The signal and the noise must be measured the same way, for example as voltages across the same impedance. Their root mean squares can alternatively be used according to: where is root mean square (RMS) amplitude (for example, RMS voltage). Decibels Because many signals have a very wide dynamic range, signals are often expressed using the logarithmic decibel scale. Based upon the definition of decibel, signal and noise may be expressed in decibels (dB) as and In a similar manner, SNR may be expressed in decibels as Using the definition of SNR Using the quotient rule for logarithms Substituting the definitions of SNR, signal, and noise in decibels into the above equation results in an important formula for calculating the signal to noise ratio in decibels, when the signal and noise are also in decibels: In the above formula, P is measured in units of power, such as watts (W) or milliwatts (mW), and the signal-to-noise ratio is a pure number. However, when the signal and noise are measured in volts (V) or amperes (A), which are measures of amplitude, they must first be squared to obtain a quantity proportional to power, as shown below: Dynamic range The concepts of signal-to-noise ratio and dynamic range are closely related. 
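Before turning to dynamic range, the decibel relationships above can be captured in two small helper functions. The sketch is illustrative only; the example power and amplitude levels are arbitrary.

```python
import math

def snr_db_from_power(p_signal: float, p_noise: float) -> float:
    """SNR in dB from signal and noise power measured at the same point in the system."""
    return 10.0 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal_rms: float, a_noise_rms: float) -> float:
    """SNR in dB from RMS amplitudes (e.g. voltages); amplitudes are squared to obtain power."""
    return 20.0 * math.log10(a_signal_rms / a_noise_rms)

print(snr_db_from_power(1e-3, 1e-6))       # 30.0 dB
print(snr_db_from_amplitude(1.0, 0.01))    # 40.0 dB
```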
Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 VRMS). SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'. Difference from conventional power In physics, the average power of an AC signal is defined as the average value of voltage times current; for resistive (non-reactive) circuits, where voltage and current are in phase, this is equivalent to the product of the rms voltage and current: But in signal processing and communication, one usually assumes that so that factor is usually not included while measuring power or energy of a signal. This may cause some confusion among readers, but the resistance factor is not significant for typical operations performed in signal processing, or for computing power ratios. For most cases, the power of a signal would be considered to be simply Alternative definition An alternative definition of SNR is as the reciprocal of the coefficient of variation, i.e., the ratio of mean to standard deviation of a signal or measurement: where is the signal mean or expected value and is the standard deviation of the noise, or an estimate thereof. Notice that such an alternative definition is only useful for variables that are always non-negative (such as photon counts and luminance), and it is only an approximation since . It is commonly used in image processing, where the SNR of an image is usually calculated as the ratio of the mean pixel value to the standard deviation of the pixel values over a given neighborhood. Sometimes SNR is defined as the square of the alternative definition above, in which case it is equivalent to the more common definition: This definition is closely related to the sensitivity index or d, when assuming that the signal has two states separated by signal amplitude , and the noise standard deviation does not change between the two states. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to be able to distinguish image features with certainty. An SNR less than 5 means less than 100% certainty in identifying image details. Yet another alternative, very specific, and distinct definition of SNR is employed to characterize sensitivity of imaging systems; see Signal-to-noise ratio (imaging). Related measures are the "contrast ratio" and the "contrast-to-noise ratio". Modulation system measurements Amplitude modulation Channel signal-to-noise ratio is given by where W is the bandwidth and is modulation index Output signal-to-noise ratio (of AM receiver) is given by Frequency modulation Channel signal-to-noise ratio is given by Output signal-to-noise ratio is given by Noise reduction All real measurements are disturbed by noise. 
This includes electronic noise, but can also include external events that affect the measured phenomenon — wind, vibrations, the gravitational attraction of the moon, variations of temperature, variations of humidity, etc., depending on what is measured and of the sensitivity of the device. It is often possible to reduce the noise by controlling the environment. Internal electronic noise of measurement systems can be reduced through the use of low-noise amplifiers. When the characteristics of the noise are known and are different from the signal, it is possible to use a filter to reduce the noise. For example, a lock-in amplifier can extract a narrow bandwidth signal from broadband noise a million times stronger. When the signal is constant or periodic and the noise is random, it is possible to enhance the SNR by averaging the measurements. In this case the noise goes down as the square root of the number of averaged samples. Digital signals When a measurement is digitized, the number of bits used to represent the measurement determines the maximum possible signal-to-noise ratio. This is because the minimum possible noise level is the error caused by the quantization of the signal, sometimes called quantization noise. This noise level is non-linear and signal-dependent; different calculations exist for different signal models. Quantization noise is modeled as an analog error signal summed with the signal before quantization ("additive noise"). This theoretical maximum SNR assumes a perfect input signal. If the input signal is already noisy (as is usually the case), the signal's noise may be larger than the quantization noise. Real analog-to-digital converters also have other sources of noise that further decrease the SNR compared to the theoretical maximum from the idealized quantization noise, including the intentional addition of dither. Although noise levels in a digital system can be expressed using SNR, it is more common to use Eb/No, the energy per bit per noise power spectral density. The modulation error ratio (MER) is a measure of the SNR in a digitally modulated signal. Fixed point For n-bit integers with equal distance between quantization levels (uniform quantization) the dynamic range (DR) is also determined. Assuming a uniform distribution of input signal values, the quantization noise is a uniformly distributed random signal with a peak-to-peak amplitude of one quantization level, making the amplitude ratio 2n/1. The formula is then: This relationship is the origin of statements like "16-bit audio has a dynamic range of 96 dB". Each extra quantization bit increases the dynamic range by roughly 6 dB. Assuming a full-scale sine wave signal (that is, the quantizer is designed such that it has the same minimum and maximum values as the input signal), the quantization noise approximates a sawtooth wave with peak-to-peak amplitude of one quantization level and uniform distribution. In this case, the SNR is approximately Floating point Floating-point numbers provide a way to trade off signal-to-noise ratio for an increase in dynamic range. For n-bit floating-point numbers, with n-m bits in the mantissa and m bits in the exponent: The dynamic range is much larger than fixed-point but at a cost of a worse signal-to-noise ratio. This makes floating-point preferable in situations where the dynamic range is large or unpredictable. Fixed-point's simpler implementations can be used with no signal quality disadvantage in systems where dynamic range is less than 6.02m. 
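The fixed-point figures quoted above (roughly 6 dB per bit, and the statement that 16-bit audio has a dynamic range of 96 dB) follow from the commonly used approximations DR ≈ 6.02·n dB and, for a full-scale sine input, SNR ≈ 6.02·n + 1.76 dB. The short sketch below evaluates them; it illustrates those standard approximations rather than reproducing formulas from the source text.

```python
import math

def fixed_point_dynamic_range_db(n_bits: int) -> float:
    """Dynamic range of an n-bit uniform quantizer: 20*log10(2**n), roughly 6.02*n dB."""
    return 20.0 * math.log10(2 ** n_bits)

def full_scale_sine_snr_db(n_bits: int) -> float:
    """Approximate SNR for a full-scale sine wave input: 6.02*n + 1.76 dB."""
    return 6.02 * n_bits + 1.76

for bits in (8, 16, 24):
    print(f"{bits:2d} bits: DR = {fixed_point_dynamic_range_db(bits):6.2f} dB, "
          f"full-scale sine SNR = {full_scale_sine_snr_db(bits):6.2f} dB")
```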
The very large dynamic range of floating-point can be a disadvantage, since it requires more forethought in designing algorithms. Optical signals Optical signals have a carrier frequency (on the order of 200 THz and higher) that is much higher than the modulation frequency. This way the noise covers a bandwidth that is much wider than the signal itself. The resulting influence of the noise on the signal therefore depends mainly on the filtering of the noise. To describe the signal quality without taking the receiver into account, the optical SNR (OSNR) is used. The OSNR is the ratio between the signal power and the noise power in a given bandwidth. Most commonly a reference bandwidth of 0.1 nm is used. This bandwidth is independent of the modulation format, the frequency and the receiver. For instance an OSNR of 20 dB/0.1 nm could be given, even though the signal of a 40 GBit/s DPSK transmission would not fit within this bandwidth. OSNR is measured with an optical spectrum analyzer. Types and abbreviations Signal to noise ratio may be abbreviated as SNR and less commonly as S/N. PSNR stands for peak signal-to-noise ratio. GSNR stands for geometric signal-to-noise ratio. SINR is the signal-to-interference-plus-noise ratio. Other uses While SNR is commonly quoted for electrical signals, it can be applied to any form of signal, for example isotope levels in an ice core, biochemical signaling between cells, or financial trading signals. The term is sometimes used metaphorically to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as noise that interferes with the signal of appropriate discussion. SNR can also be applied in marketing and in how business professionals manage information overload. Managing a healthy signal to noise ratio can help business executives improve their KPIs (Key Performance Indicators). Similar concepts The signal-to-noise ratio is similar to Cohen's d, given by the difference of estimated means divided by the standard deviation of the data, and is related to the test statistic in the t-test. See also Audio system measurements Generation loss Matched filter Near–far problem Noise margin Omega ratio Pareidolia Peak signal-to-noise ratio Signal-to-noise statistic Signal-to-interference-plus-noise ratio SINAD SINADR Subjective video quality Total harmonic distortion Video quality Notes References External links ADC and DAC Glossary – Maxim Integrated Products Understand SINAD, ENOB, SNR, THD, THD + N, and SFDR so you don't get lost in the noise floor – Analog Devices The Relationship of dynamic range to data word size in digital audio processing Calculation of signal-to-noise ratio, noise voltage, and noise level Learning by simulations – a simulation showing the improvement of the SNR by time averaging Dynamic Performance Testing of Digital Audio D/A Converters Fundamental theorem of analog circuits: a minimum level of power must be dissipated to maintain a level of SNR Interactive webdemo of visualization of SNR in a QAM constellation diagram Institute of Telecommunications, University of Stuttgart Quantization Noise Widrow & Kollár Quantization book page with sample chapters and additional material Signal-to-noise ratio online audio demonstrator - Virtual Communications Lab Engineering ratios Error measures Measurement Electrical parameters Audio amplifier specifications Noise (electronics) Statistical ratios Acoustics Sound
Signal-to-noise ratio
[ "Physics", "Mathematics", "Engineering" ]
2,852
[ "Physical quantities", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Measurement", "Acoustics", "Size", "Electronic engineering", "Electrical engineering", "Audio engineering", "Audio amplifier specifications", "Electrical parameters" ]
41,730
https://en.wikipedia.org/wiki/Speed%20of%20service
In telecommunication, speed of service is the time for a message to be received. For example: The time between release of a message by the originator to receipt of the message by the addressee, as perceived by the end user. (originator-to-recipient speed of service) The time between entry of a message into a communications system and receipt of the message at the terminating communications facility, i.e., the communications facility serving the addressee, as measured by the system. References Telecommunications engineering
Speed of service
[ "Engineering" ]
104
[ "Electrical engineering", "Telecommunications engineering" ]
41,741
https://en.wikipedia.org/wiki/Standing%20wave
In physics, a standing wave, also known as a stationary wave, is a wave that oscillates in time but whose peak amplitude profile does not move in space. The peak amplitude of the wave oscillations at any point in space is constant with respect to time, and the oscillations at different points throughout the wave are in phase. The locations at which the absolute value of the amplitude is minimum are called nodes, and the locations where the absolute value of the amplitude is maximum are called antinodes. Standing waves were first described scientifically by Michael Faraday in 1831. Faraday observed standing waves on the surface of a liquid in a vibrating container. Franz Melde coined the term "standing wave" (German: stehende Welle or Stehwelle) around 1860 and demonstrated the phenomenon in his classic experiment with vibrating strings. This phenomenon can occur because the medium is moving in the direction opposite to the movement of the wave, or it can arise in a stationary medium as a result of interference between two waves traveling in opposite directions. The most common cause of standing waves is the phenomenon of resonance, in which standing waves occur inside a resonator due to interference between waves reflected back and forth at the resonator's resonant frequency. For waves of equal amplitude traveling in opposing directions, there is on average no net propagation of energy. Moving medium As an example of the first type, under certain meteorological conditions standing waves form in the atmosphere in the lee of mountain ranges. Such waves are often exploited by glider pilots. Standing waves and hydraulic jumps also form on fast flowing river rapids and tidal currents such as the Saltstraumen maelstrom. A requirement for this in river currents is a flowing water with shallow depth in which the inertia of the water overcomes its gravity due to the supercritical flow speed (Froude number: 1.7 – 4.5, surpassing 4.5 results in direct standing wave) and is therefore neither significantly slowed down by the obstacle nor pushed to the side. Many standing river waves are popular river surfing breaks. Opposing waves As an example of the second type, a standing wave in a transmission line is a wave in which the distribution of current, voltage, or field strength is formed by the superposition of two waves of the same frequency propagating in opposite directions. The effect is a series of nodes (zero displacement) and anti-nodes (maximum displacement) at fixed points along the transmission line. Such a standing wave may be formed when a wave is transmitted into one end of a transmission line and is reflected from the other end by an impedance mismatch, i.e., discontinuity, such as an open circuit or a short. The failure of the line to transfer power at the standing wave frequency will usually result in attenuation distortion. In practice, losses in the transmission line and other components mean that a perfect reflection and a pure standing wave are never achieved. The result is a partial standing wave, which is a superposition of a standing wave and a traveling wave. The degree to which the wave resembles either a pure standing wave or a pure traveling wave is measured by the standing wave ratio (SWR). Another example is standing waves in the open ocean formed by waves with the same wave period moving in opposite directions. These may form near storm centres, or from reflection of a swell at the shore, and are the source of microbaroms and microseisms. 
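As a numerical preview of the mathematical description that follows, the sketch below superposes two identical counter-propagating waves on a sampled string and verifies that a node stays fixed in space. It is purely illustrative; the amplitude, wavelength and frequency are arbitrary values.

```python
import numpy as np

A = 1.0                      # amplitude of each traveling wave (arbitrary)
wavelength = 2.0
k = 2 * np.pi / wavelength   # wave number
omega = 2 * np.pi * 5.0      # angular frequency (arbitrary)

x = np.linspace(0, 4.0, 401)

def standing_wave(t: float) -> np.ndarray:
    """Sum of identical right- and left-traveling waves at time t."""
    y_right = A * np.sin(k * x - omega * t)
    y_left = A * np.sin(k * x + omega * t)
    return y_right + y_left   # equals 2*A*sin(k*x)*cos(omega*t)

# Nodes sit where sin(k*x) = 0, i.e. every half wavelength; the string there
# never moves, regardless of the time at which it is sampled.
node_index = np.argmin(np.abs(x - wavelength / 2))
print([round(float(standing_wave(t)[node_index]), 6) for t in (0.0, 0.01, 0.03, 0.07)])
```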
Mathematical description This section considers representative one- and two-dimensional cases of standing waves. First, an example of an infinite length string shows how identical waves traveling in opposite directions interfere to produce standing waves. Next, two finite length string examples with different boundary conditions demonstrate how the boundary conditions restrict the frequencies that can form standing waves. Next, the example of sound waves in a pipe demonstrates how the same principles can be applied to longitudinal waves with analogous boundary conditions. Standing waves can also occur in two- or three-dimensional resonators. With standing waves on two-dimensional membranes such as drumheads, illustrated in the animations above, the nodes become nodal lines, lines on the surface at which there is no movement, that separate regions vibrating with opposite phase. These nodal line patterns are called Chladni figures. In three-dimensional resonators, such as musical instrument sound boxes and microwave cavity resonators, there are nodal surfaces. This section includes a two-dimensional standing wave example with a rectangular boundary to illustrate how to extend the concept to higher dimensions. Standing wave on an infinite length string To begin, consider a string of infinite length along the x-axis that is free to be stretched transversely in the y direction. For a harmonic wave traveling to the right along the string, the string's displacement in the y direction as a function of position x and time t is The displacement in the y-direction for an identical harmonic wave traveling to the left is where ymax is the amplitude of the displacement of the string for each wave, ω is the angular frequency or equivalently 2π times the frequency f, λ is the wavelength of the wave. For identical right- and left-traveling waves on the same string, the total displacement of the string is the sum of yR and yL, Using the trigonometric sum-to-product identity , Equation () does not describe a traveling wave. At any position x, y(x,t) simply oscillates in time with an amplitude that varies in the x-direction as . The animation at the beginning of this article depicts what is happening. As the left-traveling blue wave and right-traveling green wave interfere, they form the standing red wave that does not travel and instead oscillates in place. Because the string is of infinite length, it has no boundary condition for its displacement at any point along the x-axis. As a result, a standing wave can form at any frequency. At locations on the x-axis that are even multiples of a quarter wavelength, the amplitude is always zero. These locations are called nodes. At locations on the x-axis that are odd multiples of a quarter wavelength the amplitude is maximal, with a value of twice the amplitude of the right- and left-traveling waves that interfere to produce this standing wave pattern. These locations are called anti-nodes. The distance between two consecutive nodes or anti-nodes is half the wavelength, λ/2. Standing wave on a string with two fixed ends Next, consider a string with fixed ends at and . The string will have some damping as it is stretched by traveling waves, but assume the damping is very small. Suppose that at the fixed end a sinusoidal force is applied that drives the string up and down in the y-direction with a small amplitude at some frequency f. In this situation, the driving force produces a right-traveling wave. 
That wave reflects off the right fixed end and travels back to the left, reflects again off the left fixed end and travels back to the right, and so on. Eventually, a steady state is reached where the string has identical right- and left-traveling waves as in the infinite-length case and the power dissipated by damping in the string equals the power supplied by the driving force so the waves have constant amplitude. Equation () still describes the standing wave pattern that can form on this string, but now Equation () is subject to boundary conditions where at and because the string is fixed at and because we assume the driving force at the fixed end has small amplitude. Checking the values of y at the two ends, This boundary condition is in the form of the Sturm–Liouville formulation. The latter boundary condition is satisfied when . L is given, so the boundary condition restricts the wavelength of the standing waves to Waves can only form standing waves on this string if they have a wavelength that satisfies this relationship with L. If waves travel with speed v along the string, then equivalently the frequency of the standing waves is restricted to The standing wave with oscillates at the fundamental frequency and has a wavelength that is twice the length of the string. Higher integer values of n correspond to modes of oscillation called harmonics or overtones. Any standing wave on the string will have n + 1 nodes including the fixed ends and n anti-nodes. To compare this example's nodes to the description of nodes for standing waves in the infinite length string, Equation () can be rewritten as In this variation of the expression for the wavelength, n must be even. Cross multiplying we see that because L is a node, it is an even multiple of a quarter wavelength, This example demonstrates a type of resonance and the frequencies that produce standing waves can be referred to as resonant frequencies. Standing wave on a string with one fixed end Next, consider the same string of length L, but this time it is only fixed at . At , the string is free to move in the y direction. For example, the string might be tied at to a ring that can slide freely up and down a pole. The string again has small damping and is driven by a small driving force at . In this case, Equation () still describes the standing wave pattern that can form on the string, and the string has the same boundary condition of at . However, at where the string can move freely there should be an anti-node with maximal amplitude of y. Equivalently, this boundary condition of the "free end" can be stated as at , which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at is that the motion of the "free end" will follow that of the point to its left. Reviewing Equation (), for the largest amplitude of y occurs when , or This leads to a different set of wavelengths than in the two-fixed-ends example. Here, the wavelength of the standing waves is restricted to Equivalently, the frequency is restricted to In this example n only takes odd values. Because L is an anti-node, it is an odd multiple of a quarter wavelength. Thus the fundamental mode in this example only has one quarter of a complete sine cycle–zero at and the first peak at –the first harmonic has three quarters of a complete sine cycle, and so on. This example also demonstrates a type of resonance and the frequencies that produce standing waves are called resonant frequencies. 
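The frequency restrictions derived above can be tabulated directly: a string fixed at both ends resonates at f = n·v/(2L) for n = 1, 2, 3, ..., while a string fixed at only one end resonates at f = n·v/(4L) for odd n. The sketch below lists the first few resonant frequencies for each case; the string length and wave speed are made-up values used only for illustration.

```python
# Illustrative string: 0.65 m long with a wave speed of 286 m/s (made-up values).
L = 0.65      # string length in metres
v = 286.0     # wave speed in metres per second

both_fixed = [n * v / (2 * L) for n in (1, 2, 3, 4)]   # f_n = n*v/(2L), all integers n
one_fixed = [n * v / (4 * L) for n in (1, 3, 5, 7)]    # f_n = n*v/(4L), odd n only

print("two fixed ends :", [f"{f:.1f} Hz" for f in both_fixed])
print("one fixed end  :", [f"{f:.1f} Hz" for f in one_fixed])
```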
Standing wave in a pipe Consider a standing wave in a pipe of length L. The air inside the pipe serves as the medium for longitudinal sound waves traveling to the right or left through the pipe. While the transverse waves on the string from the previous examples vary in their displacement perpendicular to the direction of wave motion, the waves traveling through the air in the pipe vary in terms of their pressure and longitudinal displacement along the direction of wave motion. The wave propagates by alternately compressing and expanding air in segments of the pipe, which displaces the air slightly from its rest position and transfers energy to neighboring segments through the forces exerted by the alternating high and low air pressures. Equations resembling those for the wave on a string can be written for the change in pressure Δp due to a right- or left-traveling wave in the pipe. where pmax is the pressure amplitude or the maximum increase or decrease in air pressure due to each wave, ω is the angular frequency or equivalently 2π times the frequency f, λ is the wavelength of the wave. If identical right- and left-traveling waves travel through the pipe, the resulting superposition is described by the sum This formula for the pressure is of the same form as Equation (), so a stationary pressure wave forms that is fixed in space and oscillates in time. If the end of a pipe is closed, the pressure is maximal since the closed end of the pipe exerts a force that restricts the movement of air. This corresponds to a pressure anti-node (which is a node for molecular motions, because the molecules near the closed end cannot move). If the end of the pipe is open, the pressure variations are very small, corresponding to a pressure node (which is an anti-node for molecular motions, because the molecules near the open end can move freely). The exact location of the pressure node at an open end is actually slightly beyond the open end of the pipe, so the effective length of the pipe for the purpose of determining resonant frequencies is slightly longer than its physical length. This difference in length is ignored in this example. In terms of reflections, open ends partially reflect waves back into the pipe, allowing some energy to be released into the outside air. Ideally, closed ends reflect the entire wave back in the other direction. First consider a pipe that is open at both ends, for example an open organ pipe or a recorder. Given that the pressure must be zero at both open ends, the boundary conditions are analogous to the string with two fixed ends, which only occurs when the wavelength of standing waves is or equivalently when the frequency is where v is the speed of sound. Next, consider a pipe that is open at (and therefore has a pressure node) and closed at (and therefore has a pressure anti-node). The closed "free end" boundary condition for the pressure at can be stated as , which is in the form of the Sturm–Liouville formulation. The intuition for this boundary condition at is that the pressure of the closed end will follow that of the point to its left. Examples of this setup include a bottle and a clarinet. This pipe has boundary conditions analogous to the string with only one fixed end. Its standing waves have wavelengths restricted to or equivalently the frequency of standing waves is restricted to For the case where one end is closed, n only takes odd values just like in the case of the string fixed at only one end. 
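As a rough numerical illustration of the pipe resonances just described: the pipe length below is an assumed example value, 343 m/s is the approximate speed of sound in air at room temperature, and end corrections are ignored, as in the text.

```python
# Hedged sketch of pipe resonances under the two boundary conditions above.
v = 343.0   # approximate speed of sound in air, m/s
L = 0.5     # pipe length in metres (assumed)

# Open at both ends (pressure nodes at both ends): f_n = n*v/(2L), n = 1, 2, 3, ...
open_open = [n * v / (2 * L) for n in range(1, 5)]

# Open at one end, closed at the other (pressure node / pressure anti-node): f_n = n*v/(4L), odd n
open_closed = [n * v / (4 * L) for n in range(1, 8, 2)]

print("open-open resonances (Hz):  ", [round(f, 1) for f in open_open])
print("open-closed resonances (Hz):", [round(f, 1) for f in open_closed])
# The pipe closed at one end sounds an octave lower and supports only odd harmonics,
# mirroring the string fixed at only one end.
```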
So far, the wave has been written in terms of its pressure as a function of position x and time. Alternatively, the wave can be written in terms of its longitudinal displacement of air, where air in a segment of the pipe moves back and forth slightly in the x-direction as the pressure varies and waves travel in either or both directions. The change in pressure Δp and longitudinal displacement s are related as where ρ is the density of the air. In terms of longitudinal displacement, closed ends of pipes correspond to nodes since air movement is restricted and open ends correspond to anti-nodes since the air is free to move. A similar, easier to visualize phenomenon occurs in longitudinal waves propagating along a spring. We can also consider a pipe that is closed at both ends. In this case, both ends will be pressure anti-nodes or equivalently both ends will be displacement nodes. This example is analogous to the case where both ends are open, except the standing wave pattern has a phase shift along the x-direction to shift the location of the nodes and anti-nodes. For example, the longest wavelength that resonates–the fundamental mode–is again twice the length of the pipe, except that the ends of the pipe have pressure anti-nodes instead of pressure nodes. Between the ends there is one pressure node. In the case of two closed ends, the wavelength is again restricted to and the frequency is again restricted to A Rubens tube provides a way to visualize the pressure variations of the standing waves in a tube with two closed ends. 2D standing wave with a rectangular boundary Next, consider transverse waves that can move along a two dimensional surface within a rectangular boundary of length Lx in the x-direction and length Ly in the y-direction. Examples of this type of wave are water waves in a pool or waves on a rectangular sheet that has been pulled taut. The waves displace the surface in the z-direction, with defined as the height of the surface when it is still. In two dimensions and Cartesian coordinates, the wave equation is where z(x,y,t) is the displacement of the surface, c is the speed of the wave. To solve this differential equation, let's first solve for its Fourier transform, with Taking the Fourier transform of the wave equation, This is an eigenvalue problem where the frequencies correspond to eigenvalues that then correspond to frequency-specific modes or eigenfunctions. Specifically, this is a form of the Helmholtz equation and it can be solved using separation of variables. Assume Dividing the Helmholtz equation by Z, This leads to two coupled ordinary differential equations. The x term equals a constant with respect to x that we can define as Solving for X(x), This x-dependence is sinusoidal–recalling Euler's formula–with constants Akx and Bkx determined by the boundary conditions. Likewise, the y term equals a constant with respect to y that we can define as and the dispersion relation for this wave is therefore Solving the differential equation for the y term, Multiplying these functions together and applying the inverse Fourier transform, z(x,y,t) is a superposition of modes where each mode is the product of sinusoidal functions for x, y, and t, The constants that determine the exact sinusoidal functions depend on the boundary conditions and initial conditions. To see how the boundary conditions apply, consider an example like the sheet that has been pulled taut where z(x,y,t) must be zero all around the rectangular boundary. 
For the x dependence, z(x,y,t) must vary in a way that it can be zero at both and for all values of y and t. As in the one dimensional example of the string fixed at both ends, the sinusoidal function that satisfies this boundary condition is with kx restricted to Likewise, the y dependence of z(x,y,t) must be zero at both and , which is satisfied by Restricting the wave numbers to these values also restricts the frequencies that resonate to If the initial conditions for z(x,y,0) and its time derivative ż(x,y,0) are chosen so the t-dependence is a cosine function, then standing waves for this system take the form So, standing waves inside this fixed rectangular boundary oscillate in time at certain resonant frequencies parameterized by the integers n and m. As they oscillate in time, they do not travel and their spatial variation is sinusoidal in both the x- and y-directions such that they satisfy the boundary conditions. The fundamental mode, and , has a single antinode in the middle of the rectangle. Varying n and m gives complicated but predictable two-dimensional patterns of nodes and antinodes inside the rectangle. From the dispersion relation, in certain situations different modes–meaning different combinations of n and m–may resonate at the same frequency even though they have different shapes for their x- and y-dependence. For example, if the boundary is square, , the modes and , and , and and all resonate at Recalling that ω determines the eigenvalue in the Helmholtz equation above, the number of modes corresponding to each frequency relates to the frequency's multiplicity as an eigenvalue. Standing wave ratio, phase, and energy transfer If the two oppositely moving traveling waves are not of the same amplitude, they will not cancel completely at the nodes, the points where the waves are 180° out of phase, so the amplitude of the standing wave will not be zero at the nodes, but merely a minimum. Standing wave ratio (SWR) is the ratio of the amplitude at the antinode (maximum) to the amplitude at the node (minimum). A pure standing wave will have an infinite SWR. It will also have a constant phase at any point in space (but it may undergo a 180° inversion every half cycle). A finite, non-zero SWR indicates a wave that is partially stationary and partially travelling. Such waves can be decomposed into a superposition of two waves: a travelling wave component and a stationary wave component. An SWR of one indicates that the wave does not have a stationary component – it is purely a travelling wave, since the ratio of amplitudes is equal to 1. A pure standing wave does not transfer energy from the source to the destination. However, the wave is still subject to losses in the medium. Such losses will manifest as a finite SWR, indicating a travelling wave component leaving the source to supply the losses. Even though the SWR is now finite, it may still be the case that no energy reaches the destination because the travelling component is purely supplying the losses. However, in a lossless medium, a finite SWR implies a definite transfer of energy to the destination. Examples One easy example to understand standing waves is two people shaking either end of a jump rope. If they shake in sync the rope can form a regular pattern of waves oscillating up and down, with stationary points along the rope where the rope is almost still (nodes) and points where the arc of the rope is maximum (antinodes). 
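Returning to the rectangular-membrane modes discussed earlier in this section, the degeneracies noted for a square boundary can be enumerated numerically. In this sketch the wave speed and side length are assumed illustrative values, and the mode-frequency expression f(n, m) = (c/2)·sqrt((n/Lx)² + (m/Ly)²) follows from the restricted wave numbers given above.

```python
import math
from collections import defaultdict

c = 100.0          # wave speed (m/s), assumed
Lx = Ly = 1.0      # square boundary, side length in metres (assumed)

def mode_frequency(n, m):
    return (c / 2) * math.sqrt((n / Lx) ** 2 + (m / Ly) ** 2)

# Group modes by frequency to expose degeneracies on the square boundary.
groups = defaultdict(list)
for n in range(1, 8):
    for m in range(1, 8):
        groups[round(mode_frequency(n, m), 6)].append((n, m))

for f, modes in sorted(groups.items()):
    if len(modes) > 1:                      # more than one (n, m) pair at this frequency
        print(f"{f:8.3f} Hz: {modes}")
# For example, (1, 2) and (2, 1) share a frequency, as do (1, 7), (7, 1) and (5, 5).
```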
Acoustic resonance Standing waves are also observed in physical media such as strings and columns of air. Any waves traveling along the medium will reflect back when they reach the end. This effect is most noticeable in musical instruments where, at various multiples of a vibrating string or air column's natural frequency, a standing wave is created, allowing harmonics to be identified. Nodes occur at fixed ends and anti-nodes at open ends. If fixed at only one end, only odd-numbered harmonics are available. At the open end of a pipe the anti-node will not be exactly at the end as it is altered by its contact with the air and so end correction is used to place it exactly. The density of a string will affect the frequency at which harmonics will be produced; the greater the density the lower the frequency needs to be to produce a standing wave of the same harmonic. Visible light Standing waves are also observed in optical media such as optical waveguides and optical cavities. Lasers use optical cavities in the form of a pair of facing mirrors, which constitute a Fabry–Pérot interferometer. The gain medium in the cavity (such as a crystal) emits light coherently, exciting standing waves of light in the cavity. The wavelength of light is very short (in the range of nanometers, 10−9 m) so the standing waves are microscopic in size. One use for standing light waves is to measure small distances, using optical flats. X-rays Interference between X-ray beams can form an X-ray standing wave (XSW) field. Because of the short wavelength of X-rays (less than 1 nanometer), this phenomenon can be exploited for measuring atomic-scale events at material surfaces. The XSW is generated in the region where an X-ray beam interferes with a diffracted beam from a nearly perfect single crystal surface or a reflection from an X-ray mirror. By tuning the crystal geometry or X-ray wavelength, the XSW can be translated in space, causing a shift in the X-ray fluorescence or photoelectron yield from the atoms near the surface. This shift can be analyzed to pinpoint the location of a particular atomic species relative to the underlying crystal structure or mirror surface. The XSW method has been used to clarify the atomic-scale details of dopants in semiconductors, atomic and molecular adsorption on surfaces, and chemical transformations involved in catalysis. Mechanical waves Standing waves can be mechanically induced into a solid medium using resonance. One easy to understand example is two people shaking either end of a jump rope. If they shake in sync, the rope will form a regular pattern with nodes and antinodes and appear to be stationary, hence the name standing wave. Similarly a cantilever beam can have a standing wave imposed on it by applying a base excitation. In this case the free end moves the greatest distance laterally compared to any location along the beam. Such a device can be used as a sensor to track changes in frequency or phase of the resonance of the fiber. One application is as a measurement device for dimensional metrology. Seismic waves Standing surface waves on the Earth are observed as free oscillations of the Earth. Faraday waves The Faraday wave is a non-linear standing wave at the air-liquid interface induced by hydrodynamic instability. It can be used as a liquid-based template to assemble microscale materials. Seiches A seiche is an example of a standing wave in an enclosed body of water. 
It is characterised by the oscillatory behaviour of the water level at either end of the body and typically has a nodal point near the middle of the body where very little change in water level is observed. It should be distinguished from a simple storm surge where no oscillation is present. In sizeable lakes, the period of such oscillations may be between minutes and hours, for example Lake Geneva's longitudinal period is 73 minutes and its transversal seiche has a period of around 10 minutes, while Lake Huron can be seen to have resonances with periods between 1 and 2 hours. See Lake seiches. See also Waves Electronics Notes References External links 1831 introductions 1831 in science 1860s neologisms Michael Faraday Wave mechanics Articles containing video clips
Standing wave
[ "Physics" ]
5,212
[ "Wave mechanics", "Waves", "Physical phenomena", "Classical mechanics" ]
41,742
https://en.wikipedia.org/wiki/Standing%20wave%20ratio
In radio engineering and telecommunications, standing wave ratio (SWR) is a measure of impedance matching of loads to the characteristic impedance of a transmission line or waveguide. Impedance mismatches result in standing waves along the transmission line, and SWR is defined as the ratio of the partial standing wave's amplitude at an antinode (maximum) to the amplitude at a node (minimum) along the line. Voltage standing wave ratio (VSWR) (pronounced "vizwar") is the ratio of maximum to minimum voltage on a transmission line . For example, a VSWR of 1.2 means a peak voltage 1.2 times the minimum voltage along that line, if the line is at least one half wavelength long. A SWR can be also defined as the ratio of the maximum amplitude to minimum amplitude of the transmission line's currents, electric field strength, or the magnetic field strength. Neglecting transmission line loss, these ratios are identical. The power standing wave ratio (PSWR) is defined as the square of the VSWR, however, this deprecated term has no direct physical relation to power actually involved in transmission. SWR is usually measured using a dedicated instrument called an SWR meter. Since SWR is a measure of the load impedance relative to the characteristic impedance of the transmission line in use (which together determine the reflection coefficient as described below), a given SWR meter can interpret the impedance it sees in terms of SWR only if it has been designed for the same particular characteristic impedance as the line. In practice most transmission lines used in these applications are coaxial cables with an impedance of either 50 or 75 ohms, so most SWR meters correspond to one of these. Checking the SWR is a standard procedure in a radio station. Although the same information could be obtained by measuring the load's impedance with an impedance analyzer (or "impedance bridge"), the SWR meter is simpler and more robust for this purpose. By measuring the magnitude of the impedance mismatch at the transmitter output it reveals problems due to either the antenna or the transmission line. Impedance matching SWR is used as a measure of impedance matching of a load to the characteristic impedance of a transmission line carrying radio frequency (RF) signals. This especially applies to transmission lines connecting radio transmitters and receivers with their antennas, as well as similar uses of RF cables such as cable television connections to TV receivers and distribution amplifiers. Impedance matching is achieved when the source impedance is the complex conjugate of the load impedance. The easiest way of achieving this, and the way that minimizes losses along the transmission line, is for the imaginary part of the complex impedance of both the source and load to be zero, that is, pure resistances, equal to the characteristic impedance of the transmission line. When there is a mismatch between the load impedance and the transmission line, part of the forward wave sent toward the load is reflected back along the transmission line towards the source. The source then sees a different impedance than it expects which can lead to lesser (or in some cases, more) power being supplied by it, the result being very sensitive to the electrical length of the transmission line. Such a mismatch is usually undesired and results in standing waves along the transmission line which magnifies transmission line losses (significant at higher frequencies and for longer cables). 
The SWR is a measure of the depth of those standing waves and is, therefore, a measure of the matching of the load to the transmission line. A matched load would result in an SWR of 1:1 implying no reflected wave. An infinite SWR represents complete reflection by a load unable to absorb electrical power, with all the incident power reflected back towards the source. It should be understood that the match of a load to the transmission line is different from the match of a source to the transmission line or the match of a source to the load seen through the transmission line. For instance, if there is a perfect match between the load impedance Z_load and the source impedance Z_source = Z_load* (its complex conjugate), that perfect match will remain if the source and load are connected through a transmission line with an electrical length of one half wavelength (or a multiple of one half wavelengths) using a transmission line of any characteristic impedance Z_0. However the SWR will generally not be 1:1, depending only on Z_load and Z_0. With a different length of transmission line, the source will see a different impedance than Z_load which may or may not be a good match to the source. Sometimes this is deliberate, as when a quarter-wave matching section is used to improve the match between an otherwise mismatched source and load. However typical RF sources such as transmitters and signal generators are designed to look into a purely resistive load impedance such as 50 Ω or 75 Ω, corresponding to common transmission lines' characteristic impedances. In those cases, matching the load to the transmission line, Z_load = Z_0, always ensures that the source will see the same load impedance as if the transmission line weren't there. This is identical to a 1:1 SWR. This condition (Z_load = Z_0) also means that the load seen by the source is independent of the transmission line's electrical length. Since the electrical length of a physical segment of transmission line depends on the signal frequency, violation of this condition means that the impedance seen by the source through the transmission line becomes a function of frequency (especially if the line is long), even if Z_load is frequency-independent. So in practice, a good SWR (near 1:1) implies a transmitter's output seeing the exact impedance it expects for optimum and safe operation. Relationship to the reflection coefficient The voltage component of a standing wave in a uniform transmission line consists of the forward wave (with complex amplitude V_f) superimposed on the reflected wave (with complex amplitude V_r). A wave is partly reflected when a transmission line is terminated with an impedance unequal to its characteristic impedance. The reflection coefficient Γ can be defined as Γ = V_r / V_f, or equivalently, in terms of the load and characteristic impedances, Γ = (Z_load − Z_0) / (Z_load + Z_0). Γ is a complex number that describes both the magnitude and the phase shift of the reflection. The simplest cases, with Γ measured at the load, are: Γ = −1: complete negative reflection, when the line is short-circuited; Γ = 0: no reflection, when the line is perfectly matched; Γ = +1: complete positive reflection, when the line is open-circuited. The SWR directly corresponds to the magnitude of Γ. At some points along the line the forward and reflected waves interfere constructively, exactly in phase, with the resulting amplitude given by the sum of those waves' amplitudes: |V_max| = |V_f| + |V_r| = (1 + |Γ|)·|V_f|. At other points, the waves interfere 180° out of phase with the amplitudes partially cancelling: |V_min| = |V_f| − |V_r| = (1 − |Γ|)·|V_f|. The voltage standing wave ratio is then SWR = |V_max| / |V_min| = (1 + |Γ|) / (1 − |Γ|). Since the magnitude of Γ always falls in the range [0,1], the SWR is always greater than or equal to unity.
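The relations above translate directly into a few lines of code. In this sketch the 50-ohm characteristic impedance and the sample load impedances are assumptions chosen for illustration.

```python
# Hedged sketch: reflection coefficient from a load impedance, and SWR from |Gamma|.
Z0 = 50.0   # assumed characteristic impedance, ohms

def reflection_coefficient(Z_load, Z0=Z0):
    return (Z_load - Z0) / (Z_load + Z0)

def swr(gamma):
    mag = abs(gamma)
    return float('inf') if mag >= 1 else (1 + mag) / (1 - mag)

for Z_load in (50.0, 75.0, 100.0, 25.0, 100 + 50j):
    g = reflection_coefficient(Z_load)
    print(f"Z_load = {Z_load!s:>12}  |Gamma| = {abs(g):.3f}  SWR = {swr(g):.2f}")
# A matched 50-ohm load gives |Gamma| = 0 and SWR = 1; 100 ohms and 25 ohms both give SWR = 2,
# since the ratio and its reciprocal produce the same standing-wave depth.
```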
Note that the phases of V_f and V_r vary along the transmission line in opposite directions to each other. Therefore, the complex-valued reflection coefficient varies as well, but only in phase. With the SWR dependent only on the complex magnitude of Γ, it can be seen that the SWR measured at any point along the transmission line (neglecting transmission line losses) obtains an identical reading. Since the power of the forward and reflected waves are proportional to the square of the voltage components due to each wave, SWR can be expressed in terms of forward and reflected power: SWR = (1 + √(P_r / P_f)) / (1 − √(P_r / P_f)). By sampling the complex voltage and current at the point of insertion, an SWR meter is able to compute the effective forward and reflected voltages on the transmission line for the characteristic impedance for which the SWR meter has been designed. Since the forward and reflected power is related to the square of the forward and reflected voltages, some SWR meters also display the forward and reflected power. In the special case of a load R_L which is purely resistive but unequal to the characteristic impedance of the transmission line Z_0, the SWR is given simply by their ratio: SWR = R_L / Z_0 or Z_0 / R_L, with the ratio or its reciprocal chosen to obtain a value greater than unity. The standing wave pattern Using complex notation for the voltage amplitudes, for a signal at frequency f, the actual (real) voltage V as a function of time t is understood to relate to the complex voltage according to V(t) = Re(V·e^{jωt}), where ω = 2πf. Thus taking the real part of the complex quantity inside the parenthesis, the actual voltage consists of a sine wave at frequency f with a peak amplitude equal to the complex magnitude of V, and with a phase given by the phase of the complex V. Then with the position along a transmission line given by x, with the line ending in a load located at x_0, the complex amplitudes of the forward and reverse waves would be written as V_f(x) = A·e^{−jk(x − x_0)} and V_r(x) = Γ·A·e^{+jk(x − x_0)}, for some complex amplitude A (corresponding to the forward wave at x_0). Note that some treatments use phasors where the time dependence is according to e^{−jωt} and the spatial dependence (for a wave in the +x direction) is e^{+jkx}; either convention obtains the same result for the magnitude of the net voltage. According to the superposition principle the net voltage present at any point on the transmission line is equal to the sum of the voltages due to the forward and reflected waves: V_net(x) = V_f(x) + V_r(x) = A·e^{−jk(x − x_0)}·[1 + Γ·e^{+2jk(x − x_0)}]. Since we are interested in the variations of the magnitude of V_net along the line (as a function of x), we shall solve instead for the squared magnitude of that quantity, which simplifies the mathematics. To obtain the squared magnitude we multiply the above quantity by its complex conjugate: |V_net(x)|² = |A|²·[1 + |Γ|² + 2|Γ|·cos(2k(x − x_0) + φ)], where φ is the phase of Γ. Depending on the phase of the third term, the maximum and minimum values of |V_net| (the square root of the quantity in the equations) are (1 + |Γ|)·|A| and (1 − |Γ|)·|A| respectively, for a standing wave ratio of SWR = (1 + |Γ|) / (1 − |Γ|), as earlier asserted. Along the line, the above expression for |V_net(x)|² is seen to oscillate sinusoidally between its minimum and maximum values with a period of π/k. This is half of the guided wavelength λ = 2π/k for the frequency f. That can be seen as due to interference between two waves of that frequency which are travelling in opposite directions. For example, at a frequency of 20 MHz (free space wavelength of 15 m) in a transmission line whose velocity factor is 0.67, the guided wavelength (distance between voltage peaks of the forward wave alone) would be about 10 m. At instances when the forward wave at x = 0 is at zero phase (peak voltage) then at x = 10 m it would also be at zero phase, but at x = 5 m it would be at 180° phase (peak negative voltage).
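A numerical sketch of the standing-wave pattern just derived, using an assumed complex reflection coefficient and an assumed guided wavelength, confirms that the voltage maxima are spaced half a guided wavelength apart and that the max/min ratio of the voltage magnitude reproduces (1 + |Γ|) / (1 − |Γ|).

```python
import numpy as np

gamma = 0.5 * np.exp(1j * 0.7)   # assumed complex reflection coefficient at the load
lam_g = 10.0                     # assumed guided wavelength in metres
k = 2 * np.pi / lam_g
x = np.linspace(-30.0, 0.0, 20001)   # position along the line, load at x0 = 0

A = 1.0                               # forward-wave amplitude at the load (assumed)
V_net = A * np.exp(-1j * k * x) * (1 + gamma * np.exp(2j * k * x))
mag = np.abs(V_net)

swr_from_pattern = mag.max() / mag.min()
swr_from_gamma = (1 + abs(gamma)) / (1 - abs(gamma))
print(round(swr_from_pattern, 3), round(swr_from_gamma, 3))   # both ~3.0

# Successive voltage maxima are separated by half the guided wavelength (~5 m here).
interior = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
peak_x = x[1:-1][interior]
print(np.diff(peak_x).round(2))
```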
On the other hand, the magnitude of the voltage due to a standing wave produced by its addition to a reflected wave, would have a wavelength between peaks of only Depending on the location of the load and phase of reflection, there might be a peak in the magnitude of at Then there would be another peak found where at whereas it would find minima of the standing wave at 8.8 m, etc. Practical implications of SWR The most common case for measuring and examining SWR is when installing and tuning transmitting antennas. When a transmitter is connected to an antenna by a feed line, the driving point impedance of the antenna must match the characteristic impedance of the feed line in order for the transmitter to see the impedance it was designed for (the impedance of the feed line, usually 50 or 75 ohms). The impedance of a particular antenna design can vary due to a number of factors that cannot always be clearly identified. This includes the transmitter frequency (as compared to the antenna's design or resonant frequency), the antenna's height above and quality of the ground, proximity to large metal structures, and variations in the exact size of the conductors used to construct the antenna. When an antenna and feed line do not have matching impedances, the transmitter sees an unexpected impedance, where it might not be able to produce its full power, and can even damage the transmitter in some cases. The reflected power in the transmission line increases the average current and therefore losses in the transmission line compared to power actually delivered to the load. It is the interaction of these reflected waves with forward waves which causes standing wave patterns, with the negative repercussions we have noted. Matching the impedance of the antenna to the impedance of the feed line can sometimes be accomplished through adjusting the antenna itself, but otherwise is possible using an antenna tuner, an impedance matching device. Installing the tuner between the feed line and the antenna allows for the feed line to see a load close to its characteristic impedance, while sending most of the transmitter's power (a small amount may be dissipated within the tuner) to be radiated by the antenna despite its otherwise unacceptable feed point impedance. Installing a tuner in between the transmitter and the feed line can also transform the impedance seen at the transmitter end of the feed line to one preferred by the transmitter. However, in the latter case, the feed line still has a high SWR present, with the resulting increased feed line losses unmitigated. The magnitude of those losses are dependent on the type of transmission line, and its length. They always increase with frequency. For example, a certain antenna used well away from its resonant frequency may have an SWR of 6:1. For a frequency of 3.5 MHz, with that antenna fed through 75 meters of RG-8A coax, the loss due to standing waves would be 2.2 dB. However the same 6:1 mismatch through 75 meters of RG-8A coax would incur 10.8 dB of loss at 146 MHz. Thus, a better match of the antenna to the feed line, that is, a lower SWR, becomes increasingly important with increasing frequency, even if the transmitter is able to accommodate the impedance seen (or an antenna tuner is used between the transmitter and feed line). Certain types of transmissions can suffer other negative effects from reflected waves on a transmission line. Analog TV can experience "ghosts" from delayed signals bouncing back and forth on a long line. 
FM stereo can also be affected and digital signals can experience delayed pulses leading to bit errors. Whenever the delay times for a signal going back down and then again up the line are comparable to the modulation time constants, effects occur. For this reason, these types of transmissions require a low SWR on the feedline, even if SWR induced loss might be acceptable and matching is done at the transmitter. Methods of measuring standing wave ratio Many different methods can be used to measure standing wave ratio. The most intuitive method uses a slotted line which is a section of transmission line with an open slot which allows a probe to detect the actual voltage at various points along the line. Thus the maximum and minimum values can be compared directly. This method is used at VHF and higher frequencies. At lower frequencies, such lines are impractically long. Directional couplers can be used at HF through microwave frequencies. Some are a quarter wave or more long, which restricts their use to the higher frequencies. Other types of directional couplers sample the current and voltage at a single point in the transmission path and mathematically combine them in such a way as to represent the power flowing in one direction. The common type of SWR / power meter used in amateur operation may contain a dual directional coupler. Other types use a single coupler which can be rotated 180 degrees to sample power flowing in either direction. Unidirectional couplers of this type are available for many frequency ranges and power levels and with appropriate coupling values for the analog meter used. The forward and reflected power measured by directional couplers can be used to calculate SWR. The computations can be done mathematically in analog or digital form or by using graphical methods built into the meter as an additional scale or by reading from the crossing point between two needles on the same meter. The above measuring instruments can be used "in line" that is, the full power of the transmitter can pass through the measuring device so as to allow continuous monitoring of SWR. Other instruments, such as network analyzers, low power directional couplers and antenna bridges use low power for the measurement and must be connected in place of the transmitter. Bridge circuits can be used to directly measure the real and imaginary parts of a load impedance and to use those values to derive SWR. These methods can provide more information than just SWR or forward and reflected power. Stand alone antenna analyzers use various measuring methods and can display SWR and other parameters plotted against frequency. By using directional couplers and a bridge in combination, it is possible to make an in line instrument that reads directly in complex impedance or in SWR. Stand alone antenna analyzers also are available that measure multiple parameters. Power standing wave ratio The term power standing wave ratio (PSWR) is sometimes referred to, and defined as, the square of the voltage standing wave ratio. The term is widely cited as "misleading". However it does correspond to one type of measurement of SWR using what was formerly a standard measuring instrument at microwave frequencies, the slotted line. The slotted line is a waveguide (or air-filled coaxial line) in which a small sensing antenna which is part of a crystal detector or detector is placed in the electric field in the line. 
The voltage induced in the antenna is rectified by either a point contact diode (crystal rectifier) or a Schottky barrier diode that is incorporated in the detector. These detectors have a square law output for low levels of input. Readings therefore corresponded to the square of the electric field along the slot, E2(x), with maximum and minimum readings of E2max and E2min found as the probe is moved along the slot. The ratio of these yields the square of the SWR, the so-called PSWR. This technique of rationalization of terms is fraught with problems. The square law behavior of the detector diode is exhibited only when the voltage across the diode is below the knee of the diode. Once the detected voltage exceeds the knee, the response of the diode becomes nearly linear. In this mode the diode and its associated filtering capacitor produce a voltage that is proportional to the peak of the sampled voltage. The operator of such a detector would not have a ready indication as to the mode in which the detector diode is operating and therefore differentiating the results between SWR or so called PSWR is not practical. Perhaps even worse, is the common case where the minimum detected voltage is below the knee and the maximum voltage is above the knee. In this case, the computed results are largely meaningless. Thus the terms PSWR and Power Standing Wave Ratio are deprecated and should be considered only from a legacy measurement perspective. Implications of SWR on medical applications SWR can also have a detrimental impact upon the performance of microwave-based medical applications. In microwave electrosurgery an antenna that is placed directly into tissue may not always have an optimal match with the feedline resulting in an SWR. The presence of SWR can affect monitoring components used to measure power levels impacting the reliability of such measurements. See also References Further reading External links — A web application that draws the Standing Wave Diagram and calculates the SWR, input impedance, reflection coefficient and more — A flash demonstration of transmission line reflection and SWR — An online conversion tool between SWR, return loss and reflection coefficient — Series of pages dealing with all aspects of VSWR, reflection coefficient, return loss, practical aspects, measurement, etc. Antennas (radio) Electronics concepts Wave mechanics Radio electronics Engineering ratios
Standing wave ratio
[ "Physics", "Mathematics", "Engineering" ]
3,963
[ "Radio electronics", "Physical phenomena", "Metrics", "Engineering ratios", "Quantity", "Classical mechanics", "Waves", "Wave mechanics" ]
41,789
https://en.wikipedia.org/wiki/Thermodynamic%20temperature
Thermodynamic temperature is a quantity defined in thermodynamics as distinct from kinetic theory or statistical mechanics. Historically, thermodynamic temperature was defined by Lord Kelvin in terms of a macroscopic relation between thermodynamic work and heat transfer as defined in thermodynamics, but the kelvin was redefined by international agreement in 2019 in terms of phenomena that are now understood as manifestations of the kinetic energy of free motion of microscopic particles such as atoms, molecules, and electrons. From the thermodynamic viewpoint, for historical reasons, because of how it is defined and measured, this microscopic kinetic definition is regarded as an "empirical" temperature. It was adopted because in practice it can generally be measured more precisely than can Kelvin's thermodynamic temperature. A thermodynamic temperature of zero is of particular importance for the third law of thermodynamics. By convention, it is reported on the Kelvin scale of temperature in which the unit of measurement is the kelvin (unit symbol: K). For comparison, a temperature of 295 K corresponds to 21.85 °C and 71.33 °F. Overview Thermodynamic temperature, as distinct from SI temperature, is defined in terms of a macroscopic Carnot cycle. Thermodynamic temperature is of importance in thermodynamics because it is defined in purely thermodynamic terms. SI temperature is conceptually far different from thermodynamic temperature. Thermodynamic temperature was rigorously defined historically long before there was a fair knowledge of microscopic particles such as atoms, molecules, and electrons. The International System of Units (SI) specifies the international absolute scale for measuring temperature, and the unit of measure kelvin (unit symbol: K) for specific values along the scale. The kelvin is also used for denoting temperature intervals (a span or difference between two temperatures) as per the following example usage: "A 60/40 tin/lead solder is non-eutectic and is plastic through a range of 5 kelvins as it solidifies." A temperature interval of one degree Celsius is the same magnitude as one kelvin. The magnitude of the kelvin was redefined in 2019 in relation to the physical property underlying thermodynamic temperature: the kinetic energy of atomic free particle motion. The revision fixed the Boltzmann constant at exactly (J/K). The microscopic property that imbues material substances with a temperature can be readily understood by examining the ideal gas law, which relates, per the Boltzmann constant, how heat energy causes precisely defined changes in the pressure and temperature of certain gases. This is because monatomic gases like helium and argon behave kinetically like freely moving perfectly elastic and spherical billiard balls that move only in a specific subset of the possible motions that can occur in matter: that comprising the three translational degrees of freedom. The translational degrees of freedom are the familiar billiard ball-like movements along the X, Y, and Z axes of 3D space (see Fig. 1, below). This is why the noble gases all have the same specific heat capacity per atom and why that value is lowest of all the gases. Molecules (two or more chemically bound atoms), however, have internal structure and therefore have additional internal degrees of freedom (see Fig. 3, below), which makes molecules absorb more heat energy for any given amount of temperature rise than do the monatomic gases. 
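The temperature comparison quoted above can be reproduced with the standard scale conversions; a minimal sketch:

```python
# Hedged sketch: Kelvin to Celsius and Fahrenheit conversions used in the comparison above.
def kelvin_to_celsius(T_K):
    return T_K - 273.15

def kelvin_to_fahrenheit(T_K):
    return T_K * 9.0 / 5.0 - 459.67

T = 295.0
print(kelvin_to_celsius(T))      # 21.85 degrees Celsius
print(kelvin_to_fahrenheit(T))   # 71.33 degrees Fahrenheit
```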
Heat energy is born in all available degrees of freedom; this is in accordance with the equipartition theorem, so all available internal degrees of freedom have the same temperature as their three external degrees of freedom. However, the property that gives all gases their pressure, which is the net force per unit area on a container arising from gas particles recoiling off it, is a function of the kinetic energy borne in the freely moving atoms' and molecules' three translational degrees of freedom. Fixing the Boltzmann constant at a specific value, along with other rule making, had the effect of precisely establishing the magnitude of the unit interval of SI temperature, the kelvin, in terms of the average kinetic behavior of the noble gases. Moreover, the starting point of the thermodynamic temperature scale, absolute zero, was reaffirmed as the point at which zero average kinetic energy remains in a sample; the only remaining particle motion being that comprising random vibrations due to zero-point energy. Absolute zero of temperature Temperature scales are numerical. The numerical zero of a temperature scale is not bound to the absolute zero of temperature. Nevertheless, some temperature scales have their numerical zero coincident with the absolute zero of temperature. Examples are the International SI temperature scale, the Rankine temperature scale, and the thermodynamic temperature scale. Other temperature scales have their numerical zero far from the absolute zero of temperature. Examples are the Fahrenheit scale and the Celsius scale. At the zero point of thermodynamic temperature, absolute zero, the particle constituents of matter have minimal motion and can become no colder. Absolute zero, which is a temperature of zero kelvins (0 K), precisely corresponds to −273.15 °C and −459.67 °F. Matter at absolute zero has no remaining transferable average kinetic energy and the only remaining particle motion is due to an ever-pervasive quantum mechanical phenomenon called ZPE (zero-point energy). Though the atoms in, for instance, a container of liquid helium that was precisely at absolute zero would still jostle slightly due to zero-point energy, a theoretically perfect heat engine with such helium as one of its working fluids could never transfer any net kinetic energy (heat energy) to the other working fluid and no thermodynamic work could occur. Temperature is generally expressed in absolute terms when scientifically examining temperature's interrelationships with certain other physical properties of matter such as its volume or pressure (see Gay-Lussac's law), or the wavelength of its emitted black-body radiation. Absolute temperature is also useful when calculating chemical reaction rates (see Arrhenius equation). Furthermore, absolute temperature is typically used in cryogenics and related phenomena like superconductivity, as per the following example usage: "Conveniently, tantalum's transition temperature (T) of 4.4924 kelvin is slightly above the 4.2221 K boiling point of helium." Boltzmann constant The Boltzmann constant and its related formulas describe the realm of particle kinetics and velocity vectors whereas ZPE (zero-point energy) is an energy field that jostles particles in ways described by the mathematics of quantum mechanics. In atomic and molecular collisions in gases, ZPE introduces a degree of chaos, i.e., unpredictability, to rebound kinetics; it is as likely that there will be less ZPE-induced particle motion after a given collision as more. 
This random nature of ZPE is why it has no net effect upon either the pressure or volume of any bulk quantity (a statistically significant quantity of particles) of gases. However, in temperature condensed matter; e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 2.5 MPa (25 bar)), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy. Rankine scale Though there have been many other temperature scales throughout history, there have been only two scales for measuring thermodynamic temperature which have absolute zero as their null point (0): The Kelvin scale and the Rankine scale. Throughout the scientific world where modern measurements are nearly always made using the International System of Units, thermodynamic temperature is measured using the Kelvin scale. The Rankine scale is part of English engineering units and finds use in certain engineering fields, particularly in legacy reference works. The Rankine scale uses the degree Rankine (symbol: °R) as its unit, which is the same magnitude as the degree Fahrenheit (symbol: °F). A unit increment of one kelvin is exactly 1.8 times one degree Rankine; thus, to convert a specific temperature on the Kelvin scale to the Rankine scale, , and to convert from a temperature on the Rankine scale to the Kelvin scale, . Consequently, absolute zero is "0" for both scales, but the melting point of water ice (0 °C and 273.15 K) is 491.67 °R. To convert temperature intervals (a span or difference between two temperatures), the formulas from the preceding paragraph are applicable; for instance, an interval of 5 kelvin is precisely equal to an interval of 9 degrees Rankine. Modern redefinition of the kelvin For 65 years, between 1954 and the 2019 revision of the SI, a temperature interval of one kelvin was defined as the difference between the triple point of water and absolute zero. The 1954 resolution by the International Bureau of Weights and Measures (known by the French-language acronym BIPM), plus later resolutions and publications, defined the triple point of water as precisely 273.16 K and acknowledged that it was "common practice" to accept that due to previous conventions (namely, that 0 °C had long been defined as the melting point of water and that the triple point of water had long been experimentally determined to be indistinguishably close to 0.01 °C), the difference between the Celsius scale and Kelvin scale is accepted as 273.15 kelvins; which is to say, 0 °C corresponds to 273.15 kelvins. The net effect of this as well as later resolutions was twofold: 1) they defined absolute zero as precisely 0 K, and 2) they defined that the triple point of special isotopically controlled water called Vienna Standard Mean Ocean Water occurred at precisely 273.16 K and 0.01 °C. One effect of the aforementioned resolutions was that the melting point of water, while very close to 273.15 K and 0 °C, was not a defining value and was subject to refinement with more precise measurements. 
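A minimal sketch of the Kelvin/Rankine conversions described above, reproducing the quoted values:

```python
# Hedged sketch: the Kelvin/Rankine relationship is an exact factor of 1.8.
def kelvin_to_rankine(T_K):
    return T_K * 1.8

def rankine_to_kelvin(T_R):
    return T_R / 1.8

print(kelvin_to_rankine(273.15))   # 491.67 degrees Rankine, the melting point of water ice
print(kelvin_to_rankine(0.0))      # absolute zero is 0 on both scales
print(kelvin_to_rankine(5.0))      # a 5-kelvin interval equals a 9-degree-Rankine interval
```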
The 1954 BIPM standard did a good job of establishing—within the uncertainties due to isotopic variations between water samples—temperatures around the freezing and triple points of water, but required that intermediate values between the triple point and absolute zero, as well as extrapolated values from room temperature and beyond, to be experimentally determined via apparatus and procedures in individual labs. This shortcoming was addressed by the International Temperature Scale of 1990, or ITS90, which defined 13 additional points, from 13.8033 K, to 1,357.77 K. While definitional, ITS90 had—and still has—some challenges, partly because eight of its extrapolated values depend upon the melting or freezing points of metal samples, which must remain exceedingly pure lest their melting or freezing points be affected—usually depressed. The 2019 revision of the SI was primarily for the purpose of decoupling much of the SI system's definitional underpinnings from the kilogram, which was the last physical artifact defining an SI base unit (a platinum/iridium cylinder stored under three nested bell jars in a safe located in France) and which had highly questionable stability. The solution required that four physical constants, including the Boltzmann constant, be definitionally fixed. Assigning the Boltzmann constant a precisely defined value had no practical effect on modern thermometry except for the most exquisitely precise measurements. Before the revision, the triple point of water was exactly 273.16 K and 0.01 °C and the Boltzmann constant was experimentally determined to be , where the "(51)" denotes the uncertainty in the two least significant digits (the 03) and equals a relative standard uncertainty of 0.37 ppm. Afterwards, by defining the Boltzmann constant as exactly , the 0.37 ppm uncertainty was transferred to the triple point of water, which became an experimentally determined value of (). That the triple point of water ended up being exceedingly close to 273.16 K after the SI revision was no accident; the final value of the Boltzmann constant was determined, in part, through clever experiments with argon and helium that used the triple point of water for their key reference temperature. Notwithstanding the 2019 revision, water triple-point cells continue to serve in modern thermometry as exceedingly precise calibration references at 273.16 K and 0.01 °C. Moreover, the triple point of water remains one of the 14 calibration points comprising ITS90, which spans from the triple point of hydrogen (13.8033 K) to the freezing point of copper (1,357.77 K), which is a nearly hundredfold range of thermodynamic temperature. Relationship of temperature, motions, conduction, and thermal energy Nature of kinetic energy, translational motion, and temperature The thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the mean average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three X, Y, and Z–axis dimensions of space means the particles move in the three spatial degrees of freedom. This particular form of kinetic energy is sometimes referred to as kinetic temperature. Translational motion is but one form of heat energy and is what gives gases not only their temperature, but also their pressure and the vast majority of their volume. 
This relationship between the temperature, pressure, and volume of gases is established by the ideal gas law's formula and is embodied in the gas laws. Though the kinetic energy borne exclusively in the three translational degrees of freedom comprise the thermodynamic temperature of a substance, molecules, as can be seen in Fig. 3, can have other degrees of freedom, all of which fall under three categories: bond length, bond angle, and rotational. All three additional categories are not necessarily available to all molecules, and even for molecules that can experience all three, some can be "frozen out" below a certain temperature. Nonetheless, all those degrees of freedom that are available to the molecules under a particular set of conditions contribute to the specific heat capacity of a substance; which is to say, they increase the amount of heat (kinetic energy) required to raise a given amount of the substance by one kelvin or one degree Celsius. The relationship of kinetic energy, mass, and velocity is given by the formula . Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity. The extent to which the kinetic energy of translational motion in a statistically significant collection of atoms or molecules in a gas contributes to the pressure and volume of that gas is a proportional function of thermodynamic temperature as established by the Boltzmann constant (symbol: ). The Boltzmann constant also relates the thermodynamic temperature of a gas to the mean kinetic energy of an individual particles' translational motion as follows: where: is the mean kinetic energy for an individual particle is the thermodynamic temperature of the bulk quantity of the substance While the Boltzmann constant is useful for finding the mean kinetic energy in a sample of particles, it is important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occurs across a wide range of speeds (see animation in Fig. 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s (0.2092 s/km). However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x–axis to the right). This graph uses inverse speed for its x-axis so the shape of the curve can easily be compared to the curves in Fig. 5 below. In both graphs, zero on the x-axis represents infinite temperature. Additionally, the x- and y-axes on both graphs are scaled proportionally. High speeds of translational motion Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produces Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast and temperatures close to absolute zero are required to directly observe them. 
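The kinetic relations above can be checked against the helium example of Fig. 2. This sketch uses the exact (post-2019) Boltzmann constant and a standard value for the helium-4 atomic mass; the Maxwell–Boltzmann most probable speed formula v_p = sqrt(2·k_B·T/m) is standard kinetic theory rather than a formula stated explicitly in the text.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K (exact since the 2019 SI revision)
u = 1.66053906660e-27       # atomic mass unit in kg

T = 5500.0                  # temperature of the helium example in Fig. 2
m_He = 4.002602 * u         # approximate mass of a helium-4 atom

E_mean = 1.5 * k_B * T                       # mean translational kinetic energy per particle
v_p = math.sqrt(2 * k_B * T / m_He)          # most probable speed

print(f"mean translational kinetic energy: {E_mean:.3e} J")
print(f"most probable speed: {v_p / 1000:.3f} km/s")   # ~4.78 km/s, matching the figure caption
```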
For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool cesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second to in order to calculate their temperature. Formulas for calculating the velocity and speed of translational motion are given in the following footnote. It is neither difficult to imagine atomic motions due to kinetic temperature, nor distinguish between such motions and those due to zero-point energy. Consider the following hypothetical thought experiment, as illustrated in Fig. 2.5 at left, with an atom that is exceedingly close to absolute zero. Imagine peering through a common optical microscope set to 400 power, which is about the maximum practical magnification for optical microscopes. Such microscopes generally provide fields of view a bit over 0.4 mm in diameter. At the center of the field of view is a single levitated argon atom (argon comprises about 0.93% of air) that is illuminated and glowing against a dark backdrop. If this argon atom was at a beyond-record-setting one-trillionth of a kelvin above absolute zero, and was moving perpendicular to the field of view towards the right, it would require 13.9 seconds to move from the center of the image to the 200-micron tick mark; this travel distance is about the same as the width of the period at the end of this sentence on modern computer monitors. As the argon atom slowly moved, the positional jitter due to zero-point energy would be much less than the 200-nanometer (0.0002 mm) resolution of an optical microscope. Importantly, the atom's translational velocity of 14.43 microns per second constitutes all its retained kinetic energy due to not being precisely at absolute zero. Were the atom precisely at absolute zero, imperceptible jostling due to zero-point energy would cause it to very slightly wander, but the atom would perpetually be located, on average, at the same spot within the field of view. This is analogous to a boat that has had its motor turned off and is now bobbing slightly in relatively calm and windless ocean waters; even though the boat randomly drifts to and fro, it stays in the same spot in the long term and makes no headway through the water. Accordingly, an atom that was precisely at absolute zero would not be "motionless", and yet, a statistically significant collection of such atoms would have zero net kinetic energy available to transfer to any other collection of atoms. This is because regardless of the kinetic temperature of the second collection of atoms, they too experience the effects of zero-point energy. Such are the consequences of statistical mechanics and the nature of thermodynamics. Internal motions of molecules and internal energy As mentioned above, there are other ways molecules can jiggle besides the three translational degrees of freedom that imbue substances with their kinetic temperature. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements; these are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom (the X, Y, and Z axis). 
Kinetic energy is stored in molecules' internal degrees of freedom, which gives them an internal temperature. Even though these motions are called "internal", the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as internal energy is removed from molecules, both their kinetic temperature (the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active degrees of freedom available to the particles. Since the internal temperature of molecules are usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum. The kinetic energy stored internally in molecules causes substances to contain more heat energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions, is not contributing to the molecules' translational motions at that same instant. This extra kinetic energy simply increases the amount of internal energy that substance absorbs for a given temperature rise. This property is known as a substance's specific heat capacity. Different molecules absorb different amounts of internal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances' molecules possess more internal degrees of freedom than others do. For instance, room-temperature nitrogen, which is a diatomic molecule, has five active degrees of freedom: the three comprising translational motion plus two rotational degrees of freedom internally. Not surprisingly, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases. Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of heat energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom. Diffusion of thermal energy: entropy, phonons, and mobile conduction electrons Heat conduction is the diffusion of thermal energy from hot parts of a system to cold parts. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases). One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. 
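A short sketch of the equipartition argument above: with a constant-volume molar heat capacity of C_v = (f/2)·R for f active degrees of freedom, room-temperature nitrogen comes out at five-thirds the molar heat capacity of a monatomic gas.

```python
# Hedged sketch of the equipartition result quoted above.
R = 8.314462618   # molar gas constant, J/(mol*K)

C_v_monatomic = (3 / 2) * R   # helium, argon: three translational degrees of freedom
C_v_diatomic  = (5 / 2) * R   # room-temperature nitrogen: three translational + two rotational

print(round(C_v_monatomic, 2), "J/(mol*K)")   # ~12.47
print(round(C_v_diatomic, 2), "J/(mol*K)")    # ~20.79
print(round(C_v_diatomic / C_v_monatomic, 3)) # 5/3, as the equipartition theorem predicts
```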
In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more. Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at the speed of sound of a given substance. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam. Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals' thermal conductivity and their electrical conductivity. Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only about that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. By Newton's third law of motion, the propellant gases push on the bullet and on the rifle with forces of equal magnitude and opposite direction; however, the bullet accelerates far faster than the rifle. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, thermal energy is readily borne by mobile conduction electrons. Additionally, because they are delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons. Diffusion of thermal energy: black-body radiation Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero also emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body. 
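The position of the Planck-curve peak mentioned above follows Wien's displacement law, λ_peak = b/T. A brief illustrative sketch (added here, not from the source):

```python
# Minimal sketch: peak emittance wavelength of a black-body from Wien's
# displacement law, lambda_peak = b / T.
WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_m(temperature_k):
    return WIEN_B / temperature_k

print(peak_wavelength_m(296))    # room temperature   -> ~9.8e-6 m (mid-infrared)
print(peak_wavelength_m(5772))   # Sun's photosphere  -> ~5.0e-7 m (visible light)
print(peak_wavelength_m(0.001))  # 1 mK               -> ~2.9 m (radio wavelengths)
```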
Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see the table of thermodynamic temperatures below). Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process. As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits 60 times as much radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system. Table of thermodynamic temperatures The table below shows various points on the thermodynamic scale, in order of increasing temperature. Heat of phase changes The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is phase transitions, which are the potential energy of molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin. Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box heading from blue to green. At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules, converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy cannot make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance. 
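The factor of 60 quoted above follows directly from the fourth-power dependence in the Stefan–Boltzmann law; a quick check (illustrative only):

```python
# Minimal sketch: ratio of black-body radiant power at two temperatures,
# per the Stefan-Boltzmann law (P proportional to T^4).
def radiant_power_ratio(t_hot_k, t_cold_k):
    return (t_hot_k / t_cold_k) ** 4

print(radiant_power_ratio(824, 296))  # -> ~60, as stated in the text
```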
As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it is called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements. If the substance is one of the monatomic gases (which have little tendency to form molecular bonds) the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole. Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times. The phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase. Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above); water vapor (gas phase) liquefies on the skin, releasing a large amount of energy (enthalpy) to the environment, including the skin, and resulting in skin damage. In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity); the evaporation of water on the skin takes a large amount of energy from the environment, including the skin, reducing the skin's temperature. Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when the pools are not in use) are so effective at reducing heating costs: they prevent evaporation, and so limit the energy that evaporation would otherwise carry away from the water. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water by about 8.4 °C. Internal energy The total energy of all translational and internal particle motions, including that of conduction electrons, plus the potential energy of phase changes, plus the zero-point energy of a substance, comprises its internal energy. Internal energy at absolute zero As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic energy or temperature decreases); the internal motions of molecules diminish (their internal energy or temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower; and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When particles of a substance are as close as possible to complete rest and retain only ZPE (zero-point energy)-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T = 0). 
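The pool-cover figure above follows from the ratios given earlier in this section. A minimal sketch, assuming the quoted value of roughly 540 times the energy of a one-degree temperature change for vaporization:

```python
# Minimal sketch: evaporative temperature drop of a pool, using the ~540x
# vaporization-to-one-degree ratio quoted in the text.
def evaporative_temperature_drop_c(evaporated_depth_m, pool_depth_m, ratio=540.0):
    # Heat removed by evaporating a thin surface layer, spread over the full depth.
    return ratio * evaporated_depth_m / pool_depth_m

print(evaporative_temperature_drop_c(0.020, 1.29))  # -> ~8.4 degrees Celsius
```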
Whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero internal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T = 0 helium remains liquid at room pressure (Fig. 9 at right) and must be under a pressure of at least about 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid–solid phase transitions wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one. The above complexities make for rather cumbersome blanket statements regarding the internal energy in T = 0 substances. Regardless of pressure though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice such as those with a closest-packed arrangement (see Fig. 8, above left) contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy. One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration). Lastly, all T = 0 substances contain zero kinetic thermal energy. Practical applications for thermodynamic temperature Thermodynamic temperature is useful not only for scientists; it can also be useful for lay-people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay-Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a cold gauge pressure of 200 kPa, then its absolute pressure is roughly 300 kPa. Room temperature ("cold" in tire terms) is 296 K. If the tire temperature is 20 °C hotter (20 kelvins), the solution is calculated as 316 K ÷ 296 K = 1.068, or 6.8% greater thermodynamic temperature and absolute pressure; that is, an absolute pressure of about 320 kPa, which is a gauge pressure of about 220 kPa. Relationship to ideal gas law The thermodynamic temperature is closely linked to the ideal gas law and its consequences. It can be linked also to the second law of thermodynamics. The thermodynamic temperature can be shown to have special properties, and in particular can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio of two temperatures, T1 and T2, is the same in all absolute scales. Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the heat exchanged between its individual particles cancels out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena. 
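The tire-pressure example above reduces to a one-line application of Gay-Lussac's law once everything is expressed in absolute units. A minimal sketch (the 101.325 kPa atmospheric offset is an assumption made for this illustration):

```python
# Minimal sketch: hot-tire gauge pressure via Gay-Lussac's law, working in
# absolute temperature and absolute pressure throughout.
ATMOSPHERIC_KPA = 101.325  # assumed ambient pressure

def hot_gauge_pressure_kpa(cold_gauge_kpa, cold_temp_k, hot_temp_k):
    cold_absolute = cold_gauge_kpa + ATMOSPHERIC_KPA
    hot_absolute = cold_absolute * hot_temp_k / cold_temp_k
    return hot_absolute - ATMOSPHERIC_KPA

print(hot_gauge_pressure_kpa(200.0, 296.0, 316.0))  # -> ~220 kPa gauge
```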
Loosely stated, temperature differences dictate the direction of heat flow between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy, consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher temperature heat source, T_H, and a lower temperature heat sink, T_C, through a gas filled piston. The work done per cycle is equal in magnitude to the net heat taken up, which is the sum of the heat q_H > 0 taken up by the engine from the high-temperature source plus the waste heat given off by the engine, q_C < 0. The efficiency of the engine is the work divided by the heat put into the system, or efficiency = w_cy/q_H = (q_H + q_C)/q_H = 1 + q_C/q_H (1), where w_cy is the work done per cycle. Thus the efficiency depends only on q_C/q_H. Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T_1 and T_2 must have the same efficiency, that is to say, the efficiency is a function of the temperatures only: q_C/q_H = f(T_H, T_C) (2). In addition, a reversible heat engine operating between a pair of thermal reservoirs at temperatures T_1 and T_3 must have the same efficiency as one consisting of two cycles, one between T_1 and another (intermediate) temperature T_2, and the second between T_2 and T_3. If this were not the case, then energy (in the form of q) will be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles as an engine design choice, and any reversible engine between the same reservoirs at T_1 and T_3 must be equally efficient regardless of the engine design. If we choose engines such that the work done by the one cycle engine and the two cycle engine are the same, then the efficiency of each heat engine is written as below: efficiency_1 = 1 + q_3/q_1 = 1 + f(T_1, T_3), efficiency_2 = 1 + q_2/q_1 = 1 + f(T_1, T_2), efficiency_3 = 1 + q_3/q_2 = 1 + f(T_2, T_3). Here, engine 1 is the one cycle engine, and engines 2 and 3 make the two cycle engine where there is the intermediate reservoir at T_2. We also have used the fact that the heat q_2 passes through the intermediate thermal reservoir at T_2 without losing its energy. (I.e., q_2 is not lost during its passage through the reservoir at T_2.) This fact can be proved by the following. In order to have consistency in the last equation, the heat q_2 flowing from engine 2 to the intermediate reservoir must be equal to the heat flowing out from the reservoir to engine 3. With this understanding of q_1, q_2 and q_3, mathematically, f(T_1, T_3) = q_3/q_1 = (q_2 q_3)/(q_1 q_2) = f(T_1, T_2)·f(T_2, T_3). But since the first function is not a function of T_2, the product of the final two functions must result in the removal of T_2 as a variable. The only way is therefore to define the function f as follows: f(T_1, T_2) = g(T_2)/g(T_1) and f(T_2, T_3) = g(T_3)/g(T_2), so that f(T_1, T_3) = q_3/q_1 = g(T_3)/g(T_1). I.e. the ratio of heat exchanged is a function of the respective temperatures at which they occur. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (i.e. the triple point of water), we establish the thermodynamic temperature scale. 
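With the convention g(T) = T chosen above, the Carnot efficiency reduces to 1 − T_C/T_H, as made explicit in the next paragraph. A brief numerical illustration (added here, not from the source):

```python
# Minimal sketch: Carnot efficiency between two reservoirs using the
# thermodynamic-temperature convention g(T) = T, so efficiency = 1 - Tc/Th.
def carnot_efficiency(t_hot_k, t_cold_k):
    return 1.0 - t_cold_k / t_hot_k

print(carnot_efficiency(773.15, 293.15))  # e.g. a 500 C source, 20 C sink -> ~0.62
print(carnot_efficiency(296.0, 296.0))    # no temperature difference      -> 0.0
```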
Such a definition coincides with that of the ideal gas derivation; also it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of T_H and T_C, and hence derive that the (complete) Carnot cycle is isentropic: q_C/q_H = f(T_H, T_C) = −T_C/T_H (3). Substituting this back into our first formula for efficiency yields a relationship in terms of temperature: efficiency = 1 − T_C/T_H (4). Note that for T_C = 0 the efficiency is 100% and that efficiency becomes greater than 100% for T_C < 0, which is unrealistic. Subtracting 1 from the right hand side of Equation (4) and the middle portion gives q_C/q_H = −T_C/T_H and thus q_H/T_H + q_C/T_C = 0. The generalization of this equation is the Clausius theorem, which proposes the existence of a state function S (i.e., a function which depends only on the state of the system, not on how it reached that state) defined (up to an additive constant) by S = ∫ dq_rev/T (5), where the subscript rev indicates heat transfer in a reversible process. The function S is the entropy of the system, mentioned previously, and the change of S around any cycle is zero (as is necessary for any state function). Equation 5 can be rearranged to get an alternative definition for temperature in terms of entropy and heat (to avoid a logic loop, we should first define entropy through statistical mechanics): T = dq_rev/dS. For a constant-volume system (so no mechanical work is done) in which the entropy S(U) is a function of its internal energy U, dU = dq_rev, and the thermodynamic temperature is therefore given by 1/T = dS/dU, so that the reciprocal of the thermodynamic temperature is the rate of change of entropy with respect to the internal energy at constant volume. History Guillaume Amontons (1663–1705) published two papers in 1702 and 1703 that may be used to credit him as being the first researcher to deduce the existence of a fundamental (thermodynamic) temperature scale featuring an absolute zero. He made the discovery while endeavoring to improve upon the air thermometers in use at the time. His J-tube thermometers comprised a mercury column that was supported by a fixed mass of air entrapped within the sensing portion of the thermometer. In thermodynamic terms, his thermometers relied upon the volume / temperature relationship of gas under constant pressure. His measurements of the boiling point of water and the melting point of ice showed that regardless of the mass of air trapped inside his thermometers or the weight of mercury the air was supporting, the reduction in air volume at the ice point was always the same ratio. This observation led him to posit that a sufficient reduction in temperature would reduce the air volume to zero. In fact, his calculations projected that absolute zero was equivalent to −240 °C—only 33.15 degrees short of the true value of −273.15 °C. Amontons' discovery of a one-to-one relationship between absolute temperature and absolute pressure was rediscovered a century later and popularized within the scientific community by Joseph Louis Gay-Lussac. Today, this principle of thermodynamics is commonly known as Gay-Lussac's law but is also known as Amontons' law. In 1742, Anders Celsius (1701–1744) created a "backwards" version of the modern Celsius temperature scale. In Celsius's original scale, zero represented the boiling point of water and 100 represented the melting point of ice. In his paper Observations of two persistent degrees on a thermometer, he recounted his experiments showing that ice's melting point was effectively unaffected by pressure. He also determined with remarkable precision how water's boiling point varied as a function of atmospheric pressure. 
He proposed that zero on his temperature scale (water's boiling point) would be calibrated at the mean barometric pressure at mean sea level. Coincident with the death of Anders Celsius in 1744, the botanist Carl Linnaeus (1707–1778) effectively reversed Celsius's scale upon receipt of his first thermometer featuring a scale where zero represented the melting point of ice and 100 represented water's boiling point. The custom-made Linnaeus-thermometer, for use in his greenhouses, was made by Daniel Ekström, Sweden's leading maker of scientific instruments at the time. For the next 204 years, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale. Temperatures on the centigrade scale were often reported simply as degrees or, when greater specificity was desired, degrees centigrade. The symbol for temperature values on this scale was °C (in several formats over the years). Because the term centigrade was also the French-language name for a unit of angular measurement (one-hundredth of a right angle) and had a similar connotation in other languages, the term "centesimal degree" was used when very precise, unambiguous language was required by international standards bodies such as the International Bureau of Weights and Measures (BIPM). The 9th CGPM (General Conference on Weights and Measures) and the CIPM (International Committee for Weights and Measures) formally adopted degree Celsius (symbol: °C) in 1948. In his book Pyrometrie (1777), completed four months before his death, Johann Heinrich Lambert (1728–1777), sometimes incorrectly referred to as Joseph Lambert, proposed an absolute temperature scale based on the pressure/temperature relationship of a fixed volume of gas. This is distinct from the volume/temperature relationship of gas under constant pressure that Guillaume Amontons discovered 75 years earlier. Lambert stated that absolute zero was the point where a simple straight-line extrapolation reached zero gas pressure and was equal to −270 °C. Notwithstanding the work of Guillaume Amontons 85 years earlier, Jacques Alexandre César Charles (1746–1823) is often credited with discovering (circa 1787), but not publishing, that the volume of a gas under constant pressure is proportional to its absolute temperature. The formula he created was V1/T1 = V2/T2. Joseph Louis Gay-Lussac (1778–1850) published work in 1802 (acknowledging the unpublished lab notes of Jacques Charles fifteen years earlier) describing how the volume of gas under constant pressure changes linearly with its absolute (thermodynamic) temperature. This behavior is called Charles's law and is one of the gas laws. His are the first known formulas to use the number 273 for the expansion coefficient of gas relative to the melting point of ice (indicating that absolute zero was equivalent to −273 °C). William Thomson (1824–1907), also known as Lord Kelvin, wrote in his 1848 paper "On an Absolute Thermometric Scale" of the need for a scale whereby infinite cold (absolute zero) was the scale's zero point, and which used the degree Celsius for its unit increment. Like Gay-Lussac, Thomson calculated that absolute zero was equivalent to −273 °C on the air thermometers of the time. This absolute scale is known today as the kelvin thermodynamic temperature scale. Thomson's value of −273 was derived from 0.00366, which was the accepted expansion coefficient of gas per degree Celsius relative to the ice point. 
The inverse of −0.00366 expressed to five significant digits is −273.22 °C, which is remarkably close to the true value of −273.15 °C. In the paper he proposed to define temperature using idealized heat engines. In detail, he proposed that, given three heat reservoirs at temperatures T1 > T2 > T3, if two reversible heat engines (Carnot engines), one working between T1 and T2 and another between T2 and T3, can produce the same amount of mechanical work by letting the same amount of heat pass through, then define T1 − T2 = T2 − T3. Note that like Carnot, Kelvin worked under the assumption that heat is conserved ("the conversion of heat (or caloric) into mechanical effect is probably impossible"), and if heat goes into the heat engine, then heat must come out. Kelvin, realizing after Joule's experiments that heat is not a conserved quantity but is convertible with mechanical work, modified his scale in the 1851 work An Account of Carnot's Theory of the Motive Power of Heat. In this work, he defined the scale so that the ratio of two absolute temperatures equals the ratio of the quantities of heat absorbed and rejected by a reversible engine operating between reservoirs at those temperatures. The above definition fixes the ratios between absolute temperatures, but it does not fix a scale for absolute temperature. For the scale, Thomson proposed to use the Celsius degree, that is, one-hundredth of the interval between the freezing and the boiling point of water. In 1859 Macquorn Rankine (1820–1872) proposed a thermodynamic temperature scale similar to William Thomson's but which used the degree Fahrenheit for its unit increment, that is, one-180th of the interval between the freezing and the boiling point of water. This absolute scale is known today as the Rankine thermodynamic temperature scale. Ludwig Boltzmann (1844–1906) made major contributions to thermodynamics between 1877 and 1884 through an understanding of the role that particle kinetics and black body radiation played. His name is now attached to several of the formulas used today in thermodynamics. Gas thermometry experiments carefully calibrated to the melting point of ice and boiling point of water showed in the 1930s that absolute zero was equivalent to −273.15 °C. Resolution 3 of the 9th General Conference on Weights and Measures (CGPM) in 1948 fixed the triple point of water at precisely 0.01 °C. At this time, the triple point still had no formal definition for its equivalent kelvin value, which the resolution declared "will be fixed at a later date". The implication is that if the value of absolute zero measured in the 1930s was truly −273.15 °C, then the triple point of water (0.01 °C) was equivalent to 273.16 K. Additionally, both the International Committee for Weights and Measures (CIPM) and the CGPM formally adopted the name Celsius for the degree Celsius and the Celsius temperature scale. Resolution 3 of the 10th CGPM in 1954 gave the kelvin scale its modern definition by choosing the triple point of water as its upper defining point (with no change to absolute zero being the null point) and assigning it a temperature of precisely 273.16 kelvins (what was actually written 273.16 degrees Kelvin at the time). This, in combination with Resolution 3 of the 9th CGPM, had the effect of defining absolute zero as being precisely zero kelvins and −273.15 °C. Resolution 3 of the 13th CGPM in 1967/1968 renamed the unit increment of thermodynamic temperature kelvin, symbol K, replacing degree absolute, symbol °K. Further, feeling it useful to more explicitly define the magnitude of the unit increment, the 13th CGPM also decided in Resolution 4 that "The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water". 
The CIPM affirmed in 2005 that for the purposes of delineating the temperature of the triple point of water, the definition of the kelvin thermodynamic temperature scale would refer to water having an isotopic composition defined as being precisely equal to the nominal specification of Vienna Standard Mean Ocean Water. In November 2018, the 26th General Conference on Weights and Measures (CGPM) changed the definition of the kelvin by fixing the Boltzmann constant to 1.380649 × 10⁻²³ when expressed in the unit J/K. This change (and other changes in the definition of SI units) was made effective on the 144th anniversary of the Metre Convention, 20 May 2019. See also :Category:Thermodynamics Absolute zero Hagedorn temperature Adiabatic process Boltzmann constant Carnot heat engine Conversion of scales of temperature Energy conversion efficiency Enthalpy Enthalpy of fusion Enthalpy of vaporization Entropy Equipartition theorem Fahrenheit First law of thermodynamics Freezing Gas laws International System of Quantities International Temperature Scale of 1990 (ITS-90) Ideal gas law Kelvin Laws of thermodynamics Maxwell–Boltzmann distribution Orders of magnitude (temperature) Phase transition Planck's law of black body radiation Rankine scale Specific heat capacity Temperature Thermal radiation Thermodynamic beta Thermodynamic equations Thermodynamic equilibrium Thermodynamics Timeline of heat engine technology Timeline of temperature and pressure measurement technology Triple point Notes In the following notes, wherever numeric equalities are shown in concise form, the two digits between the parentheses denote the uncertainty at 1-σ (1 standard deviation, 68% confidence level) in the two least significant digits of the significand. External links Zero Point Energy and Zero Point Field. A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute. Temperature SI base quantities State functions
Thermodynamic temperature
[ "Physics", "Chemistry", "Mathematics" ]
10,990
[ "State functions", "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical quantities", "SI base quantities", "Intensive quantities", "Quantity", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
41,797
https://en.wikipedia.org/wiki/Time-domain%20reflectometer
A time-domain reflectometer (TDR) is an electronic instrument used to determine the characteristics of electrical lines by observing reflected pulses. It can be used to characterize and locate faults in metallic cables (for example, twisted pair wire or coaxial cable), and to locate discontinuities in a connector, printed circuit board, or any other electrical path. Description A TDR measures reflections along a conductor. In order to measure those reflections, the TDR will transmit an incident signal onto the conductor and listen for its reflections. If the conductor is of a uniform impedance and is properly terminated, then there will be no reflections and the remaining incident signal will be absorbed at the far-end by the termination. If, however, there are impedance variations, then some of the incident signal will be reflected back to the source. A TDR is similar in principle to radar. The impedance of the discontinuity can be determined from the amplitude of the reflected signal. The distance to the reflecting impedance can also be determined from the time that a pulse takes to return. The limitation of this method is the minimum system rise time. The total rise time consists of the combined rise time of the driving pulse and that of the oscilloscope or sampler that monitors the reflections. Method The TDR analysis begins with the propagation of a step or impulse of energy into a system and the subsequent observation of the energy reflected by the system. By analyzing the magnitude, duration and shape of the reflected waveform, the nature of the impedance variation in the transmission system can be determined. If a pure resistive load is placed on the output of the reflectometer and a step signal is applied, a step signal is observed on the display, and its height is a function of the resistance. The magnitude of the step produced by the resistive load may be expressed as a fraction of the input signal as given by ρ = (R_L − Z0)/(R_L + Z0), where Z0 is the characteristic impedance of the transmission line and R_L is the load resistance. Reflection Generally, the reflections will have the same shape as the incident signal, but their sign and magnitude depend on the change in impedance level. If there is a step increase in the impedance, then the reflection will have the same sign as the incident signal; if there is a step decrease in impedance, the reflection will have the opposite sign. The magnitude of the reflection depends not only on the amount of the impedance change, but also upon the loss in the conductor. The reflections are measured at the output/input to the TDR and displayed or plotted as a function of time. Alternatively, the display can be read as a function of cable length because the speed of signal propagation is almost constant for a given transmission medium. Because of its sensitivity to impedance variations, a TDR may be used to verify cable impedance characteristics, splice and connector locations and associated losses, and estimate cable lengths. Incident signal TDRs use different incident signals. Some TDRs transmit a pulse along the conductor; the resolution of such instruments is often the width of the pulse. Narrow pulses can offer good resolution, but they have high frequency signal components that are attenuated in long cables. The shape of the pulse is often a half cycle sinusoid. For longer cables, wider pulse widths are used. Fast rise time steps are also used. Instead of looking for the reflection of a complete pulse, the instrument is concerned with the rising edge, which can be very fast. 
A 1970s technology TDR used steps with a rise time of 25 ps. Still other TDRs transmit complex signals and detect reflections with correlation techniques. See spread-spectrum time-domain reflectometry. Variations and extensions The equivalent device for optical fiber is an optical time-domain reflectometer. Time-domain transmissometry (TDT) is an analogous technique that measures the transmitted (rather than reflected) impulse. Together, they provide a powerful means of analysing electrical or optical transmission media such as coaxial cable and optical fiber. Variations of TDR exist. For example, spread-spectrum time-domain reflectometry (SSTDR) is used to detect intermittent faults in complex and high-noise systems such as aircraft wiring. Coherent optical time domain reflectometry (COTDR) is another variant, used in optical systems, in which the returned signal is mixed with a local oscillator and then filtered to reduce noise. Example traces These traces were produced by a time-domain reflectometer made from common lab equipment connected to a length of coaxial cable having a characteristic impedance of 50 ohms. The propagation velocity of this cable is approximately 66% of the speed of light in vacuum. These traces were produced by a commercial TDR using a step waveform with a 25 ps risetime, a sampling head with a 35 ps risetime, and an SMA cable. The far end of the SMA cable was left open or connected to different adapters. It takes about 3 ns for the pulse to travel down the cable, reflect, and reach the sampling head. A second reflection (at about 6 ns) can be seen in some traces; it is due to the reflection seeing a small mismatch at the sampling head and causing another "incident" wave to travel down the cable. Explanation If the far end of the cable is shorted (that is, terminated with an impedance of zero ohms), then when the rising edge of the pulse is launched down the cable, the voltage at the launching point "steps up" to a given value instantly and the pulse begins propagating in the cable towards the short. When the pulse encounters the short, no energy is absorbed at the far end. Instead, an inverted pulse reflects back from the short towards the launching end. It is only when this reflection finally reaches the launch point that the voltage at this point abruptly drops back to zero, signaling the presence of a short at the end of the cable. That is, the TDR has no indication that there is a short at the end of the cable until its emitted pulse can travel in the cable and the echo can return. It is only after this round-trip delay that the short can be detected by the TDR. With knowledge of the signal propagation speed in the particular cable-under-test, the distance to the short can be measured. A similar effect occurs if the far end of the cable is an open circuit (terminated into an infinite impedance). In this case, though, the reflection from the far end is polarized identically with the original pulse and adds to it rather than cancelling it out. So after a round-trip delay, the voltage at the TDR abruptly jumps to twice the originally-applied voltage. Perfect termination at the far end of the cable would entirely absorb the applied pulse without causing any reflection, rendering the determination of the actual length of the cable impossible. In practice, some small reflection is nearly always observed. The magnitude of the reflection is referred to as the reflection coefficient or ρ. The coefficient ranges from 1 (open circuit) to −1 (short circuit). 
The value of zero means that there is no reflection. The reflection coefficient is calculated as follows: ρ = (Zt − Zo)/(Zt + Zo), where Zo is defined as the characteristic impedance of the transmission medium and Zt is the impedance of the termination at the far end of the transmission line. Any discontinuity can be viewed as a termination impedance and substituted as Zt. This includes abrupt changes in the characteristic impedance. As an example, a trace width on a printed circuit board doubled at its midsection would constitute a discontinuity. Some of the energy will be reflected back to the driving source; the remaining energy will be transmitted. This is also known as a scattering junction. Usage Time domain reflectometers are commonly used for in-place testing of very long cable runs, where it is impractical to dig up or remove what may be a kilometers-long cable. They are indispensable for preventive maintenance of telecommunication lines, as TDRs can detect resistance on joints and connectors as they corrode, and increasing insulation leakage as it degrades and absorbs moisture, long before either leads to catastrophic failures. Using a TDR, it is possible to pinpoint a fault to within centimetres. TDRs are also very useful tools for technical surveillance counter-measures, where they help determine the existence and location of wire taps. The slight change in line impedance caused by the introduction of a tap or splice will show up on the screen of a TDR when connected to a phone line. TDR equipment is also an essential tool in the failure analysis of modern high-frequency printed circuit boards with signal traces crafted to emulate transmission lines. Observing reflections can detect any unsoldered pins of a ball grid array device. Short-circuited pins can also be detected similarly. The TDR principle is used in industrial settings, in situations as diverse as the testing of integrated circuit packages and the measurement of liquid levels. In the former, the time domain reflectometer is used to isolate failing sites within the device package. The latter is primarily limited to the process industry. In level measurement In a TDR-based level measurement device, the device generates an impulse that propagates down a thin waveguide (referred to as a probe) – typically a metal rod or a steel cable. When this impulse hits the surface of the medium to be measured, part of the impulse reflects back up the waveguide. The device determines the fluid level by measuring the time difference between when the impulse was sent and when the reflection returned. The sensors can output the analyzed level as a continuous analog signal or switch output signals. In TDR technology, the impulse velocity is primarily affected by the permittivity of the medium through which the pulse propagates, which can vary greatly with the moisture content and temperature of the medium. In many cases, this effect can be corrected without undue difficulty. In some cases, such as in boiling and/or high temperature environments, the correction can be difficult. In particular, determining the froth (foam) height and the collapsed liquid level in a frothy / boiling medium can be very difficult. Used in anchor cables in dams The Dam Safety Interest Group of CEA Technologies, Inc. (CEATI), a consortium of electrical power organizations, has applied spread-spectrum time-domain reflectometry to identify potential faults in concrete dam anchor cables. The key benefit of time domain reflectometry over other testing methods is that it is non-destructive. 
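The reflection coefficient defined above, together with the round-trip timing discussed earlier, is enough for a basic fault-location estimate. A minimal sketch with illustrative values (the 50-ohm line and 66% velocity factor are assumptions for this example, not data from the source):

```python
# Minimal sketch: reflection coefficient and distance-to-fault from a TDR trace.
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def reflection_coefficient(z_termination_ohms, z0_ohms=50.0):
    return (z_termination_ohms - z0_ohms) / (z_termination_ohms + z0_ohms)

def distance_to_fault_m(round_trip_time_s, velocity_factor=0.66):
    # The echo travels to the discontinuity and back, hence the factor of two.
    return velocity_factor * C_VACUUM * round_trip_time_s / 2.0

print(reflection_coefficient(0.0))     # short circuit              -> -1.0
print(reflection_coefficient(1e12))    # (near) open circuit        -> ~ +1.0
print(reflection_coefficient(75.0))    # 75-ohm load on 50-ohm line -> +0.2
print(distance_to_fault_m(3e-9))       # 3 ns round trip, VF 0.66   -> ~0.30 m
```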
Used in the earth and agricultural sciences A TDR is used to determine moisture content in soil and porous media. Over the last two decades, substantial advances have been made in measuring moisture in soil, grain, food stuff, and sediment. The key to TDR's success is its ability to accurately determine the permittivity (dielectric constant) of a material from wave propagation, due to the strong relationship between the permittivity of a material and its water content, as demonstrated in the pioneering works of Hoekstra and Delaney (1974) and Topp et al. (1980). Recent reviews and reference work on the subject include Topp and Reynolds (1998), Noborio (2001), Pettinelli et al. (2002), Topp and Ferre (2002) and Robinson et al. (2003). The TDR method is a transmission line technique, and determines apparent permittivity (Ka) from the travel time of an electromagnetic wave that propagates along a transmission line, usually two or more parallel metal rods embedded in soil or sediment. The probes are typically between 10 and 30 cm long and connected to the TDR via coaxial cable. In geotechnical engineering Time domain reflectometry has also been utilized to monitor slope movement in a variety of geotechnical settings, including highway cuts, rail beds, and open pit mines (Dowding & O'Connor, 1984, 2000a, 2000b; Kane & Beck, 1999). In TDR stability monitoring applications, a coaxial cable is installed in a vertical borehole passing through the region of concern. The electrical impedance at any point along a coaxial cable changes with deformation of the insulator between the conductors. A brittle grout surrounds the cable to translate earth movement into an abrupt cable deformation that shows up as a detectable peak in the reflectance trace. Until recently, the technique was relatively insensitive to small slope movements and could not be automated because it relied on human detection of changes in the reflectance trace over time. Farrington and Sargand (2004) developed a simple signal processing technique using numerical derivatives to extract reliable indications of slope movement from the TDR data much earlier than by conventional interpretation. Another application of TDRs in geotechnical engineering is to determine the soil moisture content. This can be done by placing the TDRs in different soil layers and measuring the time of the start of precipitation and the time that the TDR indicates an increase in the soil moisture content. The depth of the TDR (d) is known and the time it takes the water to reach that depth (t) is measured; therefore the speed of water infiltration (v = d/t) can be determined. This is a good method to assess the effectiveness of Best Management Practices (BMPs) in reducing stormwater surface runoff. In semiconductor device analysis Time domain reflectometry is used in semiconductor failure analysis as a non-destructive method for the location of defects in semiconductor device packages. The TDR provides an electrical signature of individual conductive traces in the device package, and is useful for determining the location of opens and shorts. In aviation wiring maintenance Time domain reflectometry, specifically spread-spectrum time-domain reflectometry, is used on aviation wiring for both preventive maintenance and fault location. Spread spectrum time domain reflectometry has the advantage of precisely locating faults within thousands of miles of aviation wiring. 
Additionally, this technology is worth considering for real time aviation monitoring, as spread spectrum reflectometry can be employed on live wires. This method has been shown to be useful to locating intermittent electrical faults. Multi carrier time domain reflectometry (MCTDR) has also been identified as a promising method for embedded EWIS diagnosis or troubleshooting tools. Based on the injection of a multicarrier signal (respecting EMC and harmless for the wires), this smart technology provides information for the detection, localization and characterization of electrical defects (or mechanical defects having electrical consequences) in the wiring systems. Hard fault (short, open circuit) or intermittent defects can be detected very quickly increasing the reliability of wiring systems and improving their maintenance. See also Frequency domain sensor Murray loop bridge Noise-domain reflectometry Nicolson–Ross–Weir method Optical time-domain reflectometer Return loss Standing wave ratio References Further reading Hoekstra, P. and A. Delaney, 1974. "Dielectric properties of soils at UHF and microwave frequencies". Journal of Geophysical Research 79:1699–1708. Smith, P., C. Furse, and J. Gunther, 2005. "Analysis of spread spectrum time domain reflectometry for wire fault location". IEEE Sensors Journal 5:1469–1478. Waddoups, B., C. Furse and M. Schmidt. "Analysis of Reflectometry for Detection of Chafed Aircraft Wiring Insulation". Department of Electrical and Computer Engineering. Utah State University. Noborio K. 2001. "Measurement of soil water content and electrical conductivity by time domain reflectometry: A review". Computers and Electronics in Agriculture 31:213–237. Pettinelli E., A. Cereti, A. Galli, and F. Bella, 2002. "Time domain reflectometry: Calibration techniques for accurate measurement of the dielectric properties of various materials". Review of Scientific Instruments 73:3553–3562. Robinson D.A., S.B. Jones, J.M. Wraith, D. Or and S.P. Friedman, 2003 "A review of advances in dielectric and electrical conductivity measurements in soils using time domain reflectometry". Vadose Zone Journal 2: 444–475. Robinson, D. A., C. S. Campbell, J. W. Hopmans, B. K. Hornbuckle, Scott B. Jones, R. Knight, F. Ogden, J. Selker, and O. Wendroth, 2008. "Soil moisture measurement for ecological and hydrological watershed-scale observatories: A review." Vadose Zone Journal 7: 358-389. Topp G.C., J.L. Davis and A.P. Annan, 1980. "Electromagnetic determination of soil water content: measurements in coaxial transmission lines". Water Resources Research 16:574–582. Topp G.C. and W.D. Reynolds, 1998. "Time domain reflectometry: a seminal technique for measuring mass and energy in soil". Soil Tillage Research 47:125–132. Topp, G.C. and T.P.A. Ferre, 2002. "Water content", in Methods of Soil Analysis. Part 4. (Ed. J.H. Dane and G.C. Topp), SSSA Book Series No. 5. Soil Science Society of America, Madison WI. Dowding, C.H. & O'Connor, K.M. 2000a. "Comparison of TDR and Inclinometers for Slope Monitoring". Geotechnical Measurements—Proceedings of Geo-Denver2000: 80–81. Denver, CO. Dowding, C.H. & O'Connor, K.M. 2000b. "Real Time Monitoring of Infrastructure using TDR Technology". Structural Materials Technology NDT Conference 2000 Kane, W.F. & Beck, T.J. 1999. "Advances in Slope Instrumentation: TDR and Remote Data Acquisition Systems". Field Measurements in Geomechanics, 5th International Symposium on Field Measurements in Geomechanics: 101–105. Singapore. Farrington, S.P. 
and Sargand, S.M., "Advanced Processing of Time Domain Reflectometry for Improved Slope Stability Monitoring", Proceedings of the Eleventh Annual Conference on Tailings and Mine Waste, October, 2004. Scarpetta, M.; Spadavecchia, M.; Adamo, F.; Ragolia, M.A.; Giaquinto, N. ″Detection and Characterization of Multiple Discontinuities in Cables with Time-Domain Reflectometry and Convolutional Neural Networks″. Sensors 2021, 21, 8032. https://doi.org/10.3390/s21238032 Duncan, D.; Trabold, T.A.; Mohr, C.L.; Berrett, M.K. "MEASUREMENT OF LOCAL VOID FRACTION AT ELEVATED TEMPERATURE AND PRESSURE". Third World Conference on Experimental Heat Transfer, Fluid Mechanics and Thermodynamics, Honolulu, Hawaii, USA, 31 October-5 November 1993. https://www.mohr-engineering.com/guided-radar-liquid-level-documents-EFP.php External links Radiodetection Extended Training – ABC's of TDR's Work begins to repair severed net TDR for Digital Cables – TDR for Microwave/RF and Digital Cables TDR vs FDR: Distance to Fault Electronic test equipment Soil physics Semiconductor analysis
Time-domain reflectometer
[ "Physics", "Technology", "Engineering" ]
4,016
[ "Applied and interdisciplinary physics", "Electronic test equipment", "Measuring instruments", "Soil physics" ]
41,811
https://en.wikipedia.org/wiki/Transmission%20line
In electrical engineering, a transmission line is a specialized cable or other structure designed to conduct electromagnetic waves in a contained manner. The term applies when the conductors are long enough that the wave nature of the transmission must be taken into account. This applies especially to radio-frequency engineering because the short wavelengths mean that wave phenomena arise over very short distances (this can be as short as millimetres depending on frequency). However, the theory of transmission lines was historically developed to explain phenomena on very long telegraph lines, especially submarine telegraph cables. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas (they are then called feed lines or feeders), distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high speed computer data buses. RF engineers commonly use short pieces of transmission line, usually in the form of printed planar transmission lines, arranged in certain patterns to build circuits such as filters. These circuits, known as distributed-element circuits, are an alternative to traditional circuits using discrete capacitors and inductors. Overview Ordinary electrical cables suffice to carry low frequency alternating current (AC), such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they are not generally used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line (ladder line, twisted pair), coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At frequencies of microwave and higher, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves. Some sources define waveguides as a type of transmission line; however, this article will not include them. History Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin, and Oliver Heaviside. In 1855, Lord Kelvin formulated a diffusion model of the current in a submarine cable. The model correctly predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. 
In 1885, Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations. The four terminal model For the purposes of analysis, an electrical transmission line can be modelled as a two-port network (also called a quadripole), as follows: In the simplest case, the network is assumed to be linear (i.e. the complex voltage across either port is proportional to the complex current flowing into it when there are no reflections), and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, then its behaviour is largely described by two parameters: the characteristic impedance (symbol Z0) and the propagation delay. Z0 is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. Propagation delay is proportional to the length of the transmission line and is never less than the length divided by the speed of light. Typical delays for modern communication transmission lines vary from about 3.33 ns/m (the speed-of-light limit) to about 5 ns/m. When sending power down a transmission line, it is usually desirable that as much power as possible will be absorbed by the load and as little as possible will be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched. Some of the power that is fed into a transmission line is lost because of its resistance. This effect is called ohmic or resistive loss (see ohmic heating). At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating). The transmission line is modelled with a resistance (R) and inductance (L) in series with a capacitance (C) and conductance (G) in parallel. The resistance and conductance contribute to the loss in a transmission line. The total loss of power in a transmission line is often specified in decibels per metre (dB/m), and usually depends on the frequency of the signal. The manufacturer often supplies a chart showing the loss in dB/m at a range of frequencies. A loss of 3 dB corresponds approximately to a halving of the power. Propagation delay is often specified in units of nanoseconds per metre. While propagation delay usually depends on the frequency of the signal, transmission lines are typically operated over frequency ranges where the propagation delay is approximately constant. Telegrapher's equations The telegrapher's equations (or just telegraph equations) are a pair of linear differential equations which describe the voltage (V) and current (I) on an electrical transmission line as functions of distance and time. They were developed by Oliver Heaviside who created the transmission line model, and are based on Maxwell's equations. The transmission line model is an example of the distributed-element model. It represents the transmission line as an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line: The distributed resistance R of the conductors is represented by a series resistor (expressed in ohms per unit length). 
The distributed inductance L (due to the magnetic field around the wires, self-inductance, etc.) is represented by a series inductor (in henries per unit length). The capacitance C between the two conductors is represented by a shunt capacitor (in farads per unit length). The conductance G of the dielectric material separating the two conductors is represented by a shunt resistor between the signal wire and the return wire (in siemens per unit length). The model consists of an infinite series of the elements shown in the figure, and the values of the components are specified per unit length so the picture of the component can be misleading. R, L, C, and G may also be functions of frequency. An alternative notation is to use R′, L′, C′, and G′ to emphasize that the values are derivatives with respect to length. These quantities can also be known as the primary line constants to distinguish from the secondary line constants derived from them, these being the propagation constant, attenuation constant and phase constant. The line voltage V(x) and the current I(x) can be expressed in the frequency domain as dV(x)/dx = −(R + jωL)·I(x) and dI(x)/dx = −(G + jωC)·V(x) (see differential equation, angular frequency ω and imaginary unit j). Special case of a lossless line When the elements R and G are negligibly small the transmission line is considered as a lossless structure. In this hypothetical case, the model depends only on the L and C elements, which greatly simplifies the analysis. For a lossless transmission line, the second order steady-state Telegrapher's equations are: d²V(x)/dx² + ω²LC·V(x) = 0 and d²I(x)/dx² + ω²LC·I(x) = 0. These are wave equations which have plane waves with equal propagation speed in the forward and reverse directions as solutions. The physical significance of this is that electromagnetic waves propagate down transmission lines and in general, there is a reflected component that interferes with the original signal. These equations are fundamental to transmission line theory. General case of a line with losses In the general case the loss terms, R and G, are both included, and the full form of the Telegrapher's equations becomes: d²V(x)/dx² = γ²·V(x) and d²I(x)/dx² = γ²·I(x), where γ is the (complex) propagation constant. These equations are fundamental to transmission line theory. They are also wave equations, and have solutions similar to the special case, but which are a mixture of sines and cosines with exponential decay factors. Solving for the propagation constant γ in terms of the primary parameters R, L, G, and C gives γ = √((R + jωL)(G + jωC)), and the characteristic impedance can be expressed as Z0 = √((R + jωL)/(G + jωC)). The solutions for V(x) and I(x) are V(x) = V₁·e^(−γx) + V₂·e^(+γx) and I(x) = (1/Z0)·(V₁·e^(−γx) − V₂·e^(+γx)). The constants V₁ and V₂ must be determined from boundary conditions. For a voltage pulse V_in(t), starting at x = 0 and moving in the positive x direction, the transmitted pulse V_out(x, t) at position x can be obtained by computing the Fourier Transform, Ṽ(ω), of V_in(t), attenuating each frequency component by e^(−αx), advancing its phase by −βx, and taking the inverse Fourier Transform. The real and imaginary parts of γ can be computed as α = (a² + b²)^(1/4)·cos(½·atan2(b, a)) and β = (a² + b²)^(1/4)·sin(½·atan2(b, a)), with a = RG − ω²LC and b = ω(RC + GL); the right-hand expressions hold when neither L, nor C, nor ω is zero, and atan2 is the everywhere-defined form of the two-parameter arctangent function, with arbitrary value zero when both arguments are zero. Alternatively, the complex square root can be evaluated algebraically, to yield α = √((√(a² + b²) + a)/2) and β = ±√((√(a² + b²) − a)/2), with the plus or minus signs chosen opposite to the direction of the wave's motion through the conducting medium. (a is usually negative, since G and R are typically much smaller than ωC and ωL, respectively, so −a is usually positive. b is always positive.) 
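The relations above are straightforward to evaluate numerically. The sketch below uses hypothetical per-metre RLGC values (not taken from the source) to compute the propagation constant and characteristic impedance at one frequency:

```python
# Minimal sketch: secondary line constants from the primary line constants,
# gamma = sqrt((R + jwL)(G + jwC)) and Z0 = sqrt((R + jwL)/(G + jwC)).
import cmath
import math

def line_constants(R, L, G, C, frequency_hz):
    w = 2 * math.pi * frequency_hz
    series_z = R + 1j * w * L   # series impedance per unit length
    shunt_y = G + 1j * w * C    # shunt admittance per unit length
    gamma = cmath.sqrt(series_z * shunt_y)  # alpha (Np/m) + j*beta (rad/m)
    z0 = cmath.sqrt(series_z / shunt_y)
    return gamma, z0

# Hypothetical coax-like values per metre: R = 0.1 ohm, L = 250 nH, G = 1 uS, C = 100 pF
gamma, z0 = line_constants(0.1, 250e-9, 1e-6, 100e-12, 100e6)
print(gamma)  # small real part (loss) plus a large imaginary part (phase)
print(z0)     # close to sqrt(L/C) = 50 ohms for this low-loss line
```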
Special, low loss case For small losses and high frequencies, the general equations can be simplified: If and then Since an advance in phase by is equivalent to a time delay by , can be simply computed as Heaviside condition The Heaviside condition is . If R, G, L, and C are constants that are not frequency dependent and the Heaviside condition is met, then waves travel down the transmission line without dispersion or distortion. Input impedance of transmission line The characteristic impedance of a transmission line is the ratio of the amplitude of a single voltage wave to its current wave. Since most transmission lines also have a reflected wave, the characteristic impedance is generally not the impedance that is measured on the line. The impedance measured at a given distance from the load may be expressed as , where is the propagation constant and is the voltage reflection coefficient measured at the load end of the transmission line. Alternatively, the above formula can be rearranged to express the input impedance in terms of the load impedance rather than the load voltage reflection coefficient: . Input impedance of lossless transmission line For a lossless transmission line, the propagation constant is purely imaginary, , so the above formulas can be rewritten as where is the wavenumber. In calculating the wavenumber, note that the wavelength inside the transmission line is generally different from what it would be in free space. Consequently, the velocity factor of the material the transmission line is made of needs to be taken into account when doing such a calculation. Special cases of lossless transmission lines Half wave length For the special case where , where n is an integer (meaning that the length of the line is a multiple of half a wavelength), the expression reduces to the load impedance so that for all This includes the case when , meaning that the length of the transmission line is negligibly small compared to the wavelength. The physical significance of this is that the transmission line can be ignored (i.e. treated as a wire) in either case. Quarter wave length For the case where the length of the line is one quarter wavelength long, or an odd multiple of a quarter wavelength long, the input impedance becomes Matched load Another special case is when the load impedance is equal to the characteristic impedance of the line (i.e. the line is matched), in which case the impedance reduces to the characteristic impedance of the line so that for all and all . Short For the case of a shorted load (i.e. ), the input impedance is purely imaginary and a periodic function of position and wavelength (frequency) Open For the case of an open load (i.e. ), the input impedance is once again imaginary and periodic Matrix parameters The simulation of transmission lines embedded in larger systems generally utilizes admittance parameters (Y matrix), impedance parameters (Z matrix), and/or scattering parameters (S matrix) that embody the full transmission line model needed to support the simulation. Admittance parameters Admittance (Y) parameters may be defined by applying a fixed voltage to one port (V1) of a transmission line with the other port shorted to ground, measuring the resulting current running into each port (I1, I2), and computing the admittance on each port as a ratio of I/V. The admittance parameter Y11 is I1/V1, and the admittance parameter Y12 is I2/V1. Since transmission lines are electrically passive and symmetric devices, Y12 = Y21, and Y11 = Y22.
For lossless and lossy transmission lines respectively, the Y parameter matrix is as follows: Impedance parameters Impedance (Z) parameters may be defined by applying a fixed current into one port (I1) of a transmission line with the other port open, and measuring the resulting voltage on each port (V1, V2). The impedance parameter Z11 is V1/I1, and the impedance parameter Z12 is V2/I1. Since transmission lines are electrically passive and symmetric devices, Z12 = Z21, and Z11 = Z22. In the Y and Z matrix definitions, and . Unlike ideal lumped 2-port elements (resistors, capacitors, inductors, etc.) which do not have defined Z parameters, transmission lines have an internal path to ground, which permits the definition of Z parameters. For lossless and lossy transmission lines respectively, the Z parameter matrix is as follows: Scattering parameters Scattering (S) matrix parameters model the electrical behavior of the transmission line with matched loads at each termination. For lossless and lossy transmission lines respectively, the S parameter matrix is as follows, using standard hyperbolic to circular complex translations. Variable definitions In all matrix parameters above, the following variable definitions apply: = characteristic impedance Zp = port impedance, or termination impedance = the propagation constant per unit length = attenuation constant in nepers per unit length = wave number or phase constant in radians per unit length = frequency in radians / second = speed of propagation = wavelength in units of length L = inductance per unit length C = capacitance per unit length = effective dielectric constant = 299,792,458 meters / second = speed of light in a vacuum Coupled transmission lines Transmission lines may be placed in proximity to each other such that they electrically interact, such as two microstrip lines in close proximity. Such transmission lines are said to be coupled transmission lines. Coupled transmission lines are characterized by an even and odd mode analysis. The even mode is characterized by excitation of the two conductors with a signal of equal amplitude and phase. The odd mode is characterized by excitation with signals of equal and opposite magnitude. The even and odd modes each have their own characteristic impedances (Zoe, Zoo) and phase constants (). Lossy coupled transmission lines have their own even and odd mode attenuation constants (), which in turn lead to even and odd mode propagation constants (). Coupled matrix parameters Coupled transmission lines may be modeled using the even and odd mode transmission line parameters defined in the prior paragraph, with ports 1 and 2 on the input and ports 3 and 4 on the output.
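To tie the input-impedance special cases above to the matrix parameters just described, the following Python sketch evaluates the standard lossless input-impedance formula and a commonly quoted closed form for the S-parameters of a uniform line referenced to a port impedance Zp. Because the matrices themselves are not reproduced in the text, the closed form here is a reconstruction based on standard results, and all numerical values are illustrative.

```python
import numpy as np

def z_in_lossless(Z0, ZL, beta, length):
    """Input impedance of a lossless line of given length terminated in ZL:
    Zin = Z0 * (ZL + j*Z0*tan(beta*l)) / (Z0 + j*ZL*tan(beta*l))."""
    t = np.tan(beta * length)
    return Z0 * (ZL + 1j * Z0 * t) / (Z0 + 1j * ZL * t)

def s_matrix(Z0, gamma, length, Zp=50.0):
    """2-port S-parameters of a uniform line (impedance Z0, propagation constant
    gamma, length l) referenced to port impedance Zp -- standard closed form."""
    gl = gamma * length
    denom = (Z0**2 + Zp**2) * np.sinh(gl) + 2 * Z0 * Zp * np.cosh(gl)
    s11 = (Z0**2 - Zp**2) * np.sinh(gl) / denom
    s21 = 2 * Z0 * Zp / denom
    return np.array([[s11, s21], [s21, s11]])

# Illustrative values only.
Z0 = 50.0
wavelength = 2.0
beta = 2 * np.pi / wavelength            # wavenumber of the line, rad/m

print(z_in_lossless(Z0, 100.0, beta, wavelength / 2))   # half-wave line -> ~ load impedance
print(z_in_lossless(Z0, 100.0, beta, wavelength / 4))   # quarter-wave line -> ~ Z0**2 / ZL
print(z_in_lossless(Z0, 50.0, beta, 0.123))             # matched load -> Z0 for any length
print(z_in_lossless(Z0, 0.0, beta, wavelength / 8))     # short -> purely imaginary j*Z0*tan(beta*l)

# A matched, lossless line (gamma = j*beta, Zp = Z0) should give |S11| = 0 and |S21| = 1:
print(np.abs(s_matrix(Z0, 1j * beta, 0.75, Zp=Z0)))
```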
These modes are classified into two groups, transverse electric (TE) and transverse magnetic (TM) waveguide modes. When more than one mode can exist, bends and other irregularities in the cable geometry can cause power to be transferred from one mode to another. The most common use for coaxial cables is for television and other signals with bandwidths of multiple megahertz. In the mid-20th century they carried long-distance telephone connections. Planar lines Planar transmission lines are transmission lines with conductors, or in some cases dielectric strips, that are flat, ribbon-shaped. They are used to interconnect components on printed circuits and integrated circuits working at microwave frequencies because the planar type fits in well with the manufacturing methods for these components. Several forms of planar transmission lines exist. Microstrip A microstrip circuit uses a thin flat conductor which is parallel to a ground plane. Microstrip can be made by having a strip of copper on one side of a printed circuit board (PCB) or ceramic substrate while the other side is a continuous ground plane. The width of the strip, the thickness of the insulating layer (PCB or ceramic) and the dielectric constant of the insulating layer determine the characteristic impedance. Microstrip is an open structure whereas coaxial cable is a closed structure. Stripline A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the stripline. Coplanar waveguide A coplanar waveguide consists of a center strip and two adjacent outer conductors, all three of them flat structures that are deposited onto the same insulating substrate and thus are located in the same plane ("coplanar"). The width of the center conductor, the distance between inner and outer conductors, and the relative permittivity of the substrate determine the characteristic impedance of the coplanar transmission line. Balanced lines A balanced line is a transmission line consisting of two conductors of the same type, with equal impedance to ground and to other circuits. There are many formats of balanced lines; amongst the most common are twisted pair, star quad and twin-lead. Twisted pair Twisted pairs are commonly used for terrestrial telephone communications. In such cables, many pairs are grouped together in a single cable, from two to several thousand. The format is also used for data network distribution inside buildings, but the cable is more expensive because the transmission line parameters are tightly controlled. Star quad Star quad is a four-conductor cable in which all four conductors are twisted together around the cable axis. It is sometimes used for two circuits, such as 4-wire telephony and other telecommunications applications. In this configuration each pair uses two non-adjacent conductors. At other times it is used for a single, balanced line, such as audio applications and 2-wire telephony. In this configuration two non-adjacent conductors are terminated together at both ends of the cable, and the other two conductors are also terminated together. When used for two circuits, crosstalk is reduced relative to cables with two separate twisted pairs.
When used for a single, balanced line, magnetic interference picked up by the cable arrives as a virtually perfect common mode signal, which is easily removed by coupling transformers. The combined benefits of twisting, balanced signalling, and quadrupole pattern give outstanding noise immunity, especially advantageous for low signal level applications such as microphone cables, even when installed very close to a power cable. The disadvantage is that star quad, in combining two conductors, typically has double the capacitance of similar two-conductor twisted and shielded audio cable. High capacitance causes increasing distortion and greater loss of high frequencies as distance increases. Twin-lead Twin-lead consists of a pair of conductors held apart by a continuous insulator. By holding the conductors a known distance apart, the geometry is fixed and the line characteristics are reliably consistent. It has lower loss than coaxial cable because the characteristic impedance of twin-lead is generally higher than that of coaxial cable, leading to lower resistive losses due to the reduced current. However, it is more susceptible to interference. Lecher lines Lecher lines are a form of parallel conductor that can be used at UHF for creating resonant circuits. They are a convenient practical format that fills the gap between lumped components (used at HF/VHF) and resonant cavities (used at UHF/SHF). Single-wire line Unbalanced lines were formerly much used for telegraph transmission, but this form of communication has now fallen into disuse. Cables are similar to twisted pair in that many cores are bundled into the same cable, but only one conductor is provided per circuit and there is no twisting. All the circuits on the same route use a common path for the return current (earth return). There is a power transmission version of single-wire earth return in use in many locations. General applications Signal transfer Electrical transmission lines are very widely used to transmit high frequency signals over long or short distances with minimum power loss. One familiar example is the down lead from a TV or radio aerial to the receiver. Transmission line circuits A large variety of circuits can also be constructed with transmission lines, including impedance matching circuits, filters, power dividers and directional couplers. Stepped transmission line A stepped transmission line is used for broad range impedance matching. It can be considered as multiple transmission line segments connected in series, with the characteristic impedance of each individual element taken to be . The input impedance can be obtained from the successive application of the chain relation, where is the wave number of the -th transmission line segment and is the length of this segment, and is the front-end impedance that loads the -th segment. Because the characteristic impedance of each transmission line segment is often different from the impedance of the input cable, the impedance transformation circle is off-centred along the axis of the Smith Chart, whose impedance representation is usually normalized against . Approximating lumped elements At higher frequencies, the reactive parasitic effects of real-world lumped elements, including inductors and capacitors, limit their usefulness.
Therefore, it is sometimes useful to approximate the electrical characteristics of inductors and capacitors with transmission lines at higher frequencies using Richards' Transformations and then substitute the transmission lines for the lumped elements. More accurate forms of multimode high frequency inductor modeling with transmission lines exist for advanced designers. Stub filters If a short-circuited or open-circuited transmission line is wired in parallel with a line used to transfer signals from point A to point B, then it will function as a filter. The method for making stubs is similar to the method for using Lecher lines for crude frequency measurement, but it is 'working backwards'. One method recommended in the RSGB's radiocommunication handbook is to take an open-circuited length of transmission line wired in parallel with the feeder delivering signals from an aerial. By cutting the free end of the transmission line, a minimum in the strength of the signal observed at a receiver can be found. At this stage the stub filter will reject this frequency and the odd harmonics, but if the free end of the stub is shorted then the stub will become a filter rejecting the even harmonics. Wideband filters can be achieved using multiple stubs. However, this is a somewhat dated technique. Much more compact filters can be made with other methods such as parallel-line resonators. Pulse generation Transmission lines are used as pulse generators. By charging the transmission line and then discharging it into a resistive load, a rectangular pulse equal in length to twice the electrical length of the line can be obtained, although with half the voltage. A Blumlein transmission line is a related pulse forming device that overcomes this limitation. These are sometimes used as the pulsed power sources for radar transmitters and other devices. Sound The theory of sound wave propagation is very similar mathematically to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves; these are called acoustic transmission lines. See also Artificial transmission line Longitudinal electromagnetic wave Propagation velocity Radio frequency power transmission Time domain reflectometer References Part of this article was derived from Federal Standard 1037C. Further reading External links Signal cables Telecommunications engineering Transmission lines Distributed element circuits
Transmission line
[ "Engineering" ]
5,219
[ "Electrical engineering", "Electronic engineering", "Telecommunications engineering", "Distributed element circuits" ]
41,812
https://en.wikipedia.org/wiki/Transmission%20medium
A transmission medium is a system or substance that can mediate the propagation of signals for the purposes of telecommunication. Signals are typically imposed on a wave of some kind suitable for the chosen medium. For example, data can modulate sound, and a transmission medium for sounds may be air, but solids and liquids may also act as the transmission medium. Vacuum or air constitutes a good transmission medium for electromagnetic waves such as light and radio waves. While a material substance is not required for electromagnetic waves to propagate, such waves are usually affected by the transmission media they pass through, for instance, by absorption or reflection or refraction at the interfaces between media. Technical devices can therefore be employed to transmit or guide waves. Thus, an optical fiber or a copper cable can be used as a transmission medium. Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides. It may also pass through any physical material that is transparent to the specific wavelength, such as water, air, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium. However, it is now known that electromagnetic waves do not require a physical transmission medium, and so can travel through the vacuum of free space. Regions of the insulating vacuum can become conductive through the presence of free electrons, holes, or ions. Optical medium Telecommunications A physical medium in data communications is the transmission path over which a signal propagates. Many different types of transmission media are used as communications channels. In many cases, communication is in the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path; examples of guided media include phone lines, twisted pair cables, coaxial cables, and optical fibers. Unguided transmission media are methods that allow the transmission of data without the use of physical means to define the path it takes. Examples of this include microwave, radio or infrared. Unguided media provide a means for transmitting electromagnetic waves but do not guide them; examples are propagation through air, vacuum and seawater. The term direct link is used to refer to the transmission path between two devices in which signals propagate directly from transmitters to receivers with no intermediate devices, other than amplifiers or repeaters used to increase signal strength. This term can apply to both guided and unguided media. Simplex versus duplex A signal transmission may be simplex, half-duplex, or full-duplex. In simplex transmission, signals are transmitted in only one direction; one station is a transmitter and the other is the receiver. In half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously. In the latter case, the medium is carrying signals in both directions at the same time.
Types In general, a transmission medium can be classified as linear, if different waves at any particular point in the medium can be superposed; bounded, if it is finite in extent, otherwise unbounded; uniform or homogeneous, if its physical properties are unchanged at different points; isotropic, if its physical properties are the same in different directions. There are two main types of transmission media: guided media—waves are guided along a solid medium such as a transmission line; unguided media—transmission and reception are achieved by means of an antenna. One of the most common physical media used in networking is copper wire. Copper wire can carry signals over long distances using relatively low amounts of power. The unshielded twisted pair (UTP) is eight strands of copper wire, organized into four pairs. Guided media Twisted pair Twisted pair cabling is a type of wiring in which two conductors of a single circuit are twisted together for the purposes of improving electromagnetic compatibility. Compared to a single conductor or an untwisted balanced pair, a twisted pair reduces electromagnetic radiation from the pair and crosstalk between neighboring pairs and improves rejection of external electromagnetic interference. It was invented by Alexander Graham Bell. Coaxial cable Coaxial cable, or coax, is a type of electrical cable that has an inner conductor surrounded by a tubular insulating layer, surrounded by a tubular conducting shield. Many coaxial cables also have an insulating outer sheath or jacket. The term coaxial comes from the inner conductor and the outer shield sharing a geometric axis. Coaxial cable was invented by English physicist, engineer, and mathematician Oliver Heaviside, who patented the design in 1880. Coaxial cable is a type of transmission line, used to carry high frequency electrical signals with low losses. It is used in such applications as telephone trunk lines, broadband internet networking cables, high-speed computer data busses, carrying cable television signals, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line. Optical fiber Optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications, is a thin strand of glass that guides light along its length. Four major factors favor optical fiber over copper: data rates, distance, installation, and costs. Optical fiber can carry huge amounts of data compared to copper. It can be run for hundreds of miles without the need for signal repeaters, which in turn reduces maintenance costs and improves the reliability of the communication system, because repeaters are a common source of network failures. Glass is lighter than copper, reducing the need for specialized heavy-lifting equipment when installing long-distance optical fiber. Optical fiber for indoor applications costs approximately a dollar a foot, the same as copper. Multimode and single mode are two types of commonly used optical fiber. Multimode fiber uses LEDs as the light source and can carry signals over shorter distances, about 2 kilometers. Single mode can carry signals over distances of tens of miles. An optical fiber is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair.
Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer excessively. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope. Specially designed fibers are also used for a variety of other applications, some of them being fiber optic sensors and fiber lasers. Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than . Being able to join optical fibers with low loss is important in fiber optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection, a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. The term was coined by Indian physicist Narinder Singh Kapany, who is widely acknowledged as the father of fiber optics. Unguided transmission media Radio Radio propagation is the behavior of radio waves as they travel, or are propagated, from one point to another, or into various parts of the atmosphere. As a form of electromagnetic radiation, like light waves, radio waves are affected by the phenomena of reflection, refraction, diffraction, absorption, polarization, and scattering. Understanding the effects of varying conditions on radio propagation has many practical applications, from choosing frequencies for international shortwave broadcasters, to designing reliable mobile telephone systems, to radio navigation, to operation of radar systems. Different types of propagation are used in practical radio transmission systems. Line-of-sight propagation means radio waves that travel in a straight line from the transmitting antenna to the receiving antenna. Line-of-sight transmission is used for medium-range radio transmission such as cell phones, cordless phones, walkie-talkies, wireless networks, FM radio and television broadcasting and radar, and for satellite communication, such as satellite television.
Line-of-sight transmission on the surface of the Earth is limited to the distance to the visual horizon, which depends on the height of transmitting and receiving antennas. It is the only propagation method possible at microwave frequencies and above. At microwave frequencies, moisture in the atmosphere (rain fade) can degrade transmission. At lower frequencies in the MF, LF, and VLF bands, due to diffraction, radio waves can bend over obstacles like hills, and travel beyond the horizon as surface waves which follow the contour of the Earth. These are called ground waves. AM broadcasting stations use ground waves to cover their listening areas. As the frequency gets lower, the attenuation with distance decreases, so very low frequency (VLF) and extremely low frequency (ELF) ground waves can be used to communicate worldwide. VLF and ELF waves can penetrate significant distances through water and earth, and these frequencies are used for mine communication and military communication with submerged submarines. At medium wave and shortwave frequencies (MF and HF bands) radio waves can refract from a layer of charged particles (ions) high in the atmosphere, called the ionosphere. This means that radio waves transmitted at an angle into the sky can be reflected back to Earth beyond the horizon, at great distances, even transcontinental distances. This is called skywave propagation. It is used by amateur radio operators to talk to other countries and by shortwave broadcasting stations that broadcast internationally. Skywave communication is variable, dependent on conditions in the upper atmosphere; it is most reliable at night and in the winter. Due to its unreliability, since the advent of communication satellites in the 1960s, many long-range communication links that previously used skywaves now use satellites. In addition, there are several less common radio propagation mechanisms, such as tropospheric scattering (troposcatter) and near vertical incidence skywave (NVIS), which are used in specialized communication systems. Digital encoding Transmission and reception of data is typically performed in four steps: At the transmitting end, the data is encoded to a binary representation. A carrier signal is modulated as specified by the binary representation. At the receiving end, the carrier signal is demodulated into a binary representation. The data is decoded from the binary representation. See also Excitable medium Luminiferous aether References Electromagnetic radiation
Transmission medium
[ "Physics" ]
2,415
[ "Electromagnetic radiation", "Physical phenomena", "Radiation" ]