Dataset fields (one record per article below): id (int64, 39 to 79M), url (string, 32 to 168 characters), text (string, 7 to 145k characters), source (string, 2 to 105 characters), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
6,262,881
https://en.wikipedia.org/wiki/BioMOBY
BioMOBY is a registry of web services used in bioinformatics. It allows interoperability between biological data hosts and analytical services by annotating services with terms taken from standard ontologies. BioMOBY is released under the Artistic License. The BioMOBY project The BioMoby project began at the Model Organism Bring Your own Database Interface Conference (MOBY-DIC), held in Emma Lake, Saskatchewan on September 21, 2001. It stemmed from a conversation between Mark D Wilkinson and Suzanna Lewis during a Gene Ontology developers meeting at the Carnegie Institute, Stanford, where the functionalities of the Genquire and Apollo genome annotation tools were being discussed and compared. Both systems critically needed a simple standard that would allow these tools to interact with the myriad of data sources required to accurately annotate a genome. Funding for the BioMOBY project was subsequently provided by Genome Prairie (2002-2005) and Genome Alberta (2005-date), in part through Genome Canada, a not-for-profit institution leading the Canadian X-omic initiatives. There are two main branches of the BioMOBY project. One is a web-service-based approach, while the other utilizes Semantic Web technologies. This article will refer only to the Web Service specifications. The other branch of the project, Semantic Moby, is described in a separate entry. Moby The Moby project defines three ontologies that describe biological data-types, biological data-formats, and bioinformatics analysis types. Most of the interoperable behaviours seen in Moby are achieved through the Object (data-format) and Namespace (data-type) ontologies. The MOBY Namespace Ontology is derived from the Cross-Reference Abbreviations List of the Gene Ontology project. It is simply a list of abbreviations for the different types of identifiers that are used in bioinformatics. For example, GenBank has "gi" identifiers that are used to enumerate all of their sequence records - this is defined as "NCBI_gi" in the Namespace Ontology. The MOBY Object Ontology is an ontology consisting of IS-A, HAS-A, and HAS relationships between data formats. For example, a DNASequence IS-A GenericSequence and HAS-A String representing the text of the sequence. All data in Moby must be represented as some type of MOBY Object. An XML serialization of this ontology is defined in the Moby API such that any given ontology node has a predictable XML structure. Thus, between these two ontologies, a service provider and/or a client program can receive a piece of Moby XML, and immediately know both its structure, and its "intent" (semantics). The final core component of Moby is the MOBY Central web service registry. MOBY Central is aware of the Object, Namespace and Service ontologies, and thus can match consumers who have in-hand Moby data, with service providers who claim to consume that data-type (or some compatible ontological data-type) or to perform a particular operation on it. This "semantic matching" helps ensure that only relevant service providers are identified in a registry query, and moreover, ensures that the in-hand data can be passed to that service provider verbatim. As such, the interaction between a consumer and a service provider can be partially or fully automated, as shown in the Gbrowse Moby and Ahab clients respectively. BioMOBY and RDF/OWL BioMOBY does not, for its core operations, utilize the RDF or OWL standards from the W3C. 
This is in part because neither of these standards was stable in 2001, when the project began, and in part because the library support for these standards was not "commodity" in any of the most common languages (i.e. Perl and Java) at that time. Nevertheless, the BioMOBY system exhibits what can only be described as Semantic Web-like behaviours. The BioMOBY Object Ontology controls the valid data structures in exactly the same way as an OWL ontology defines an RDF data instance. BioMOBY Web Services consume and generate BioMOBY XML, the structure of which is defined by the BioMOBY Object Ontology. As such, BioMOBY Web Services have been acting as prototypical Semantic Web Services since 2001, despite not using the eventual RDF/OWL standards. However, BioMOBY does utilize the RDF/OWL standards, as of 2006, for the description of its Objects, Namespaces, Service, and Registry. Increasingly these ontologies are being used to govern the behaviour of all BioMOBY functions using DL reasoners. BioMOBY clients There are several client applications that can search and browse the BioMOBY registry of services. One of the most popular is the Taverna workbench built as part of the MyGrid project. The first BioMOBY client was Gbrowse Moby, written in 2001 to allow access to the prototype version of BioMoby Services. Gbrowse Moby, in addition to being a BioMoby browser, now works in tandem with the Taverna workbench to create SCUFL workflows reflecting the Gbrowse Moby browsing session that can then be run in a high-throughput environment. The Seahawk applet also provides the ability to export a session history as a Taverna workflow, in what constitutes a programming by example functionality. The Ahab client is a fully automated data mining tool. Given a starting point, it will discover, and execute, every possible BioMOBY service and provide the results in a clickable interface. See also Open Bioinformatics Foundation SADI the Semantic Automated Discovery and Integration Framework References Further reading External links Official BioMOBY website Publications about BioMOBY tagged using Connotea Emma Lake Gene Ontology Mark D Wilkinson Genome Alberta Genome Canada Genome Prairie Namespaces Objects Services Namespace Ontology Bioinformatics Perl software Free bioinformatics software
BioMOBY
[ "Engineering", "Biology" ]
1,260
[ "Bioinformatics", "Biological engineering" ]
6,263,667
https://en.wikipedia.org/wiki/Yaw-rate%20sensor
A yaw-rate sensor is a gyroscopic device that measures a vehicle's yaw rate, its angular velocity around its vertical axis. The angle between the vehicle's heading and velocity is called its slip angle, which is related to the yaw rate. Types There are two types of yaw-rate sensors: piezoelectric and micromechanical. In the piezoelectric type, the sensor is a tuning fork-shaped structure with four piezoelectric elements, two on top and two below. When the slip angle is zero (no slip), the upper elements produce no voltage as no Coriolis force acts on them. But when cornering, the rotational movement causes the upper part of the tuning fork to leave the oscillatory plane, creating an alternating voltage (and thus an alternating current) proportional to the yaw rate and oscillatory speed. The output signal's sign depends on the direction of rotation. In the micromechanical type, the Coriolis acceleration is measured by a micromechanical capacitive acceleration sensor placed on an oscillating element. This acceleration is proportional to the product of the yaw rate and oscillatory velocity, the latter of which is maintained electronically at a constant value. Applications Yaw rate sensors are used in aircraft and electronic stability control systems in cars. References See also Attitude dynamics and control Ship motions Aircraft principal axes Gyroscopes Navigational flight instruments
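The proportionality used by the micromechanical type can be illustrated with a short sketch. This is a minimal illustration only, assuming the ideal Coriolis relation a = 2·Ω·v with no bias or scale-factor errors; the function and variable names are invented for the example and do not come from any particular sensor.

```python
import math

def yaw_rate_from_coriolis(a_coriolis: float, v_drive: float) -> float:
    """Infer the yaw rate (rad/s) from a measured Coriolis acceleration.

    Ideal model: a_coriolis = 2 * yaw_rate * v_drive, where v_drive is the
    oscillatory (drive) velocity of the sensing element, held constant by
    the sensor electronics.
    """
    return a_coriolis / (2.0 * v_drive)

# Example: a 1 m/s drive velocity and a 0.5 m/s^2 measured Coriolis
# acceleration correspond to a yaw rate of 0.25 rad/s (about 14.3 deg/s).
omega = yaw_rate_from_coriolis(a_coriolis=0.5, v_drive=1.0)
print(omega, math.degrees(omega))
```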
Yaw-rate sensor
[ "Physics", "Technology" ]
305
[ "Navigational flight instruments", "Classical mechanics stubs", "Classical mechanics", "Aircraft instruments" ]
6,264,793
https://en.wikipedia.org/wiki/Friendly%20Floatees%20spill
Friendly Floatees are plastic bath toys (including rubber ducks) marketed by The First Years and made famous by the work of Curtis Ebbesmeyer, an oceanographer who models ocean currents on the basis of flotsam movements. Ebbesmeyer studied the movements of a consignment of 28,800 Friendly Floatees—yellow ducks, red beavers, blue turtles, and green frogs—that were washed into the Pacific Ocean in 1992. Some of the toys landed along Pacific Ocean shores, such as Hawaii. Others traveled over , floating over the site where the Titanic sank, and spent years frozen in Arctic ice before reaching the U.S. Eastern Seaboard as well as British and Irish shores, fifteen years later, in 2007. Oceanography A consignment of Friendly Floatee toys, manufactured in China for The First Years Inc., departed from Hong Kong on a container ship, the Evergreen Ever Laurel, destined for Tacoma, Washington. On 10 January 1992, during a storm in the North Pacific Ocean close to the International Date Line, twelve 40-foot (12-m) intermodal containers were washed overboard. One of these containers held 28,800 Floatees, a child's bath toy which came in a number of forms: red beavers, green frogs, blue turtles and yellow ducks. At some point, the container opened (possibly because it collided with other containers or the ship itself) and the Floatees were released. Although each toy was mounted in a cardboard housing attached to a backing card, subsequent tests showed that the cardboard quickly degraded in sea water allowing the Floatees to escape. Unlike many bath toys, Friendly Floatees have no holes in them so they do not take on water. Seattle oceanographers Curtis Ebbesmeyer and James Ingraham, who were working on an ocean surface current model, began to track their progress. The mass release of 28,800 objects into the ocean at one time offered significant advantages over the standard method of releasing 500–1000 drift bottles. The recovery rate of objects from the Pacific Ocean is typically around 2%, so rather than the 10 to 20 recoveries typically seen with a drift bottle release, the two scientists expected numbers closer to 600. They were already tracking various other spills of flotsam, including 61,000 Nike running shoes that had been lost overboard in 1990. Ten months after the incident, the first Floatees began to wash up along the Alaskan coast. The first discovery consisted of ten toys found by a beachcomber near Sitka, Alaska on 16 November 1992, about from their starting point. Ebbesmeyer and Ingraham contacted beachcombers, coastal workers, and local residents to locate hundreds of the beached Floatees over a shoreline. Another beachcomber discovered twenty of the toys on 28 November 1992, and in total 400 were found along the eastern coast of the Gulf of Alaska in the period up to August 1993. This represented a 1.4% recovery rate. The landfalls were logged in Ingraham's computer model OSCUR (Ocean Surface Currents Simulation), which uses measurements of air pressure from 1967 onwards to calculate the direction of and speed of wind across the oceans, and the consequent surface currents. Ingraham's model was built to help fisheries but it is also used to predict flotsam movements or the likely locations of those lost at sea. 
Using the models they had developed, the oceanographers correctly predicted further landfalls of the toys in Washington state in 1996 and theorized that many of the remaining Floatees would have traveled to Alaska, westward to Japan, back to Alaska, and then drifted northwards through the Bering Strait and become trapped in the Arctic pack ice. Moving slowly with the ice across the Pole, they predicted it would take five or six years for the toys to reach the North Atlantic where the ice would thaw and release them. Between July and December 2003, The First Years Inc. offered a $100 US savings bond reward to anybody who recovered a Floatee in New England, Canada or Iceland. More of the toys were recovered in 2004 than in any of the preceding three years. However, still, more of these toys were predicted to have headed eastward past Greenland and make landfall on the southwestern shores of the United Kingdom in 2007. In July 2007, a retired teacher found a plastic duck on the Devon coast, and British newspapers mistakenly announced that the Floatees had begun to arrive. But the day after breaking the story, the Western Morning News, the local Devon newspaper, reported that Dr. Simon Boxall of the National Oceanography Centre in Southampton had examined the toy and determined that the duck was not in fact a Floatee. Bleached by sun and seawater, the ducks and beavers had faded to white, but the turtles and frogs had kept their original colors. Legacy At least two children's books have been inspired by the Floatees. In 1997, Clarion Books published Ducky (), written by Eve Bunting and illustrated by Caldecott Medal winner David Wisniewski. Hans Christian Andersen Award winner Eric Carle wrote 10 Little Rubber Ducks (Harper Collins 2005, ). In 1997 Black Swan published That Awkward Age (Transworld 1997, ), a comedy written by Mary Selby, in which several of the ducks are found off the Isle of Lewis, one then being purchased at auction and treated as a metaphor for perseverance. In 2003, Rich Eilbert wrote a song "Yellow Rubber Ducks" commemorating the ducks' journey. In 2011, he published the song as a YouTube video, Yellow Rubber Ducks. In 2011, Donovan Hohn published Moby-Duck: The True Story of 28,800 Bath Toys Lost at Sea and of the Beachcombers, Oceanographers, Environmentalists, and Fools, Including the Author, Who Went in Search of Them (Viking, ) On the 19th of February 2013, BBC mystery series Death in Paradise featured the spill as a plot point in the 7th episode of Series 2. On 20 June 2014, The Disney Channel and Disney Junior aired Lucky Duck, a Canadian-American animated TV movie that is loosely based on and inspired by the Friendly Floatees. In his 2014 poem collection The Cartographer Tries to Map a Way to Zion, poet Kei Miller dedicates a poem to the Friendly Floatees : "When Considering the Long, Long Journey of 28,000 Rubber Ducks". The spill was referenced in a 2022 game "Placid Plastic Duck Simulator" as an "accidental duck experiment", which can be heard on the radio in between music. The toys themselves have become collector's items, fetching prices as high as $1,000. See also Drifter (floating device) Great Pacific Garbage Patch Hansa Carrier Marine debris Message in a bottle Rye Riptides Footnotes References Hohn, Donovan, Moby-Duck: The True Story of 28,800 Bath Toys Lost at Sea and of the Beachcombers, Oceanographers, Environmentalists, and Fools, Including the Author, Who Went in Search of Them. Viking, New York, NY 2011, External links Keith C. 
Heidorn, 'Of Shoes And Ships And Rubber Ducks And A Message In A Bottle', The Weather Doctor (17 March 1999). Jane Standley, 'Ducks' odyssey nears end', BBC News, (12 July 2003). Duck ahoy, The Age, (7 August 2003) Marsha Walton, 'How Nikes, toys and hockey gear help ocean science', CNN.com (26 May 2003). "Journey of the Floatees", Spiegel magazine (1 July 2007) "Timeline of Rubber Duck Voyage", Rubaduck.com Donovan Hohn, "Moby-Duck: Or, The Synthetic Wilderness of Childhood," Harper's Magazine, January (2007), pp. 39–62. Moby Duck: The True Story of 28,800 Bath Toys Lost at Sea and of the Beachcombers, Oceanographers, Environmentalists, and Fools, Including the Author, Who Went in Search of Them – follow up non-fiction book based on 2 years research after the Harper's Magazine article. Rich Eilbert, Yellow Rubber Ducks, YouTube.com, (March 2011). Water pollution Waste disposal incidents Physical oceanography Ocean currents 1990s toys 1992 in the environment Plastic toys Evergreen Group
Friendly Floatees spill
[ "Physics", "Chemistry", "Environmental_science" ]
1,711
[ "Ocean currents", "Applied and interdisciplinary physics", "Water pollution", "Physical oceanography", "Fluid dynamics" ]
6,265,929
https://en.wikipedia.org/wiki/Acene
In organic chemistry, the acenes or polyacenes are a class of organic compounds and polycyclic aromatic hydrocarbons made up of benzene (C6H6) rings which have been linearly fused. They follow the general molecular formula C4n+2H2n+4. The larger representatives have potential interest in optoelectronic applications and are actively researched in chemistry and electrical engineering. Pentacene has been incorporated into organic field-effect transistors, reaching charge carrier mobilities as high as 5 cm2/Vs. The first 5 unsubstituted members are listed in the following table: Hexacene is not stable in air, and dimerises upon isolation. Heptacene (and larger acenes) is very reactive and has only been isolated in a matrix. However, bis(trialkylsilylethynylated) versions of heptacene have been isolated as crystalline solids. Larger acenes Due to their increased conjugation length the larger acenes are also studied. Theoretically, a number of reports are available on longer chains using density functional methods. They are also building blocks for nanotubes and graphene. Unsubstituted octacene (n=8) and nonacene (n=9) have been detected in matrix isolation. The first reports of stable nonacene derivatives claimed that due to the electronic effects of the thioaryl substituents the compound is not a diradical but a closed-shell compound with the lowest HOMO-LUMO gap reported for any acene, an observation in violation of Kasha's rule. Subsequent work by others on different derivatives included crystal structures, with no such violations. The on-surface synthesis and characterization of unsubstituted, parent nonacene (n=9) and decacene (n=10) have been reported. In 2020, scientists reported the creation of dodecacene (n=12) for the first time. Four years later, in the beginning of 2024, Ruan et al. succeeded in synthesizing unsubstituted tridecacene (n=13) on a (111)-gold surface. The acene was characterized by STM and STS measurements. Related compounds The acene series have the consecutive rings linked in a linear chain, but other chain linkages are possible. The phenacenes have a zig-zag structure and the helicenes have a helical structure. Benz[a]anthracene, an isomer of tetracene, has three rings connected in a line and one ring connected at an angle. References Conductive polymers Organic semiconductors
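As a quick arithmetic check of the general formula C4n+2H2n+4, a minimal sketch (Python, purely illustrative; the function name is invented for the example):

```python
def acene_formula(n: int) -> str:
    """Return the molecular formula C(4n+2)H(2n+4) of the linear acene with n fused rings."""
    if n < 1:
        raise ValueError("an acene needs at least one ring")
    return f"C{4 * n + 2}H{2 * n + 4}"

# n = 1 gives C6H6 (benzene), n = 2 gives C10H8 (naphthalene),
# n = 3 gives C14H10 (anthracene), n = 5 gives C22H14 (pentacene).
for n in (1, 2, 3, 5):
    print(n, acene_formula(n))
```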
Acene
[ "Chemistry" ]
541
[ "Semiconductor materials", "Molecular electronics", "Conductive polymers", "Organic semiconductors" ]
15,743,046
https://en.wikipedia.org/wiki/Faraday%20efficiency
In electrochemistry, Faraday efficiency (also called faradaic efficiency, faradaic yield, coulombic efficiency, or current efficiency) describes the efficiency with which charge (electrons) is transferred in a system facilitating an electrochemical reaction. The word "Faraday" in this term has two interrelated aspects: first, the historic unit for charge is the faraday (F), but it has since been replaced by the coulomb (C); and secondly, the related Faraday constant (F) correlates charge with moles of matter and electrons (amount of substance). This phenomenon was originally understood through Michael Faraday's work and expressed in his laws of electrolysis. Sources of faradaic loss Faradaic losses are experienced by both electrolytic and galvanic cells when electrons or ions participate in unwanted side reactions. These losses appear as heat and/or chemical byproducts. An example can be found in the oxidation of water to oxygen at the positive electrode in electrolysis. Hydrogen peroxide can also be produced. The fraction of electrons so diverted represents a faradaic loss and varies from one apparatus to another. Even when the proper electrolysis products are produced, losses can still occur if the products are permitted to recombine. During water electrolysis, the desired products (H2 and O2) could recombine to form water. This could realistically happen in the presence of catalytic materials such as platinum or palladium commonly used as electrodes. Failure to account for this Faraday-efficiency effect has been identified as the cause of the misidentification of positive results in cold fusion experiments. Proton exchange membrane fuel cells provide another example of faradaic losses when some of the electrons separated from hydrogen at the anode leak through the membrane and reach the cathode directly instead of passing through the load and performing useful work. Ideally, the electrolyte membrane would be a perfect insulator and prevent this from happening. An especially familiar example of faradaic loss is the self-discharge that limits battery shelf-life. Methods of measuring faradaic loss Faradaic efficiency of a cell design is usually measured through bulk electrolysis where a known quantity of reagent is stoichiometrically converted to product, as measured by the current passed. This result is then compared to the observed quantity of product measured through another analytical method. Faradaic loss vs. voltage and energy efficiency Faradaic loss is only one form of energy loss in an electrochemical system. Another is overpotential, the difference between the theoretical and actual electrode voltages needed to drive the reaction at the desired rate. Even a rechargeable battery with 100% faradaic efficiency requires charging at a higher voltage than it produces during discharge, so its overall energy efficiency is the product of voltage efficiency and faradaic efficiency. Voltage efficiencies below 100% reflect the thermodynamic irreversibility of every real-world chemical reaction. References Electrochemistry
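The bulk-electrolysis measurement described above amounts to comparing the product found by an independent analytical method with the product expected from the charge passed. A minimal sketch of that calculation (Python, illustrative only; it assumes a single product with a known electron count z, and the names are invented for the example):

```python
F = 96485.332  # Faraday constant, C/mol

def faradaic_efficiency(charge_passed_C: float, moles_product: float,
                        electrons_per_mole: int) -> float:
    """Ratio of product actually measured to the product expected from the charge passed.

    charge_passed_C    total charge through the cell, in coulombs
    moles_product      product quantified by an independent analytical method
    electrons_per_mole electrons required per mole of product (z)
    """
    moles_expected = charge_passed_C / (electrons_per_mole * F)
    return moles_product / moles_expected

# Example: 1930 C passed during water electrolysis (z = 2 for H2) would ideally
# yield about 0.010 mol of H2; measuring 0.0095 mol implies ~95% faradaic efficiency.
print(faradaic_efficiency(1930.0, 0.0095, 2))
```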
Faraday efficiency
[ "Chemistry" ]
620
[ "Electrochemistry" ]
15,750,275
https://en.wikipedia.org/wiki/Split-Hopkinson%20pressure%20bar
The split-Hopkinson pressure bar (SHPB), named after Bertram Hopkinson, sometimes also called a Kolsky bar, is an apparatus for testing the dynamic stress–strain response of materials. History The Hopkinson pressure bar was first suggested by Bertram Hopkinson in 1914 as a way to measure stress pulse propagation in a metal bar. Later, in 1949 Herbert Kolsky refined Hopkinson's technique by using two Hopkinson bars in series, now known as the split-Hopkinson bar, to measure stress and strain, incorporating advancements in the cathode ray oscilloscope in conjunction with electrical condenser units to record the pressure wave propagation in the pressure bars as pioneered by Rhisiart Morgan Davies a year earlier in 1948. Later modifications have allowed for tensile, compression, and torsion testing. Operation Although there are various setups and techniques currently in use for the split-Hopkinson pressure bar, the underlying principles for the test and measurement are the same. The specimen is placed between the ends of two straight bars, called the incident bar and the transmitted bar. At the end of the incident bar (some distance away from the specimen, typically at the far end), a stress wave is created which propagates through the bar toward the specimen. This wave is referred to as the incident wave, and upon reaching the specimen, splits into two smaller waves. One of these, the transmitted wave, travels through the specimen and into the transmitted bar, causing plastic deformation in the specimen. The other wave, called the reflected wave, is reflected away from the specimen and travels back down the incident bar. Most modern setups use strain gauges on the bars to measure strains caused by the waves. Assuming deformation in the specimen is uniform, the stress and strain can be calculated from the amplitudes of the incident, transmitted, and reflected waves. Compression testing For compression testing, two symmetrical bars are situated in series, with the sample in between. The incident bar is struck by a striker bar during testing. The striker bar is fired from a gas gun. The transmitted bar collides with a momentum trap (typically a block of soft metal). Strain gauges are mounted on both the incident and transmitted bars. Tension testing Tension testing in a Split Hopkinson pressure bar (SHPB) is more complex due to a variation of loading methods and specimen attachment to the incident and transmission bar. The first tension bar was designed and tested by Harding et al. in 1960; the design involved a hollow weight bar that was connected to a yoke, with a threaded specimen inside the weight bar. A tensile wave was created by impacting the weight bar with a ram and having the initial compression wave reflect as a tensile wave off the free end. Another breakthrough in the SHPB design was made by Nichols, who used a typical compression setup and threaded metallic specimens on both the incident and transmission ends, while placing a composite collar over the specimen. The specimen had a snug fit on the incident and transmission side in order to bypass an initial compression wave. Nichols' setup would create an initial compression wave by an impact in the incident end with a striker, but when the compression wave reached the specimen, the threads would not be loaded. The compression wave would ideally pass through the composite collar and then reflect off the free end in tension. The tensile wave would then pull on the specimen. 
The next loading method was revolutionized by Ogawa in 1984. A hollow striker was used to impact a flange that is threaded to end on an incident bar. This striker was propelled by using either a gas gun or a rotating disk. The specimen was once again attached to the incident and transmission bar via threading. Torsion testing As with tension testing, there exist a variety of methods for specimen attachment and loading when subjecting materials to torsion on a SHPB. One way of applying loading called the stored-torque method involves clamping the midsection of the incident bar while a torque is applied to the free end. The incident wave is created by suddenly releasing the clamp, which sends a torsion wave toward the specimen. Another loading technique known as explosive-loading uses explosive charges on the free end of the incident bar to create the incident wave. This method is particularly sensitive to error because each charge must apply an equal impulse to the incident bar (to create pure torsion without bending) and must both detonate simultaneously. Explosive-loading is also unlikely to produce clean incident waves, which may cause uneven strain rates throughout the test. This method however has the advantage of having a very small rise time as compared to the stored-torque method. See also List of historic mechanical engineering landmarks References External links Papers on Theory of Split Hopkinson Bar Symposium on Experimental Investigation of the Behavior of Materials at High Strain Rates « Kolsky Bar – Fifty Years Later » A Brochure on a Resource for Split Hopkinson Bar Testing and Building for Research Facilities. Materials science Materials testing Fracture mechanics
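The wave analysis mentioned under Operation is commonly written as the one-wave (Kolsky) relations: specimen strain rate −2·c0·εr(t)/Ls, specimen strain as the time integral of that rate, and specimen stress E·(A/As)·εt(t), where c0 is the elastic wave speed in the bars, εr and εt the reflected and transmitted strain signals, Ls the specimen length, and A and As the bar and specimen cross-sections. The sketch below is a minimal illustration under the usual assumptions (elastic bars, uniform specimen deformation, time-aligned gauge signals); the function and variable names are invented for the example and do not describe any particular apparatus.

```python
import numpy as np

def shpb_one_wave(eps_r, eps_t, dt, bar_E, bar_density, bar_area,
                  spec_area, spec_length):
    """One-wave SHPB analysis from reflected and transmitted strain signals.

    eps_r, eps_t  reflected and transmitted strain histories (1-D arrays)
    dt            sampling interval of the strain gauge signals, s
    Returns the specimen strain-rate, strain, and stress histories.
    """
    c0 = np.sqrt(bar_E / bar_density)             # elastic wave speed in the bars
    strain_rate = -2.0 * c0 * np.asarray(eps_r) / spec_length
    strain = np.cumsum(strain_rate) * dt          # time integral of the strain rate
    stress = bar_E * (bar_area / spec_area) * np.asarray(eps_t)
    return strain_rate, strain, stress
```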
Split-Hopkinson pressure bar
[ "Physics", "Materials_science", "Engineering" ]
1,013
[ "Structural engineering", "Applied and interdisciplinary physics", "Fracture mechanics", "Materials science", "Materials testing", "nan", "Materials degradation" ]
18,620,088
https://en.wikipedia.org/wiki/Hydrogen%20safety
Hydrogen safety covers the safe production, handling and use of hydrogen, particularly hydrogen gas fuel and liquid hydrogen. Hydrogen possesses NFPA 704's highest rating of four on the flammability scale because it is flammable when mixed even in small amounts with ordinary air. Ignition can occur at a volumetric ratio of hydrogen to air as low as 4% due to the oxygen in the air and the simplicity and chemical properties of the reaction. However, hydrogen has no rating for innate hazard for reactivity or toxicity. The storage and use of hydrogen poses unique challenges due to its ease of leaking as a gaseous fuel, low-energy ignition, wide range of combustible fuel-air mixtures, buoyancy, and its ability to embrittle metals that must be accounted for to ensure safe operation. Liquid hydrogen poses additional challenges due to its increased density and the extremely low temperatures needed to keep it in liquid form. Moreover, its demand and use in industry—as rocket fuel, alternative energy storage source, coolant for electric generators in power stations, a feedstock in industrial and chemical processes including production of ammonia and methanol, etc.—has continued to increase, which has made safety protocols for producing, storing, transferring, and using hydrogen increasingly important. Hydrogen has one of the widest explosive/ignition mix ranges with air of all the gases, with few exceptions such as acetylene, silane, and ethylene oxide, and in terms of minimum necessary ignition energy and mixture ratios has extremely low requirements for an explosion to occur. This means that whatever the mix proportion between air and hydrogen, when ignited in an enclosed space a hydrogen leak will most likely lead to an explosion, not a mere flame. There are many codes and standards regarding hydrogen safety in storage, transport, and use. These range from federal regulations to ANSI/AIAA, NFPA, and ISO standards. The Canadian Hydrogen Safety Program concluded that hydrogen fueling is as safe as, or safer than, compressed natural gas (CNG) fueling. Prevention There are a number of items to consider to help design systems and procedures to avoid accidents when dealing with hydrogen, as one of the primary dangers of hydrogen is that it is extremely flammable. Inerting and purging Inerting chambers and purging gas lines are important standard safety procedures to take when transferring hydrogen. In order to properly inert or purge, the flammability limits must be taken into account, and hydrogen's are very different from other kinds of gases. At normal atmospheric pressure the flammable range of hydrogen in air is 4% to 75% by volume, and in oxygen it is 4% to 94%, while the limits of the detonation potential of hydrogen in air are 18.3% to 59% by volume. In fact, these flammability limits can often be more stringent than this, as the turbulence during a fire can cause a deflagration to transition into a detonation. For comparison, the deflagration limit of gasoline in air is 1.4–7.6%, and of acetylene in air, 2.5–82%. Therefore, when equipment is open to air before or after a transfer of hydrogen, there are unique conditions to take into consideration that might have otherwise been safe with transferring other kinds of gases. Incidents have occurred because inerting or purging was not sufficient, or because the introduction of air in the equipment was underestimated (e.g., when adding powders), resulting in an explosion. 
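The volume-percent limits quoted above can be illustrated with a toy classification (Python; purely illustrative, not a substitute for calibrated gas monitoring or a proper purge procedure; the limit values are the ones stated in the text):

```python
def hydrogen_mixture_hazard(vol_percent_h2_in_air: float) -> str:
    """Classify a hydrogen-in-air mixture against the limits quoted above.

    Flammable range in air at atmospheric pressure: ~4-75 % H2 by volume;
    detonable range: ~18.3-59 % by volume.
    """
    c = vol_percent_h2_in_air
    if 18.3 <= c <= 59.0:
        return "within detonable range"
    if 4.0 <= c <= 75.0:
        return "within flammable range"
    return "outside flammable range"

for c in (1.0, 10.0, 30.0, 80.0):
    print(c, hydrogen_mixture_hazard(c))
```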
For this reason, inerting or purging procedures and equipment are often unique to hydrogen, and often the fittings or marking on a hydrogen line should be completely different to ensure that this and other processes are properly followed, as many explosions have happened simply because a hydrogen line was accidentally plugged into a main line or because the hydrogen line was confused with another. Ignition source management The minimum ignition energy of hydrogen in air is one of the lowest among known substances at 0.02 mJ, and hydrogen-air mixtures can ignite with 1/10 the effort of igniting gasoline-air mixtures. Because of this, any possible ignition source has to be scrutinized. Any electrical device, bond, or ground should meet applicable hazardous area classification requirements. Any potential sources (like some ventilation system designs) for static electricity build-up should likewise be minimized, e.g. through antistatic devices. Hot-work procedures must be robust, comprehensive, and well-enforced; and they should purge and ventilate high areas and sample the atmosphere before work. Ceiling-mounted equipment should likewise meet hazardous area requirements (NFPA 497). Finally, rupture discs should not be used as this has been a common ignition source for multiple explosions and fires. Instead, other pressure relief systems such as a relief valve should be used. Mechanical integrity and reactive chemistry There are four main chemical properties to account for when dealing with hydrogen that can come into contact with other materials even in normal atmospheric pressures and temperatures: The chemistry of hydrogen is very different from that of traditional chemicals, e.g., in its oxidation behaviour in ambient environments, and neglecting this unique chemistry has caused issues at some chemical plants. Another aspect to consider is that hydrogen can be generated as a byproduct of a different reaction, a possibility that may be overlooked, e.g. zirconium and steam creating a source of hydrogen. This danger can be circumvented somewhat via the use of passive autocatalytic recombiners. Another major issue to consider is the chemical compatibility of hydrogen with other common building materials like steel. Because of hydrogen embrittlement, material compatibility with hydrogen is specially considered. These considerations can further change because of special reactions at high temperatures. The diffusivity of hydrogen is very different from ordinary gases, and therefore gasketing materials have to be chosen carefully. The buoyant forces and stresses on mechanical bodies involved are often reversed relative to those for standard gases. For example, because of buoyancy, stresses are often pronounced near the top of a large storage tank. All four of these factors are considered during the initial design of a system using hydrogen, and this is typically accomplished by limiting the contact between susceptible metals and hydrogen through spacing, electroplating, surface cleaning, material choice, and quality assurance during manufacturing, welding, and installation. Otherwise, hydrogen damage can be managed and detected by specialty monitoring equipment. Leaks and flame detection systems Locations of hydrogen sources and piping have to be chosen with care. Since hydrogen is a lighter-than-air gas, it collects under roofs and overhangs (typically referred to as trapping sites), where it forms an explosion hazard. 
Many individuals are familiar with protecting plants from heavier-than-air vapors, but are unfamiliar with "looking up", and is therefore of particular note. It can also enter pipes and can follow them to their destinations. Because of this, hydrogen pipes should be well-labeled and located above other pipes to prevent this occurrence. Even with proper design, hydrogen leaks can support combustion at very low flow rates, as low as 4 micrograms/s. To this end, detection is important. Hydrogen sensors or a katharometer allow for rapid detection of hydrogen leaks to ensure that the hydrogen can be vented and the source of the leak tracked down. Around certain pipes or locations special tapes can be added for hydrogen detection purposes. A traditional method is to add a hydrogen odorant with the gas as is common with natural gas. In fuel cell applications these odorants can contaminate the fuel cells, but researchers are investigating other methods that might be used for hydrogen detection: tracers, new odorant technology, advanced sensors, and others. While hydrogen flames can be hard to see with the naked eye (it can have a so-called "invisible flame"), they show up readily on UV/IR flame detectors. More recently Multi IR detectors have been developed, which have even faster detection on hydrogen-flames. This is quite important in fighting hydrogen fires, as the preferred method of fighting a fire is stopping the source of the leak, as in certain cases (namely, cryogenic hydrogen) dousing the source directly with water may cause icing, which in turn may cause a secondary rupture. Ventilation and flaring Aside from flammability concerns, in enclosed spaces, hydrogen can also act as an asphyxiant gas. Therefore, one should make sure to have proper ventilation to deal with both issues should they arise, as it is generally safe to simply vent hydrogen into the atmosphere. However, when placing and designing such ventilation systems, one must keep in mind that hydrogen will tend to accumulate towards the ceilings and peaks of structures, rather than the floor. Many dangers may be mitigated by the fact that hydrogen rapidly rises and often disperses before ignition. In certain emergency or maintenance situations, hydrogen can also be flared. For example, a safety feature in some hydrogen-powered vehicles is that they can flare the fuel if the tank is on fire, burning out completely with little damage to the vehicle, in contrast to the expected result in a gasoline-fueled vehicle. Inventory management and facility spacing Ideally, no fire or explosion will occur, but the facility should be designed so that if accidental ignition occurs, it will minimize additional damage. Minimum separation distances between hydrogen storage units should be considered, together with the pressure of said storage units (c.f., NFPA 2 and 55). Explosion venting should be laid out so that other parts of the facility will not be harmed. In certain situations, this translates to a roof that can be safely blown away from the rest of the structure in an explosion. Cryogenics Liquid hydrogen has a slightly different chemistry compared to other cryogenic chemicals, as trace accumulated air can easily contaminate liquid hydrogen and form an unstable mixture with detonative capabilities similar to TNT and other highly explosive materials. Because of this, liquid hydrogen requires complex storage technology such as the special thermally insulated containers and requires special handling common to all cryogenic substances. 
This is similar to, but more severe than, the situation with liquid oxygen. Even with thermally insulated containers it is difficult to keep such a low temperature, and the hydrogen will gradually leak away. Typically it will evaporate at a rate of 1% per day. The main danger with cryogenic hydrogen is what is known as BLEVE (boiling liquid expanding vapor explosion). Because hydrogen is gaseous in atmospheric conditions, the rapid phase change together with the detonation energy combine to create a more hazardous situation. A secondary danger is the fact that many materials change from being ductile to brittle at extremely cold temperatures, allowing new places for leaks to form. Human factors Along with traditional job safety training, checklists to help prevent commonly skipped steps (e.g., testing high points in the work area) are often implemented, along with instructions on the situational dangers that are inherent to working with hydrogen. Incidents Hydrogen codes and standards There exist many hydrogen codes and standards for hydrogen fuel cell vehicles, stationary fuel cell applications and portable fuel cell applications. In addition to the codes and standards for hydrogen technology products, there are codes and standards for hydrogen safety, for the safe handling of hydrogen and the storage of hydrogen. What follows is a list of some of the major codes and standards regulating hydrogen: Guidelines The current ANSI/AIAA standard for hydrogen safety guidelines is AIAA G-095-2004, Guide to Safety of Hydrogen and Hydrogen Systems. As NASA has been one of the world's largest users of hydrogen, this evolved from NASA's earlier guidelines, NSS 1740.16 (8719.16). These documents cover both the risks posed by hydrogen in its different forms and how to ameliorate them. NASA also references Safety Standard for Hydrogen and Hydrogen Systems and the Sourcebook for Hydrogen Applications. Another organization responsible for hydrogen safety guidelines is the Compressed Gas Association (CGA), which has a number of references of their own covering general hydrogen storage, piping, and venting. In 2023 CGA launched the Safe Hydrogen Project which is a collaborative global effort to develop and distribute safety information for the production, storage, transport, and use of hydrogen. See also Dissolved gas analysis Electrical equipment in hazardous areas Hydrogen economy Metallic hydrogen Oxyhydrogen Passive autocatalytic recombiner Slush hydrogen References External links Hydrogen and Fuel Cell Safety Report The International Association for Hydrogen Safety Higher Educational Programme in Hydrogen Safety Engineering Knowledge Center for Explosion and Hydrogen Safety Hydrogen Safety Process safety
Hydrogen safety
[ "Chemistry", "Engineering" ]
2,584
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
18,621,290
https://en.wikipedia.org/wiki/Earthquake-resistant%20structures
Earthquake-resistant or aseismic structures are designed to protect buildings to some or greater extent from earthquakes. While no structure can be entirely impervious to earthquake damage, the goal of earthquake engineering is to erect structures that fare better during seismic activity than their conventional counterparts. According to building codes, earthquake-resistant structures are intended to withstand the largest earthquake of a certain probability that is likely to occur at their location. This means the loss of life should be minimized by preventing collapse of the buildings for rare earthquakes while the loss of the functionality should be limited for more frequent ones. To combat earthquake destruction, the only method available to ancient architects was to build their landmark structures to last, often by making them excessively stiff and strong. Currently, there are several design philosophies in earthquake engineering, making use of experimental results, computer simulations and observations from past earthquakes to offer the required performance for the seismic threat at the site of interest. These range from appropriately sizing the structure to be strong and ductile enough to survive the shaking with an acceptable damage, to equipping it with base isolation or using structural vibration control technologies to minimize any forces and deformations. While the former is the method typically applied in most earthquake-resistant structures, important facilities, landmarks and cultural heritage buildings use the more advanced (and expensive) techniques of isolation or control to survive strong shaking with minimal damage. Examples of such applications are the Cathedral of Our Lady of the Angels and the Acropolis Museum. Trends and projects Some of the new trends and/or projects in the field of earthquake engineering structures are presented. Building materials Based on studies in New Zealand, relating to 2011 Christchurch earthquakes, precast concrete designed and installed in accordance with modern codes performed well. According to the Earthquake Engineering Research Institute, precast panel buildings had good durability during the earthquake in Armenia, compared to precast frame-panels. Earthquake shelter One Japanese construction company has developed a six-foot cubical shelter, presented as an alternative to earthquake-proofing an entire building. Concurrent shake-table testing Concurrent shake-table testing of two or more building models is a vivid, persuasive and effective way to validate earthquake engineering solutions experimentally. Thus, two wooden houses built before adoption of the 1981 Japanese Building Code were moved to E-Defense for testing. One house was reinforced to enhance its seismic resistance, while the other one was not. These two models were set on E-Defense platform and tested simultaneously. Combined vibration control solution Designed by architect Merrill W. Baird of Glendale, working in collaboration with A. C. Martin Architects of Los Angeles, the Municipal Services Building at 633 East Broadway, Glendale was completed in 1966. Prominently sited at the corner of East Broadway and Glendale Avenue, this civic building serves as a heraldic element of Glendale's civic center. In October 2004 Architectural Resources Group (ARG) was contracted by Nabih Youssef & Associates, Structural Engineers, to provide services regarding a historic resource assessment of the building due to a proposed seismic retrofit. 
In 2008, the Municipal Services Building of the City of Glendale, California was seismically retrofitted using an innovative combined vibration control solution: the existing elevated building foundation of the building was put on high damping rubber bearings. Steel plate walls system A steel plate shear wall (SPSW) consists of steel infill plates bounded by a column-beam system. When such infill plates occupy each level within a framed bay of a structure, they constitute a SPSW system. Whereas most earthquake resistant construction methods are adapted from older systems, SPSW was invented entirely to withstand seismic activity. SPSW behavior is analogous to a vertical plate girder cantilevered from its base. Similar to plate girders, the SPSW system optimizes component performance by taking advantage of the post-buckling behavior of the steel infill panels. The Ritz-Carlton/JW Marriott hotel building, a part of the LA Live development in Los Angeles, California, is the first building in Los Angeles that uses an advanced steel plate shear wall system to resist the lateral loads of strong earthquakes and winds. Kashiwazaki–Kariwa Nuclear Power Plant upgrade The Kashiwazaki–Kariwa Nuclear Power Plant, the largest nuclear generating station in the world by net electrical power rating, happened to be near the epicenter of the strongest Mw 6.6 July 2007 Chūetsu offshore earthquake. This initiated an extended shutdown for structural inspection which indicated that a greater earthquake-proofing was needed before operation could be resumed. On May 9, 2009, one unit (Unit 7) was restarted, after the seismic upgrades. The test run had to continue for 50 days. The plant had been completely shut down for almost 22 months following the earthquake. Seismic test of seven-story building A destructive earthquake struck a lone, wooden condominium in Japan. The experiment was webcast live on July 14, 2009, to yield insight on how to make wooden structures stronger and better able to withstand major earthquakes. The Miki shake at the Hyogo Earthquake Engineering Research Center is the capstone experiment of the four-year NEESWood project, which receives its primary support from the U.S. National Science Foundation Network for Earthquake Engineering Simulation (NEES) Program. "NEESWood aims to develop a new seismic design philosophy that will provide the necessary mechanisms to safely increase the height of wood-frame structures in active seismic zones of the United States, as well as mitigate earthquake damage to low-rise wood-frame structures," said Rosowsky, Department of Civil Engineering at Texas A&M University. This philosophy is based on the application of seismic damping systems for wooden buildings. The systems, which can be installed inside the walls of most wooden buildings, include strong metal frame, bracing and dampers filled with viscous fluid. Superframe earthquake proof structure The proposed system is composed of core walls, hat beams incorporated into the top-level, outer columns, and viscous dampers vertically installed between the tips of the hat beams and the outer columns. During an earthquake, the hat beams and outer columns act as outriggers and reduce the overturning moment in the core, and the installed dampers also reduce the moment and the lateral deflection of the structure. This innovative system can eliminate inner beams and inner columns on each floor, and thereby provide buildings with column-free floor space even in highly seismic regions. 
Earthquake architecture The term 'seismic architecture' or 'earthquake architecture' was first introduced in 1985 by Robert Reitherman. The phrase "earthquake architecture" is used to describe a degree of architectural expression of earthquake resistance or implication of architectural configuration, form or style in earthquake resistance. It is also used to describe buildings in which seismic design considerations impacted its architecture. It may be considered a new aesthetic approach in designing structures in seismic prone areas. History An article in Scientific American from May 1884, "Buildings that Resist Earthquakes" described early engineering efforts such as Shōsōin. Before building codes were improved, door frames were regarded as the most reinforced element of buildings and the safest place to be under during an earthquake. This is no longer general advice, despite a common misconception to the contrary. See also Earthquake Baroque Emergency management Geotechnical engineering Seismic response of landfill Seismic retrofit Tsunami-proof building References External links Design Discussion Primer – Seismic Events from BC Housing Management Commission – overview of resilient building design strategies. Building Building codes Earthquake engineering Seismic vibration control Structural engineering Sustainable building
Earthquake-resistant structures
[ "Technology", "Engineering" ]
1,529
[ "Structural engineering", "Sustainable building", "Building", "Building engineering", "Structural system", "Construction", "Civil engineering", "Seismic vibration control", "Building codes", "Earthquake engineering" ]
23,868,856
https://en.wikipedia.org/wiki/Reynolds%20number
In fluid dynamics, the Reynolds number (Re) is a dimensionless quantity that helps predict fluid flow patterns in different situations by measuring the ratio between inertial and viscous forces. At low Reynolds numbers, flows tend to be dominated by laminar (sheet-like) flow, while at high Reynolds numbers, flows tend to be turbulent. The turbulence results from differences in the fluid's speed and direction, which may sometimes intersect or even move counter to the overall direction of the flow (eddy currents). These eddy currents begin to churn the flow, using up energy in the process, which for liquids increases the chances of cavitation. The Reynolds number has wide applications, ranging from liquid flow in a pipe to the passage of air over an aircraft wing. It is used to predict the transition from laminar to turbulent flow and is used in the scaling of similar but different-sized flow situations, such as between an aircraft model in a wind tunnel and the full-size version. The predictions of the onset of turbulence and the ability to calculate scaling effects can be used to help predict fluid behavior on a larger scale, such as in local or global air or water movement, and thereby the associated meteorological and climatological effects. The concept was introduced by George Stokes in 1851, but the Reynolds number was named by Arnold Sommerfeld in 1908 after Osborne Reynolds who popularized its use in 1883 (an example of Stigler's law of eponymy). Definition The Reynolds number is the ratio of inertial forces to viscous forces within a fluid that is subjected to relative internal movement due to different fluid velocities. A region where these forces change behavior is known as a boundary layer, such as the bounding surface in the interior of a pipe. A similar effect is created by the introduction of a stream of high-velocity fluid into a low-velocity fluid, such as the hot gases emitted from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which tends to inhibit turbulence. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions and is a guide to when turbulent flow will occur in a particular situation. This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft, and its full-size version. Such scaling is not linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. With respect to laminar and turbulent flow regimes: laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion; turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities. The Reynolds number is defined as Re = ρuL/μ = uL/ν, where: ρ is the density of the fluid (SI units: kg/m3), u is the flow speed (m/s), L is a characteristic length (m), μ is the dynamic viscosity of the fluid (Pa·s or N·s/m2 or kg/(m·s)), and ν = μ/ρ is the kinematic viscosity of the fluid (m2/s). The Reynolds number can be defined for several different situations where a fluid is in relative motion to a surface. 
These definitions generally include the fluid properties of density and viscosity, plus a velocity and a characteristic length or characteristic dimension (L in the above equation). This dimension is a matter of convention—for example radius and diameter are equally valid to describe spheres or circles, but one is chosen by convention. For aircraft or ships, the length or width can be used. For flow in a pipe, or for a sphere moving in a fluid, the internal diameter is generally used today. Other shapes such as rectangular pipes or non-spherical objects have an equivalent diameter defined. For fluids of variable density such as compressible gases or fluids of variable viscosity such as non-Newtonian fluids, special rules apply. The velocity may also be a matter of convention in some circumstances, notably stirred vessels. In practice, matching the Reynolds number is not on its own sufficient to guarantee similitude. Fluid flow is generally chaotic, and very small changes to shape and surface roughness of bounding surfaces can result in very different flows. Nevertheless, Reynolds numbers are a very important guide and are widely used. Derivation If we know that the relevant physical quantities in a physical system are only the density ρ, the flow speed u, the characteristic length L and the dynamic viscosity μ, then the Reynolds number is essentially fixed by the Buckingham π theorem. In detail, since there are 4 quantities ρ, u, L, μ, but they have only 3 dimensions (length, time, mass), we can consider products of the form ρ^a u^b L^c μ^d, where a, b, c, d are real numbers. Setting the three dimensions of this product to zero, we obtain 3 independent linear constraints, so the solution space has 1 dimension, and it is spanned by the vector (1, 1, 1, −1). Thus, any dimensionless quantity constructed out of ρ, u, L, μ is a function of ρuL/μ, the Reynolds number. Alternatively, we can take the incompressible Navier–Stokes equations (convective form): ρ(∂v/∂t + (v·∇)v) = −∇p + μ∇²v + ρg. Remove the gravity term ρg; the inertial force ρ(v·∇)v on the left side is then of order ρu²/L, and the viscous force μ∇²v of order μu/L². Their ratio has the order of ρuL/μ, the Reynolds number. This argument is written out in detail on the Scallop theorem page. Alternative derivation The Reynolds number can be obtained when one uses the nondimensional form of the incompressible Navier–Stokes equations for a Newtonian fluid expressed in terms of the Lagrangian derivative: ρ Dv/Dt = −∇p + μ∇²v + ρf. Each term in the above equation has the units of a "body force" (force per unit volume) with the same dimensions of a density times an acceleration. Each term is thus dependent on the exact measurements of a flow. When one renders the equation nondimensional, that is when we multiply it by a factor with inverse units of the base equation, we obtain a form that does not depend directly on the physical sizes. One possible way to obtain a nondimensional equation is to multiply the whole equation by the factor L/(ρU²), where U is the mean velocity relative to the fluid (m/s), L is the characteristic length (m), and ρ is the fluid density (kg/m3). If we now set v′ = v/U, p′ = p/(ρU²), f′ = fL/U², t′ = tU/L and ∇′ = L∇, we can rewrite the Navier–Stokes equation without dimensions: Dv′/Dt′ = −∇′p′ + (1/Re)∇′²v′ + f′, where the term 1/Re = μ/(ρUL). Finally, dropping the primes for ease of reading: Dv/Dt = −∇p + (1/Re)∇²v + f. This is why mathematically all Newtonian, incompressible flows with the same Reynolds number are comparable. Notice also that in the above equation, the viscous terms vanish for Re → ∞. Thus flows with high Reynolds numbers are approximately inviscid in the free stream. History Osborne Reynolds famously studied the conditions in which the flow of fluid in pipes transitioned from laminar flow to turbulent flow. 
In his 1883 paper Reynolds described the transition from laminar to turbulent flow in a classic experiment in which he examined the behaviour of water flow under different flow velocities using a small stream of dyed water introduced into the centre of clear water flow in a larger pipe. The larger pipe was glass so the behaviour of the layer of the dyed stream could be observed. At the end of this pipe, there was a flow control valve used to vary the water velocity inside the tube. When the velocity was low, the dyed layer remained distinct throughout the entire length of the large tube. When the velocity was increased, the layer broke up at a given point and diffused throughout the fluid's cross-section. The point at which this happened was the transition point from laminar to turbulent flow. From these experiments came the dimensionless Reynolds number for dynamic similarity—the ratio of inertial forces to viscous forces. Reynolds also proposed what is now known as the Reynolds averaging of turbulent flows, where quantities such as velocity are expressed as the sum of mean and fluctuating components. Such averaging allows for 'bulk' description of turbulent flow, for example using the Reynolds-averaged Navier–Stokes equations. Flow in a pipe For flow in a pipe or tube, the Reynolds number is generally defined as Re = ρuD_H/μ = uD_H/ν = QD_H/(νA) = ṁD_H/(μA), where D_H is the hydraulic diameter of the pipe (the inside diameter if the pipe is circular) (m), Q is the volumetric flow rate (m3/s), A is the pipe's cross-sectional area (A = πD²/4 for a circular pipe) (m2), u is the mean velocity of the fluid (m/s), μ (mu) is the dynamic viscosity of the fluid (Pa·s = N·s/m2 = kg/(m·s)), ν (nu) is the kinematic viscosity (ν = μ/ρ) (m2/s), ρ (rho) is the density of the fluid (kg/m3), and ṁ is the mass flowrate of the fluid (kg/s). For shapes such as squares, rectangular or annular ducts where the height and width are comparable, the characteristic dimension for internal-flow situations is taken to be the hydraulic diameter, D_H, defined as D_H = 4A/P, where A is the cross-sectional area, and P is the wetted perimeter. The wetted perimeter for a channel is the total perimeter of all channel walls that are in contact with the flow. This means that the length of the channel exposed to air is not included in the wetted perimeter. For a circular pipe, the hydraulic diameter is exactly equal to the inside pipe diameter: D_H = D. For an annular duct, such as the outer channel in a tube-in-tube heat exchanger, the hydraulic diameter can be shown algebraically to reduce to D_H = D_o − D_i, where D_o is the inside diameter of the outer pipe, and D_i is the outside diameter of the inner pipe. For calculation involving flow in non-circular ducts, the hydraulic diameter can be substituted for the diameter of a circular duct, with reasonable accuracy, if the aspect ratio AR of the duct cross-section remains in the range 1/4 < AR < 4. Laminar–turbulent transition In boundary layer flow over a flat plate, experiments confirm that, after a certain length of flow, a laminar boundary layer will become unstable and turbulent. This instability occurs across different scales and with different fluids, usually when Re_x ≈ 5 × 10^5, where x is the distance from the leading edge of the flat plate, and the flow velocity is the freestream velocity of the fluid outside the boundary layer. For flow in a pipe of diameter D, experimental observations show that for "fully developed" flow, laminar flow occurs when Re_D < 2300 and turbulent flow occurs when Re_D > 2900. At the lower end of this range, a continuous turbulent-flow will form, but only at a very long distance from the inlet of the pipe. 
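As a concrete illustration of the pipe-flow definition and the transition thresholds just quoted, a minimal sketch (Python; the fluid properties in the example are approximate values for water and are assumptions, not values taken from the text):

```python
def pipe_reynolds_number(density, velocity, hydraulic_diameter, dynamic_viscosity):
    """Re = rho * u * D_H / mu for fully developed flow in a duct."""
    return density * velocity * hydraulic_diameter / dynamic_viscosity

def flow_regime(re):
    """Classify a pipe flow using the transition thresholds quoted above."""
    if re < 2300:
        return "laminar"
    if re > 2900:
        return "turbulent"
    return "transitional (intermittent)"

# Water (rho ~ 998 kg/m^3, mu ~ 1.0e-3 Pa.s) at 0.05 m/s in a 25 mm pipe:
re = pipe_reynolds_number(998.0, 0.05, 0.025, 1.0e-3)
print(round(re), flow_regime(re))   # ~1248 -> laminar
```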
The flow in between will begin to transition from laminar to turbulent and then back to laminar at irregular intervals, called intermittent flow. This is due to the different speeds and conditions of the fluid in different areas of the pipe's cross-section, depending on other factors such as pipe roughness and flow uniformity. Laminar flow tends to dominate in the fast-moving center of the pipe while slower-moving turbulent flow dominates near the wall. As the Reynolds number increases, the continuous turbulent-flow moves closer to the inlet and the intermittency in between increases, until the flow becomes fully turbulent at > 2900. This result is generalized to non-circular channels using the hydraulic diameter, allowing a transition Reynolds number to be calculated for other shapes of channel. These transition Reynolds numbers are also called critical Reynolds numbers, and were studied by Osborne Reynolds around 1895. The critical Reynolds number is different for every geometry. Flow in a wide duct For a fluid moving between two plane parallel surfaces—where the width is much greater than the space between the plates—then the characteristic dimension is equal to the distance between the plates. This is consistent with the annular duct and rectangular duct cases above, taken to a limiting aspect ratio. Flow in an open channel For calculating the flow of liquid with a free surface, the hydraulic radius must be determined. This is the cross-sectional area of the channel divided by the wetted perimeter. For a semi-circular channel, it is a quarter of the diameter (in case of full pipe flow). For a rectangular channel, the hydraulic radius is the cross-sectional area divided by the wetted perimeter. Some texts then use a characteristic dimension that is four times the hydraulic radius, chosen because it gives the same value of for the onset of turbulence as in pipe flow, while others use the hydraulic radius as the characteristic length-scale with consequently different values of for transition and turbulent flow. Flow around airfoils Reynolds numbers are used in airfoil design to (among other things) manage "scale effect" when computing/comparing characteristics (a tiny wing, scaled to be huge, will perform differently). Fluid dynamicists define the chord Reynolds number , where is the flight speed, is the chord length, and is the kinematic viscosity of the fluid in which the airfoil operates, which is for the atmosphere at sea level. In some special studies a characteristic length other than chord may be used; rare is the "span Reynolds number", which is not to be confused with span-wise stations on a wing, where chord is still used. Object in a fluid The Reynolds number for an object moving in a fluid, called the particle Reynolds number and often denoted , characterizes the nature of the surrounding flow and its fall velocity. In viscous fluids Where the viscosity is naturally high, such as polymer solutions and polymer melts, flow is normally laminar. The Reynolds number is very small and Stokes' law can be used to measure the viscosity of the fluid. Spheres are allowed to fall through the fluid and they reach the terminal velocity quickly, from which the viscosity can be determined. The laminar flow of polymer solutions is exploited by animals such as fish and dolphins, who exude viscous solutions from their skin to aid flow over their bodies while swimming. 
It has been used in yacht racing by owners who want to gain a speed advantage by pumping a polymer solution such as low molecular weight polyoxyethylene in water, over the wetted surface of the hull. It is, however, a problem for mixing polymers, because turbulence is needed to distribute fine filler (for example) through the material. Inventions such as the "cavity transfer mixer" have been developed to produce multiple folds into a moving melt so as to improve mixing efficiency. The device can be fitted onto extruders to aid mixing. Sphere in a fluid For a sphere in a fluid, the characteristic length-scale is the diameter of the sphere and the characteristic velocity is that of the sphere relative to the fluid some distance away from the sphere, such that the motion of the sphere does not disturb that reference parcel of fluid. The density and viscosity are those belonging to the fluid. Note that purely laminar flow only exists up to = 10 under this definition. Under the condition of low , the relationship between force and speed of motion is given by Stokes' law. At higher Reynolds numbers the drag on a sphere depends on surface roughness. Thus, for example, adding dimples on the surface of a golf ball causes the boundary layer on the upstream side of the ball to transition from laminar to turbulent. The turbulent boundary layer is able to remain attached to the surface of the ball much longer than a laminar boundary and so creates a narrower low-pressure wake and hence less pressure drag. The reduction in pressure drag causes the ball to travel farther. Rectangular object in a fluid The equation for a rectangular object is identical to that of a sphere, with the object being approximated as an ellipsoid and the axis of length being chosen as the characteristic length scale. Such considerations are important in natural streams, for example, where there are few perfectly spherical grains. For grains in which measurement of each axis is impractical, sieve diameters are used instead as the characteristic particle length-scale. Both approximations alter the values of the critical Reynolds number. Fall velocity The particle Reynolds number is important in determining the fall velocity of a particle. When the particle Reynolds number indicates laminar flow, Stokes' law can be used to calculate its fall velocity or settling velocity. When the particle Reynolds number indicates turbulent flow, a turbulent drag law must be constructed to model the appropriate settling velocity. Packed bed For fluid flow through a bed, of approximately spherical particles of diameter in contact, if the voidage is and the superficial velocity is , the Reynolds number can be defined as or or The choice of equation depends on the system involved: the first is successful in correlating the data for various types of packed and fluidized beds, the second Reynolds number suits for the liquid-phase data, while the third was found successful in correlating the fluidized bed data, being first introduced for liquid fluidized bed system. Laminar conditions apply up to = 10, fully turbulent from = 2000. Stirred vessel In a cylindrical vessel stirred by a central rotating paddle, turbine or propeller, the characteristic dimension is the diameter of the agitator . The velocity is where is the rotational speed in rad per second. Then the Reynolds number is: The system is fully turbulent for values of above . 
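The fall-velocity and sphere-in-a-fluid passages above invite a simple numerical check: compute a Stokes settling velocity and then confirm that the resulting particle Reynolds number is small enough for Stokes' law to have been a legitimate choice. The grain and fluid properties below are assumed, illustrative values only.

```python
def stokes_terminal_velocity(diameter, particle_density, fluid_density,
                             dynamic_viscosity, g=9.81):
    """Stokes settling velocity g*d^2*(rho_p - rho_f)/(18*mu), valid only at low particle Reynolds number."""
    return g * diameter ** 2 * (particle_density - fluid_density) / (18.0 * dynamic_viscosity)

def particle_reynolds_number(diameter, velocity, fluid_density, dynamic_viscosity):
    return fluid_density * velocity * diameter / dynamic_viscosity

# Assumed example: a 100 micron quartz-like grain (2650 kg/m^3) settling in water.
d, rho_p, rho_f, mu = 100e-6, 2650.0, 998.0, 1.0e-3
v = stokes_terminal_velocity(d, rho_p, rho_f, mu)
re_p = particle_reynolds_number(d, v, rho_f, mu)
print(v, re_p)  # ~9e-3 m/s and Re_p ~ 0.9, comfortably below the Re ~ 10 laminar limit quoted above
```

If the computed particle Reynolds number had come out near or above that limit, the Stokes estimate would need to be replaced by a drag law appropriate to the turbulent regime, as the fall-velocity passage above notes.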
Pipe friction Pressure drops seen for fully developed flow of fluids through pipes can be predicted using the Moody diagram which plots the Darcy–Weisbach friction factor against Reynolds number and relative roughness . The diagram clearly shows the laminar, transition, and turbulent flow regimes as Reynolds number increases. The nature of pipe flow is strongly dependent on whether the flow is laminar or turbulent. Similarity of flows In order for two flows to be similar, they must have the same geometry and equal Reynolds and Euler numbers. When comparing fluid behavior at corresponding points in a model and a full-scale flow, the following holds: where is the Reynolds number for the model, and is full-scale Reynolds number, and similarly for the Euler numbers. The model numbers and design numbers should be in the same proportion, hence This allows engineers to perform experiments with reduced scale models in water channels or wind tunnels and correlate the data to the actual flows, saving on costs during experimentation and on lab time. Note that true dynamic similitude may require matching other dimensionless numbers as well, such as the Mach number used in compressible flows, or the Froude number that governs open-channel flows. Some flows involve more dimensionless parameters than can be practically satisfied with the available apparatus and fluids, so one is forced to decide which parameters are most important. For experimental flow modeling to be useful, it requires a fair amount of experience and judgment of the engineer. An example where the mere Reynolds number is not sufficient for the similarity of flows (or even the flow regime – laminar or turbulent) are bounded flows, i.e. flows that are restricted by walls or other boundaries. A classical example of this is the Taylor–Couette flow, where the dimensionless ratio of radii of bounding cylinders is also important, and many technical applications where these distinctions play an important role. Principles of these restrictions were developed by Maurice Marie Alfred Couette and Geoffrey Ingram Taylor and developed further by Floris Takens and David Ruelle. Typical values of Reynolds number Dictyostelium amoebae: ~ 1 × 10−6 Bacterium ~ 1 × 10−4 Ciliate ~ 1 × 10−1 Smallest fish ~ 1 Blood flow in brain ~ 1 × 102 Blood flow in aorta ~ 1 × 103 Onset of turbulent flow ~ 2.3 × 103 to 5.0 × 104 for pipe flow to 106 for boundary layers Typical pitch in Major League Baseball ~ 2 × 105 Person swimming ~ 4 × 106 Fastest fish ~ 1 × 108 Blue whale ~ 4 × 108 A large ship (Queen Elizabeth 2) ~ 5 × 109 Atmospheric tropical cyclone ~ 1 x 1012 Smallest scales of turbulent motion In a turbulent flow, there is a range of scales of the time-varying fluid motion. The size of the largest scales of fluid motion (sometimes called eddies) are set by the overall geometry of the flow. For instance, in an industrial smoke stack, the largest scales of fluid motion are as big as the diameter of the stack itself. The size of the smallest scales is set by the Reynolds number. As the Reynolds number increases, smaller and smaller scales of the flow are visible. In a smokestack, the smoke may appear to have many very small velocity perturbations or eddies, in addition to large bulky eddies. In this sense, the Reynolds number is an indicator of the range of scales in the flow. The higher the Reynolds number, the greater the range of scales. The largest eddies will always be the same size; the smallest eddies are determined by the Reynolds number. 
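As a brief numerical illustration of the similarity relation discussed earlier in this section: once the geometry is fixed, matching the model Reynolds number to the full-scale one dictates the test speed of a reduced-scale model. The function and the 1:10 example below are assumptions for illustration, not figures from the text.

```python
def model_test_velocity(full_scale_velocity, length_ratio, viscosity_ratio=1.0):
    """
    Velocity at which a geometrically similar model must be tested so that
    Re_model equals Re_full_scale.  length_ratio = L_full / L_model and
    viscosity_ratio = nu_model / nu_full (1.0 when the same fluid is used).
    """
    return full_scale_velocity * length_ratio * viscosity_ratio

# Assumed example: a 1:10 scale model of a vehicle cruising at 30 m/s, tested in the same air.
print(model_test_velocity(30.0, 10.0))  # 300 m/s, where compressibility (Mach number) also starts to matter
```

This is precisely the situation flagged above in which matching the Reynolds number alone can conflict with matching other dimensionless groups.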
What is the explanation for this phenomenon? A large Reynolds number indicates that viscous forces are not important at large scales of the flow. With a strong predominance of inertial forces over viscous forces, the largest scales of fluid motion are undamped—there is not enough viscosity to dissipate their motions. The kinetic energy must "cascade" from these large scales to progressively smaller scales until a level is reached for which the scale is small enough for viscosity to become important (that is, viscous forces become of the order of inertial ones). It is at these small scales where the dissipation of energy by viscous action finally takes place. The Reynolds number indicates at what scale this viscous dissipation occurs. In physiology Poiseuille's law on blood circulation in the body is dependent on laminar flow. In turbulent flow the flow rate is proportional to the square root of the pressure gradient, as opposed to its direct proportionality to pressure gradient in laminar flow. Using the definition of the Reynolds number we can see that a large diameter with rapid flow, where the density of the blood is high, tends towards turbulence. Rapid changes in vessel diameter may lead to turbulent flow, for instance when a narrower vessel widens to a larger one. Furthermore, a bulge of atheroma may be the cause of turbulent flow, where audible turbulence may be detected with a stethoscope. Complex systems Reynolds number interpretation has been extended into the area of arbitrary complex systems. Such as financial flows, nonlinear networks, etc. In the latter case, an artificial viscosity is reduced to a nonlinear mechanism of energy distribution in complex network media. Reynolds number then represents a basic control parameter that expresses a balance between injected and dissipated energy flows for an open boundary system. It has been shown that Reynolds critical regime separates two types of phase space motion: accelerator (attractor) and decelerator. High Reynolds number leads to a chaotic regime transition only in frame of strange attractor model. Relationship to other dimensionless parameters There are many dimensionless numbers in fluid mechanics. The Reynolds number measures the ratio of advection and diffusion effects on structures in the velocity field, and is therefore closely related to Péclet numbers, which measure the ratio of these effects on other fields carried by the flow, for example, temperature and magnetic fields. Replacement of the kinematic viscosity in by the thermal or magnetic diffusivity results in respectively the thermal Péclet number and the magnetic Reynolds number. These are therefore related to by-products with ratios of diffusivities, namely the Prandtl number and magnetic Prandtl number. See also Kelvin–Helmholtz instability References Footnotes Citations Sources Further reading Brezina, Jiri, 1979, Particle size and settling rate distributions of sand-sized materials: 2nd European Symposium on Particle Characterisation (PARTEC), Nürnberg, West Germany. Brezina, Jiri, 1980, Sedimentological interpretation of errors in size analysis of sands; 1st European Meeting of the International Association of Sedimentologists, Ruhr University at Bochum, Federal Republic of Germany, March 1980. Brezina, Jiri, 1980, Size distribution of sand - sedimentological interpretation; 26th International Geological Congress, Paris, July 1980, Abstracts, vol. 2. Fouz, Infaz "Fluid Mechanics," Mechanical Engineering Dept., University of Oxford, 2001, p. 
96 Hughes, Roger "Civil Engineering Hydraulics," Civil and Environmental Dept., University of Melbourne 1997, pp. 107–152 Jermy M., "Fluid Mechanics A Course Reader," Mechanical Engineering Dept., University of Canterbury, 2005, pp. d5.10. Purcell, E. M. "Life at Low Reynolds Number", American Journal of Physics vol 45, pp. 3–11 (1977) Truskey, G. A., Yuan, F, Katz, D. F. (2004). Transport Phenomena in Biological Systems Prentice Hall, pp. 7. . . Zagarola, M. V. and Smits, A. J., "Experiments in High Reynolds Number Turbulent Pipe Flow." AIAA paper #96-0654, 34th AIAA Aerospace Sciences Meeting, Reno, Nevada, January 15–18, 1996. Isobel Clark, 1977, ROKE, a Computer Program for Non-Linear Least Squares Decomposition of Mixtures of Distributions; Computer & Geosciences (Pergamon Press), vol. 3, p. 245 - 256. B. C. Colby and R. P. Christensen, 1957, Some Fundamentals of Particle Size Analysis; St. Anthony Falls Hydraulic Laboratory, Minneapolis, Minnesota, USA, Report Nr. 12/December, 55 pages. Arthur T. Corey, 1949, Influence of Shape on the Fall Velocity of Sand Grains; M. S. Thesis, Colorado Agricultural and Mechanical College, Fort Collins, Colorado, USA, December 102 pages. Joseph R. Curray, 1961, Tracing sediment masses by grain size modes; Proc. Internat. Association of Sedimentology, Report of the 21st Session Norden, Internat. Geol. Congress, p. 119 - 129. Burghard Walter Flemming & Karen Ziegler, 1995, High-resolution grain size distribution patterns and textural trends in the back-barrier environment of Spiekeroog Island (Southern North Sea); Senckenbergiana Maritima, vol. 26, No. 1+2, p. 1 - 24. Robert Louis Folk, 1962, Of skewnesses and sands; Jour. Sediment. Petrol., vol. 8, No. 3/September, p. 105 - 111 FOLK, Robert Louis & William C. WARD, 1957: Brazos River bar: a study in the significance of grain size parameters; Jour. Sediment. Petrol., vol. 27, No. 1/March, p. 3 - 26 George Herdan, M. L. Smith & W. H. Hardwick (1960): Small Particle Statistics. 2nd revised edition, Butterworths (London, Toronto, etc.), 418 pp. Douglas Inman, 1952: Measures for describing the size distribution of sediments. Jour. Sediment. Petrology, vol. 22, No. 3/September, p. 125 - 145 Miroslaw Jonasz, 1991: Size, shape, composition, and structure of microparticles from light scattering; in SYVITSKI, James P. M., 1991, Principles, Methods, and Application of Particle Size Analysis; Cambridge Univ. Press, Cambridge, 368 pp., p. 147. William C. Krumbein, 1934: Size frequency distribution of sediments; Jour. Sediment. Petrol., vol. 4, No. 2/August, p. 65 - 77. Krumbein, William Christian & Francis J. Pettijohn, 1938: Manual of Sedimentary Petrography; Appleton-Century-Crofts, Inc., New York; 549 pp. John S. McNown & Pin-Nam Lin, 1952, Sediment concentration and fall velocity; Proc. of the 2nd Midwestern Conf. on Fluid Mechanics, Ohio State University, Columbus, Ohio; State Univ. of Iowa Reprints in Engineering, Reprint No. 109/1952, p. 401 - 411. McNownn, John S. & J. Malaika, 1950, Effects of Particle Shape of Settling Velocity at Low Reynolds' Numbers; American Geophysical Union Transactions, vol. 31, No. 1/February, p. 74 - 82. Gerard V. Middleton 1967, Experiments on density and turbidity currents, III; Deposition; Canadian Jour. of Earth Science, vol. 4, p. 475 - 505 (PSI definition: p. 483 - 485). Osborne Reynolds, 1883: An experimental investigation of the circumstances which determine whether the motion of water shall be direct or sinuous, and of the law of resistance in parallel channels. Phil. 
Trans. Roy. Soc., 174, Papers, vol. 2, p. 935 - 982 E. F. Schultz, R. H. Wilde & M. L. Albertson, 1954, Influence of Shape on the Fall Velocity of Sedimentary Particles; Colorado Agricultural & Mechanical College, Fort Collins, Colorado, MRD Sediment Series, No. 5/July (CER 54EFS6), 161 pages. H. J. Skidmore, 1948, Development of a stratified-suspension technique for size-frequency analysis; Thesis, Department of Mechanics and Hydraulics, State Univ. of Iowa, p. 2 (? pages). James P. M. Syvitski, 1991, Principles, Methods, and Application of Particle Size Analysis; Cambridge Univ. Press, Cambridge, 368 pp. External links The Reynolds number - The Feynman Lectures on Physics The Reynolds Number at Sixty Symbols Reynolds mini-biography and picture of original apparatus at Manchester University. Reynolds Number Calculation Aerodynamics Convection Dimensionless numbers of fluid mechanics Dimensionless numbers of thermodynamics Fluid dynamics Piping
Reynolds number
[ "Physics", "Chemistry", "Engineering" ]
6,182
[ "Transport phenomena", "Thermodynamic properties", "Physical phenomena", "Physical quantities", "Dimensionless numbers of thermodynamics", "Building engineering", "Chemical engineering", "Convection", "Aerodynamics", "Thermodynamics", "Mechanical engineering", "Aerospace engineering", "Pipin...
23,870,096
https://en.wikipedia.org/wiki/Negative-index%20metamaterial
Negative-index metamaterial or negative-index material (NIM) is a metamaterial whose refractive index for an electromagnetic wave has a negative value over some frequency range. NIMs are constructed of periodic basic parts called unit cells, which are usually significantly smaller than the wavelength of the externally applied electromagnetic radiation. The unit cells of the first experimentally investigated NIMs were constructed from circuit board material, or in other words, wires and dielectrics. In general, these artificially constructed cells are stacked or planar and configured in a particular repeated pattern to compose the individual NIM. For instance, the unit cells of the first NIMs were stacked horizontally and vertically, resulting in a pattern that was repeated and intended (see below images). Specifications for the response of each unit cell are predetermined prior to construction and are based on the intended response of the entire, newly constructed, material. In other words, each cell is individually tuned to respond in a certain way, based on the desired output of the NIM. The aggregate response is mainly determined by each unit cell's geometry and substantially differs from the response of its constituent materials. In other words, the way the NIM responds is that of a new material, unlike the wires or metals and dielectrics it is made from. Hence, the NIM has become an effective medium. Also, in effect, this metamaterial has become an “ordered macroscopic material, synthesized from the bottom up”, and has emergent properties beyond its components. Metamaterials that exhibit a negative value for the refractive index are often referred to by any of several terminologies: left-handed media or left-handed material (LHM), backward-wave media (BW media), media with negative refractive index, double negative (DNG) metamaterials, and other similar names. Properties and characteristics Electrodynamics of media with negative indices of refraction were first studied by Russian theoretical physicist Victor Veselago from Moscow Institute of Physics and Technology in 1967. The proposed left-handed or negative-index materials were theorized to exhibit optical properties opposite to those of glass, air, and other transparent media. Such materials were predicted to exhibit counterintuitive properties like bending or refracting light in unusual and unexpected ways. However, the first practical metamaterial was not constructed until 33 years later and it does support Veselago's concepts. Currently, negative-index metamaterials are being developed to manipulate electromagnetic radiation in new ways. For example, optical and electromagnetic properties of natural materials are often altered through chemistry. With metamaterials, optical and electromagnetic properties can be engineered by changing the geometry of its unit cells. The unit cells are materials that are ordered in geometric arrangements with dimensions that are fractions of the wavelength of the radiated electromagnetic wave. Each artificial unit responds to the radiation from the source. The collective result is the material's response to the electromagnetic wave that is broader than normal. Subsequently, transmission is altered by adjusting the shape, size, and configurations of the unit cells. This results in control over material parameters known as permittivity and magnetic permeability. These two parameters (or quantities) determine the propagation of electromagnetic waves in matter. 
Therefore, controlling the values of permittivity and permeability means that the refractive index can be negative or zero as well as conventionally positive. It all depends on the intended application or desired result. So, optical properties can be expanded beyond the capabilities of lenses, mirrors, and other conventional materials. Additionally, one of the effects most studied is the negative index of refraction. Reverse propagation When a negative index of refraction occurs, propagation of the electromagnetic wave is reversed. Resolution below the diffraction limit becomes possible. This is known as subwavelength imaging. Transmitting a beam of light via an electromagnetically flat surface is another capability. In contrast, conventional materials are usually curved, and cannot achieve resolution below the diffraction limit. Also, reversing the electromagnetic waves in a material, in conjunction with other ordinary materials (including air) could result in minimizing losses that would normally occur. The reverse of the electromagnetic wave, characterized by an antiparallel phase velocity is also an indicator of negative index of refraction. Furthermore, negative-index materials are customized composites. In other words, materials are combined with a desired result in mind. Combinations of materials can be designed to achieve optical properties not seen in nature. The properties of the composite material stem from its lattice structure constructed from components smaller than the impinging electromagnetic wavelength separated by distances that are also smaller than the impinging electromagnetic wavelength. Likewise, by fabricating such metamaterials researchers are trying to overcome fundamental limits tied to the wavelength of light. The unusual and counterintuitive properties currently have practical and commercial use manipulating electromagnetic microwaves in wireless and communication systems. Lastly, research continues in the other domains of the electromagnetic spectrum, including visible light. Materials The first actual metamaterials worked in the microwave regime, or centimeter wavelengths, of the electromagnetic spectrum (about 4.3 GHz). It was constructed of split-ring resonators and conducting straight wires (as unit cells). The unit cells were sized from 7 to 10 millimeters. The unit cells were arranged in a two-dimensional (periodic) repeating pattern which produces a crystal-like geometry. Both the unit cells and the lattice spacing were smaller than the radiated electromagnetic wave. This produced the first left-handed material when both the permittivity and permeability of the material were negative. This system relies on the resonant behavior of the unit cells. Below a group of researchers develop an idea for a left-handed metamaterial that does not rely on such resonant behavior. Research in the microwave range continues with split-ring resonators and conducting wires. Research also continues in the shorter wavelengths with this configuration of materials and the unit cell sizes are scaled down. However, at around 200 terahertz issues arise which make using the split ring resonator problematic. "Alternative materials become more suitable for the terahertz and optical regimes." At these wavelengths selection of materials and size limitations become important. For example, in 2007 a 100 nanometer mesh wire design made of silver and woven in a repeating pattern transmitted beams at the 780 nanometer wavelength, the far end of the visible spectrum. 
The researchers believe this produced a negative refraction of 0.6. Nevertheless, this operates at only a single wavelength like its predecessor metamaterials in the microwave regime. Hence, the challenges are to fabricate metamaterials so that they "refract light at ever-smaller wavelengths" and to develop broad band capabilities. Artificial transmission-line-media In the metamaterial literature, medium or media refers to transmission medium or optical medium. In 2002, a group of researchers came up with the idea that in contrast to materials that depended on resonant behavior, non-resonant phenomena could surpass narrow bandwidth constraints of the wire/split-ring resonator configuration. This idea translated into a type of medium with broader bandwidth abilities, negative refraction, backward waves, and focusing beyond the diffraction limit. They dispensed with split-ring-resonators and instead used a network of L–C loaded transmission lines. In metamaterial literature this became known as artificial transmission-line media. At that time it had the added advantage of being more compact than a unit made of wires and split ring resonators. The network was both scalable (from the megahertz to the tens of gigahertz range) and tunable. It also includes a method for focusing the wavelengths of interest. By 2007 the negative refractive index transmission line was employed as a subwavelength focusing free-space flat lens. That this is a free-space lens is a significant advance. Part of prior research efforts targeted creating a lens that did not need to be embedded in a transmission line. The optical domain Metamaterial components shrink as research explores shorter wavelengths (higher frequencies) of the electromagnetic spectrum in the infrared and visible spectrums. For example, theory and experiment have investigated smaller horseshoe shaped split ring resonators designed with lithographic techniques, as well as paired metal nanorods or nanostrips, and nanoparticles as circuits designed with lumped element models Applications The science of negative-index materials is being matched with conventional devices that broadcast, transmit, shape, or receive electromagnetic signals that travel over cables, wires, or air. The materials, devices and systems that are involved with this work could have their properties altered or heightened. Hence, this is already happening with metamaterial antennas and related devices which are commercially available. Moreover, in the wireless domain these metamaterial apparatuses continue to be researched. Other applications are also being researched. These are electromagnetic absorbers such as radar-microwave absorbers, electrically small resonators, waveguides that can go beyond the diffraction limit, phase compensators, advancements in focusing devices (e.g. microwave lens), and improved electrically small antennas. In the optical frequency regime developing the superlens may allow for imaging below the diffraction limit. Other potential applications for negative-index metamaterials are optical nanolithography, nanotechnology circuitry, as well as a near field superlens (Pendry, 2000) that could be useful for biomedical imaging and subwavelength photolithography. Manipulating permittivity and permeability To describe any electromagnetic properties of a given achiral material such as an optical lens, there are two significant parameters. 
These are permittivity, , and permeability, , which allow accurate prediction of light waves traveling within materials, and electromagnetic phenomena that occur at the interface between two materials. For example, refraction is an electromagnetic phenomenon which occurs at the interface between two materials. Snell's law states that the relationship between the angle of incidence of a beam of electromagnetic radiation (light) and the resulting angle of refraction rests on the refractive indices, , of the two media (materials). The refractive index of an achiral medium is given by . Hence, it can be seen that the refractive index is dependent on these two parameters. Therefore, if designed or arbitrarily modified values can be inputs for and , then the behavior of propagating electromagnetic waves inside the material can be manipulated at will. This ability then allows for intentional determination of the refractive index. For example, in 1967, Victor Veselago analytically determined that light will refract in the reverse direction (negatively) at the interface between a material with negative refractive index and a material exhibiting conventional positive refractive index. This extraordinary material was realized on paper with simultaneous negative values for and , and could therefore be termed a double negative material. However, in Veselago's day a material which exhibits double negative parameters simultaneously seemed impossible because no natural materials exist which can produce this effect. Therefore, his work was ignored for three decades. It was nominated for the Nobel Prize later. In general the physical properties of natural materials cause limitations. Most dielectrics only have positive permittivities, > 0. Metals will exhibit negative permittivity, < 0 at optical frequencies, and plasmas exhibit negative permittivity values in certain frequency bands. Pendry et al. demonstrated that the plasma frequency can be made to occur in the lower microwave frequencies for metals with a material made of metal rods that replaces the bulk metal. However, in each of these cases permeability remains always positive. At microwave frequencies it is possible for negative μ to occur in some ferromagnetic materials. But the inherent drawback is they are difficult to find above terahertz frequencies. In any case, a natural material that can achieve negative values for permittivity and permeability simultaneously has not been found or discovered. Hence, all of this has led to constructing artificial composite materials known as metamaterials in order to achieve the desired results. Negative index of refraction due to chirality In case of chiral materials, the refractive index depends not only on permittivity and permeability , but also on the chirality parameter , resulting in distinct values for left and right circularly polarized waves, given by A negative index will occur for waves of one circular polarization if > . In this case, it is not necessary that either or both and be negative to achieve a negative index of refraction. A negative refractive index due to chirality was predicted by Pendry and Tretyakov et al., and first observed simultaneously and independently by Plum et al. and Zhang et al. in 2009. Physical properties never before produced in nature Theoretical articles were published in 1996 and 1999 which showed that synthetic materials could be constructed to purposely exhibit a negative permittivity and permeability. 
These papers, along with Veselago's 1967 theoretical analysis of the properties of negative-index materials, provided the background to fabricate a metamaterial with negative effective permittivity and permeability. See below. A metamaterial developed to exhibit negative-index behavior is typically formed from individual components. Each component responds differently and independently to a radiated electromagnetic wave as it travels through the material. Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. Composite material In the year 2000, David R. Smith's team of UCSD researchers produced a new class of composite materials by depositing a structure onto a circuit-board substrate consisting of a series of thin copper split-rings and ordinary wire segments strung parallel to the rings. This material exhibited unusual physical properties that had never been observed in nature. These materials obey the laws of physics, but behave differently from normal materials. In essence these negative-index metamaterials were noted for having the ability to reverse many of the physical properties that govern the behavior of ordinary optical materials. One of those unusual properties is the ability to reverse, for the first time, Snell's law of refraction. Until the demonstration of negative refractive index for microwaves by the UCSD team, the material had been unavailable. Advances during the 1990s in fabrication and computation abilities allowed these first metamaterials to be constructed. Thus, the "new" metamaterial was tested for the effects described by Victor Veselago 30 years earlier. Studies of this experiment, which followed shortly thereafter, announced that other effects had occurred. With antiferromagnets and certain types of insulating ferromagnets, effective negative magnetic permeability is achievable when polariton resonance exists. To achieve a negative index of refraction, however, permittivity with negative values must occur within the same frequency range. The artificially fabricated split-ring resonator is a design that accomplishes this, along with the promise of dampening high losses. With this first introduction of the metamaterial, it appears that the losses incurred were smaller than antiferromagnetic, or ferromagnetic materials. When first demonstrated in 2000, the composite material (NIM) was limited to transmitting microwave radiation at frequencies of 4 to 7 gigahertz (4.28–7.49 cm wavelengths). This range is between the frequency of household microwave ovens (~2.45 GHz, 12.23 cm) and military radars (~10 GHz, 3 cm). At demonstrated frequencies, pulses of electromagnetic radiation moving through the material in one direction are composed of constituent waves moving in the opposite direction. The metamaterial was constructed as a periodic array of copper split ring and wire conducting elements deposited onto a circuit-board substrate. The design was such that the cells, and the lattice spacing between the cells, were much smaller than the radiated electromagnetic wavelength. Hence, it behaves as an effective medium. The material has become notable because its range of (effective) permittivity εeff and permeability μeff values have exceeded those found in any ordinary material. Furthermore, the characteristic of negative (effective) permeability evinced by this medium is particularly notable, because it has not been found in ordinary materials. 
In addition, the negative values for the magnetic component is directly related to its left-handed nomenclature, and properties (discussed in a section below). The split-ring resonator (SRR), based on the prior 1999 theoretical article, is the tool employed to achieve negative permeability. This first composite metamaterial is then composed of split-ring resonators and electrical conducting posts. Initially, these materials were only demonstrated at wavelengths longer than those in the visible spectrum. In addition, early NIMs were fabricated from opaque materials and usually made of non-magnetic constituents. As an illustration, however, if these materials are constructed at visible frequencies, and a flashlight is shone onto the resulting NIM slab, the material should focus the light at a point on the other side. This is not possible with a sheet of ordinary opaque material. In 2007, the NIST in collaboration with the Atwater Lab at Caltech created the first NIM active at optical frequencies. More recently (), layered "fishnet" NIM materials made of silicon and silver wires have been integrated into optical fibers to create active optical elements. Simultaneous negative permittivity and permeability Negative permittivity εeff < 0 had already been discovered and realized in metals for frequencies all the way up to the plasma frequency, before the first metamaterial. There are two requirements to achieve a negative value for refraction. First, is to fabricate a material which can produce negative permeability μeff < 0. Second, negative values for both permittivity and permeability must occur simultaneously over a common range of frequencies. Therefore, for the first metamaterial, the nuts and bolts are one split-ring resonator electromagnetically combined with one (electric) conducting post. These are designed to resonate at designated frequencies to achieve the desired values. Looking at the make-up of the split ring, the associated magnetic field pattern from the SRR is dipolar. This dipolar behavior is notable because this means it mimics nature's atom, but on a much larger scale, such as in this case at 2.5 millimeters. Atoms exist on the scale of picometers. The splits in the rings create a dynamic where the SRR unit cell can be made resonant at radiated wavelengths much larger than the diameter of the rings. If the rings were closed, a half wavelength boundary would be electromagnetically imposed as a requirement for resonance. The split in the second ring is oriented opposite to the split in the first ring. It is there to generate a large capacitance, which occurs in the small gap. This capacitance substantially decreases the resonant frequency while concentrating the electric field. The individual SRR depicted on the right had a resonant frequency of 4.845 GHz, and the resonance curve, inset in the graph, is also shown. The radiative losses from absorption and reflection are noted to be small, because the unit dimensions are much smaller than the free space, radiated wavelength. When these units or cells are combined into a periodic arrangement, the magnetic coupling between the resonators is strengthened, and a strong magnetic coupling occurs. Properties unique in comparison to ordinary or conventional materials begin to emerge. For one thing, this periodic strong coupling creates a material, which now has an effective magnetic permeability μeff in response to the radiated-incident magnetic field. 
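The resonance behaviour described above is frequently approximated by treating each split-ring cell as an effective LC circuit, with the ring supplying inductance and the gap supplying capacitance; a larger gap capacitance then lowers the resonant frequency, in line with the text. The sketch below uses that standard approximation with invented component values, chosen only so that the result lands near the few-gigahertz resonance discussed above; it is not a model of the actual device.

```python
import math

def lc_resonant_frequency(inductance, capacitance):
    """f0 = 1 / (2*pi*sqrt(L*C)) for an effective LC model of a split-ring resonator cell."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance * capacitance))

# Invented, illustrative values: a few nanohenries of loop inductance and a
# fraction of a picofarad of gap capacitance give a low-gigahertz resonance.
L_eff = 2.5e-9                       # H
for C_gap in (0.4e-12, 0.2e-12):     # halving the gap capacitance...
    f0 = lc_resonant_frequency(L_eff, C_gap)
    print(f"{C_gap:.1e} F -> {f0 / 1e9:.2f} GHz")  # ...raises the resonant frequency by a factor of sqrt(2)
```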
Composite material passband Graphing the general dispersion curve, a region of propagation occurs from zero up to a lower band edge, followed by a gap, and then an upper passband. The presence of a 400 MHz gap between 4.2 GHz and 4.6 GHz implies a band of frequencies where μeff < 0 occurs. (Please see the image in the previous section) Furthermore, when wires are added symmetrically between the split rings, a passband occurs within the previously forbidden band of the split ring dispersion curves. That this passband occurs within a previously forbidden region indicates that the negative εeff for this region has combined with the negative μeff to allow propagation, which fits with theoretical predictions. Mathematically, the dispersion relation leads to a band with negative group velocity everywhere, and a bandwidth that is independent of the plasma frequency, within the stated conditions. Mathematical modeling and experiment have both shown that periodically arrayed conducting elements (non-magnetic by nature) respond predominantly to the magnetic component of incident electromagnetic fields. The result is an effective medium and negative μeff over a band of frequencies. The permeability was verified to be the region of the forbidden band, where the gap in propagation occurred – from a finite section of material. This was combined with a negative permittivity material, εeff < 0, to form a “left-handed” medium, which formed a propagation band with negative group velocity where previously there was only attenuation. This validated predictions. In addition, a later work determined that this first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation (see ref #). Other predicted electrodynamic effects were to be investigated in other research. Describing a left-handed material From the conclusions in the above section a left-handed material (LHM) can be defined. It is a material which exhibits simultaneous negative values for permittivity, ε, and permeability, μ, in an overlapping frequency region. Since the values are derived from the effects of the composite medium system as a whole, these are defined as effective permittivity, εeff, and effective permeability, μeff. Real values are then derived to denote the value of negative index of refraction, and wave vectors. This means that in practice losses will occur for a given medium used to transmit electromagnetic radiation such as microwave, or infrared frequencies, or visible light – for example. In this instance, real values describe either the amplitude or the intensity of a transmitted wave relative to an incident wave, while ignoring the negligible loss values. Isotropic negative index in two dimensions In the above sections first fabricated metamaterial was constructed with resonating elements, which exhibited one direction of incidence and polarization. In other words, this structure exhibited left-handed propagation in one dimension. This was discussed in relation to Veselago's seminal work 33 years earlier (1967). He predicted that intrinsic to a material, which manifests negative values of effective permittivity and permeability, are several types of reversed physics phenomena. Hence, there was then a critical need for a higher-dimensional LHMs to confirm Veselago's theory, as expected. The confirmation would include reversal of Snell's law (index of refraction), along with other reversed phenomena. 
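The reversal of Snell's law mentioned above can be shown with a few lines of arithmetic: form the refractive index from the effective permittivity and permeability, taking the negative root when both are negative, and apply Snell's law at an interface with air. The effective parameters and the incidence angle below are assumptions for illustration.

```python
import math

def refractive_index(eps_eff, mu_eff):
    """n = sqrt(eps*mu); the negative root is the physical branch when both parameters are negative."""
    n = math.sqrt(abs(eps_eff * mu_eff))
    return -n if (eps_eff < 0 and mu_eff < 0) else n

def refraction_angle_deg(n1, n2, incidence_deg):
    """Snell's law n1*sin(t1) = n2*sin(t2); a negative result means the ray bends to the same side as the incident beam."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(incidence_deg)) / n2))

# Assumed effective parameters for a double-negative slab, illuminated from air at 30 degrees.
n_nim = refractive_index(eps_eff=-1.5, mu_eff=-1.5)     # -1.5
print(n_nim, refraction_angle_deg(1.0, n_nim, 30.0))    # about -19.5 degrees: negative refraction
```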
In the beginning of 2001 the existence of a higher-dimensional structure was reported. It was two-dimensional and demonstrated by both experiment and numerical confirmation. It was an LHM, a composite constructed of wire strips mounted behind the split-ring resonators (SRRs) in a periodic configuration. It was created for the express purpose of being suitable for further experiments to produce the effects predicted by Veselago. Experimental verification of a negative index of refraction A theoretical work published in 1967 by Soviet physicist Victor Veselago showed that a refractive index with negative values is possible and that this does not violate the laws of physics. As discussed previously (above), the first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation. It was reported in May 2000. In 2001, a team of researchers constructed a prism composed of metamaterials (negative-index metamaterials) to experimentally test for negative refractive index. The experiment used a waveguide to help transmit the proper frequency and isolate the material. This test achieved its goal because it successfully verified a negative index of refraction. The experimental demonstration of negative refractive index was followed by another demonstration, in 2003, of a reversal of Snell's law, or reversed refraction. However, in this experiment negative index of refraction material is in free space from 12.6 to 13.2 GHz. Although the radiated frequency range is about the same, a notable distinction is this experiment is conducted in free space rather than employing waveguides. Furthering the authenticity of negative refraction, the power flow of a wave transmitted through a dispersive left-handed material was calculated and compared to a dispersive right-handed material. The transmission of an incident field, composed of many frequencies, from an isotropic nondispersive material into an isotropic dispersive media is employed. The direction of power flow for both nondispersive and dispersive media is determined by the time-averaged Poynting vector. Negative refraction was shown to be possible for multiple frequency signals by explicit calculation of the Poynting vector in the LHM. Fundamental electromagnetic properties of the NIM In a slab of conventional material with an ordinary refractive index – a right-handed material (RHM) – the wave front is transmitted away from the source. In a NIM the wavefront travels toward the source. However, the magnitude and direction of the flow of energy essentially remains the same in both the ordinary material and the NIM. Since the flow of energy remains the same in both materials (media), the impedance of the NIM matches the RHM. Hence, the sign of the intrinsic impedance is still positive in a NIM. Light incident on a left-handed material, or NIM, will bend to the same side as the incident beam, and for Snell's law to hold, the refraction angle should be negative. In a passive metamaterial medium this determines a negative real and imaginary part of the refractive index. Negative refractive index in left-handed materials In 1968 Victor Veselago's paper showed that the opposite directions of EM plane waves and the flow of energy was derived from the individual Maxwell curl equations. 
In ordinary optical materials, the curl equation for the electric field show a "right hand rule" for the directions of the electric field E, the magnetic induction B, and wave propagation, which goes in the direction of wave vector k. However, the direction of energy flow formed by E × H is right-handed only when permeability is greater than zero. This means that when permeability is less than zero, e.g. negative, wave propagation is reversed (determined by k), and contrary to the direction of energy flow. Furthermore, the relations of vectors E, H, and k form a "left-handed" system – and it was Veselago who coined the term "left-handed" (LH) material, which is in wide use today (2011). He contended that an LH material has a negative refractive index and relied on the steady-state solutions of Maxwell's equations as a center for his argument. After a 30-year void, when LH materials were finally demonstrated, it could be said that the designation of negative refractive index is unique to LH systems; even when compared to photonic crystals. Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities. But, negative refraction does not occur in these systems, and not yet realistically in photonic crystals. Negative refraction at optical frequencies The negative refractive index in the optical range was first demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) and by Brueck et al. (at λ = 2 μm) at nearly the same time. In 2006, a Caltech team led by Lezec, Dionne, and Atwater achieved negative refraction in the visible spectral regime. Reversed Cherenkov radiation Besides reversed values for the index of refraction, Veselago predicted the occurrence of reversed Cherenkov radiation in a left-handed medium. Whereas ordinary Cherenkov radiation is emitted in a cone around the direction in which a charged particle is travelling through the medium, reversed Cherenkov radiation is emitted in a cone around the opposite direction. Reversed Cherenkov radiation was first experimentally demonstrated indirectly in 2009, using a phased electromagnetic dipole array to model a moving charged particle. Reversed Cherenkov radiation emitted by actual charged particles was first observed in 2017. Other optics with NIMs Theoretical work, along with numerical simulations, began in the early 2000s on the abilities of DNG slabs for subwavelength focusing. The research began with Pendry's proposed "Perfect lens." Several research investigations that followed Pendry's concluded that the "Perfect lens" was possible in theory but impractical. One direction in subwavelength focusing proceeded with the use of negative-index metamaterials, but based on the enhancements for imaging with surface plasmons. In another direction researchers explored paraxial approximations of NIM slabs. Implications of negative refractive materials The existence of negative refractive materials can result in a change in electrodynamic calculations for the case of permeability μ = 1 . A change from a conventional refractive index to a negative value gives incorrect results for conventional calculations, because some properties and effects have been altered. When permeability μ has values other than 1 this affects Snell's law, the Doppler effect, the Cherenkov radiation, Fresnel's equations, and Fermat's principle. The refractive index is basic to the science of optics. 
Shifting the refractive index to a negative value may be a cause to revisit or reconsider the interpretation of some norms, or basic laws. US patent on left-handed composite media The first US patent for a fabricated metamaterial, titled "Left handed composite media" by David R. Smith, Sheldon Schultz, Norman Kroll and Richard A. Shelby, was issued in 2004. The invention achieves simultaneous negative permittivity and permeability over a common band of frequencies. The material can integrate media which is already composite or continuous, but which will produce negative permittivity and permeability within the same spectrum of frequencies. Different types of continuous or composite may be deemed appropriate when combined for the desired effect. However, the inclusion of a periodic array of conducting elements is preferred. The array scatters electromagnetic radiation at wavelengths longer than the size of the element and lattice spacing. The array is then viewed as an effective medium. See also History of metamaterials Superlens Metamaterial cloaking Photonic metamaterials Metamaterial antenna Nonlinear metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Acoustic metamaterials Metamaterial absorber Metamaterial Plasmonic metamaterials Terahertz metamaterials Tunable metamaterials Transformation optics Theories of cloaking Academic journals Metamaterials Metamaterials books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations Notes -NIST References Further reading Also see the Preprint-author's copy. External links Manipulating the Near Field with Metamaterials Slide show, with audio available, by Dr. John Pendry, Imperial College, London List of science website news stories on Left Handed Materials Metamaterials Electromagnetism 2000 in science 21st century in science 20th century in science Articles containing video clips
Negative-index metamaterial
[ "Physics", "Materials_science", "Engineering" ]
6,551
[ "Electromagnetism", "Physical phenomena", "Metamaterials", "Materials science", "Fundamental interactions" ]
23,874,520
https://en.wikipedia.org/wiki/Velocity%20dispersion
In astronomy, the velocity dispersion (σ) is the statistical dispersion of velocities about the mean velocity for a group of astronomical objects, such as an open cluster, globular cluster, galaxy, galaxy cluster, or supercluster. By measuring the radial velocities of the group's members through astronomical spectroscopy, the velocity dispersion of that group can be estimated and used to derive the group's mass from the virial theorem. Radial velocity is found by measuring the Doppler width of spectral lines of a collection of objects; the more radial velocities one measures, the more accurately one knows their dispersion. A central velocity dispersion refers to the σ of the interior regions of an extended object, such as a galaxy or cluster. The relationship between velocity dispersion and matter (or the observed electromagnetic radiation emitted by this matter) takes several forms – specific correlations – in astronomy based on the object(s) being observed. Notably, the M–σ relation applies for material orbiting many black holes, the Faber–Jackson relation for elliptical galaxies, and the Tully–Fisher relation for spiral galaxies. For example, the σ found for objects about the Milky Way's supermassive black hole (SMBH) is about 100 km/s, which provides an approximation of the mass of this SMBH. The Andromeda Galaxy (Messier 31) hosts a SMBH about 10 times larger than our own, and has a . Groups and clusters of galaxies have more disparate (contrasting in degree) velocity dispersions than smaller objects. For example, while our own poor group, the Local Group, has a , rich clusters of galaxies, such as the Coma Cluster, have a . The dwarf elliptical galaxies within Coma, as with all galaxies, have their own internal velocity dispersion for their stars, which is a , typically. Normal elliptical galaxies, by comparison, have an average . For spiral galaxies, the increase in velocity dispersion in population I stars is a gradual process which likely results from the near-random incidence of momentum exchanges, specifically dynamical friction, between individual stars and large interstellar media (gas and dust clouds) with masses greater than . Face-on spiral galaxies have a central ; slightly more if viewed edge-on. See also M–σ relation – for material circling supermassive black holes Faber–Jackson relation – for elliptical galaxies Tully–Fisher relation – for spiral galaxies References Concepts in astrophysics Celestial mechanics Equations of astronomy Extragalactic astronomy Galactic astronomy
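As a rough illustration of how a measured dispersion feeds into a mass estimate, the sketch below computes σ from a handful of line-of-sight velocities and applies the order-of-magnitude virial relation M ~ σ²R/G; the profile-dependent prefactor of order unity is deliberately omitted, and every number is invented for illustration rather than taken from the text.

```python
import statistics

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
MPC = 3.086e22      # m

def velocity_dispersion(line_of_sight_velocities):
    """Statistical dispersion (standard deviation) of the velocities about their mean."""
    return statistics.pstdev(line_of_sight_velocities)

def virial_mass_estimate(sigma, radius):
    """Order-of-magnitude virial mass M ~ sigma^2 * R / G, with the profile-dependent prefactor omitted."""
    return sigma ** 2 * radius / G

# Invented radial velocities (m/s) for a handful of group members.
sigma = velocity_dispersion([1.2e6, 0.8e6, 1.05e6, 0.95e6, 1.1e6, 0.9e6])
print(sigma / 1e3, "km/s")                               # ~132 km/s for this made-up sample
print(virial_mass_estimate(sigma, 1.0 * MPC) / M_SUN)    # ~4e12 solar masses within an assumed 1 Mpc radius
```

In practice many more members, interloper rejection and a proper mass estimator would be needed, but the scaling of the inferred mass with σ² is the point of the virial argument above.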
Velocity dispersion
[ "Physics", "Astronomy" ]
523
[ "Concepts in astrophysics", "Concepts in astronomy", "Galactic astronomy", "Classical mechanics", "Astrophysics", "Equations of astronomy", "Celestial mechanics", "Extragalactic astronomy", "Astronomical sub-disciplines" ]
23,875,274
https://en.wikipedia.org/wiki/Monosodium%20tartrate
Monosodium tartrate or sodium bitartrate is a sodium acid salt of tartaric acid. As a food additive it is used as an acidity regulator and is known by the E number E335. As an analytical reagent, it can be used in a test for the ammonium cation, with which it gives a white precipitate. See also Sodium tartrate, the disodium salt of tartaric acid References Organic sodium salts Tartrates
Monosodium tartrate
[ "Chemistry" ]
99
[ "Salts", "Organic compounds", "Organic sodium salts", "Organic compound stubs", "Organic chemistry stubs" ]
1,877,149
https://en.wikipedia.org/wiki/Permissible%20stress%20design
Permissible stress design is a design philosophy used by mechanical engineers and civil engineers. The civil designer ensures that the stresses developed in a structure due to service loads do not exceed the elastic limit. This limit is usually determined by ensuring that stresses remain within the limits through the use of factors of safety. In structural engineering, the permissible stress design approach has generally been replaced internationally by limit state design (also known as ultimate stress design, or in USA, Load and Resistance Factor Design, LRFD) as far as structural engineering is considered, except for some isolated cases. In USA structural engineering construction, allowable stress design (ASD) has not yet been completely superseded by limit state design except in the case of Suspension bridges, which changed from allowable stress design to limit state design in the 1960s. Wood, steel, and other materials are still frequently designed using allowable stress design, although LRFD is probably more commonly taught in the USA university system. In mechanical engineering design such as design of pressure equipment, the method uses the actual loads predicted to be experienced in practice to calculate stress and deflection. Such loads may include pressure thrusts and the weight of materials. The predicted stresses and deflections are compared with allowable values that have a "factor" against various failure mechanisms such as leakage, yield, ultimate load prior to plastic failure, buckling, brittle fracture, fatigue, and vibration/harmonic effects. However, the predicted stresses almost always assumes the material is linear elastic. The "factor" is sometimes called a factor of safety, although this is technically incorrect because the factor includes allowance for matters such as local stresses and manufacturing imperfections that are not specifically calculated; exceeding the allowable values is not considered to be good practice (i.e. is not "safe"). The permissible stress method is also known in some national standards as the working stress method because the predicted stresses are the unfactored stresses expected during operation of the equipment (e.g. AS1210, AS3990). This mechanical engineering approach differs from an ultimate design approach which factors up the predicted loads for comparison with an ultimate failure limit. One method factors up the predicted load, the other method factors down the failure stress. See also Allowable Strength Design Building code References Civil engineering Mechanical engineering Structural engineering
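The contrast drawn above, factoring down the failure stress versus factoring up the predicted loads, can be made concrete with two toy checks. The sketch below is illustrative only: the safety factor, load factors and resistance factor are typical textbook-style values, not values taken from any particular design code mentioned in the text.

```python
def allowable_stress_check(working_stress, yield_stress, safety_factor=1.67):
    """Permissible (working) stress check: unfactored stress vs. a factored-down capacity."""
    return working_stress <= yield_stress / safety_factor

def limit_state_check(dead_effect, live_effect, nominal_resistance,
                      gamma_dead=1.2, gamma_live=1.6, phi=0.9):
    """Limit state (LRFD-style) check: factored-up load effects vs. a reduced nominal resistance."""
    return gamma_dead * dead_effect + gamma_live * live_effect <= phi * nominal_resistance

# Invented numbers for illustration: stresses in MPa, member forces in kN.
print(allowable_stress_check(working_stress=140.0, yield_stress=250.0))                  # 140 <= ~150 -> True
print(limit_state_check(dead_effect=50.0, live_effect=40.0, nominal_resistance=150.0))   # 124 <= 135 -> True
```

The two checks fail a design for different reasons: the first when service-level stresses exceed the factored-down allowable, the second when factored load combinations exceed the reduced resistance, which mirrors the distinction drawn in the paragraph above.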
Permissible stress design
[ "Physics", "Engineering" ]
469
[ "Structural engineering", "Applied and interdisciplinary physics", "Construction", "Civil engineering", "Mechanical engineering" ]
1,877,286
https://en.wikipedia.org/wiki/Classical%20Heisenberg%20model
In statistical physics, the classical Heisenberg model, developed by Werner Heisenberg, is the n = 3 case of the n-vector model, one of the models used to model ferromagnetism and other phenomena. Definition The classical Heisenberg model can be formulated as follows: take a d-dimensional lattice, and place a spin of unit length, s_i with |s_i| = 1, on each lattice node. The model is defined through the following Hamiltonian: H = −∑_{i,j} J_{ij} s_i · s_j, where J_{ij} is a coupling between spins i and j. Properties The general mathematical formalism used to describe and solve the Heisenberg model and certain generalizations is developed in the article on the Potts model. In the continuum limit the Heisenberg model gives the following equation of motion: ∂S/∂t = S × ∂²S/∂x². This equation is called the continuous classical Heisenberg ferromagnet equation or, more shortly, the Heisenberg model and is integrable in the sense of soliton theory. It admits several integrable and nonintegrable generalizations like the Landau-Lifshitz equation, the Ishimori equation, and so on. One dimension In the case of a long-range interaction, J_{ij} ~ |i − j|^(−α), the thermodynamic limit is well defined if α > 1; the magnetization remains zero if α ≥ 2; but the magnetization is positive, at a low enough temperature, if 1 < α < 2 (infrared bounds). As in any 'nearest-neighbor' n-vector model with free boundary conditions, if the external field is zero, there exists a simple exact solution. Two dimensions In the case of a long-range interaction, J_{ij} ~ |i − j|^(−α), the thermodynamic limit is well defined if α > 2; the magnetization remains zero if α ≥ 4; but the magnetization is positive at a low enough temperature if 2 < α < 4 (infrared bounds). Polyakov has conjectured that, as opposed to the classical XY model, there is no dipole phase for any T > 0; namely, at non-zero temperatures the correlations cluster exponentially fast. Three and higher dimensions Independently of the range of the interaction, at a low enough temperature the magnetization is positive. Conjecturally, in each of the low temperature extremal states the truncated correlations decay algebraically. See also Heisenberg model (quantum) Ising model Classical XY model Magnetism Ferromagnetism Landau–Lifshitz equation Ishimori equation References External links Absence of Ferromagnetism or Antiferromagnetism in One- or Two-Dimensional Isotropic Heisenberg Models The Heisenberg Model - a Bibliography Monte-Carlo simulation of the Heisenberg, XY and Ising models with 3D graphics (requires WebGL compatible browser) Magnetic ordering Spin models Lattice models Werner Heisenberg
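To make the Hamiltonian above concrete, here is a small, hedged Monte Carlo sketch (not from the article): nearest-neighbour unit spins on a 2D square lattice with a single coupling J, updated with the Metropolis rule. The lattice size, temperature, and sweep count are arbitrary illustrative choices, and units are chosen so that k_B = 1.

```python
# Hedged sketch: Metropolis Monte Carlo for the classical Heisenberg model
# H = -J * sum_<i,j> s_i . s_j on an L x L square lattice with periodic
# boundaries. L, J, T and the number of sweeps are illustrative choices.
import math, random

def random_unit_vector():
    """Uniform random point on the unit sphere via normalized Gaussian deviates."""
    x, y, z = (random.gauss(0.0, 1.0) for _ in range(3))
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)

def neighbour_sum(spins, i, j, L):
    """Vector sum of the four nearest-neighbour spins (periodic boundaries)."""
    sx = sy = sz = 0.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = spins[(i + di) % L][(j + dj) % L]
        sx += n[0]; sy += n[1]; sz += n[2]
    return (sx, sy, sz)

def metropolis_sweep(spins, L, J, T):
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        old, new = spins[i][j], random_unit_vector()
        h = neighbour_sum(spins, i, j, L)
        # Energy change when the old spin is replaced by the new one in local field h.
        dE = -J * ((new[0] - old[0]) * h[0] + (new[1] - old[1]) * h[1] + (new[2] - old[2]) * h[2])
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = new

if __name__ == "__main__":
    L, J, T = 8, 1.0, 0.8
    lattice = [[random_unit_vector() for _ in range(L)] for _ in range(L)]
    for _ in range(200):
        metropolis_sweep(lattice, L, J, T)
    mx = sum(s[0] for row in lattice for s in row) / (L * L)
    my = sum(s[1] for row in lattice for s in row) / (L * L)
    mz = sum(s[2] for row in lattice for s in row) / (L * L)
    print(f"|m| per spin after 200 sweeps: {math.sqrt(mx*mx + my*my + mz*mz):.3f}")
```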
Classical Heisenberg model
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
547
[ "Spin models", "Quantum mechanics", "Lattice models", "Computational physics", "Electric and magnetic fields in matter", "Magnetic ordering", "Materials science", "Condensed matter physics", "Statistical mechanics" ]
1,877,429
https://en.wikipedia.org/wiki/PGLO
The pGLO plasmid is an engineered plasmid used in biotechnology as a vector for creating genetically modified organisms. The plasmid contains several reporter genes, most notably the green fluorescent protein (GFP) and the ampicillin resistance gene. GFP was isolated from the jelly fish Aequorea victoria. Because it shares a bidirectional promoter with a gene for metabolizing arabinose, the GFP gene is expressed in the presence of arabinose, which makes the transgenic organism express its fluorescence under UV light. GFP can be induced in bacteria containing the pGLO plasmid by growing them on +arabinose plates. pGLO is made by Bio-Rad Laboratories. Structure pGLO is made up of three genes that are joined together using recombinant DNA technology. They are as follows: Bla, which codes for the enzyme beta-lactamase giving the transformed bacteria resistance to the beta-lactam family of antibiotics (such as of the penicillin family) araC, a promoter region that regulates the expression of GFP (specifically, the GFP gene will be expressed only in the presence of arabinose) GFP, the green fluorescent protein, which gives a green glow if cells produce this type of protein Like most other circular plasmids, the pGLO plasmid contains an origin of replication (ori), which is a region of the plasmid where replication will originate. The pGLO plasmid was made famous by researchers in France who used it to produce a green fluorescent rabbit named Alba. Other features on pGLO, like most other plasmids, include a selectable marker and an MCS (multiple cloning site) located at the end of the GFP gene. The plasmid is 5371 base pairs long. In supercoiled form, it runs on an agarose gel in the 4200–4500 range. Discovery of GFP The GFP gene was first observed by Osamu Shimomura and his team in 1962 while studying the jellyfish Aequorea victoria that have a ring of blue light under their umbrella. Shimomura and his team isolated the protein aequorin from thousands of jellyfish until they gathered enough for a full analysis of the protein. It was through the study of aequorin that Shimomura discovered small amounts of GFP which glows green when aequorin emits blue light. After successfully discovering how GFP works with aequorin in the jellyfish, he set it aside to study bioluminescence in other organisms. In 1994 Marty Chalfie and his team were able to successfully create bacteria and round worms that expressed the GFP protein. Soon after, Roger Tsien and his team were able to create mutant GFP that can emit a range of colors, not just green. The three scientists hold the Nobel Prize in Chemistry for 2008 for the discovery and development of the green fluorescent protein, GFP. References Genetically modified organisms Plasmids Fluorescence techniques
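The selection and induction logic described above (bla conferring ampicillin resistance, araC-regulated GFP expression in the presence of arabinose) is often summarised as a plating truth table in teaching labs. The sketch below encodes that reasoning; it is an illustrative simplification, not vendor documentation, and the plate names are assumptions.

```python
# Hedged sketch of the classic pGLO plating logic: growth requires either no
# ampicillin or the plasmid's bla gene; fluorescence under UV additionally
# requires arabinose to switch on GFP via the araC-regulated promoter.
# This is a teaching-style simplification, not Bio-Rad documentation.

def predict_colonies(has_pglo: bool, ampicillin: bool, arabinose: bool):
    grows = (not ampicillin) or has_pglo
    glows = grows and has_pglo and arabinose
    return {"growth": grows, "uv_fluorescence": glows}

if __name__ == "__main__":
    plates = [("LB", False, False), ("LB/amp", True, False), ("LB/amp/ara", True, True)]
    for cells in (False, True):
        label = "+pGLO" if cells else "-pGLO"
        for name, amp, ara in plates:
            print(f"{label:6s} on {name:10s} -> {predict_colonies(cells, amp, ara)}")
```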
PGLO
[ "Engineering", "Biology" ]
636
[ "Genetically modified organisms", "Plasmids", "Genetic engineering", "Bacteria", "Fluorescence techniques" ]
1,877,510
https://en.wikipedia.org/wiki/Multiple%20cloning%20site
A multiple cloning site (MCS), also called a polylinker, is a short segment of DNA which contains many (up to ~20) restriction sites - a standard feature of engineered plasmids. Restriction sites within an MCS are typically unique, occurring only once within a given plasmid. The purpose of an MCS in a plasmid is to allow a piece of DNA to be inserted into that region. An MCS is found in a variety of vectors, including cloning vectors to increase the number of copies of target DNA, and in expression vectors to create a protein product. In expression vectors, the MCS is located downstream of the promoter. Creating a multiple cloning site In some instances, a vector may not contain an MCS. Rather, an MCS can be added to a vector. The first step is designing complementary oligonucleotide sequences that contain restriction enzyme sites along with additional bases on the end that are complementary to the vector after digesting. Then the oligonucleotide sequences can be annealed and ligated into the digested and purified vector. The digested vector is cut with a restriction enzyme that complements the oligonucleotide insert overhangs. After ligation, transform the vector into bacteria and verify the insert by sequencing. This method can also be used to add new restriction sites to a multiple cloning site. Uses Multiple cloning sites are a feature that allows for the insertion of foreign DNA without disrupting the rest of the plasmid which makes it extremely useful in biotechnology, bioengineering, and molecular genetics. MCS can aid in making transgenic organisms, more commonly known as a genetically modified organism (GMO) using genetic engineering. To take advantage of the MCS in genetic engineering, a gene of interest has to be added to the vector during production when the MCS is cut open. After the MCS is made and ligated it will include the gene of interest and can be amplified to increase gene copy number in a bacterium-host. After the bacterium replicates, the gene of interest can be extracted out of the bacterium. In some instances, an expression vector can be used to create a protein product. After the products are isolated, they have a wide variety of uses such as the production of insulin, the creation of vaccines, production of antibiotics, and creation of gene therapies. Example One bacterial plasmid used in genetic engineering as a plasmid cloning vector is pUC18. Its polylinker region is composed of several restriction enzyme recognition sites, that have been engineered into a single cluster (the polylinker). It has restriction sites for various restriction enzymes, including EcoRI, BamHI, and PstI. Another vector used in genetic engineering is pUC19, which is similar to pUC18, but its polylinker region is reversed. E.coli is also commonly used as the bacterial host because of the availability, quick growth rate, and versatility. An example of a plasmid cloning vector which modifies the inserted protein is pFUSE-Fc plasmid. In order to genetically engineer insulin, the first step is to cut the MCS in the plasmid being used. Once the MCS is cut, the gene for human insulin can be added making the plasmid genetically modified. After that, the genetically modified plasmid is put into the bacterial host and allowed to divide. To make the large supply that is demanded, the host cells are put into a large fermentation tank that is an optimal environment for the host. The process is finished by filtering out the insulin from the host. 
Purification can then take place so the insulin can be packaged and distributed to individuals with diabetes. References Genetics techniques
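Because restriction sites in an MCS should be unique within the plasmid, a common first step when designing or extending a polylinker, as discussed earlier in this entry, is simply to count candidate recognition sequences in the vector. The sketch below does that for a few well-known enzymes; the toy sequence and the chosen enzymes are illustrative assumptions, and a real check would scan the full circular plasmid sequence on both strands.

```python
# Hedged sketch: count restriction-enzyme recognition sites in a plasmid
# sequence to see which would be unique (suitable for an MCS).
# The sequence below is a short toy example, not a real vector.

RECOGNITION_SITES = {
    "EcoRI": "GAATTC",
    "BamHI": "GGATCC",
    "PstI": "CTGCAG",
}

def count_sites(sequence: str, site: str, circular: bool = True) -> int:
    """Count occurrences of a recognition site; for a circular plasmid the
    sequence is extended so that sites spanning the origin are also found."""
    seq = sequence.upper()
    if circular:
        seq = seq + seq[: len(site) - 1]
    count, start = 0, 0
    while True:
        idx = seq.find(site, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

if __name__ == "__main__":
    toy_plasmid = "ATGGAATTCAGGATCCTTCTGCAGAAGGATCCGT"
    for enzyme, site in RECOGNITION_SITES.items():
        n = count_sites(toy_plasmid, site)
        print(f"{enzyme}: {n} site(s) -> {'unique' if n == 1 else 'not unique'}")
```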
Multiple cloning site
[ "Engineering", "Biology" ]
783
[ "Genetics techniques", "Genetic engineering" ]
1,877,895
https://en.wikipedia.org/wiki/Operator%20product%20expansion
In quantum field theory, the operator product expansion (OPE) is used as an axiom to define the product of fields as a sum over the same fields. As an axiom, it offers a non-perturbative approach to quantum field theory. One example is the vertex operator algebra, which has been used to construct two-dimensional conformal field theories. Whether this result can be extended to QFT in general, thus resolving many of the difficulties of a perturbative approach, remains an open research question. In practical calculations, such as those needed for scattering amplitudes in various collider experiments, the operator product expansion is used in QCD sum rules to combine results from both perturbative and non-perturbative (condensate) calculations. The OPE formulation, and its application to the Thirring model, were conceived by Kenneth G. Wilson. 2D Euclidean quantum field theory In 2D Euclidean field theory, the operator product expansion is a Laurent series expansion associated with two operators. In such an expansion, there are finitely many negative powers of the variable, in addition to potentially infinitely many positive powers of the variable. This expansion is a locally convergent sum. More precisely, if one fixes a point and two operator-valued fields, then there is an open neighborhood of that point on which the product of the two fields, evaluated at nearby points, can be written as a locally convergent sum over fields evaluated at the fixed point, with coefficient functions of the separation. Heuristically, the quantities of interest in quantum field theory are physical observables represented by operators. To know the result of making two physical observations at two nearby points, their operators can be ordered in increasing time. In conformal coordinate mappings, the radial ordering is instead more relevant. This is the analogue of time ordering where increasing time has been mapped to some increasing radius on the complex plane. Normal ordering of creation operators is useful when working in the second quantization formalism. A radial-ordered OPE can be written as a normal-ordered OPE minus the non-normal-ordered terms. The non-normal-ordered terms can often be written as a commutator, and these have useful simplifying identities. The radial ordering supplies the convergence of the expansion. The result is a convergent expansion of the product of two operators in terms of some terms that have poles in the complex plane (the Laurent terms) and terms that are finite. This result represents the expansion of two operators at two different points in the original coordinate system as an expansion around just one point in the space of displacements between points, with terms of the form 1/(z − w)^n. Related to this is that an operator on the complex plane is in general written as a function of z and its conjugate z̄. These dependences are referred to as the holomorphic and anti-holomorphic parts respectively, as they are continuous and differentiable functions with finitely many singularities. In general, the operator product expansion may not separate into holomorphic and anti-holomorphic parts, especially if there are mixed terms in the expansion. However, derivatives of the OPE can often separate the expansion into holomorphic and anti-holomorphic expansions. The resulting expression is also an OPE and in general is more useful. Operator product algebra In the generic case, one is given a set of fields (or operators) that are assumed to be valued over some algebra. For example, fixing x, the fields may be taken to span some Lie algebra. Setting x free to live on a manifold, the operator product is then simply some element in the ring of functions.
In general, such rings do not possess enough structure to make meaningful statements; thus, one considers additional axioms to strengthen the system. The operator product algebra is an associative algebra of the form A_i(x) A_j(y) = ∑_k f^k_{ij}(x, y, z) A_k(z). The structure constants f^k_{ij} are required to be single-valued functions, rather than sections of some vector bundle. Furthermore, the fields are required to span the ring of functions. In practical calculations, it is usually required that the sums be analytic within some fixed radius of convergence. Thus, the ring of functions can be taken to be the ring of polynomial functions. The above can be viewed as a requirement that is imposed on a ring of functions; imposing this requirement on the fields of a conformal field theory is known as the conformal bootstrap. An example of an operator product algebra is the vertex operator algebra. It is currently hoped that operator product algebras can be used to axiomatize all of quantum field theory; they have successfully done so for the conformal field theories, and whether they can be used as a basis for non-perturbative QFT is an open research area. References External links The OPE at Scholarpedia Quantum field theory Axiomatic quantum field theory Conformal field theory String theory
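Since the displayed formulas in this entry did not survive extraction, the following LaTeX block restates the generic form of the expansion and adds the textbook free-boson example as an illustration; the symbols are illustrative and the normalisation of the free-boson OPE is convention-dependent, so treat the coefficient as an assumption.

```latex
% Schematic form of the expansion about the point w (illustrative symbols):
\[
  A(z)\,B(w) \;=\; \sum_{k} C_{k}(z-w)\, \mathcal{O}_{k}(w),
\]
% with finitely many singular coefficient functions C_k(z-w).
% Textbook illustration: the free massless boson in two dimensions, in one
% common normalisation (the prefactor depends on conventions):
\[
  \partial\varphi(z)\,\partial\varphi(w) \;\sim\; -\frac{1}{(z-w)^{2}}
  \;+\; \text{regular terms as } z \to w .
\]
```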
Operator product expansion
[ "Physics", "Astronomy" ]
961
[ "String theory", "Quantum field theory", "Astronomical hypotheses", "Quantum mechanics" ]
1,878,107
https://en.wikipedia.org/wiki/Lithium%20tetramethylpiperidide
Lithium tetramethylpiperidide (often abbreviated LiTMP or LTMP) is a chemical compound with the molecular formula C9H18LiN. It is used as a non-nucleophilic base, being comparable to LiHMDS in terms of steric hindrance. Synthesis It is synthesised by the deprotonation of 2,2,6,6-tetramethylpiperidine with n-butyllithium at −78 °C. Recent reports show that this reaction can also be performed at 0 °C. The compound is stable in a THF/ethylbenzene solvent mixture and is commercially available as such. Structure Like many lithium reagents it has a tendency to aggregate, forming a tetramer in the solid state. See also Lithium diisopropylamide Lithium amide References Lithium compounds Non-nucleophilic bases Organolithium compounds Reagents for organic chemistry
Lithium tetramethylpiperidide
[ "Chemistry" ]
191
[ "Non-nucleophilic bases", "Organolithium compounds", "Bases (chemistry)", "Reagents for organic chemistry" ]
1,878,412
https://en.wikipedia.org/wiki/Nernst%20effect
In physics and chemistry, the Nernst effect (also termed the first Nernst–Ettingshausen effect, after Walther Nernst and Albert von Ettingshausen) is a thermoelectric (or thermomagnetic) phenomenon observed when a sample allowing electrical conduction is subjected to a magnetic field and a temperature gradient normal (perpendicular) to each other. An electric field will be induced normal to both. This effect is quantified by the Nernst coefficient |N|, which is defined as |N| = (E_y / B_z) / (∂T/∂x), where E_y is the y-component of the electric field that results from the magnetic field's z-component B_z and the x-component of the temperature gradient ∂T/∂x. The reverse process is known as the Ettingshausen effect and also as the second Nernst–Ettingshausen effect. Physical picture Mobile energy carriers (for example conduction-band electrons in a semiconductor) will move along temperature gradients due to statistics and the relationship between temperature and kinetic energy. If there is a magnetic field transversal to the temperature gradient and the carriers are electrically charged, they experience a force perpendicular to their direction of motion (also the direction of the temperature gradient) and to the magnetic field. Thus, a perpendicular electric field is induced. Sample types Semiconductors exhibit the Nernst effect, as first observed by T. V. Krylova and Mochan in the Soviet Union in 1955; in metals, however, it is almost non-existent. Superconductors The Nernst effect appears in the vortex phase of type-II superconductors due to vortex motion. High-temperature superconductors exhibit the Nernst effect both in the superconducting and in the pseudogap phase. Heavy fermion superconductors can show a strong Nernst signal which is likely not due to the vortices. See also Spin Nernst effect Seebeck effect Peltier effect Hall effect Righi–Leduc effect References Walther Nernst Electrodynamics Thermoelectricity
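As a small, hedged numerical illustration of the defining relation above (the field and gradient values are invented, not measured data), the snippet computes the Nernst coefficient from a transverse electric field, an applied magnetic field, and a longitudinal temperature gradient.

```python
# Hedged sketch: Nernst coefficient |N| = (E_y / B_z) / (dT/dx).
# The input values below are invented for illustration only.

def nernst_coefficient(e_y_v_per_m: float, b_z_tesla: float, dT_dx_K_per_m: float) -> float:
    """Return |N| in V K^-1 T^-1 from the transverse field E_y, the applied
    field B_z, and the longitudinal temperature gradient dT/dx."""
    return (e_y_v_per_m / b_z_tesla) / dT_dx_K_per_m

if __name__ == "__main__":
    N = nernst_coefficient(e_y_v_per_m=2.0e-6, b_z_tesla=1.0, dT_dx_K_per_m=100.0)
    print(f"|N| = {N:.2e} V K^-1 T^-1")
```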
Nernst effect
[ "Mathematics" ]
425
[ "Electrodynamics", "Dynamical systems" ]
1,878,645
https://en.wikipedia.org/wiki/Kenbak-1
The Kenbak-1 is considered by the Computer History Museum, the Computer Museum of America and the American Computer Museum to be the world's first "personal computer", invented by John Blankenbaker (born 1929) of Kenbak Corporation in 1970 and first sold in early 1971. Fewer than 50 machines were ever built, using Bud Industries enclosures as a housing. The system first sold for US$750. Today, only 14 machines are known to exist worldwide, in the hands of various collectors and museums. Production of the Kenbak-1 stopped in 1973, as Kenbak failed and was taken over by CTI Education Products, Inc. CTI rebranded the inventory and renamed it the 5050, though sales remained elusive. Since the Kenbak-1 was invented before the first microprocessor, the machine did not have a one-chip CPU but was instead based purely on small-scale integration TTL chips. The 8-bit machine offered 256 bytes of memory, implemented on Intel's type 1404A silicon gate MOS shift registers. The clock signal period was 1 microsecond (equivalent to a clock speed of 1 MHz), but the program speed averaged below 1,000 instructions per second due to the many clock cycles needed for each operation and the slow access to serial memory. The machine was programmed in pure machine code using an array of buttons and switches. Output consisted of a row of lights. Internally, the Kenbak-1 has a serial computer architecture, processing one bit at a time. Technical description Registers The Kenbak-1 has a total of nine registers. All are memory mapped. It has three general-purpose registers: A, B and X. Register A is the implicit destination of some operations. Register X, also known as the index register, turns the direct and indirect modes into indexed direct and indexed indirect modes. It also has a program counter, called Register P, three "overflow and carry" registers for A, B and X, respectively, as well as an Input Register and an Output Register. Addressing modes Add, Subtract, Load, Store, Load Complement, And, and Or instructions operate between a register and another operand using five addressing modes: Immediate (operand is in second byte of instruction) Memory (second byte of instruction is the address of the operand) Indirect (second byte of instruction is the address of the address of the operand) Indexed (second byte of instruction is added to X to form the address of the operand) Indirect Indexed (second byte of instruction points to a location which is added to X to form the address of the operand) Instruction table The instructions are encoded in 8 bits, with a possible second byte providing an immediate value or address. Some instructions have multiple possible encodings. History The Kenbak-1, released in early 1971, is considered by the Computer History Museum to be the world's first personal computer. It was designed and invented by John Blankenbaker of Kenbak Corporation in 1970, and was first sold in early 1971. Unlike a modern personal computer, the Kenbak-1 was built of small-scale integrated circuits, and did not use a microprocessor. The system first sold for US$750. Only 44 machines were ever sold, though it's said 50 to 52 were built. In 1973, production of the Kenbak-1 stopped as Kenbak Corporation folded. With a fixed 256 bytes of memory, input and output restricted to lights and switches (no ports or serial output), and no possible way to extend its capabilities, the Kenbak-1 was really only suited to educational use. 
256 bytes of memory, an 8-bit word size, and I/O limited to switches and lights on the front panel are also characteristics of the 1975 Altair 8800, whose fate was diametrically opposed to that of the Kenbak. However, there were three major differentiating factors between the Altair and the Kenbak which led to the later Altair 8800 selling over 25,000 units and becoming widely influential, while the Kenbak-1 sold only 44 units and had little influence. The Kenbak-1, designed before the invention of the microprocessor, had a limited instruction set that was professionally considered "incompatible with microcomputer application goals", according to a citation pointing at the KENBAK-1 programming manual in the contemporary February 1974 issue of RCA Engineer Magazine. The Kenbak-1 could not be expanded. There were no expansion slots, and no serial port or any other way to get data out of the machine (other than the 8 lamps on the front). There was also no way to load data into the machine other than its physical switches. There was no ability to upgrade the capacity of the RAM, and even if there were, there would have been no way to simultaneously address more than 256 bytes of RAM due to limitations of the machine code language. The Kenbak-1 was not advertised outside of the educational market. It was advertised in Science magazine and in person at a local teachers' convention. There was no attempt to market the machine to the hobbyist market as later successful computers did. John Blankenbaker would later cite this as the reason that his machine failed, as the educational market was "too slow" to adopt his machine while it could have been relevant. In the educational market, however, the Kenbak-1 was competing against timeshares of more capable and established computers such as the PDP-8. If the Kenbak-1 had been advertised better, and had at least one serial port to make it more useful, it might have done very well at its price point of $750 in 1971, which no other Turing-complete computer on the market came close to. However, it would not be very long before personal computers based on the much more capable Intel 8008 came to market, followed shortly afterwards by the roughly ten-times-faster Intel 8080 in the highly expandable Altair 8800. See also Datapoint 2200, a contemporary machine with alphanumeric screen and keyboard, suitable to run non-trivial application programs Mark-8, designed by graduate student Jonathan A. Titus and announced as a "loose kit" in the July 1974 issue of Radio-Electronics magazine Altair 8800, a very popular 1975 microcomputer that provided the inspiration for starting Microsoft Gigatron TTL, a 21st-century implementation of a computer using small-scale integration parts References External links Kenbak.com Comprehensive Kenbak-1 history and technical information KENBAK-1 Computer Article KENBAK-1 Computer – Official Kenbak-1 website at www.kenbak-1.info Kenbak-1 Emulator – Kenbak-1 emulator in JavaScript Kenbak-1 Emulator – Kenbak-1 Emulator download Kenbak 1 – Images and information at www.vintage-computer.com Kenbak documentation at bitsavers.org Early microcomputers Computer-related introductions in 1971 Serial computers 8-bit computers
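The five addressing modes listed earlier in this entry lend themselves to a compact illustration. The sketch below resolves an operand for each mode against a 256-byte memory, mirroring the descriptions in the text; the memory contents, the X-register value, and the function names are illustrative assumptions, not a faithful emulator.

```python
# Hedged sketch of Kenbak-1-style operand resolution for the five addressing
# modes described above (immediate, memory, indirect, indexed, indirect
# indexed). Memory contents and the X register value are made up; this is
# not a cycle-accurate emulator.

MEM_SIZE = 256

def resolve_operand(mode: str, second_byte: int, memory: list, x_reg: int) -> int:
    """Return the operand value for one instruction, given its second byte."""
    if mode == "immediate":
        return second_byte
    if mode == "memory":
        return memory[second_byte]
    if mode == "indirect":
        return memory[memory[second_byte]]
    if mode == "indexed":
        return memory[(second_byte + x_reg) % MEM_SIZE]
    if mode == "indirect_indexed":
        return memory[(memory[second_byte] + x_reg) % MEM_SIZE]
    raise ValueError(f"unknown addressing mode: {mode}")

if __name__ == "__main__":
    memory = [0] * MEM_SIZE
    memory[0x20] = 0x30      # pointer used by the indirect modes
    memory[0x22] = 0x66      # value reached by indexed mode (0x20 + X)
    memory[0x30] = 0x55      # value at the pointed-to address
    memory[0x32] = 0x77      # value reached by indirect indexed mode
    x = 2
    for mode in ("immediate", "memory", "indirect", "indexed", "indirect_indexed"):
        print(f"{mode:17s} -> {resolve_operand(mode, 0x20, memory, x):#04x}")
```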
Kenbak-1
[ "Technology" ]
1,487
[ "Serial computers", "Computers" ]
1,879,354
https://en.wikipedia.org/wiki/Telluride%20%28chemistry%29
The telluride ion is the anion Te2− and its derivatives. It is analogous to the other chalcogenide anions, the lighter O2−, S2−, and Se2−, and the heavier Po2−. In principle, Te2− is formed by the two-electron reduction of tellurium. The redox potential is −1.14 V. Te(s) + 2 e− ↔ Te2− Although solutions of the telluride dianion have not been reported, soluble salts of bitelluride (TeH−) are known. Organic tellurides The term telluride also describes a class of organotellurium compounds formally derived from Te2−. An illustrative member is dimethyl telluride, which results from the methylation of telluride salts: 2 CH3I + Na2Te → (CH3)2Te + 2 NaI Dimethyl telluride is formed by the body when tellurium is ingested. Such compounds are often called telluroethers because they are structurally related to ethers, with tellurium replacing oxygen, although the C–Te bond is much longer than a C–O bond. C–Te–C angles tend to be closer to 90°. Inorganic tellurides Many metal tellurides are known, including some telluride minerals. These include natural gold tellurides, like calaverite and krennerite (AuTe2), and sylvanite (AgAuTe4). They are minor ores of gold, although they comprise the major naturally occurring compounds of gold. (A few other natural compounds of gold, such as the bismuthide maldonite (Au2Bi) and antimonide aurostibite (AuSb2), are known.) Although the bonding in such materials is often fairly covalent, they are described casually as salts of Te2−. Using this approach, Ag2Te is derived from Ag+ and Te2−. Catenated Te anions are known in the form of the polytellurides. They arise by the reaction of the telluride dianion with elemental Te: Te2− + n Te → Te(n+1)2− Applications Tellurides have no large-scale applications aside from cadmium telluride photovoltaics. Both bismuth telluride and lead telluride are exceptional thermoelectric materials. Some of these thermoelectric materials have been commercialized. References Anions
Telluride (chemistry)
[ "Physics", "Chemistry" ]
530
[ "Ions", "Matter", "Anions" ]
1,879,440
https://en.wikipedia.org/wiki/Oxygen%20bar
An oxygen bar is an establishment, or part of one, that sells oxygen for recreational use. Individual scents may be added to enhance the experience. The flavors in an oxygen bar come from bubbling oxygen through bottles containing aromatic solutions before it reaches the nostrils: most bars use food-grade particles to produce the scent, but some bars use aroma oils. History In 1776, Thomas Henry, an apothecary and Fellow of the Royal Society of England speculated tongue in cheek that Joseph Priestley’s newly discovered dephlogisticated air (now called oxygen) might become "as fashionable as French wine at the fashionable taverns". He did not expect, however, that tavern goers would "relish calling for a bottle of Air, instead of Claret." Another early reference to the recreational use of oxygen is found in Jules Verne's 1870 novel Around the Moon. In this work, Verne states: Modeled after the "air stations" in polluted downtown Tokyo and Beijing, the first oxygen bar (the O2 Spa Bar) opened in Toronto, Canada, in 1996. The trend continued in North America and by the late 1990s, bars were in use in New York, California, Florida, Las Vegas and the Rocky Mountain region. Customers in these bars breathe oxygen through a plastic nasal cannula inserted into their nostrils. Oxygen bars can now be found in many venues such as nightclubs, salons, spas, health clubs, resorts, tanning salons, restaurants, coffee houses, bars, airports, ski chalets, yoga studios, chiropractors, and casinos. They can also be found at trade shows, conventions and corporate meetings, as well as at private parties and promotional events. Provision of oxygen Oxygen bar guests pay about one U.S. dollar per minute to inhale a percentage of oxygen greater than the normal atmospheric content of 20.9% oxygen. This oxygen is gathered from the ambient air by an industrial (non-medical) oxygen concentrator and inhaled through a nasal cannula for up to about 20 minutes. The machines used by oxygen bars or oxygen vendors differ from the typical medical-issue machine, although customers use the cannula, the rubber tube apparatus that fits around the ears and inserts in the nostrils, to breathe in the oxygen. Customers can enhance their experience by using aromatherapy scents to be added to the oxygen, such as lavender or mint. Health risks and benefit claims It has been claimed by alternative medicine that the human body is oxygen-deprived, and that oxygen will remove "toxins" and even cure cancer. Proponents claim this practice is not only safe, but enhances health and well-being, including strengthening the immune system, enhancing concentration, reducing stress, increasing energy and alertness, lessening the effects of hangovers, headaches, and sinus problems, and generally relaxing the body. It has also been alleged to help with altitude sickness. However, no long-term, well-controlled scientific studies have confirmed any of the proponents' claims. Furthermore, the human body is adapted to 21 percent oxygen, and the blood exiting the lungs already has about 97 percent of the oxygen that it could carry bound to hemoglobin. Having a higher oxygen fraction in the lungs serves no purpose, and may actually be detrimental. The medical profession warns that individuals with respiratory diseases such as asthma and emphysema should not inhale too much oxygen. Higher than normal oxygen partial pressure can also indirectly cause carbon dioxide narcosis in patients with chronic obstructive pulmonary disease (COPD). 
The FDA warns that in some situations, droplets of flavoring oil can be inhaled, which may contribute to an inflammation of the lungs. Some oxygen bar companies offer safe water-based aromas for flavoring in order to maintain compliance and stay within FDA guidelines. Oxygen may also cause serious side effects at excessive doses. Although the effects of oxygen toxicity at atmospheric pressure can cause lung damage, the low fraction of oxygen (30–40%) and relatively brief exposures make pulmonary toxicity unlikely. Nevertheless, due caution should be exercised when consuming oxygen. In the UK, the Health and Safety Executive publishes guidance on equipment (including tubing) and on staff training, as well as warning on potential hazards, and makes several recommendations to ensure safe practice, principally to minimise fire risks. Another concern is the improper maintenance of oxygen equipment. Some oxygen concentrators use clay filters which cause micro-organisms to grow, creating an additional danger that can cause lung infections. Safety hazards Raised concentrations of oxygen increase the risk of ignition, the rate and heat of combustion, and the difficulty of extinguishing a fire. Many materials that will not burn in air will burn in a sufficiently high partial pressure of oxygen. Regulations In the United States, the Federal Food, Drug, and Cosmetic Act defines any substance used for breathing and administered by another person as a prescription drug. Melvin Szymanski, a consumer safety officer in the Food and Drug Administration's (FDA) Center for Drug Evaluation and Research, has explained that at one end of the hose is a source of oxygen, so the individual providing the hose and turning on the supply is dispensing a prescription drug. He commented that "Although oxygen bars that dispense oxygen without a prescription violate FDA regulations, the agency applies regulatory discretion to permit the individual state boards of licensing to enforce the requirements pertaining to the dispensing of oxygen." In the state of Massachusetts, oxygen bars are illegal. See also References Biologically based therapies Gases Oxygen Industrial gases
Oxygen bar
[ "Physics", "Chemistry" ]
1,134
[ "Matter", "Phases of matter", "Industrial gases", "Chemical process engineering", "Statistical mechanics", "Gases" ]
1,879,769
https://en.wikipedia.org/wiki/Lagrangian%20foliation
In mathematics, a Lagrangian foliation or polarization is a foliation of a symplectic manifold, whose leaves are Lagrangian submanifolds. It is one of the steps involved in the geometric quantization of a symplectic manifold: the quantum states are built from square-integrable functions (more precisely, sections) that are covariantly constant along the leaves of the foliation. References Kenji Fukaya, Floer homology of Lagrangian Foliation and Noncommutative Mirror Symmetry, (2000) Symplectic geometry Foliations Mathematical quantization
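A standard example, stated here as a hedged illustration rather than a claim about the cited reference: the cotangent bundle with its canonical symplectic form is foliated by its fibres, and each fibre is a Lagrangian submanifold, so this vertical foliation is a Lagrangian foliation (the "vertical polarization" used in geometric quantization).

```latex
% Canonical example (a sketch): the cotangent bundle M = T^*Q with
% coordinates (q^i, p_i) carries the symplectic form
\[
  \omega \;=\; \sum_i dq^i \wedge dp_i .
\]
% The fibres \{q = \text{const}\} are the leaves of a foliation.  Each leaf
% is Lagrangian: it has half the dimension of M, and \omega restricts to
% zero on it, since dq^i vanishes along a fibre.  Quantizing with this
% polarization yields wavefunctions depending on q alone.
```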
Lagrangian foliation
[ "Physics", "Mathematics" ]
108
[ "Topology stubs", "Topology", "Mathematical quantization", "Quantum mechanics" ]
1,881,003
https://en.wikipedia.org/wiki/Phycocyanin
Phycocyanin is a pigment-protein complex from the light-harvesting phycobiliprotein family, along with allophycocyanin and phycoerythrin. It is an accessory pigment to chlorophyll. All phycobiliproteins are water-soluble, so they cannot exist within the membrane like carotenoids can. Instead, phycobiliproteins aggregate to form clusters that adhere to the membrane called phycobilisomes. Phycocyanin is a characteristic light blue color, absorbing orange and red light, particularly 620 nm (depending on which specific type it is), and emits fluorescence at about 650 nm (also depending on which type it is). Allophycocyanin absorbs and emits at longer wavelengths than phycocyanin C or phycocyanin R. Phycocyanins are found in cyanobacteria (also called blue-green algae). Phycobiliproteins have fluorescent properties that are used in immunoassay kits. Phycocyanin is from the Greek phyco meaning “algae” and cyanin is from the English word “cyan", which conventionally means a shade of blue-green (close to "aqua") and is derived from the Greek “kyanos" which means a somewhat different color: "dark blue". The product phycocyanin, produced by Aphanizomenon flos-aquae and Spirulina, is for example used in the food and beverage industry as the natural coloring agent 'Lina Blue' or 'EXBERRY Shade Blue' and is found in sweets and ice cream. In addition, fluorescence detection of phycocyanin pigments in water samples is a useful method to monitor cyanobacteria biomass. The phycobiliproteins are made of two subunits (alpha and beta) having a protein backbone to which 1–2 linear tetrapyrrole chromophores are covalently bound. C-phycocyanin is often found in cyanobacteria which thrive around hot springs, as it can be stable up to around 70 °C, with identical spectroscopic (light absorbing) behaviours at 20 and 70 °C. Thermophiles contain slightly different amino acid sequences making it stable under these higher conditions. Molecular weight is around 30,000 Da. Stability of this protein in vitro at these temperatures has been shown to be substantially lower. Photo-spectral analysis of the protein after 1 min exposure to 65 °C conditions in a purified state demonstrated a 50% loss of tertiary structure. Structure Phycocyanin shares a common structural theme with all phycobiliproteins. The structure begins with the assembly of phycobiliprotein monomers, which are heterodimers composed of α and β subunits, and their respective chromophores linked via thioether bond. Each subunit is typically composed of eight α-helices. Monomers spontaneously aggregate to form ring-shaped trimers (αβ)3, which have rotational symmetry and a central channel. Trimers aggregate in pairs to form hexamers (αβ)6, sometimes assisted with additional linker proteins. Each phycobilisome rod generally has two or more phycocyanin hexamers. Despite the overall similarity in structure and assembly of phycobiliproteins, there is a large diversity in hexamer and rod conformations, even when only considering phycocyanins. On a larger scale phycocyanins also vary in crystal structure, although the biological relevance of this is debatable. As an example, the structure of C-phycocyanin from Synechococcus vulcanus has been refined to 1.6 Angstrom resolution. The (αβ) monomer consists of 332 amino acids and 3 thio-linked phycocyanobilin (PCB) cofactor molecules. Both the α- and β-subunits have a PCB at amino acid 84, but the β-subunit has an additional PCB at position 155 as well. 
This additional PCB faces the exterior of the trimeric ring and is therefore implicated in inter-rod energy transfer in the phycobilisome complex. In addition to cofactors, there are many predictable non-covalent interactions with the surrounding solvent (water) that are hypothesized to contribute to structural stability. R-phycocyanin II (R-PC II) is found in some Synechococcus species. R-PC II is said to be the first PEB containing phycocyanin that originates in cyanobacteria. Its purified protein is composed of alpha and beta subunits in equal quantities. R-PC II has PCB at beta-84 and the phycoerythrobillin (PEB) at alpha-84 and beta-155. As of March 21, 2023, there are 310 crystal structures of phycocyanin deposited in the Protein Data Bank. Spectral characteristics C-phycocyanin has a single absorption peak at ~621 nm, varying slightly depending on the organism and conditions such as temperature, pH, and protein concentration in vitro. Its emission maximum is ~642 nm. This means that the pigment absorbs orange light, and emits reddish light. R-phycocyanin has an absorption maxima at 533 and 544 nm. The fluorescence emission maximum of R-phycocyanin is 646 nm. Ecological relevance Phycocyanin is produced by many photoautotrophic cyanobacteria. Even if cyanobacteria have large concentrations of phycocyanin, productivity in the ocean is still limited due to light conditions. Phycocyanin has ecological significance in indicating cyanobacteria bloom. Normally chlorophyll a is used to indicate cyanobacteria numbers, however since it is present in a large number of phytoplankton groups, it is not an ideal measure. For instance a study in the Baltic Sea used phycocyanin as a marker for filamentous cyanobacteria during toxic summer blooms. Some filamentous organisms in the Baltic Sea include Nodularia spumigena and Aphanizomenon flosaquae. An important cyanobacteria named spirulina (Arthrospira platensis) is a micro algae that produces C-PC. There are many different methods of phycocyanin production including photoautotrophic, mixotrophic and heterotrophic and recombinant production. Photoautotrophic production of phycocyanin is where cultures of cyanobacteria are grown in open ponds in either subtropical or tropical regions. Mixotrophic production of algae is where the algae are grown on cultures that have an organic carbon source like glucose. Using mixotrophic production produces higher growth rates and higher biomass compared to simply using a photoautotrophic culture. In the mixotrophic culture, the sum of heterotrophic and autotrophic growth separately was equal to the mixotrophic growth. Heterotrophic production of phycocyanin is not light limited, as per its definition. Galdieria sulphuraria is a unicellular rhodophyte that contains a large amount of C-PC and a small amount of allophycocyanin. G. sulphuraria is an example of the heterotrophic production of C-PC because its habitat is hot, acidic springs and uses a number of carbon sources for growth. Recombinant production of C-PC is another heterotrophic method and involves gene engineering. Lichen-forming fungi and cyanobacteria often have a symbiotic relationship and thus phycocyanin markers can be used to show the ecological distribution of fungi-associated cyanobacteria. As shown in the highly specific association between Lichina species and Rivularia strains, phycocyanin has enough phylogenetic resolution to resolve the evolutionary history of the group across the northwestern Atlantic Ocean coastal margin. 
Biosynthesis The two genes cpcA and cpcB, located in the cpc operon and translated from the same mRNA transcript, encode the C-PC α- and β-chains respectively. Additional elements, such as linker proteins and the enzymes involved in phycobilin synthesis and its attachment to the phycobiliproteins, are often encoded by genes in adjacent gene clusters, and the cpc operon of Arthrospira platensis also encodes a linker protein assisting in the assembly of C-PC complexes. In red algae, the phycobiliprotein and linker protein genes are located on the plastid genome. Phycocyanobilin is synthesized from heme and inserted into the C-PC apo-protein by three enzymatic steps. Cyclic heme is oxidised to linear biliverdin IXα by heme oxygenase and further converted to 3Z-phycocyanobilin, the dominant phycocyanobilin isomer, by 3Z-phycocyanobilin:ferredoxin oxidoreductase. Insertion of 3Z-phycocyanobilin into the C-PC apo-protein via thioether bond formation is catalysed by phycocyanobilin lyase. The promoter for the cpc operon is located within the 427-bp region upstream of the cpcB gene. In A. platensis, six putative promoter sequences have been identified in this region, with four of them showing expression of green fluorescent protein when transformed into E. coli. The presence of other positive elements, such as light-response elements, in the same region has also been demonstrated. The multiple promoter and response element sequences in the cpc operon enable cyanobacteria and red algae to adjust expression of the operon in response to a range of environmental conditions. Expression of the cpcA and cpcB genes is regulated by light. Low light intensities stimulate synthesis of C-PC and other pigments, while pigment synthesis is repressed at high light intensities. Temperature has also been shown to affect synthesis, with specific pigment concentrations showing a clear maximum at 36 °C in Arthronema africanum, a cyanobacterium with particularly high C-PC and APC contents. Nitrogen limitation, and also iron limitation, induces phycobiliprotein degradation. Organic carbon sources stimulate C-PC synthesis in Anabaena spp., but seem to have almost no effect, or a negative effect, in A. platensis. In the rhodophytes Cyanidium caldarium and Galdieria sulphuraria, C-PC production is repressed by glucose but stimulated by heme. Biotechnology Pure phycocyanin extracts can be isolated from algae. The basic separation sequence is as follows: the cell wall is ruptured, with mechanical forces (freeze–thawing) or chemical agents (enzymes); C-PC is then isolated by centrifugation and purified by ammonium sulfate precipitation or chromatography, either ion-exchange or gel filtration; finally, the sample is frozen and dried.
C-PC, being an anti-oxidant, scavenges these damage-inducing radicals, hence acting as an anti-inflammation agent. Neuroprotection Excess oxygen in the brain generates reactive oxygen species (ROS). ROS cause damage to brain neurons, leading to decreased neurological function. C-phycocyanin scavenges hydrogen peroxide, a type of ROS, from the inside of astrocytes, reducing oxidative stress. Astrocytes also increase the production of growth factors like BDNF and NDF, thereby enhancing nerve regeneration. C-PC also prevents astrogliosis and glial inflammation. Hepatoprotection C-phycocyanin has been found to protect against hepatotoxicity. Vadiraja et al. (1998) found an increase in the serum glutamic pyruvic transaminase (SGPT) when C-PC was tested against hepatotoxins such as carbon tetrachloride (CCl4) or R-(+)-pulegone. C-PC protects the liver by means of the cytochrome P-450 system. It can either disturb the production of menthofuran or disturb the formation of α,β-unsaturated γ-ketoaldehyde, both of which are key components of the cytochrome P-450 pathway that produces a reactive metabolite which becomes toxic when it binds to liver tissue. Another possible protection mechanism of C-PC is the scavenging of reactive metabolites (or free radicals, if the cause is CCl4). Anti-cancer C-phycocyanin (C-PC) has anti-cancer effects. Cancer happens when cells continue to grow uncontrollably. C-PC has been found to prevent cell growth: it arrests tumour cells before the S phase, so DNA synthesis is not performed because the tumour cell enters G0, resulting in no tumour proliferation. Furthermore, C-PC induces apoptosis. When cells are treated with C-PC, ROS (reactive oxygen species) are made. These molecules decrease the production of Bcl-2, a regulator of apoptosis that inhibits proteins called caspases. Caspases are part of the apoptosis pathway. When Bcl-2 decreases, the expression of caspases increases. As a result, apoptosis occurs. C-PC alone is not enough to treat cancer; it needs to work with other drugs to overcome the persistent nature of tumour cells. Food C-phycocyanin (C-PC) can be used as a natural blue food colouring. This food colourant can only be used for goods prepared at low temperatures, because it cannot maintain its blue colour at high heat unless preservatives or sugars are added. The type of sugar is irrelevant; C-PC is stable when the sugar content is high. C-PC can therefore be used in numerous types of foods, one of which is syrup. C-PC can be used for syrups ranging from green to blue colours; different green tints can be obtained by adding yellow food colourings. References Further reading Photosynthetic pigments
Phycocyanin
[ "Chemistry" ]
3,322
[ "Photosynthetic pigments", "Photosynthesis" ]
1,881,082
https://en.wikipedia.org/wiki/Mitsunobu%20reaction
The Mitsunobu reaction is an organic reaction that converts an alcohol into a variety of functional groups, such as an ester, using triphenylphosphine and an azodicarboxylate such as diethyl azodicarboxylate (DEAD) or diisopropyl azodicarboxylate (DIAD). Although DEAD and DIAD are most commonly used, there are a variety of other azodicarboxylates available which facilitate an easier workup and/or purification and in some cases, facilitate the use of more basic nucleophiles. It was discovered by Oyo Mitsunobu (1934–2003). In a typical protocol, one dissolves the alcohol, the carboxylic acid, and triphenylphosphine in tetrahydrofuran or other suitable solvent (e.g. diethyl ether), cool to 0 °C using an ice-bath, slowly add the DEAD dissolved in THF, then stir at room temperature for several hours. The alcohol reacts with the phosphine to create a good leaving group then undergoes an inversion of stereochemistry in classic SN2 fashion as the nucleophile displaces it. A common side-product is produced when the azodicarboxylate displaces the leaving group instead of the desired nucleophile. This happens if the nucleophile is not acidic enough (pKa larger than 13) or is not nucleophilic enough due to steric or electronic constraints. A variation of this reaction utilizing a nitrogen nucleophile is known as a Fukuyama–Mitsunobu. Several reviews have been published. Reaction mechanism The reaction mechanism of the Mitsunobu reaction is fairly complex. The identity of intermediates and the roles they play has been the subject of debate. Initially, the triphenyl phosphine (2) makes a nucleophilic attack upon diethyl azodicarboxylate (1) producing a betaine intermediate 3, which deprotonates the carboxylic acid (4) to form the ion pair 5. The formation of the ion pair 5 is very fast. The second phase of the mechanism is proposed to be phosphorus-centered, the DEAD having been converted to the hydrazine. The ratio and interconversion of intermediates 8–11 depend on the carboxylic acid pKa and the solvent polarity. Although several phosphorus intermediates are present, the attack of the carboxylate anion upon intermediate 8 is the only productive pathway forming the desired product 12 and triphenylphosphine oxide (13). The formation of the oxyphosphonium intermediate 8 is slow and facilitated by the alkoxide. Therefore, the overall rate of reaction is controlled by carboxylate basicity and solvation. Order of addition of reagents The order of addition of the reagents of the Mitsunobu reaction can be important. Typically, one dissolves the alcohol, the carboxylic acid, and triphenylphosphine in tetrahydrofuran or other suitable solvent (e.g. diethyl ether), cool to 0 °C using an ice-bath, slowly add the DEAD dissolved in THF, then stir at room temperature for several hours. If this is unsuccessful, then preforming the betaine may give better results. To preform the betaine, add DEAD to triphenylphosphine in tetrahydrofuran at 0 °C, followed by the addition of the alcohol and finally the acid. Variations Other nucleophilic functional groups Many other functional groups can serve as nucleophiles besides carboxylic acids. For the reaction to be successful, the nucleophile must have a pKa less than 15. Modifications Several modifications to the original reagent combination have been developed in order to simplify the separation of the product and avoid production of so much chemical waste. 
One variation of the Mitsunobu reaction uses resin-bound triphenylphosphine and uses di-tert-butylazodicarboxylate instead of DEAD. The oxidized triphenylphosphine resin can be removed by filtration, and the di-tert-butylazodicarboxylate byproduct is removed by treatment with trifluoroacetic acid. Bruce H. Lipshutz has developed an alternative to DEAD, di-(4-chlorobenzyl)azodicarboxylate (DCAD) where the hydrazine by-product can be easily removed by filtration and recycled back to DCAD. A modification has also been reported in which DEAD can be used in catalytic versus stoichiometric quantities, however this procedure requires the use of stoichiometric (diacetoxyiodo)benzene to oxidise the hydrazine by-product back to DEAD. Denton and co-workers have reported a redox-neutral variant of the Mitsunobu reaction which employs a phosphorus(III) catalyst to activate the substrate, ensuring inversion in the nucleophilic attack, and uses a Dean-Stark trap to remove the water by-product. Phosphorane reagents Tsunoda et al. have shown that one can combine the triphenylphosphine and the diethyl azodicarboxylate into one reagent: a phosphorane ylide. Both (cyanomethylene)trimethylphosphorane (CMMP, R = Me) and (cyanomethylene)tributylphosphorane (CMBP, R = Bu) have proven particularly effective. The ylide acts as both the reducing agent and the base. The byproducts are acetonitrile (6) and the trialkylphosphine oxide (8). Uses The Mitsunobu reaction has been applied in the synthesis of aryl ethers: With these particular reactants the conversion with DEAD fails because the hydroxyl group is only weakly acidic. Instead the related 1,1'-(azodicarbonyl)dipiperidine (ADDP) is used of which the betaine intermediate is a stronger base. The phosphine is a polymer-supported triphenylphosphine (PS-PPh3). The reaction has been used to synthesize quinine, colchicine, sarain, morphine, stigmatellin, eudistomin, oseltamivir, strychnine, and nupharamine. See also Dehydration reaction — broader category of reactions Appel reaction — prior reaction superseded by the Mitsonobu conditions Burgess reagent — another dehydrating agent for sensitive molecules References Substitution reactions Name reactions
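A routine bit of bench arithmetic implied by the protocol described earlier in this entry is converting equivalents of the four components into masses. The sketch below does this for a hypothetical 1.0 mmol-scale esterification; the chosen substrates, reagents (PPh3 and DIAD), and equivalents are illustrative assumptions, not a prescribed procedure.

```python
# Hedged sketch: reagent amounts for a hypothetical 1.0 mmol Mitsunobu
# esterification (benzyl alcohol + benzoic acid, PPh3, DIAD).  The chosen
# equivalents are illustrative, not a recommended procedure.

MOLAR_MASS = {             # g/mol
    "benzyl alcohol": 108.14,
    "benzoic acid": 122.12,
    "triphenylphosphine": 262.29,
    "DIAD": 202.21,
}

EQUIVALENTS = {
    "benzyl alcohol": 1.0,
    "benzoic acid": 1.2,
    "triphenylphosphine": 1.2,
    "DIAD": 1.2,
}

def reagent_masses(scale_mmol: float) -> dict:
    """Milligrams of each reagent for the given scale (mmol of limiting alcohol)."""
    return {
        name: scale_mmol * EQUIVALENTS[name] * MOLAR_MASS[name]
        for name in MOLAR_MASS
    }

if __name__ == "__main__":
    for name, mg in reagent_masses(1.0).items():
        print(f"{name:20s} {mg:7.1f} mg")
```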
Mitsunobu reaction
[ "Chemistry" ]
1,437
[ "Coupling reactions", "Name reactions", "Organic reactions" ]
264,582
https://en.wikipedia.org/wiki/Remanence
Remanence or remanent magnetization or residual magnetism is the magnetization left behind in a ferromagnetic material (such as iron) after an external magnetic field is removed. Colloquially, when a magnet is "magnetized", it has remanence. The remanence of magnetic materials provides the magnetic memory in magnetic storage devices, and is used as a source of information on the past Earth's magnetic field in paleomagnetism. The word remanence is from remanent + -ence, meaning "that which remains". The equivalent term residual magnetization is generally used in engineering applications. In transformers, electric motors and generators a large residual magnetization is not desirable (see also electrical steel) as it is an unwanted contamination, for example, a magnetization remaining in an electromagnet after the current in the coil is turned off. Where it is unwanted, it can be removed by degaussing. Sometimes the term retentivity is used for remanence measured in units of magnetic flux density. Types Saturation remanence The default definition of magnetic remanence is the magnetization remaining in zero field after a large magnetic field is applied (enough to achieve saturation). The effect of a magnetic hysteresis loop is measured using instruments such as a vibrating sample magnetometer; and the zero-field intercept is a measure of the remanence. In physics this measure is converted to an average magnetization (the total magnetic moment divided by the volume of the sample) and denoted in equations as Mr. If it must be distinguished from other kinds of remanence, then it is called the saturation remanence or saturation isothermal remanence (SIRM) and denoted by Mrs. In engineering applications the residual magnetization is often measured using a B-H analyzer, which measures the response to an AC magnetic field (as in Fig. 1). This is represented by a flux density Br. This value of remanence is one of the most important parameters characterizing permanent magnets; it measures the strongest magnetic field they can produce. Neodymium magnets, for example, have a remanence approximately equal to 1.3 Tesla. Isothermal remanence Often a single measure of remanence does not provide adequate information on a magnet. For example, magnetic tapes contain a large number of small magnetic particles (see magnetic storage), and these particles are not identical. Magnetic minerals in rocks may have a wide range of magnetic properties (see rock magnetism). One way to look inside these materials is to add or subtract small increments of remanence. One way of doing this is first demagnetizing the magnet in an AC field, and then applying a field H and removing it. This remanence, denoted by Mr(H), depends on the field. It is called the initial remanence or the isothermal remanent magnetization (IRM). Another kind of IRM can be obtained by first giving the magnet a saturation remanence in one direction and then applying and removing a magnetic field in the opposite direction. This is called demagnetization remanence or DC demagnetization remanence and is denoted by symbols like Md(H), where H is the magnitude of the field. Yet another kind of remanence can be obtained by demagnetizing the saturation remanence in an ac field. This is called AC demagnetization remanence or alternating field demagnetization remanence and is denoted by symbols like Maf(H). If the particles are noninteracting single-domain particles with uniaxial anisotropy, there are simple linear relations between the remanences. 
Anhysteretic remanence Another kind of laboratory remanence is anhysteretic remanence or anhysteretic remanent magnetization (ARM). This is induced by exposing a magnet to a large alternating field plus a small DC bias field. The amplitude of the alternating field is gradually reduced to zero to get an anhysteretic magnetization, and then the bias field is removed to get the remanence. The anhysteretic magnetization curve is often close to an average of the two branches of the hysteresis loop, and is assumed in some models to represent the lowest-energy state for a given field. There are several ways of measuring the anhysteretic magnetization curve experimentally, based on fluxmeters and DC-biased demagnetization. ARM has also been studied because of its similarity to the write process in some magnetic recording technology and to the acquisition of natural remanent magnetization in rocks. See also Coercivity Hysteresis Rock magnetism Thermoremanent magnetization Viscous remanent magnetization Notes References External links Coercivity and Remanence in Permanent Magnets Magnet Man Computer engineering Rock magnetism Magnetic hysteresis
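Saturation remanence is read off a measured hysteresis loop as the magnetization (or flux density) at the zero-field crossing of the descending branch. The sketch below interpolates that intercept from tabulated (H, B) pairs; the sample data are invented round numbers, not measurements.

```python
# Hedged sketch: estimate the remanence B_r as the B value where the
# descending branch of a hysteresis loop crosses H = 0, by linear
# interpolation between the bracketing data points.  The (H, B) pairs
# below are invented for illustration.

def remanence(descending_branch):
    """descending_branch: list of (H, B) pairs with H decreasing through zero."""
    for (h1, b1), (h2, b2) in zip(descending_branch, descending_branch[1:]):
        if h1 >= 0.0 >= h2:                       # bracket the H = 0 crossing
            if h1 == h2:
                return (b1 + b2) / 2.0
            return b1 + (0.0 - h1) * (b2 - b1) / (h2 - h1)
    raise ValueError("descending branch does not cross H = 0")

if __name__ == "__main__":
    # Field H in kA/m, flux density B in tesla, swept down from +H_max.
    branch = [(800, 1.40), (400, 1.38), (100, 1.35), (0, 1.30), (-100, 1.20), (-400, 0.40)]
    print(f"B_r ~ {remanence(branch):.2f} T")
```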
Remanence
[ "Physics", "Materials_science", "Technology", "Engineering" ]
1,040
[ "Physical phenomena", "Computer engineering", "Electrical engineering", "Hysteresis", "Magnetic hysteresis" ]
264,606
https://en.wikipedia.org/wiki/Schwarzschild%20metric
In Einstein's theory of general relativity, the Schwarzschild metric (also known as the Schwarzschild solution) is an exact solution to the Einstein field equations that describes the gravitational field outside a spherical mass, on the assumption that the electric charge of the mass, angular momentum of the mass, and universal cosmological constant are all zero. The solution is a useful approximation for describing slowly rotating astronomical objects such as many stars and planets, including Earth and the Sun. It was found by Karl Schwarzschild in 1916. According to Birkhoff's theorem, the Schwarzschild metric is the most general spherically symmetric vacuum solution of the Einstein field equations. A Schwarzschild black hole or static black hole is a black hole that has neither electric charge nor angular momentum (non-rotating). A Schwarzschild black hole is described by the Schwarzschild metric, and cannot be distinguished from any other Schwarzschild black hole except by its mass. The Schwarzschild black hole is characterized by a surrounding spherical boundary, called the event horizon, which is situated at the Schwarzschild radius (), often called the radius of a black hole. The boundary is not a physical surface, and a person who fell through the event horizon (before being torn apart by tidal forces) would not notice any physical surface at that position; it is a mathematical surface which is significant in determining the black hole's properties. Any non-rotating and non-charged mass that is smaller than its Schwarzschild radius forms a black hole. The solution of the Einstein field equations is valid for any mass , so in principle (within the theory of general relativity) a Schwarzschild black hole of any mass could exist if conditions became sufficiently favorable to allow for its formation. In the vicinity of a Schwarzschild black hole, space curves so much that even light rays are deflected, and very nearby light can be deflected so much that it travels several times around the black hole. Formulation The Schwarzschild metric is a spherically symmetric Lorentzian metric (here, with signature convention ), defined on (a subset of) where is 3 dimensional Euclidean space, and is the two sphere. The rotation group acts on the or factor as rotations around the center , while leaving the first factor unchanged. The Schwarzschild metric is a solution of Einstein's field equations in empty space, meaning that it is valid only outside the gravitating body. That is, for a spherical body of radius the solution is valid for . To describe the gravitational field both inside and outside the gravitating body the Schwarzschild solution must be matched with some suitable interior solution at , such as the interior Schwarzschild metric. In Schwarzschild coordinates the Schwarzschild metric (or equivalently, the line element for proper time) has the form where is the metric on the two sphere, i.e. . 
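For reference, a standard textbook form of this line element in Schwarzschild coordinates, written here as a sketch with the common (−, +, +, +) signature (the particular sign convention is an assumption of this sketch, since it is not recoverable from the text), is

\[
ds^{2} \;=\; -\left(1 - \frac{r_s}{r}\right) c^{2}\,dt^{2}
\;+\; \left(1 - \frac{r_s}{r}\right)^{-1} dr^{2}
\;+\; r^{2}\left(d\theta^{2} + \sin^{2}\theta \, d\varphi^{2}\right),
\qquad r_s = \frac{2GM}{c^{2}},
\]

where t, r, θ, φ and the Schwarzschild radius r_s are the quantities defined in the next paragraph, G is the gravitational constant and M is the mass of the body.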
Furthermore, is positive for timelike curves, in which case is the proper time (time measured by a clock moving along the same world line with a test particle), is the speed of light, is, for , the time coordinate (measured by a clock located infinitely far from the massive body and stationary with respect to it), is, for , the radial coordinate (measured as the circumference, divided by 2, of a sphere centered around the massive body), is a point on the two sphere , is the colatitude of (angle from north, in units of radians) defined after arbitrarily choosing a z-axis, is the longitude of (also in radians) around the chosen z-axis, and is the Schwarzschild radius of the massive body, a scale factor which is related to its mass by , where is the gravitational constant. The Schwarzschild metric has a singularity for , which is an intrinsic curvature singularity. It also seems to have a singularity on the event horizon . Depending on the point of view, the metric is therefore defined only on the exterior region , only on the interior region or their disjoint union. However, the metric is actually non-singular across the event horizon, as one sees in suitable coordinates (see below). For , the Schwarzschild metric is asymptotic to the standard Lorentz metric on Minkowski space. For almost all astrophysical objects, the ratio is extremely small. For example, the Schwarzschild radius of the Earth is roughly , while the Sun, which is times as massive has a Schwarzschild radius of approximately 3.0 km. The ratio becomes large only in close proximity to black holes and other ultra-dense objects such as neutron stars. The radial coordinate turns out to have physical significance as the "proper distance between two events that occur simultaneously relative to the radially moving geodesic clocks, the two events lying on the same radial coordinate line". The Schwarzschild solution is analogous to a classical Newtonian theory of gravity that corresponds to the gravitational field around a point particle. Even at the surface of the Earth, the corrections to Newtonian gravity are only one part in a billion. History The Schwarzschild solution is named in honour of Karl Schwarzschild, who found the exact solution in 1915 and published it in January 1916, a little more than a month after the publication of Einstein's theory of general relativity. It was the first exact solution of the Einstein field equations other than the trivial flat space solution. Schwarzschild died shortly after his paper was published, as a result of a disease (thought to be pemphigus) he developed while serving in the German army during World War I. Johannes Droste in 1916 independently produced the same solution as Schwarzschild, using a simpler, more direct derivation. In the early years of general relativity there was a lot of confusion about the nature of the singularities found in the Schwarzschild and other solutions of the Einstein field equations. In Schwarzschild's original paper, he put what we now call the event horizon at the origin of his coordinate system. In this paper he also introduced what is now known as the Schwarzschild radial coordinate ( in the equations above), as an auxiliary variable. In his equations, Schwarzschild was using a different radial coordinate that was zero at the Schwarzschild radius. A more complete analysis of the singularity structure was given by David Hilbert in the following year, identifying the singularities both at and . 
Although there was general consensus that the singularity at was a 'genuine' physical singularity, the nature of the singularity at remained unclear. In 1921, Paul Painlevé and in 1922 Allvar Gullstrand independently produced a metric, a spherically symmetric solution of Einstein's equations, which we now know is a coordinate transformation of the Schwarzschild metric, Gullstrand–Painlevé coordinates, in which there was no singularity at . They, however, did not recognize that their solutions were just coordinate transforms, and in fact used their solution to argue that Einstein's theory was wrong. In 1924 Arthur Eddington produced the first coordinate transformation (Eddington–Finkelstein coordinates) that showed that the singularity at was a coordinate artifact, although he also seems to have been unaware of the significance of this discovery. Later, in 1932, Georges Lemaître gave a different coordinate transformation (Lemaître coordinates) to the same effect and was the first to recognize that this implied that the singularity at was not physical. In 1939 Howard Robertson showed that a free-falling observer descending in the Schwarzschild metric would cross the singularity in a finite amount of proper time even though this would take an infinite amount of time in terms of coordinate time . In 1950, John Synge produced a paper that showed the maximal analytic extension of the Schwarzschild metric, again showing that the singularity at was a coordinate artifact and that it represented two horizons. A similar result was later rediscovered by George Szekeres, and independently Martin Kruskal. The new coordinates, nowadays known as Kruskal–Szekeres coordinates, were much simpler than Synge's, but both provided a single set of coordinates that covered the entire spacetime. However, perhaps due to the obscurity of the journals in which the papers of Lemaître and Synge were published, their conclusions went unnoticed, with many of the major players in the field, including Einstein, believing that the singularity at the Schwarzschild radius was physical. Synge's later derivation of the Kruskal–Szekeres metric solution, which was motivated by a desire to avoid "using 'bad' [Schwarzschild] coordinates to obtain 'good' [Kruskal–Szekeres] coordinates", has been generally under-appreciated in the literature, but was adopted by Chandrasekhar in his black hole monograph. Real progress was made in the 1960s when the mathematically rigorous formulation cast in terms of differential geometry entered the field of general relativity, allowing more exact definitions of what it means for a Lorentzian manifold to be singular. This led to definitive identification of the singularity in the Schwarzschild metric as an event horizon, i.e., a hypersurface in spacetime that can be crossed in only one direction. Singularities and black holes The Schwarzschild solution appears to have singularities at and ; some of the metric components "blow up" (entail division by zero or multiplication by infinity) at these radii. Since the Schwarzschild metric is expected to be valid only for those radii larger than the radius of the gravitating body, there is no problem as long as . For ordinary stars and planets this is always the case. For example, the radius of the Sun is approximately , while its Schwarzschild radius is only . The singularity at divides the Schwarzschild coordinates into two disconnected patches. The exterior Schwarzschild solution with is the one that is related to the gravitational fields of stars and planets.
The interior Schwarzschild solution with , which contains the singularity at , is completely separated from the outer patch by the singularity at . The Schwarzschild coordinates therefore give no physical connection between the two patches, which may be viewed as separate solutions. The singularity at is an illusion, however; it is an instance of what is called a coordinate singularity. As the name implies, the singularity arises from a bad choice of coordinates or coordinate conditions. When changing to a different coordinate system (for example, Lemaître coordinates, Eddington–Finkelstein coordinates, Kruskal–Szekeres coordinates, Novikov coordinates, or Gullstrand–Painlevé coordinates) the metric becomes regular at and the external patch can be extended to values of smaller than . Using a different coordinate transformation one can then relate the extended external patch to the inner patch. The case is different, however. If one asks that the solution be valid for all one runs into a true physical singularity, or gravitational singularity, at the origin. To see that this is a true singularity one must look at quantities that are independent of the choice of coordinates. One such important quantity is the Kretschmann invariant, which is given by At the curvature becomes infinite, indicating the presence of a singularity. At this point the metric cannot be extended in a smooth manner (the Kretschmann invariant involves second derivatives of the metric); spacetime itself is then no longer well-defined. Furthermore, Sbierski showed the metric cannot be extended even in a continuous manner. For a long time it was thought that such a solution was non-physical. However, a greater understanding of general relativity led to the realization that such singularities were a generic feature of the theory and not just an exotic special case. The Schwarzschild solution, taken to be valid for all , is called a Schwarzschild black hole. It is a perfectly valid solution of the Einstein field equations, although (like other black holes) it has rather bizarre properties. For the Schwarzschild radial coordinate becomes timelike and the time coordinate becomes spacelike. A curve at constant is no longer a possible worldline of a particle or observer, not even if a force is exerted to try to keep it there; this occurs because spacetime has been curved so much that the direction of cause and effect (the particle's future light cone) points into the singularity. The surface demarcates what is called the event horizon of the black hole. It represents the point past which light can no longer escape the gravitational field. Any physical object whose radius becomes less than or equal to the Schwarzschild radius has undergone gravitational collapse and become a black hole. Alternative coordinates The Schwarzschild solution can be expressed in a range of different choices of coordinates besides the Schwarzschild coordinates used above. Different choices tend to highlight different features of the solution. The table below shows some popular choices. In the table, some shorthand has been introduced for brevity. The speed of light has been set to one. The notation is used for the metric of a unit radius 2-dimensional sphere. Moreover, in each entry and denote alternative choices of radial and time coordinate for the particular coordinates. Note that the or may vary from entry to entry. The Kruskal–Szekeres coordinates have the form to which the Belinski–Zakharov transform can be applied.
This implies that the Schwarzschild black hole is a form of gravitational soliton. Flamm's paraboloid The spatial curvature of the Schwarzschild solution for can be visualized as follows. Consider a constant time equatorial slice through the Schwarzschild solution by fixing , = constant, and letting the remaining Schwarzschild coordinates vary. Imagine now that there is an additional Euclidean dimension , which has no physical reality (it is not part of spacetime). Then replace the plane with a surface dimpled in the direction according to the equation (Flamm's paraboloid) This surface has the property that distances measured within it match distances in the Schwarzschild metric, because with the definition of w above, Thus, Flamm's paraboloid is useful for visualizing the spatial curvature of the Schwarzschild metric. It should not, however, be confused with a gravity well. No ordinary (massive or massless) particle can have a worldline lying on the paraboloid, since all distances on it are spacelike (this is a cross-section at one moment of time, so any particle moving on it would have an infinite velocity). A tachyon could have a spacelike worldline that lies entirely on a single paraboloid. However, even in that case its geodesic path is not the trajectory one gets through a "rubber sheet" analogy of a gravitational well: in particular, if the dimple is drawn pointing upward rather than downward, the tachyon's geodesic path still curves toward the central mass, not away. See the gravity well article for more information. Flamm's paraboloid may be derived as follows. The Euclidean metric in the cylindrical coordinates is written Letting the surface be described by the function , the Euclidean metric can be written as Comparing this with the Schwarzschild metric in the equatorial plane () at a fixed time ( = constant, ) yields an integral expression for : whose solution is Flamm's paraboloid. Orbital motion A particle orbiting in the Schwarzschild metric can have a stable circular orbit with . Circular orbits with between and are unstable, and no circular orbits exist for . The circular orbit of minimum radius corresponds to an orbital velocity approaching the speed of light. It is possible for a particle to have a constant value of between and , but only if some force acts to keep it there. Noncircular orbits, such as Mercury's, dwell longer at small radii than would be expected in Newtonian gravity. This can be seen as a less extreme version of the more dramatic case in which a particle passes through the event horizon and dwells inside it forever. Intermediate between the case of Mercury and the case of an object falling past the event horizon, there are exotic possibilities such as knife-edge orbits, in which the satellite can be made to execute an arbitrarily large number of nearly circular orbits, after which it flies back outward. Symmetries The isometry group of the Schwarzschild metric is , where is the orthogonal group of rotations and reflections in three dimensions, comprises the time translations, and is the group generated by time reversal. This is thus the subgroup of the ten-dimensional Poincaré group which takes the time axis (trajectory of the star) to itself. It omits the spatial translations (three dimensions) and boosts (three dimensions). It retains the time translations (one dimension) and rotations (three dimensions). Thus it has four dimensions.
Like the Poincaré group, it has four connected components: the component of the identity; the time reversed component; the spatial inversion component; and the component which is both time reversed and spatially inverted. Curvatures The Ricci curvature scalar and the Ricci curvature tensor are both zero. Non-zero components of the Riemann curvature tensor are given by from which one can see that . Six of these formulas are Eq. 5.13 in Carroll and imply the other 6 by . Components which are obtainable by other symmetries of the Riemann tensor are not displayed. To understand the physical meaning of these quantities, it is useful to express the curvature tensor in an orthonormal basis. In an orthonormal basis of an observer the non-zero components in geometric units are Again, components which are obtainable by the symmetries of the Riemann tensor are not displayed. These results are invariant to any Lorentz boost, thus the components do not change for non-static observers. The geodesic deviation equation shows that the tidal acceleration between two observers separated by is , so a body of length is stretched in the radial direction by an apparent acceleration and squeezed in the perpendicular directions by . See also Derivation of the Schwarzschild solution Reissner–Nordström metric (charged, non-rotating solution) Kerr metric (uncharged, rotating solution) Kerr–Newman metric (charged, rotating solution) Black hole, a general review Schwarzschild coordinates Kruskal–Szekeres coordinates Eddington–Finkelstein coordinates Gullstrand–Painlevé coordinates Lemaître coordinates (Schwarzschild solution in synchronous coordinates) Frame fields in general relativity (Lemaître observers in the Schwarzschild vacuum) Tolman–Oppenheimer–Volkoff equation (metric and pressure equations of a static and spherically symmetric body of isotropic material) Planck length Notes References Text of the original paper, in Wikisource Translation: A commentary on the paper, giving a simpler derivation: Text of the original paper, in Wikisource Translation: Exact solutions in general relativity Black holes 1916 in science
Schwarzschild metric
[ "Physics", "Astronomy", "Mathematics" ]
3,964
[ "Exact solutions in general relativity", "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Mathematical objects", "Astrophysics", "Equations", "Density", "Stellar phenomena", "Astronomical objects" ]
265,064
https://en.wikipedia.org/wiki/Supercritical%20water%20oxidation
Supercritical water oxidation (SCWO) is a process that occurs in water at temperatures and pressures above a mixture's thermodynamic critical point. Under these conditions water becomes a fluid with unique properties that can be used to advantage in the destruction of recalcitrant and hazardous wastes such as polychlorinated biphenyls (PCB) or per- and polyfluoroalkyl substances (PFAS). Supercritical water has a density between that of water vapor and liquid at standard conditions, and exhibits high gas-like diffusion rates along with high liquid-like collision rates. In addition, the behavior of water as a solvent is altered (in comparison to that of subcritical liquid water): it behaves much less like a polar solvent. As a result, the solubility behavior is "reversed" so that oxygen and organics such as chlorinated hydrocarbons become soluble in the water, allowing single-phase reaction of aqueous waste with a dissolved oxidizer. The reversed solubility also causes salts to precipitate out of solution, meaning they can be treated using conventional methods for solid-waste residuals. Efficient oxidation reactions occur at low temperatures (400–650 °C) with reduced NOx production. SCWO can be classified as green chemistry or as a clean technology. The elevated pressures and temperatures required for SCWO are routinely encountered in industrial applications such as petroleum refining and chemical synthesis. A unique addition (mostly of academic interest) to the world of supercritical water (SCW) oxidation is generating high-pressure flames inside the SCW medium. The pioneering work on high-pressure supercritical water flames was carried out by Professor E. U. Franck at the University of Karlsruhe in Germany in the late 1980s. This work was mainly aimed at anticipating conditions that would cause spontaneous generation of undesirable flames in the flameless SCW oxidation process. Such flames would cause instabilities in the system and its components. ETH Zurich pursued the investigation of hydrothermal flames in continuously operated reactors. The rising need for waste treatment and destruction methods motivated a Japanese group at the Ebara Corporation to explore SCW flames as an environmental tool. Research on hydrothermal flames has also begun at NASA Glenn Research Center in Cleveland, Ohio. Basic research Basic research on supercritical water oxidation was undertaken in the 1990s at Sandia National Laboratory's Combustion Research Facility (CRF), in Livermore, CA. SCWO was originally proposed as a hazardous waste destruction technology in response to the Kyoto protocol; multiple waste streams were studied by Steven F. Rice and Russ Hanush, and hydrothermal (supercritical water) flames were investigated by Richard R. Steeper and Jason D. Aiken. Among the waste streams studied were military dyes and pyrotechnics, methanol, and isopropyl alcohol. Hydrogen peroxide was used as an oxidizing agent, and Eric Croiset was tasked with detailed measurements of the decomposition of hydrogen peroxide at supercritical water conditions. In mid-1992, Thomas G. McGuinness, PE, invented what is now known as the "transpiring-wall SCWO reactor" (TWR) while seconded to Los Alamos National Laboratory on behalf of Summit Research Corporation. McGuinness subsequently received the first US patent for a TWR in early 1995. The TWR was designed to mitigate problems of salt/solids deposition, corrosion and thermal limitations occurring in other SCWO reactor designs (e.g. tubular and vat-type reactors) at the time.
The upper part of the vertical reactor incorporates a permeable liner through which a clean fluid permeates to help prevent salts and other solids from accumulating at the inner surface of the liner. The liner also insulates the outer pressure containment vessel from high temperatures within the reaction zone. The liner can be manufactured from a variety of materials resistant to corrosion and high reaction temperatures. The bottom end of the TWR incorporates a "quench cooler" for cooling the reaction byproducts while neutralizing any components that might form acids during transition to subcritical temperature. Proof-of-concept and performance advantages of the TWR for a variety of feedstocks were demonstrated by Eckhard Dinjus and Johannes Abeln at Forschungszentrum Karlsruhe (FZK), via direct comparison between a TWR and an adjacent tubular reactor. Major engineering challenges were associated with the deposition of salts and chemical corrosion in these supercritical water reactors. Anthony Lajeunesse led the team investigating these issues. To address them, Lajeunesse designed a transpiring-wall reactor that introduced a pressure differential across the walls of a porous inner sleeve to continuously rinse the inner walls of the reactor with fresh water. Russ Hanush was charged with the construction and operation of the supercritical fluids reactor (SFR) used for these studies. Among its design intricacies were the Inconel 625 alloy necessary for operation at such extreme temperatures and pressures, and the design of the high-pressure, high-temperature optical cells used for photometric access to the reacting flows, which incorporated 24-carat gold pressure seals and sapphire windows. Commercial applications Several companies in the United States are now working to commercialize supercritical reactors to destroy hazardous wastes. Widespread commercial application of SCWO technology requires a reactor design capable of resisting fouling and corrosion under supercritical conditions. In Japan a number of commercial SCWO applications exist, among them one unit for treatment of halogenated waste built by Organo. In Korea, two commercial-size units have been built by Hanwha. In Europe, Chematur Engineering AB of Sweden commercialized the SCWO technology, the AquaCat process, for treatment of spent chemical catalysts to recover the precious metal. The unit has been built for Johnson Matthey in the UK. It is the only commercial SCWO unit in Europe and, with its capacity of 3,000 l/h, it is the largest SCWO unit in the world. Chematur's Super Critical Fluids technology was acquired by SCFI Group (Cork, Ireland), which is actively commercializing the Aqua Critox SCWO process for treatment of sludge, e.g. de-inking sludge and sewage sludge. Many long-duration trials on these applications have been made, and thanks to the high destruction efficiency of 99.9%+, the solid residue after the SCWO process is well suited for recycling – in the case of de-inking sludge as paper filler and in the case of sewage sludge as phosphorus and coagulant. SCFI Group operates a 250 l/h Aqua Critox demonstration plant in Cork, Ireland. AquaNova Technologies, Inc. (https://aquanovatech.com) is actively commercializing its 2nd-generation transpiring-wall SCWO reactor ("TWR") with a focus on waste treatment and renewable energy applications.
AquaNova's patent-pending TWR-SCWO technology is projected to treat a broad variety of wastes, including PFAS, while generating electric power with improved system thermal efficiency. AquaNova's paradigm-changing technology is designed to operate at supercritical and sub-critical pressures, and at higher reaction temperatures than traditional SCWO technology. AquaNova is targeting larger-scale industrial applications. AquaNova Technologies was founded by Tom McGuinness, PE, who is the original inventor of the transpiring-wall reactor (TWR) under US patent 5,384,051. 374Water Inc. is a company offering commercial SCWO systems that convert organic wastes to clean water, energy and minerals. It was spun out after more than seven years of research and development funded by the Bill & Melinda Gates Foundation at Prof. Deshusses's laboratory at Duke University. The founders of 374Water, Prof. Marc Deshusses and Kobe Nagar, hold the waste-processing reactor patent relevant to SCWO. 374Water is actively commercializing its AirSCWO systems for the treatment of biosolids and wastewater sludges, organic chemical wastes, and PFAS wastes including unspent Aqueous Film Forming Foams (AFFFs), rinsates or spent resins and adsorption media. The first commercial sale was announced in February 2022. Aquarden Technologies (Skaevinge, Denmark) provides modular SCWO plants for the destruction of hazardous pollutants such as PFAS, pesticides, and other problematic hydrocarbons in industrial waste streams. Aquarden also provides remediation of hazardous energetic wastes and chemical warfare agents with SCWO; a full-scale SCWO system has been operating for some years in France for the defense industry. See also Supercritical fluid Wet oxidation Incineration References External links There are some research groups working on this topic throughout the world: The Deshusses lab at Duke University has a Nix1 (1 ton/day) prototype in Durham, North Carolina SCFI have a working AquaCritox A10 plant in Cork (Ireland) UVa High Pressure Processes Group (Spain) Clean Technology Group (UK) FZK Karlsruhe (Germany) ETH Zurich, Transport Processes and Reactions Laboratory (Switzerland) UCL (London, UK) Clean Materials technology group working on Continuous Hydrothermal Flow Synthesis UBC (Vancouver, BC) Mechanical Engineering Research Activities, including projects on SCWO Turbosystems Engineering SCWO technology Universidad de Cádiz (UCA). Supercritical Fluids Group Physical chemistry Waste treatment technology
Supercritical water oxidation
[ "Physics", "Chemistry", "Engineering" ]
1,962
[ "Applied and interdisciplinary physics", "Water treatment", "nan", "Environmental engineering", "Waste treatment technology", "Physical chemistry" ]
265,767
https://en.wikipedia.org/wiki/Epidemiological%20method
The science of epidemiology has matured significantly from the times of Hippocrates, Semmelweis and John Snow. The techniques for gathering and analyzing epidemiological data vary depending on the type of disease being monitored, but each study will have overarching similarities. Outline of the process of an epidemiological study Establish that a problem exists Full epidemiological studies are expensive and laborious undertakings. Before any study is started, a case must be made for the importance of the research. Confirm the homogeneity of the events Any conclusions drawn from inhomogeneous cases will be suspect. All events or occurrences of the disease must be true cases of the disease. Collect all the events It is important to collect as much information as possible about each event in order to inspect a large number of possible risk factors. The events may be collected from varied methods of epidemiological study or from censuses or hospital records. The events can be characterized by incidence rates and prevalence rates. Often, occurrence of a single disease entity is set as an event. Given the inherently heterogeneous nature of any given disease (i.e., the unique disease principle), a single disease entity may be subdivided into disease subtypes. This framework is well conceptualized in the interdisciplinary field of molecular pathological epidemiology (MPE). Characterize the events as to epidemiological factors Predisposing factors Non-environmental factors that increase the likelihood of getting a disease. Genetic history, age, and gender are examples. Enabling/disabling factors Factors relating to the environment that either increase or decrease the likelihood of disease. Exercise and good diet are examples of disabling factors. A weakened immune system and poor nutrition are examples of enabling factors. Precipitation factors This factor is the most important in that it identifies the source of exposure. It may be a germ, toxin or gene. Reinforcing factors These are factors that compound the likelihood of getting a disease. They may include repeated exposure or excessive environmental stresses. Look for patterns and trends Here one looks for similarities in the cases which may identify major risk factors for contracting the disease. Epidemic curves may be used to identify such risk factors. Formulate a hypothesis If a trend has been observed in the cases, the researcher may postulate as to the nature of the relationship between the potential disease-causing agent and the disease. Test the hypothesis Because epidemiological studies can rarely be conducted in a laboratory, the results are often polluted by uncontrollable variations in the cases. This often makes the results difficult to interpret. Two methods have evolved to assess the strength of the relationship between the disease-causing agent and the disease. Koch's postulates were the first criteria developed for epidemiological relationships. Because they only work well for highly contagious bacteria and toxins, this method is largely out of favor. The Bradford Hill criteria are the current standards for epidemiological relationships. A relationship may fulfill all, some, or none of the criteria and still be true. Publish the results. Measures Epidemiologists are famous for their use of rates. Each measure serves to characterize the disease, giving valuable information about contagiousness, incubation period, duration, and mortality of the disease.
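Before the individual measures are listed below, a minimal sketch may help make the idea of a rate concrete. The function names and the example figures here are invented for illustration only and are not taken from any particular study.

def incidence_rate(new_cases: int, person_time_at_risk: float) -> float:
    # New cases per unit of person-time at risk (e.g. per person-year).
    return new_cases / person_time_at_risk

def point_prevalence(existing_cases: int, population: int) -> float:
    # Proportion of the population that has the disease at a single point in time.
    return existing_cases / population

# Hypothetical example: a town of 10,000 people followed for one year,
# with 120 new cases over 9,800 person-years at risk and 300 people
# living with the disease on the survey date.
print(incidence_rate(120, 9_800))     # about 0.012 cases per person-year
print(point_prevalence(300, 10_000))  # 0.03, i.e. a point prevalence of 3%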
Measures of occurrence Incidence measures Incidence rate, where cases included are defined using a case definition Hazard rate Cumulative incidence Prevalence measures Point prevalence Period prevalence Measures of association Relative measures Risk ratio Rate ratio Odds ratio Hazard ratio Absolute measures Absolute risk reduction Attributable risk Attributable risk in exposed Percent attributable risk Levin's attributable risk Other measures Virulence and Infectivity Mortality rate and Morbidity rate Case fatality Sensitivity (tests) and Specificity (tests) Limitations Epidemiological (and other observational) studies typically highlight associations between exposures and outcomes, rather than causation. While some consider this a limitation of observational research, epidemiological models of causation (e.g. Bradford Hill criteria) contend that an entire body of evidence is needed before determining if an association is truly causal. Moreover, many research questions are impossible to study in experimental settings, due to concerns around ethics and study validity. For example, the link between cigarette smoke and lung cancer was uncovered largely through observational research; however research ethics would certainly prohibit conducting a randomized trial of cigarette smoking once it had already been identified as a potential health threat. See also References External links Epidemiologic.org Epidemiologic Inquiry online weblog for epidemiology researchers Epidemiology Forum A discussion and forum community for epi analysis support and fostering questions, debates, and collaborations in epidemiology The Centre for Evidence Based Medicine at Oxford maintains an on-line "Toolbox" of evidence-based medicine methods. Epimonitor has a comprehensive list of links to associations, agencies, bulletins, etc. Epidemiology for the Uninitiated On line text, with easy explanations. North Carolina Center for Public Health Preparedness Training On line training classes for epidemiology and related topics. People's Epidemiology Library Epidemiology
Epidemiological method
[ "Environmental_science" ]
1,067
[ "Epidemiology", "Environmental social science" ]
265,769
https://en.wikipedia.org/wiki/Hazard%20ratio
In survival analysis, the hazard ratio (HR) is the ratio of the hazard rates corresponding to the conditions characterised by two distinct levels of a treatment variable of interest. For example, in a clinical study of a drug, the treated population may die at twice the rate of the control population. The hazard ratio would be 2, indicating a higher hazard of death from the treatment. For example, a scientific paper might use an HR to state something such as: "Adequate COVID-19 vaccination status was associated with significantly decreased risk for the composite of severe COVID-19 or mortality with a[n] HR of 0.20 (95% CI, 0.17–0.22)." In essence, the hazard for the composite outcome was 80% lower among the vaccinated relative to those who were unvaccinated in the same study. So, for a hazardous outcome (e.g., severe disease or death), an HR below 1 indicates that the treatment (e.g., vaccination) is protective against the outcome of interest. In other cases, an HR greater than 1 indicates the treatment is favorable. For example, if the outcome is actually favorable (e.g., accepting a job offer to end a spell of unemployment), an HR greater than 1 indicates that seeking a job is favorable to not seeking one (if "treatment" is defined as seeking a job). Hazard ratios differ from relative risks (RRs) and odds ratios (ORs) in that RRs and ORs are cumulative over an entire study, using a defined endpoint, while HRs represent instantaneous risk over the study time period, or some subset thereof. Hazard ratios suffer somewhat less from selection bias with respect to the endpoints chosen and can indicate risks that happen before the endpoint. Definition and derivation Regression models are used to obtain hazard ratios and their confidence intervals. The instantaneous hazard rate is the limit of the number of events per unit time divided by the number at risk, as the time interval approaches 0: where N(t) is the number at risk at the beginning of an interval. A hazard is the probability that a patient fails between and , given that they have survived up to time , divided by , as approaches zero. The hazard ratio is the effect on this hazard rate of a difference, such as group membership (for example, treatment or control, male or female), as estimated by regression models that treat the logarithm of the HR as a function of a baseline hazard and a linear combination of explanatory variables: Such models are generally classed proportional hazards regression models; the best known being the Cox proportional hazards model, and the exponential, Gompertz and Weibull parametric models. For two groups that differ only in treatment condition, the ratio of the hazard functions is given by , where is the estimate of treatment effect derived from the regression model. This hazard ratio, that is, the ratio between the predicted hazard for a member of one group and that for a member of the other group, is given by holding everything else constant, i.e. assuming proportionality of the hazard functions. For a continuous explanatory variable, the same interpretation applies to a unit difference. Other HR models have different formulations and the interpretation of the parameter estimates differs accordingly. Interpretation In its simplest form, the hazard ratio can be interpreted as the chance of an event occurring in the treatment arm divided by the chance of the event occurring in the control arm, or vice versa, of a study. 
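The expressions referred to in the definition above can be written out explicitly. As a sketch in conventional notation, with h the hazard, h0 the baseline hazard, β the regression coefficient and x the treatment indicator (these symbol choices are assumptions of this sketch rather than the article's own), the instantaneous hazard rate and the Cox-type regression model are

\[ h(t) \;=\; \lim_{\Delta t \to 0} \frac{\text{number of events in } (t,\, t+\Delta t)}{N(t)\,\Delta t}, \]
\[ \log h(t \mid x) \;=\; \log h_0(t) + \beta x, \qquad \mathrm{HR} \;=\; \frac{h(t \mid x=1)}{h(t \mid x=0)} \;=\; e^{\beta}, \]

so that under proportional hazards the hazard ratio is the exponential of the estimated treatment coefficient, holding all other covariates constant.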
The resolution of these endpoints is usually depicted using Kaplan–Meier survival curves. These curves show the proportion of each group in which the endpoint has not yet been reached. The endpoint could be any dependent variable associated with the covariate (independent variable), e.g. death, remission of disease or contraction of disease. The curve represents the odds of an endpoint having occurred at each point in time (the hazard). The hazard ratio is simply the relationship between the instantaneous hazards in the two groups and represents, in a single number, the magnitude of distance between the Kaplan–Meier plots. Hazard ratios do not reflect a time unit of the study. The difference between hazard-based and time-based measures is akin to the difference between the odds of winning a race and the margin of victory. When a study reports one hazard ratio per time period, it is assumed that the difference between groups was proportional. Hazard ratios become meaningless when this assumption of proportionality is not met. If the proportional hazard assumption holds, a hazard ratio of one means equivalence in the hazard rate of the two groups, whereas a hazard ratio other than one indicates a difference in hazard rates between groups. The researcher indicates the probability of this sample difference being due to chance by reporting the probability associated with some test statistic. For instance, the test statistic from the Cox model or the log-rank test might then be used to assess the significance of any differences observed in these survival curves. Conventionally, probabilities lower than 0.05 are considered significant, and researchers provide a 95% confidence interval for the hazard ratio, e.g. derived from the standard deviation of the Cox-model regression coefficient, i.e. . Statistically significant hazard ratios cannot include unity (one) in their confidence intervals. The proportional hazards assumption The proportional hazards assumption for hazard ratio estimation is strong and often unreasonable. Complications, adverse effects and late effects are all possible causes of change in the hazard rate over time. For instance, a surgical procedure may have high early risk, but excellent long-term outcomes. If the hazard ratio between groups remains constant, this is not a problem for interpretation. However, interpretation of hazard ratios becomes impossible when selection bias exists between groups. For instance, a particularly risky surgery might result in the survival of a systematically more robust group who would have fared better under any of the competing treatment conditions, making it look as if the risky procedure was better. Follow-up time is also important. A cancer treatment associated with better remission rates might on follow-up be associated with higher relapse rates. The researchers' decision about when to follow up is arbitrary and may lead to very different reported hazard ratios. The hazard ratio and survival Hazard ratios are often treated as a ratio of death probabilities. For example, a hazard ratio of 2 is thought to mean that a group has twice the chance of dying as a comparison group. In the Cox-model, this can be shown to translate to the following relationship between group survival functions: (where r is the hazard ratio). Therefore, with a hazard ratio of 2, if (20% survived at time t), (4% survived at t). The corresponding death probabilities are 0.8 and 0.96. It should be clear that the hazard ratio is a relative measure of effect and tells us nothing about absolute risk.
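The survival relationship and the worked numbers just above can be checked with a few lines of code. This is a minimal sketch, with invented variable names, of the Cox-model identity S1(t) = S0(t) ** r stated in the text.

def survival_under_hazard_ratio(s_control: float, hazard_ratio: float) -> float:
    # Cox-model relationship between group survival functions:
    # S1(t) = S0(t) ** r, where r is the hazard ratio.
    return s_control ** hazard_ratio

s0 = 0.20                                  # 20% of the control group survive to time t
r = 2.0                                    # hazard ratio of 2
s1 = survival_under_hazard_ratio(s0, r)
print(s1)                                  # 0.04, i.e. only 4% survive in the other group
print(1 - s0, 1 - s1)                      # corresponding death probabilities: 0.8 and 0.96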
While hazard ratios allow for hypothesis testing, they should be considered alongside other measures for interpretation of the treatment effect, e.g. the ratio of median times (median ratio) at which treatment and control group participants are at some endpoint. If the analogy of a race is applied, the hazard ratio is equivalent to the odds that an individual in the group with the higher hazard reaches the end of the race first. The probability of being first can be derived from the odds, which is the probability of being first divided by the probability of not being first: ; conversely, . In the previous example, a hazard ratio of 2 corresponds to a 67% chance of an early death. The hazard ratio does not convey information about how soon the death will occur. The hazard ratio, treatment effect and time-based endpoints Treatment effect depends on the underlying disease related to survival function, not just the hazard ratio. Since the hazard ratio does not give us direct time-to-event information, researchers have to report median endpoint times and calculate the median endpoint time ratio by dividing the control group median value by the treatment group median value. While the median endpoint ratio is a relative speed measure, the hazard ratio is not. The relationship between treatment effect and the hazard ratio is given as . A statistically important, but practically insignificant effect can produce a large hazard ratio, e.g. a treatment increasing the number of one-year survivors in a population from one in 10,000 to one in 1,000 has a hazard ratio of 10. It is unlikely that such a treatment would have had much impact on the median endpoint time ratio, which likely would have been close to unity, i.e. mortality was largely the same regardless of group membership and clinically insignificant. By contrast, a treatment group in which 50% of infections are resolved after one week (versus 25% in the control) yields a hazard ratio of two. If it takes ten weeks for all cases in the treatment group and half of cases in the control group to resolve, the ten-week hazard ratio remains at two, but the median endpoint time ratio is ten, a clinically significant difference. See also Survival analysis Failure rate and Hazard rate Proportional hazards models Relative risk References Epidemiology Medical statistics Statistical ratios Survival analysis
Hazard ratio
[ "Environmental_science" ]
1,870
[ "Epidemiology", "Environmental social science" ]
265,811
https://en.wikipedia.org/wiki/Lockheed%20L-2000
The Lockheed L-2000 was Lockheed Corporation's entry in a government-funded competition to build the United States' first supersonic airliner in the 1960s. The L-2000 lost the contract to the Boeing 2707, but that competing design was ultimately canceled for political, environmental and economic reasons. In 1961, President John F. Kennedy committed the government to subsidize 75% of the development of a commercial airliner to compete with the Anglo-French Concorde then under development. The director of the Federal Aviation Administration (FAA), Najeeb Halaby, elected to improve on the Concorde's design rather than compete head-to-head with it. The SST, which might have represented a significant advance over the Concorde, was intended to carry 250 passengers (a large number at the time, more than twice as many as Concorde), fly at Mach 2.7-3.0, and have a range of 4,000 mi (7,400 km). The program was launched on June 5, 1963, and the FAA estimated that by 1990 there would be a market for 500 SSTs. Boeing, Lockheed, and North American officially responded. North American's design was soon rejected, but the Boeing and Lockheed designs were selected for further study. Design and development Early design studies Most of the major US aviation firms spent at least some time in the 1950s considering SST designs. Lockheed's first attempts date to 1958. Lockheed sought an airplane with cruise speeds of around with takeoff and landing speeds comparable to those of large subsonic jets of the same era. Early designs followed Lockheed's tapered straight wing, similar to the one used on the F-104 Starfighter, with a delta-shaped canard for aerodynamic trim. During wind-tunnel tests, this design demonstrated substantial shifts in the airplane's center of pressure (C/L). These would require large trim changes as the aircraft changed speed, causing trim drag. A delta wing was substituted, which alleviated a portion of the movement, but it was not deemed sufficient. Lockheed knew a variable-geometry, swing-wing design could accomplish this goal, but felt it was too heavy: they preferred a fixed-wing solution. In a worst-case scenario, they were willing to design a fixed-wing aircraft using fuel for ballast. By 1962, Lockheed arrived at a highly swept, cranked-arrow design featuring four engine pods buried in the wings and a canard. The improvement was closer to their goal, but still not optimal. By 1963, they extended the leading edge of the wing forward to eliminate the need for the canard, and re-shaped the wing into a double-delta shape with a mild twist and camber. This, along with careful shaping of the fuselage, was able to control the shift in the center of pressure caused by the highly swept forward part of the wing developing lift supersonically. The engines were shifted from being buried in the wings to individual pods slung below the wings. Later design studies The new design was designated L-2000-1 and was 223 ft (70 m) long with a narrow-body 132 in (335.2 cm) wide fuselage to meet aerodynamic requirements, allowing for five-abreast seating in coach and a four-abreast arrangement in first class. A typical mixed-class seating layout would seat around 170 passengers, with high-density layouts exceeding 200 passengers. The L-2000-1 featured a long, pointed nose that was almost flat on top and curved on the bottom, which allowed for improved supersonic performance, and could be drooped for takeoff and landing to provide adequate visibility.
The wing design featured a sharp forward inboard sweep of 80°, with the remaining part of the wing's leading edge swept back 60°, with an overall area of 8,370 ft² (778 m²). The high sweep angles produced powerful vortices on the leading edge which increased lift at moderate to high angles of attack, yet still retained stable airflow over the control surfaces during a stall. These vortices also provided good directional control, which was otherwise somewhat deficient with the nose drooped at low speeds. The wing, while only 3% thick, provided substantial lift due to its large area, which, aided by vortex lift, allowed takeoff and landing speeds comparable to those of a Boeing 707. Additionally, a delta wing is a naturally rigid structure which requires little stiffening. The plane's undercarriage was a traditional tricycle type with a twin-wheeled nose gear. Each of the two six-wheeled main gears used the same tires as the Douglas DC-8, but filled with nitrogen and inflated to lower pressures. To provide an optimum entry date into service, Lockheed decided to use a beefed-up turbofan derivative of the Pratt & Whitney J58. The J58 had already successfully proven itself as a high-thrust, high-performance jet engine on the top-secret Lockheed A-12 (and subsequently on the Lockheed SR-71 Blackbird). Since it was a turbofan, it was deemed to be quieter than a typical turbojet at low altitude and low speed, required no afterburner for takeoff, and allowed reduced power settings. The engines were placed in cylindrical pods with a wedge-shaped splitter, and a squarish intake providing the inlet system for the aircraft. The inlet was designed with the goal of requiring no moving parts, and was naturally stable. To reduce the noise from sonic booms, rather than penetrate the sound barrier at a more ideal 30,000 ft (9,144 m), they intended to penetrate it at 42,000 ft (12,802 m) instead. This would not be possible on hot days, but on normal days it would be achievable. Acceleration would continue through the sound barrier to Mach 1.15, at which point sonic booms would be audible on the ground. The plane would climb precisely to minimize sonic boom levels. After an initial level-off at around 71,500 ft (21,793 m), the plane would cruise climb upwards, ultimately reaching 76,500 ft (23,317 m). Descents would also be performed in a precise way to reduce sonic boom levels until subsonic speeds were reached. By 1964, the US Government had issued new requirements regarding the SST Program which required Lockheed to modify its design, by now called the L-2000-2. The new design had numerous modifications to the wing; one change was rounding the front of the forward delta in order to eliminate the pitch-up tendency. To increase high-speed aerodynamic efficiency, the wing's thickness was reduced to 2.3%, the leading edges were made sharper, the sweep angles were changed from 80/60° to 85/62°, and substantial twist and camber were added to the forward delta; much of the rear delta was twisted upwards to allow the elevons to remain flush at Mach 3.0. In addition, wing/body fairings were added on the underside of the fuselage where the wings were located, allowing a more normally shaped nose to be used. To retain low-speed performance, the rear delta was enlarged considerably; to increase the payload, the trailing edge featured a forward sweep of 10°, extending the inner part of the wing rearward. The new nose reduced the overall length to 214 ft (65.2 m) while retaining virtually the same internal dimensions.
Wingspan was the same as before, and despite the thinner wing, the increased wing area of 9,026 ft² (838.5 m²) allowed the same takeoff performance. The airplane's overall lift-to-drag ratio increased from 7.25 to 7.94. During the course of the L-2000-2's development, the engine previously selected by Lockheed was no longer deemed acceptable. During the time frame between the L-2000-1 and L-2000-2, Pratt and Whitney designed a new afterburning turbofan called the JTF-17A, which produced greater amounts of thrust. General Electric developed the GE4, an afterburning turbojet with variable guide vanes, which was actually the less powerful of the two at sea level, but produced more power at high altitudes. Both engines required some degree of afterburner during cruise. Lockheed's design favored the JTF-17A over the GE4, but there was the risk that GE would win the engine competition and Lockheed would win the SST contract, so they developed new engine pods that could accommodate either engine. Aerodynamic modifications allowed a shorter engine pod to be used, which utilized a new inlet design. This inlet featured minimal external cowl angles and was precisely contoured to allow a high-pressure recovery using no moving parts, and allowed maximum performance with either engine option. To allow additional airflow for noise reduction, or to aid afterburner performance, a set of suck-in doors was added to the rear portion of the pod. To provide mid-air braking capability for rapid deceleration and rapid descents, and to assist ground braking, part of the nozzle could be employed as a thrust reverser at speeds below Mach 1.2. The pods were also repositioned on the new wing to better shield them from abrupt changes in airflow. The additional thrust from the new engines allowed supersonic penetration to be delayed until up to 45,000 ft (13,716 m) under virtually all conditions. Since at this point the possibility of supersonic overland flight was still considered to be an option, Lockheed also considered larger, shorter-ranged versions of the L-2000-2B. All designs weighed exactly the same, with a new tail design, changes to the fuselage length, extensions to the forward delta, increased capacity, and variations in fuel capacity. The largest version featured capacity for 250 domestic passengers, while the medium version featured transatlantic capability with 220 passengers. Despite the fuselage length changes, there was no appreciable increase in the risk of the aircraft pitching upwards too far (over-rotation) on takeoff. Design competition By 1966, the design took on its final form as the L-2000-7A and L-2000-7B. The L-2000-7A featured a re-designed wing and a fuselage lengthened to 273 ft (83 m). The longer fuselage allowed for mixed-class seating of 230 passengers. The new wing featured a proportionately larger forward delta, with greater refinement to the wing's twist and curvature. Despite having the same wingspan, the wing area was increased to 9,424 ft² (875 m²), with a slightly reduced 84° sweepback, and an increased 65° main delta wing, with reduced forward sweep along the trailing edge. Unlike previous versions, this aircraft featured a leading-edge flap to increase lift at low speeds, and to allow a slight down-elevon deflection.
The fuselage, as a result of greater length, changes to the wing design, and attempts to further reduce drag, featured a slight vertical thinning in the fuselage where the wings were, a more prominent wing/body "belly" to carry fuel and cargo, a longer nose, and a refined tail. Since the airplane was not as directionally stable as before, the plane featured a ventral fin, located on the underside of the trailing fuselage. The L-2000-7B was extended to 293 ft (89 m), utilizing a lengthened cabin and a more pronounced upward-curving tail to reduce the chance of the tail striking the runway during over-rotation. Both designs had the same maximum weight of 590,000 lb (267,600 kg), and the aerodynamic lift-to-drag ratio was increased to 8:1. Full-scale mock-ups of the Boeing 2707-200 and L-2000-7 designs were presented to the FAA, and on December 31, 1966 the Boeing design was selected. The Lockheed design was judged simpler to produce and less risky, but its performance during takeoff and at high speed was slightly lower. Because of the JTF-17A, the L-2000-7 was also predicted to be louder as well. The Boeing design was considered more advanced, representing a greater lead over the Concorde and thus more fitting to the original design mandate. Boeing eventually changed its advanced variable-geometry wing design to a simpler delta-wing similar to Lockheed's design, but with a tail. The Boeing SST was ultimately cancelled on May 20, 1971 after the US Congress stopped federal funding for the SST program on March 24, 1971. Specifications (L-2000-7A) See also Boom Overture References Further reading Boyne, Walter J, Beyond the Horizons: The Lockheed Story. New York: St. Martin's Press, 1998. . Francillon, René J, Lockheed Aircraft Since 1913. Annapolis, Maryland: Naval Institute Press, 1987. . External links "The United States SST Contenders" a 1964 Flight article "Mach Three Technology" a 1966 Flight article on the L-2000 "Ticket Through The Sound Barrier" - 1966 Supersonic Transport Educational Documentary L-2000 Abandoned civil aircraft projects of the United States Tailless delta-wing aircraft Quadjets Supersonic transports Aircraft with retractable tricycle landing gear
Lockheed L-2000
[ "Physics" ]
2,725
[ "Physical systems", "Transport", "Supersonic transports" ]
265,816
https://en.wikipedia.org/wiki/Open%20system%20%28systems%20theory%29
An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system. The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences. In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter. The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes. Social sciences In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes to making it closed, and is thus a conservative approach. The Althusserian concept of overdetermination (drawing on Sigmund Freud) posits that there are always multiple causes in every event. David Harvey uses this to argue that when systems such as capitalism enter a phase of crisis, it can happen through one of a number of elements, such as gender roles, the relation to nature/the environment, or crises in accumulation. Looking at the crisis in accumulation, Harvey argues that phenomena such as foreign direct investment, privatization of state-owned resources, and accumulation by dispossession act as necessary outlets when capital has overaccumulated too much in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the Asian and South-East Asian financial crisis of 1997-8, involving "hedge fund raising" of national currencies, as examples of this. Structural functionalists such as Talcott Parsons and neofunctionalists such as Niklas Luhmann have incorporated system theory to describe society and its components. The sociology of religion finds both open and closed systems within the field of religion. 
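The natural-sciences distinction drawn earlier in this article (open systems exchange both energy and matter across their boundary, closed systems exchange energy but not matter, and isolated systems exchange neither) can be made concrete with a small classification sketch. The code below is purely illustrative; the class and function names are invented for this example and do not come from any established systems-theory library.

from dataclasses import dataclass

@dataclass
class Boundary:
    passes_energy: bool   # can energy cross the system boundary?
    passes_matter: bool   # can matter cross the system boundary?

def classify(b: Boundary) -> str:
    if b.passes_energy and b.passes_matter:
        return "open system"       # e.g. a living organism
    if b.passes_energy:
        return "closed system"     # permeable to energy but not to matter
    return "isolated system"       # exchanges neither energy nor matter

print(classify(Boundary(passes_energy=True, passes_matter=True)))    # open system
print(classify(Boundary(passes_energy=True, passes_matter=False)))   # closed system
print(classify(Boundary(passes_energy=False, passes_matter=False)))  # isolated system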
Thermodynamics Systems engineering See also Business process Complex system Dynamical system Glossary of systems theory Ludwig von Bertalanffy Maximum power principle Non-equilibrium thermodynamics Open system (computing) Open System Environment Reference Model Openness Phantom loop Thermodynamic system References Further reading Khalil, E.L. (1995). Nonlinear thermodynamics and social science modeling: fad cycles, cultural development and identificational slips. The American Journal of Economics and Sociology, Vol. 54, Issue 4, pp. 423–438. Weber, B.H. (1989). Ethical Implications Of The Interface Of Natural And Artificial Systems. Delicate Balance: Technics, Culture and Consequences: Conference Proceedings for the Institute of Electrical and Electronics Engineers. External links OPEN SYSTEM, Principia Cybernetica Web, 2007. Cybernetics Thermodynamic systems
Open system (systems theory)
[ "Physics", "Chemistry", "Mathematics" ]
812
[ "Physical systems", "Thermodynamic systems", "Thermodynamics", "Dynamical systems" ]
265,823
https://en.wikipedia.org/wiki/Thermodynamic%20equilibrium
Thermodynamic equilibrium is a notion of thermodynamics with axiomatic status referring to an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium, there are no net macroscopic flows of mass or of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, not only is there an absence of macroscopic change, but there is an "absence of any tendency toward change on a macroscopic scale." Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, chemical, and radiative equilibria. Systems can be in one kind of mutual equilibrium while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, unless disturbed by a thermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium. A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings. In systems that are in a state of non-equilibrium there are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a "meta-stable equilibrium". Though not a widely named "law," it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. The second law of thermodynamics states that when an isolated body of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equilibrium, and this is accompanied by an increase in the sum of the entropies of the portions. Overview Classical thermodynamics deals with states of dynamic equilibrium. The state of a system at thermodynamic equilibrium is the one for which some thermodynamic potential is minimized (in the absence of an applied voltage), or for which the entropy (S) is maximized, for specified conditions. One such potential is the Helmholtz free energy (A), minimized for a closed system at constant volume and temperature (controlled by a heat bath): A = U − TS. Another potential, the Gibbs free energy (G), is minimized at thermodynamic equilibrium in a closed system at constant temperature and pressure, both controlled by the surroundings: G = U + PV − TS, where T denotes the absolute thermodynamic temperature, P the pressure, S the entropy, V the volume, and U the internal energy of the system. In other words, ΔG = 0 is a necessary condition for chemical equilibrium under these conditions (in the absence of an applied voltage). Thermodynamic equilibrium is the unique stable stationary state that is approached or eventually reached as the system interacts with its surroundings over a long time. The above-mentioned potentials are mathematically constructed to be the thermodynamic quantities that are minimized under the particular conditions in the specified surroundings. Conditions For a completely isolated system, S is maximum at thermodynamic equilibrium. 
For a closed system at controlled constant temperature and volume, A is minimum at thermodynamic equilibrium. For a closed system at controlled constant temperature and pressure without an applied voltage, G is minimum at thermodynamic equilibrium. The various types of equilibriums are achieved as follows: Two systems are in thermal equilibrium when their temperatures are the same. Two systems are in mechanical equilibrium when their pressures are the same. Two systems are in diffusive equilibrium when their chemical potentials are the same. All forces are balanced and there is no significant external driving force. Relation of exchange equilibrium between systems Often the surroundings of a thermodynamic system may also be regarded as another thermodynamic system. In this view, one may consider the system and its surroundings as two systems in mutual contact, with long-range forces also linking them. The enclosure of the system is the surface of contiguity or boundary between the two systems. In the thermodynamic formalism, that surface is regarded as having specific properties of permeability. For example, the surface of contiguity may be supposed to be permeable only to heat, allowing energy to transfer only as heat. Then the two systems are said to be in thermal equilibrium when the long-range forces are unchanging in time and the transfer of energy as heat between them has slowed and eventually stopped permanently; this is an example of a contact equilibrium. Other kinds of contact equilibrium are defined by other kinds of specific permeability. When two systems are in contact equilibrium with respect to a particular kind of permeability, they have common values of the intensive variable that belongs to that particular kind of permeability. Examples of such intensive variables are temperature, pressure, chemical potential. A contact equilibrium may be regarded also as an exchange equilibrium. There is a zero balance of rate of transfer of some quantity between the two systems in contact equilibrium. For example, for a wall permeable only to heat, the rates of diffusion of internal energy as heat between the two systems are equal and opposite. An adiabatic wall between the two systems is 'permeable' only to energy transferred as work; at mechanical equilibrium the rates of transfer of energy as work between them are equal and opposite. If the wall is a simple wall, then the rates of transfer of volume across it are also equal and opposite; and the pressures on either side of it are equal. If the adiabatic wall is more complicated, with a sort of leverage, having an area-ratio, then the pressures of the two systems in exchange equilibrium are in the inverse ratio of the volume exchange ratio; this keeps the zero balance of rates of transfer as work. A radiative exchange can occur between two otherwise separate systems. Radiative exchange equilibrium prevails when the two systems have the same temperature. Thermodynamic state of internal equilibrium of a system A collection of matter may be entirely isolated from its surroundings. If it has been left undisturbed for an indefinitely long time, classical thermodynamics postulates that it is in a state in which no changes occur within it, and there are no flows within it. This is a thermodynamic state of internal equilibrium. (This postulate is sometimes, but not often, called the "minus first" law of thermodynamics. 
One textbook calls it the "zeroth law", remarking that the authors think this more befitting that title than its more customary definition, which apparently was suggested by Fowler.) Such states are a principal concern in what is known as classical or equilibrium thermodynamics, for they are the only states of the system that are regarded as well defined in that subject. A system in contact equilibrium with another system can by a thermodynamic operation be isolated, and upon the event of isolation, no change occurs in it. A system in a relation of contact equilibrium with another system may thus also be regarded as being in its own state of internal thermodynamic equilibrium. Multiple contact equilibrium The thermodynamic formalism allows that a system may have contact with several other systems at once, which may or may not also have mutual contact, the contacts having respectively different permeabilities. If these systems are all jointly isolated from the rest of the world those of them that are in contact then reach respective contact equilibria with one another. If several systems are free of adiabatic walls between each other, but are jointly isolated from the rest of the world, then they reach a state of multiple contact equilibrium, and they have a common temperature, a total internal energy, and a total entropy. Amongst intensive variables, this is a unique property of temperature. It holds even in the presence of long-range forces. (That is, there is no "force" that can maintain temperature discrepancies.) For example, in a system in thermodynamic equilibrium in a vertical gravitational field, the pressure on the top wall is less than that on the bottom wall, but the temperature is the same everywhere. A thermodynamic operation may occur as an event restricted to the walls that are within the surroundings, directly affecting neither the walls of contact of the system of interest with its surroundings, nor its interior, and occurring within a definitely limited time. For example, an immovable adiabatic wall may be placed or removed within the surroundings. Consequent upon such an operation restricted to the surroundings, the system may be for a time driven away from its own initial internal state of thermodynamic equilibrium. Then, according to the second law of thermodynamics, the whole undergoes changes and eventually reaches a new and final equilibrium with the surroundings. Following Planck, this consequent train of events is called a natural thermodynamic process. It is allowed in equilibrium thermodynamics just because the initial and final states are of thermodynamic equilibrium, even though during the process there is transient departure from thermodynamic equilibrium, when neither the system nor its surroundings are in well defined states of internal equilibrium. A natural process proceeds at a finite rate for the main part of its course. It is thereby radically different from a fictive quasi-static 'process' that proceeds infinitely slowly throughout its course, and is fictively 'reversible'. Classical thermodynamics allows that even though a process may take a very long time to settle to thermodynamic equilibrium, if the main part of its course is at a finite rate, then it is considered to be natural, and to be subject to the second law of thermodynamics, and thereby irreversible. Engineered machines and artificial devices and manipulations are permitted within the surroundings. 
The allowance of such operations and devices in the surroundings but not in the system is the reason why Kelvin in one of his statements of the second law of thermodynamics spoke of "inanimate" agency; a system in thermodynamic equilibrium is inanimate. Otherwise, a thermodynamic operation may directly affect a wall of the system. It is often convenient to suppose that some of the surrounding subsystems are so much larger than the system that the process can affect the intensive variables only of the surrounding subsystems, and they are then called reservoirs for relevant intensive variables. Local and global equilibrium It can be useful to distinguish between global and local thermodynamic equilibrium. In thermodynamics, exchanges within a system and between the system and the outside are controlled by intensive parameters. As an example, temperature controls heat exchanges. Global thermodynamic equilibrium (GTE) means that those intensive parameters are homogeneous throughout the whole system, while local thermodynamic equilibrium (LTE) means that those intensive parameters are varying in space and time, but are varying so slowly that, for any point, one can assume thermodynamic equilibrium in some neighborhood about that point. If the description of the system requires variations in the intensive parameters that are too large, the very assumptions upon which the definitions of these intensive parameters are based will break down, and the system will be in neither global nor local equilibrium. For example, it takes a certain number of collisions for a particle to equilibrate to its surroundings. If the average distance it has moved during these collisions removes it from the neighborhood it is equilibrating to, it will never equilibrate, and there will be no LTE. Temperature is, by definition, proportional to the average internal energy of an equilibrated neighborhood. Since there is no equilibrated neighborhood, the concept of temperature doesn't hold, and the temperature becomes undefined. This local equilibrium may apply only to a certain subset of particles in the system. For example, LTE is usually applied only to massive particles. In a radiating gas, the photons being emitted and absorbed by the gas do not need to be in a thermodynamic equilibrium with each other or with the massive particles of the gas for LTE to exist. In some cases, it is not considered necessary for free electrons to be in equilibrium with the much more massive atoms or molecules for LTE to exist. As an example, LTE will exist in a glass of water that contains a melting ice cube. The temperature inside the glass can be defined at any point, but it is colder near the ice cube than far away from it. If energies of the molecules located near a given point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for a certain temperature. If the energies of the molecules located near another point are observed, they will be distributed according to the Maxwell–Boltzmann distribution for another temperature. Local thermodynamic equilibrium does not require either local or global stationarity. In other words, each small locality need not have a constant temperature. However, it does require that each small locality change slowly enough to practically sustain its local Maxwell–Boltzmann distribution of molecular velocities. A global non-equilibrium state can be stably stationary only if it is maintained by exchanges between the system and the outside. 
For example, a globally-stable stationary state could be maintained inside the glass of water by continuously adding finely powdered ice into it to compensate for the melting, and continuously draining off the meltwater. Natural transport phenomena may lead a system from local to global thermodynamic equilibrium. Going back to our example, the diffusion of heat will lead our glass of water toward global thermodynamic equilibrium, a state in which the temperature of the glass is completely homogeneous. Reservations Careful and well informed writers about thermodynamics, in their accounts of thermodynamic equilibrium, often enough make provisos or reservations to their statements. Some writers leave such reservations merely implied or more or less unstated. For example, one widely cited writer, H. B. Callen writes in this context: "In actuality, few systems are in absolute and true equilibrium." He refers to radioactive processes and remarks that they may take "cosmic times to complete, [and] generally can be ignored". He adds "In practice, the criterion for equilibrium is circular. Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory!" J.A. Beattie and I. Oppenheim write: "Insistence on a strict interpretation of the definition of equilibrium would rule out the application of thermodynamics to practically all states of real systems." Another author, cited by Callen as giving a "scholarly and rigorous treatment", and cited by Adkins as having written a "classic text", A.B. Pippard writes in that text: "Given long enough a supercooled vapour will eventually condense, ... . The time involved may be so enormous, however, perhaps 10100 years or more, ... . For most purposes, provided the rapid change is not artificially stimulated, the systems may be regarded as being in equilibrium." Another author, A. Münster, writes in this context. He observes that thermonuclear processes often occur so slowly that they can be ignored in thermodynamics. He comments: "The concept 'absolute equilibrium' or 'equilibrium with respect to all imaginable processes', has therefore, no physical significance." He therefore states that: "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions." According to L. Tisza: "... in the discussion of phenomena near absolute zero. The absolute predictions of the classical theory become particularly vague because the occurrence of frozen-in nonequilibrium states is very common." Definitions The most general kind of thermodynamic equilibrium of a system is through contact with the surroundings that allows simultaneous passages of all chemical substances and all kinds of energy. A system in thermodynamic equilibrium may move with uniform acceleration through space but must not change its shape or size while doing so; thus it is defined by a rigid volume in space. It may lie within external fields of force, determined by external factors of far greater extent than the system itself, so that events within the system cannot in an appreciable amount affect the external fields of force. The system can be in thermodynamic equilibrium only if the external force fields are uniform, and are determining its uniform acceleration, or if it lies in a non-uniform force field but is held stationary there by local forces, such as mechanical pressures, on its surface. Thermodynamic equilibrium is a primitive notion of the theory of thermodynamics. According to P.M. 
Morse: "It should be emphasized that the fact that there are thermodynamic states, ..., and the fact that there are thermodynamic variables which are uniquely specified by the equilibrium state ... are not conclusions deduced logically from some philosophical first principles. They are conclusions ineluctably drawn from more than two centuries of experiments." This means that thermodynamic equilibrium is not to be defined solely in terms of other theoretical concepts of thermodynamics. M. Bailyn proposes a fundamental law of thermodynamics that defines and postulates the existence of states of thermodynamic equilibrium. Textbook definitions of thermodynamic equilibrium are often stated carefully, with some reservation or other. For example, A. Münster writes: "An isolated system is in thermodynamic equilibrium when, in the system, no changes of state are occurring at a measurable rate." There are two reservations stated here; the system is isolated; any changes of state are immeasurably slow. He discusses the second proviso by giving an account of a mixture oxygen and hydrogen at room temperature in the absence of a catalyst. Münster points out that a thermodynamic equilibrium state is described by fewer macroscopic variables than is any other state of a given system. This is partly, but not entirely, because all flows within and through the system are zero. R. Haase's presentation of thermodynamics does not start with a restriction to thermodynamic equilibrium because he intends to allow for non-equilibrium thermodynamics. He considers an arbitrary system with time invariant properties. He tests it for thermodynamic equilibrium by cutting it off from all external influences, except external force fields. If after insulation, nothing changes, he says that the system was in equilibrium. In a section headed "Thermodynamic equilibrium", H.B. Callen defines equilibrium states in a paragraph. He points out that they "are determined by intrinsic factors" within the system. They are "terminal states", towards which the systems evolve, over time, which may occur with "glacial slowness". This statement does not explicitly say that for thermodynamic equilibrium, the system must be isolated; Callen does not spell out what he means by the words "intrinsic factors". Another textbook writer, C.J. Adkins, explicitly allows thermodynamic equilibrium to occur in a system which is not isolated. His system is, however, closed with respect to transfer of matter. He writes: "In general, the approach to thermodynamic equilibrium will involve both thermal and work-like interactions with the surroundings." He distinguishes such thermodynamic equilibrium from thermal equilibrium, in which only thermal contact is mediating transfer of energy. Another textbook author, J.R. Partington, writes: "(i) An equilibrium state is one which is independent of time." But, referring to systems "which are only apparently in equilibrium", he adds : "Such systems are in states of ″false equilibrium.″" Partington's statement does not explicitly state that the equilibrium refers to an isolated system. Like Münster, Partington also refers to the mixture of oxygen and hydrogen. He adds a proviso that "In a true equilibrium state, the smallest change of any external condition which influences the state will produce a small change of state ..." This proviso means that thermodynamic equilibrium must be stable against small perturbations; this requirement is essential for the strict meaning of thermodynamic equilibrium. 
A student textbook by F.H. Crawford has a section headed "Thermodynamic Equilibrium". It distinguishes several drivers of flows, and then says: "These are examples of the apparently universal tendency of isolated systems toward a state of complete mechanical, thermal, chemical, and electrical—or, in a single word, thermodynamic—equilibrium." A monograph on classical thermodynamics by H.A. Buchdahl considers the "equilibrium of a thermodynamic system", without actually writing the phrase "thermodynamic equilibrium". Referring to systems closed to exchange of matter, Buchdahl writes: "If a system is in a terminal condition which is properly static, it will be said to be in equilibrium." Buchdahl's monograph also discusses amorphous glass, for the purposes of thermodynamic description. It states: "More precisely, the glass may be regarded as being in equilibrium so long as experimental tests show that 'slow' transitions are in effect reversible." It is not customary to make this proviso part of the definition of thermodynamic equilibrium, but the converse is usually assumed: that if a body in thermodynamic equilibrium is subject to a sufficiently slow process, that process may be considered to be sufficiently nearly reversible, and the body remains sufficiently nearly in thermodynamic equilibrium during the process. A. Münster carefully extends his definition of thermodynamic equilibrium for isolated systems by introducing a concept of contact equilibrium. This specifies particular processes that are allowed when considering thermodynamic equilibrium for non-isolated systems, with special concern for open systems, which may gain or lose matter from or to their surroundings. A contact equilibrium is between the system of interest and a system in the surroundings, brought into contact with the system of interest, the contact being through a special kind of wall; for the rest, the whole joint system is isolated. Walls of this special kind were also considered by C. Carathéodory, and are mentioned by other writers also. They are selectively permeable. They may be permeable only to mechanical work, or only to heat, or only to some particular chemical substance. Each contact equilibrium defines an intensive parameter; for example, a wall permeable only to heat defines an empirical temperature. A contact equilibrium can exist for each chemical constituent of the system of interest. In a contact equilibrium, despite the possible exchange through the selectively permeable wall, the system of interest is changeless, as if it were in isolated thermodynamic equilibrium. This scheme follows the general rule that "... we can consider an equilibrium only with respect to specified processes and defined experimental conditions." Thermodynamic equilibrium for an open system means that, with respect to every relevant kind of selectively permeable wall, contact equilibrium exists when the respective intensive parameters of the system and surroundings are equal. This definition does not consider the most general kind of thermodynamic equilibrium, which is through unselective contacts. This definition does not simply state that no current of matter or energy exists in the interior or at the boundaries; but it is compatible with the following definition, which does so state. M. Zemansky also distinguishes mechanical, chemical, and thermal equilibrium. He then writes: "When the conditions for all three types of equilibrium are satisfied, the system is said to be in a state of thermodynamic equilibrium". P.M. 
Morse writes that thermodynamics is concerned with "states of thermodynamic equilibrium". He also uses the phrase "thermal equilibrium" while discussing transfer of energy as heat between a body and a heat reservoir in its surroundings, though not explicitly defining a special term 'thermal equilibrium'. J.R. Waldram writes of "a definite thermodynamic state". He defines the term "thermal equilibrium" for a system "when its observables have ceased to change over time". But shortly below that definition he writes of a piece of glass that has not yet reached its "full thermodynamic equilibrium state". Considering equilibrium states, M. Bailyn writes: "Each intensive variable has its own type of equilibrium." He then defines thermal equilibrium, mechanical equilibrium, and material equilibrium. Accordingly, he writes: "If all the intensive variables become uniform, thermodynamic equilibrium is said to exist." He is not here considering the presence of an external force field. J.G. Kirkwood and I. Oppenheim define thermodynamic equilibrium as follows: "A system is in a state of thermodynamic equilibrium if, during the time period allotted for experimentation, (a) its intensive properties are independent of time and (b) no current of matter or energy exists in its interior or at its boundaries with the surroundings." It is evident that they are not restricting the definition to isolated or to closed systems. They do not discuss the possibility of changes that occur with "glacial slowness", and proceed beyond the time period allotted for experimentation. They note that for two systems in contact, there exists a small subclass of intensive properties such that if all those of that small subclass are respectively equal, then all respective intensive properties are equal. States of thermodynamic equilibrium may be defined by this subclass, provided some other conditions are satisfied. Characteristics of a state of internal thermodynamic equilibrium Homogeneity in the absence of external forces A thermodynamic system consisting of a single phase in the absence of external forces, in its own internal thermodynamic equilibrium, is homogeneous. This means that the material in any small volume element of the system can be interchanged with the material of any other geometrically congruent volume element of the system, and the effect is to leave the system thermodynamically unchanged. In general, a strong external force field makes a system of a single phase in its own internal thermodynamic equilibrium inhomogeneous with respect to some intensive variables. For example, a relatively dense component of a mixture can be concentrated by centrifugation. Uniform temperature Such equilibrium inhomogeneity, induced by external forces, does not occur for the intensive variable temperature. According to E.A. Guggenheim, "The most important conception of thermodynamics is temperature." Planck introduces his treatise with a brief account of heat and temperature and thermal equilibrium, and then announces: "In the following we shall deal chiefly with homogeneous, isotropic bodies of any form, possessing throughout their substance the same temperature and density, and subject to a uniform pressure acting everywhere perpendicular to the surface." As did Carathéodory, Planck was setting aside surface effects and external fields and anisotropic crystals. Though referring to temperature, Planck did not there explicitly refer to the concept of thermodynamic equilibrium. 
In contrast, Carathéodory's scheme of presentation of classical thermodynamics for closed systems postulates the concept of an "equilibrium state" following Gibbs (Gibbs speaks routinely of a "thermodynamic state"), though not explicitly using the phrase 'thermodynamic equilibrium', nor explicitly postulating the existence of a temperature to define it. Although thermodynamic laws are immutable, systems can be created that delay the time taken to reach thermodynamic equilibrium. In a thought experiment, Reed A. Howald conceived of a device called "The Fizz Keeper", consisting of a cap with a pump nozzle that can re-pressurize any standard bottle of carbonated beverage. Nitrogen and oxygen, the main components of air, would repeatedly be pumped in, slowing the rate at which the carbon dioxide escapes from the beverage. Howald appeals to Henry's Law, which states that the amount of a gas that dissolves is in direct proportion to its partial pressure: because the equilibrium between dissolved and gaseous carbon dioxide inside the bottle depends only on the partial pressure of carbon dioxide itself, that equilibrium is completely independent of the nitrogen and oxygen pumped into the system. What the added gas does is slow the diffusion and escape of gas from the bottle, extending the time the carbon dioxide stays dissolved, without changing the thermodynamics of the system as a whole. The temperature within a system in thermodynamic equilibrium is uniform in space as well as in time. In a system in its own state of internal thermodynamic equilibrium, there are no net internal macroscopic flows. In particular, this means that all local parts of the system are in mutual radiative exchange equilibrium. This means that the temperature of the system is spatially uniform. This is so in all cases, including those of non-uniform external force fields. For an externally imposed gravitational field, this may be proved in macroscopic thermodynamic terms, by the calculus of variations, using the method of Lagrangian multipliers. Considerations of kinetic theory or statistical mechanics also support this statement. In order that a system may be in its own internal state of thermodynamic equilibrium, it is of course necessary, but not sufficient, that it be in its own internal state of thermal equilibrium; it is possible for a system to reach internal mechanical equilibrium before it reaches internal thermal equilibrium. Number of real variables needed for specification In his exposition of his scheme of closed-system equilibrium thermodynamics, C. Carathéodory initially postulates that experiment reveals that a definite number of real variables define the states that are the points of the manifold of equilibria. In the words of Prigogine and Defay (1945): "It is a matter of experience that when we have specified a certain number of macroscopic properties of a system, then all the other properties are fixed." As noted above, according to A. Münster, the number of variables needed to define a thermodynamic equilibrium is the least for any state of a given isolated system. As noted above, J.G. 
Kirkwood and I. Oppenheim point out that a state of thermodynamic equilibrium may be defined by a special subclass of intensive variables, with a definite number of members in that subclass. If the thermodynamic equilibrium lies in an external force field, it is only the temperature that can in general be expected to be spatially uniform. Intensive variables other than temperature will in general be non-uniform if the external force field is non-zero. In such a case, in general, additional variables are needed to describe the spatial non-uniformity. Stability against small perturbations As noted above, J.R. Partington points out that a state of thermodynamic equilibrium is stable against small transient perturbations. Without this condition, in general, experiments intended to study systems in thermodynamic equilibrium are in severe difficulties. Approach to thermodynamic equilibrium within an isolated system When a body of material starts from a non-equilibrium state of inhomogeneity or chemical non-equilibrium, and is then isolated, it spontaneously evolves towards its own internal state of thermodynamic equilibrium. It is not necessary that all aspects of internal thermodynamic equilibrium be reached simultaneously; some can be established before others. For example, in many cases of such evolution, internal mechanical equilibrium is established much more rapidly than the other aspects of the eventual thermodynamic equilibrium. Another example is that, in many cases of such evolution, thermal equilibrium is reached much more rapidly than chemical equilibrium. Fluctuations within an isolated system in its own internal thermodynamic equilibrium In an isolated system, thermodynamic equilibrium by definition persists over an indefinitely long time. In classical physics it is often convenient to ignore the effects of measurement and this is assumed in the present account. To consider the notion of fluctuations in an isolated thermodynamic system, a convenient example is a system specified by its extensive state variables, internal energy, volume, and mass composition. By definition they are time-invariant. By definition, they combine with time-invariant nominal values of their conjugate intensive functions of state, inverse temperature, pressure divided by temperature, and the chemical potentials divided by temperature, so as to exactly obey the laws of thermodynamics. But the laws of thermodynamics, combined with the values of the specifying extensive variables of state, are not sufficient to provide knowledge of those nominal values. Further information is needed, namely, of the constitutive properties of the system. It may be admitted that on repeated measurement of those conjugate intensive functions of state, they are found to have slightly different values from time to time. Such variability is regarded as due to internal fluctuations. The different measured values average to their nominal values. If the system is truly macroscopic as postulated by classical thermodynamics, then the fluctuations are too small to detect macroscopically. This is called the thermodynamic limit. In effect, the molecular nature of matter and the quantal nature of momentum transfer have vanished from sight, too small to see. According to Buchdahl: "... there is no place within the strictly phenomenological theory for the idea of fluctuations about equilibrium (see, however, Section 76)." 
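A back-of-the-envelope sketch (not drawn from the text, and assuming only that the extensive quantity is a sum of N roughly independent microscopic contributions) shows why such fluctuations vanish from macroscopic view: the relative fluctuation of such a sum scales as one over the square root of N.

import math

# Relative fluctuation ~ 1/sqrt(N) for a sum of N independent contributions.
for n in (1e3, 1e12, 6.022e23):
    print(f"N = {n:.3g}: relative fluctuation ~ {1 / math.sqrt(n):.1e}")

For N of the order of Avogadro's number the relative fluctuation is of order 10⁻¹², which is why, in the thermodynamic limit discussed above, fluctuations are too small to detect macroscopically.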
If the system is repeatedly subdivided, eventually a system is produced that is small enough to exhibit obvious fluctuations. This is a mesoscopic level of investigation. The fluctuations are then directly dependent on the natures of the various walls of the system. The precise choice of independent state variables is then important. At this stage, statistical features of the laws of thermodynamics become apparent. If the mesoscopic system is further repeatedly divided, eventually a microscopic system is produced. Then the molecular character of matter and the quantal nature of momentum transfer become important in the processes of fluctuation. One has left the realm of classical or macroscopic thermodynamics, and one needs quantum statistical mechanics. The fluctuations can become relatively dominant, and questions of measurement become important. The statement that 'the system is its own internal thermodynamic equilibrium' may be taken to mean that 'indefinitely many such measurements have been taken from time to time, with no trend in time in the various measured values'. Thus the statement, that 'a system is in its own internal thermodynamic equilibrium, with stated nominal values of its functions of state conjugate to its specifying state variables', is far far more informative than a statement that 'a set of single simultaneous measurements of those functions of state have those same values'. This is because the single measurements might have been made during a slight fluctuation, away from another set of nominal values of those conjugate intensive functions of state, that is due to unknown and different constitutive properties. A single measurement cannot tell whether that might be so, unless there is also knowledge of the nominal values that belong to the equilibrium state. Thermal equilibrium An explicit distinction between 'thermal equilibrium' and 'thermodynamic equilibrium' is made by B. C. Eu. He considers two systems in thermal contact, one a thermometer, the other a system in which there are several occurring irreversible processes, entailing non-zero fluxes; the two systems are separated by a wall permeable only to heat. He considers the case in which, over the time scale of interest, it happens that both the thermometer reading and the irreversible processes are steady. Then there is thermal equilibrium without thermodynamic equilibrium. Eu proposes consequently that the zeroth law of thermodynamics can be considered to apply even when thermodynamic equilibrium is not present; also he proposes that if changes are occurring so fast that a steady temperature cannot be defined, then "it is no longer possible to describe the process by means of a thermodynamic formalism. In other words, thermodynamics has no meaning for such a process." This illustrates the importance for thermodynamics of the concept of temperature. Thermal equilibrium is achieved when two systems in thermal contact with each other cease to have a net exchange of energy. It follows that if two systems are in thermal equilibrium, then their temperatures are the same. Thermal equilibrium occurs when a system's macroscopic thermal observables have ceased to change with time. For example, an ideal gas whose distribution function has stabilised to a specific Maxwell–Boltzmann distribution would be in thermal equilibrium. This outcome allows a single temperature and pressure to be attributed to the whole system. 
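The statement above, that an ideal gas whose velocity distribution has settled to the Maxwell–Boltzmann form is in thermal equilibrium at a single temperature, can be illustrated numerically. The sketch below assumes a monatomic gas of argon-like atoms at an arbitrarily chosen 300 K; the particle mass and sample size are illustrative choices, not values from the text.

import numpy as np

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature, K
m = 6.63e-26       # approximate mass of an argon atom, kg (illustrative)

rng = np.random.default_rng(0)
# In equilibrium each velocity component is Gaussian with variance kT/m.
v = rng.normal(0.0, np.sqrt(k * T / m), size=(1_000_000, 3))
mean_kinetic_energy = 0.5 * m * np.mean(np.sum(v**2, axis=1))

print(mean_kinetic_energy / (1.5 * k * T))   # ~1.0: mean KE per particle is (3/2) k T

The printed ratio is close to 1, reflecting the equipartition result that a single temperature characterizes the whole equilibrated sample.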
For an isolated body, it is quite possible for mechanical equilibrium to be reached before thermal equilibrium is reached, but eventually, all aspects of equilibrium, including thermal equilibrium, are necessary for thermodynamic equilibrium. Non-equilibrium A system's internal state of thermodynamic equilibrium should be distinguished from a "stationary state" in which thermodynamic parameters are unchanging in time but the system is not isolated, so that there are, into and out of the system, non-zero macroscopic fluxes which are constant in time. Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods. Laws governing systems which are far from equilibrium are also debatable. One of the guiding principles for these systems is the maximum entropy production principle. It states that a non-equilibrium system evolves such as to maximize its entropy production. See also Thermodynamic models Non-random two-liquid model (NRTL model) - Phase equilibrium calculations UNIQUAC model - Phase equilibrium calculations Time crystal Topics in control theory Coefficient diagram method Control reconfiguration Feedback H infinity Hankel singular value Krener's theorem Lead-lag compensator Markov chain approximation method Minor loop feedback Multi-loop feedback Positive systems Radial basis function Root locus Signal-flow graphs Stable polynomial State space representation Steady state Transient state Underactuation Youla–Kucera parametrization Other related topics Automation and remote control Bond graph Control engineering Control–feedback–abort loop Controller (control theory) Cybernetics Intelligent control Mathematical system theory Negative feedback amplifier People in systems and control Perceptual control theory Systems theory Time scale calculus General references C. Michael Hogan, Leda C. Patmore and Harry Seidman (1973) Statistical Prediction of Dynamic Thermal Equilibrium Temperatures using Standard Meteorological Data Bases, Second Edition (EPA-660/2-73-003 2006) United States Environmental Protection Agency Office of Research and Development, Washington, D.C. Cesare Barbieri (2007) Fundamentals of Astronomy. First Edition (QB43.3.B37 2006) CRC Press , F. Mandl (1988) Statistical Physics, Second Edition, John Wiley & Sons Hans R. Griem (2005) Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics), Cambridge University Press, New York References Cited bibliography Adkins, C.J. (1968/1983). Equilibrium Thermodynamics, third edition, McGraw-Hill, London, . Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Beattie, J.A., Oppenheim, I. (1979). Principles of Thermodynamics, Elsevier Scientific Publishing, Amsterdam, . Boltzmann, L. (1896/1964). Lectures on Gas Theory, translated by S.G. Brush, University of California Press, Berkeley. Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, Cambridge UK. Callen, H.B. (1960/1985). 
Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . Carathéodory, C. (1909). Untersuchungen über die Grundlagen der Thermodynamik, Mathematische Annalen, 67: 355–386. A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. Chapman, S., Cowling, T.G. (1939/1970). The Mathematical Theory of Non-uniform gases. An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, third edition 1970, Cambridge University Press, London. Crawford, F.H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc. de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, . Denbigh, K.G. (1951). Thermodynamics of the Steady State, Methuen, London. Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, . Fitts, D.D. (1962). Nonequilibrium thermodynamics. A Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York. Gibbs, J.W. (1876/1878). On the equilibrium of heterogeneous substances, Trans. Conn. Acad., 3: 108–248, 343–524, reprinted in The Collected Works of J. Willard Gibbs, PhD, LL. D., edited by W.R. Longley, R.G. Van Name, Longmans, Green & Co., New York, 1928, volume 1, pp. 55–353. Griem, H.R. (2005). Principles of Plasma Spectroscopy (Cambridge Monographs on Plasma Physics), Cambridge University Press, New York . Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland, Amsterdam. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. Kirkwood, J.G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York. Landsberg, P.T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York. Levine, I.N. (1983), Physical Chemistry, second edition, McGraw-Hill, New York, . Morse, P.M. (1969). Thermal Physics, second edition, W.A. Benjamin, Inc, New York. Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London. Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London. Pippard, A.B. (1957/1966). The Elements of Classical Thermodynamics, reprinted with corrections 1966, Cambridge University Press, London. Planck. M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia. Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège. Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London. Silbey, R.J., Alberty, R.A., Bawendi, M.G. (1955/2005). Physical Chemistry, fourth edition, Wiley, Hoboken NJ. ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA. Also published in Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA. Uhlenbeck, G.E., Ford, G.W. (1963). 
Lectures in Statistical Mechanics, American Mathematical Society, Providence RI. Waldram, J.R. (1985). The Theory of Thermodynamics, Cambridge University Press, Cambridge UK. Zemansky, M. (1937/1968). Heat and Thermodynamics. An Intermediate Textbook, fifth edition 1967, McGraw–Hill Book Company, New York. External links Breakdown of Local Thermodynamic Equilibrium George W. Collins, The Fundamentals of Stellar Astrophysics, Chapter 15 Local Thermodynamic Equilibrium Non-Local Thermodynamic Equilibrium in Cloudy Planetary Atmospheres Paper by R. E. Samuelson quantifying the effects due to non-LTE in an atmosphere Thermodynamic Equilibrium, Local and otherwise lecture by Michael Richmond Equilibrium chemistry Thermodynamic cycles Thermodynamic processes Thermodynamic systems Thermodynamics
Thermodynamic equilibrium
[ "Physics", "Chemistry", "Mathematics" ]
9,949
[ "Thermodynamic systems", "Thermodynamic processes", "Physical systems", "Equilibrium chemistry", "Thermodynamics", "Dynamical systems" ]
265,892
https://en.wikipedia.org/wiki/Pyrethrum
Pyrethrum was a genus of several Old World plants now classified in either Chrysanthemum or Tanacetum which are cultivated as ornamentals for their showy flower heads. Pyrethrum continues to be used as a common name for plants formerly included in the genus Pyrethrum. Pyrethrum is also the name of a natural insecticide made from the dried flower heads of Chrysanthemum cinerariifolium and Chrysanthemum coccineum. The insecticidal compounds present in these species are pyrethrins. Description Some members of the genus Chrysanthemum, such as the following two, are placed in the genus Tanacetum instead by some botanists. Both genera are members of the daisy (or aster) family, Asteraceae. They are all perennial plants with a daisy-like appearance and white petals. Tanacetum cinerariifolium is called the Dalmatian chrysanthemum, denoting its origin in that region of the Balkans (Dalmatia). It looks more like the common daisy than other pyrethrums do. Its flowers, typically white with yellow centers, grow from numerous fairly rigid stems. Plants have blue-green leaves and grow to in height. The plant is economically important as a natural source of pyrethrin insecticides. C. coccineum, the Persian chrysanthemum, is a perennial plant native to Caucasus and looks somewhat like a daisy. It produces large white, pink or red flowers. The leaves resemble those of ferns, and the plant grows to between in height. The flowering period is June to July in temperate climates (Northern Hemisphere). C. coccineum also contains insecticidal pyrethrins, but it is a poor source compared to C. cinerariifolium. Other species, such as C. balsamita and C. marshalli, also contain insecticidal substances, but are less effective than the two species mentioned above. Insecticides The flowers are pulverized and the active components, called pyrethrins, contained in the seed cases, are extracted and sold in the form of an oleoresin. This is applied as a suspension in water or oil, or as a powder. Pyrethrins attack the nervous systems of all insects, and inhibit female mosquitoes from biting. When present in amounts less than those fatal to insects, they still appear to have an insect repellent effect. They are harmful to fish, but are far less toxic to mammals and birds than many synthetic insecticides and are not persistent, being biodegradable and also decompose easily on exposure to light. They are considered to be amongst the safest insecticides for use around food. In 1998 Kenya was producing 90% (over 6,000 tonnes) of the world's pyrethrum, called py for short. Production in Tanzania and Ecuador is also significant. Currently the world's major producer is Tasmania, Australia. Sprays Pyrethrum has been used for centuries as an insecticide, and as a lice remedy in the Middle East (Persian powder, also known as "Persian pellitory"). It was sold worldwide under the brand Zacherlin by Austrian industrialist J. Zacherl. It is one of the most commonly used non-synthetic insecticides allowed in certified organic agriculture. The flowers should be dried and then crushed and mixed with water. Pyrethroids are synthetic insecticides based on natural pyrethrum (pyrethrins); one common example is permethrin. Pyrethrins are often sold in preparations that also contain the synthetic chemical piperonyl butoxide, which enhances the toxicity to insects and is faster acting compared with pyrethrins used alone. These formulations are known as synergized pyrethrins. 
Companion planting A pheromone produced by these plants attracts ladybug beetles, and at the same time acts as an alarm signal to aphids. Toxicity Mammals Rat and rabbit levels for pyrethrum are high, with doses in some cases of about 1% of the animal's body weight required to cause significant mortality. This is similar to fatal levels in synthetic pyrethroids. Nevertheless, pyrethrum should be handled with the same caution as synthetic insecticides: safety equipment should be worn, and mixing with other chemicals should be avoided. People can be exposed to pyrethrum as a mixture of cinerin, jasmolin, and pyrethrin in the workplace by breathing it in, getting it in the eyes or on the skin, or swallowing it. The Occupational Safety and Health Administration (OSHA) has set the legal limit (Permissible exposure limit) for pyrethrum exposure in the workplace as 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a Recommended exposure limit (REL) of 5 mg/m3 over an 8-hour workday. At levels of 5000 mg/m3, pyrethrum is immediately dangerous to life and health. People exposed to pyrethrum may experience symptoms including pruritus (itching), dermatitis, papules, erythema (red skin), rhinorrhea (runny nose), sneezing, and asthma. Other animals Pyrethrum, specifically the pyrethrin within, is highly toxic to insects including useful pollinators like bees. The risk of killing bees and other beneficial insects is partially reduced by the compound's rapid breakdown (a half-life of approximately 12 hours on plants and on the surface of the soil, with about 3% remaining after five days, but persisting several weeks or more if it enters a body of water or is dug into the soil) and its slight insect-repellant activity. Common names Common names for Chrysanthemum cinerariifolium include: Pyrethrum Pyrethrum daisy Dalmatian pyrethrum Dalmatian chrysanthemum Dalmatian insect flower Dalmatian pellitory Big daisy Common names for Chrysanthemum coccineum include: Pyrethrum Pyrethrum daisy Painted daisy Persian chrysanthemum Persian insect flower Persian pellitory Caucasian insect powder plant See also Chrysanthemum List of companion plants Category: Plant toxin insecticides Permethrin Pyrethrin References External links National Pesticide Information Center: Pyrethrins and Pyrethroids Fact Sheet CDC - NIOSH Pocket Guide to Chemical Hazards EXTOXNET: Pyrethrins and Pyrethroids "What is Pyrethrum?" Role of aphid alarm pheromone produced by the flowers in repelling aphids and attracting ladybug beetles p Pyrethroids Flora of Europe Plant toxin insecticides Biological pest control Garden plants of Europe Household chemicals Anthemideae Plant common names Historically recognized angiosperm genera
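The persistence figure quoted in the toxicity section (a half-life of roughly 12 hours on plant and soil surfaces) can be turned into a remaining-fraction estimate with the standard half-life formula. The sketch below is a single-exponential idealization; the article's separate figure of about 3% remaining after five days indicates that real field decay does not follow one exponential under all conditions.

HALF_LIFE_H = 12.0   # quoted half-life on plants and soil surfaces, hours

def remaining_fraction(hours, half_life=HALF_LIFE_H):
    # Remaining fraction after `hours`, assuming simple exponential decay.
    return 0.5 ** (hours / half_life)

for t in (12, 24, 48):
    print(f"after {t} h: {remaining_fraction(t):.1%} remaining")

Under this idealization about 25% remains after one day and about 6% after two days; slower breakdown in water or when dug into soil, as noted above, lengthens these times considerably.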
Pyrethrum
[ "Chemistry", "Biology" ]
1,464
[ "Plant toxin insecticides", "Chemical ecology", "Common names of organisms", "Plants", "Plant common names" ]
265,977
https://en.wikipedia.org/wiki/Palladium%20hydride
Palladium hydride is palladium metal with hydrogen within its crystal lattice. Despite its name, it is not an ionic hydride but rather an alloy of palladium with metallic hydrogen that can be written PdHx. At room temperature, palladium hydrides may contain two crystalline phases, α and β (also called α′). Pure α-phase exists at x < 0.017 while pure β-phase exists at x > 0.58; intermediate values of x correspond to α–β mixtures. Hydrogen absorption by palladium is reversible and therefore has been investigated for hydrogen storage. Palladium electrodes have been used in some cold fusion experiments, under the theory that hydrogen can be "squeezed" between palladium atoms to help it fuse at lower temperatures than normal. History The absorption of hydrogen gas by palladium was first noted by T. Graham in 1866, and absorption of electrolytically produced hydrogen, where hydrogen was absorbed into a palladium cathode, was first documented in 1939. Graham produced an alloy with the composition PdH. Making palladium hydride The hydrogen atoms occupy interstitial sites in palladium hydride. The H–H bond in H₂ is cleaved. The ratio in which H is absorbed by Pd is defined by x = [H]/[Pd]. When Pd is brought into an H₂ environment at a pressure of 1 atm, the resulting concentration of H reaches x ≈ 0.7. However, the concentration of H needed to obtain superconductivity is higher, in the range x > 0.75. Loading is done via three different routes, with measures taken to prevent the ready desorption of the hydrogen from the palladium. The first route is loading from the gas phase. A Pd sample is placed into a high-pressure cell of H₂ at room temperature, with the H₂ added through a capillary. To maintain the high absorption, the pressure cell is cooled to liquid-N₂ temperature (77 K). The resulting concentration may be as high as [H]/[Pd] = 0.97. The second route is electrochemical loading. This is a method in which the critical concentration for superconductivity can easily be exceeded without using a high-pressure environment, via an equilibrium between H in the electrochemical phase and H in the solid phase. The hydrogen is added to Pd and Pd–Ni alloys up to an H concentration of ~0.95. Thereafter, it is loaded further by electrolysis in 0.1 N H₂SO₄ with a current density of 50 to 150 mA/cm². Finally, after lowering the loading temperature to ~190 K, an H concentration of x ≈ 1 has been reached. The third route is known as ion implantation. Before the implantation of H ions into Pd, the Pd foil is pre-charged with hydrogen. This is done in high-temperature H₂ gas and shortens the implantation time that follows. The concentration reached is about x ≈ 0.7. Afterwards the foil is cooled to a temperature of 77 K to prevent a loss of H before the implantation can take place. Implantation into the pre-charged foil is carried out at a temperature of 4 K, with the H ions delivered as a beam. This results in a layer of high H concentration in the Pd foil. Chemical structure and properties Palladium is sometimes metaphorically called a "metal sponge" (not to be confused with literal metal sponges) because it soaks up hydrogen "like a sponge soaks up water". At standard temperature and pressure, palladium can absorb up to 900 times its own volume of hydrogen. Hydrogen can be absorbed into the metal hydride and then desorbed back out for thousands of cycles. Researchers look for ways to extend the useful life of palladium storage. 
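The phase boundaries quoted above (pure α below x ≈ 0.017, pure β above x ≈ 0.58, superconductivity reported only for roughly x > 0.75) make it easy to tabulate which region a given loading falls into. The following is a minimal sketch using only the threshold values stated in this article; the function name and structure are illustrative rather than taken from any published code, and real phase boundaries shift with temperature and pressure.

```python
def pdh_phase(x: float) -> str:
    """Classify a PdH_x loading by the phase regions quoted in the text.

    x is the atomic ratio [H]/[Pd]. The thresholds (0.017, 0.58, 0.75) are the
    values stated in this article; actual boundaries depend on temperature.
    """
    if x < 0:
        raise ValueError("x = [H]/[Pd] cannot be negative")
    if x < 0.017:
        return "pure alpha phase"
    if x <= 0.58:
        return "alpha + beta mixture"
    if x <= 0.75:
        return "pure beta phase"
    return "pure beta phase (range where superconductivity has been reported)"

# Loadings mentioned in the text: implantation (~0.7), gas loading (~0.97),
# electrochemical loading (~1), plus a dilute example.
for x in (0.01, 0.3, 0.7, 0.97):
    print(f"x = {x:.2f}: {pdh_phase(x)}")
```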
Size effect The absorption of hydrogen produces two different phases, both of which contain palladium metal atoms in a face-centered cubic (fcc, rocksalt) lattice, which is the same structure as pure palladium metal. At low concentrations, up to the limit of the α phase, the palladium lattice expands slightly, from 388.9 pm to 389.5 pm. Above this concentration the second phase appears, with a lattice constant of 402.5 pm. Both phases coexist until a composition of x ≈ 0.58 is reached, at which point the alpha phase disappears. Neutron diffraction studies have shown that hydrogen atoms randomly occupy the octahedral interstices in the metal lattice (in an fcc lattice there is one octahedral hole per metal atom). The limit of absorption at normal pressures corresponds to x ≈ 0.7, indicating that about 70% of the octahedral holes are occupied. When x = 1 is reached, the octahedral interstices are fully occupied. The absorption of hydrogen is reversible, and hydrogen rapidly diffuses through the metal lattice. Metallic conductivity falls as hydrogen is absorbed, until the solid becomes a semiconductor at sufficiently high hydrogen content. Formation of the bulk hydride depends on the size of the palladium particles. When palladium particles become smaller than 2.6 nm, hydrides are no longer formed. Hydrogen dissolved in the bulk differs from hydrogen adsorbed on the surface. When the particles of palladium decrease in size, less hydrogen dissolves in these smaller palladium particles, and relatively more hydrogen adsorbs on the surface of the small particles. This hydrogen adsorbed onto the particle surface does not form a hydride. Larger particles therefore have more places available for the formation of hydrides. Electron and phonon band The most important property of the band structure of PdH(oct) is that filled Pd states are lowered in the presence of hydrogen. Also, the lowest energy levels of PdH, which are the bonding states, are lower than those of Pd. Additionally, empty Pd states that are below the Fermi energy are also lowered in the presence of H. Palladium has an affinity for hydrogen due to the interaction between the s state of hydrogen and the p states of palladium. The energy of an independent H atom lies in the energy range of the dominating p-states of the Pd bands. Therefore, these empty states below the Fermi energy and holes in the d-band are filled. Additionally, the hydride formation raises the Fermi level above the d-band. Empty states above the d-band are also filled. This results in filled p-states and shifts the 'edge' to a higher energy level. Superconductivity PdH is a superconductor with a transition temperature Tc of about 9 K for x = 1. (Pure palladium is not superconducting.) Drops in resistivity vs. temperature curves were observed at higher temperatures (up to 273 K) in hydrogen-rich (x ≈ 1), nonstoichiometric palladium hydride and interpreted as superconducting transitions. These results have been questioned and have not been confirmed thus far. A great advantage of palladium hydride over many other hydride systems is that it does not need to be highly pressurized to become superconducting. This makes measurements easier and gives more opportunity for different kinds of measurements (many superconducting materials require extreme pressure in order to superconduct, on the order of 100 GPa). Palladium hydride could therefore also be used to explore the role that hydrogen plays in making such hydride systems superconducting. Susceptibility One of the magnetic properties of palladium hydride is its susceptibility. 
The susceptibility of PdH varies strongly with the concentration of H. This is due to the β-phase of PdH. The α-phase of PdH lies in the same range of the Fermi surface as Pd itself, and therefore the α-phase does not influence the susceptibility. However, the β-phase of PdH is characterized by s-electrons filling the d-band. Therefore, the susceptibility of the α–β mixture decreases at room temperature with an increasing concentration of H. Finally, when the spin fluctuations of pure Pd are suppressed, superconductivity can occur. Specific heat capacity Another metallic property is the electronic heat coefficient γ. This coefficient depends on the density of states. For pure Pd the heat coefficient is 9.5 mJ/(mol·K²). When H is added to the pure Pd, the electronic heat coefficient drops. For the range x = 0.83 to x = 0.88, γ is observed to be six times smaller than for pure Pd. This region is the superconducting region. However, Zimmerman et al. also measured the heat coefficient γ for a concentration of x = 0.96. A broadening of the superconducting transition was observed at this concentration. One possible explanation is an inhomogeneity of the macroscopic structure of PdH. γ at this value of x shows large fluctuations and is therefore uncertain. The critical concentration for superconductivity to occur is estimated at x ≈ 0.72. The critical temperature, or superconducting transition temperature, is estimated at 9 K; this was achieved at a stoichiometric concentration of x = 1. Pressure also influences the critical temperature: an increase in the pressure on PdH decreases Tc. This can be explained by a hardening of the phonon spectrum, which entails a decrease in the electron–phonon coupling constant λ. Surface absorption process The process of absorption of hydrogen has been shown by scanning tunnelling microscopy to require aggregates of at least three vacancies on the surface of the crystal to promote the dissociation of the hydrogen molecule. The reason for such behaviour and the particular structure of the trimers has been analyzed. Uses The absorption of hydrogen is reversible and is highly selective. A palladium-based diffuser separator can be used, although such separators are not employed industrially. Impure gas is passed through tubes of thin-walled silver–palladium alloy, as protium and deuterium readily diffuse through the alloy membrane. The gas that comes through is pure and ready for use. Palladium is alloyed with silver to improve its strength and resistance to embrittlement. To ensure that the formation of the beta phase is avoided, as the lattice expansion noted earlier would cause distortions and splitting of the membrane, the temperature is maintained above 300 °C. Another useful property of palladium hydride is its increased adsorption of H₂ molecules relative to pure palladium. In 2009, a study was conducted to test this. At a pressure of 1 bar, the probability of hydrogen molecules sticking to the surface of palladium was measured and compared with the probability of sticking to the surface of palladium hydride. 
The sticking probability was found to be greater at temperatures where the palladium–hydrogen mixture was in the pure β-phase, which in this context corresponds to palladium hydride (at 1 bar this means temperatures greater than roughly 160 degrees Celsius), than at temperatures where the β- and α-phases coexist, or at even lower temperatures where only the α-phase is present (the α-phase here corresponds to a solid solution of hydrogen atoms in palladium). Knowing these sticking probabilities enables one to calculate the rate of adsorption from the relation R_ads = S·Φ, where S is the aforementioned sticking probability and Φ is the flux of hydrogen molecules toward the surface of the palladium or palladium hydride. When the system is in a steady state, the rate of adsorption and, oppositely, the rate of desorption (R_des) must be equal. This gives R_ads = R_des. The rate of desorption is assumed to follow a Boltzmann form, i.e. R_des = A·exp(−E_des/(k_B·T)) (*), where A is an unknown prefactor, E_des is the desorption energy, k_B is the Boltzmann constant and T is the temperature. The relation (*) can be fitted to find the value of E_des. It was found that, within the uncertainty of the experiment, the values of E_des for palladium and palladium hydride were roughly equal. Thus palladium hydride has a higher average adsorption rate than palladium, while the energy required for desorption is the same. Density functional theory calculations were performed to find an explanation for this. It was found that the bond of hydrogen with the palladium hydride surface is weaker than the bond with the palladium surface, and that the desorption activation barrier is lower by a small amount for palladium hydride than for palladium, although the adsorption barriers are comparable in magnitude. Moreover, the heat of adsorption is lower for palladium hydride than for palladium, which leads to a lower equilibrium surface coverage of H. This means that the surface of palladium hydride is less saturated, which leads to greater opportunity for sticking, i.e. a higher sticking probability. The reversible absorption of hydrogen by palladium is a means to store hydrogen, and the above findings indicate that even in the hydrogen-absorbed state of palladium, there is further opportunity for hydrogen storage. See also Hydrogen sensor References External links Hydride Metal hydrides
Palladium hydride
[ "Chemistry" ]
2,783
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
266,145
https://en.wikipedia.org/wiki/Schumann%20resonances
The Schumann resonances (SR) are a set of spectral peaks in the extremely low frequency portion of the Earth's electromagnetic field spectrum. Schumann resonances are global electromagnetic resonances, generated and excited by lightning discharges in the cavity formed by the Earth's surface and the ionosphere. Description The global electromagnetic resonance phenomenon is named after physicist Winfried Otto Schumann who predicted it mathematically in 1952. Schumann resonances are the principal background in the part of the electromagnetic spectrum from 3 Hz through 60 Hz and appear as distinct peaks at extremely low frequencies around 7.83 Hz (fundamental), 14.3, 20.8, 27.3, and 33.8 Hz. Schumann resonances occur because the space between the surface of the Earth and the conductive ionosphere acts as a closed, although variable-sized waveguide. The limited dimensions of the Earth cause this waveguide to act as a resonant cavity for electromagnetic waves in the extremely low frequency band. The cavity is naturally excited by electric currents in lightning. In the normal mode descriptions of Schumann resonances, the fundamental mode is a standing wave in the Earth–ionosphere cavity with a wavelength equal to the circumference of the Earth. The lowest-frequency mode has the highest intensity, and the frequency of all modes can vary slightly owing to solar-induced perturbations to the ionosphere (which compress the upper wall of the closed cavity) amongst other factors. The higher resonance modes are spaced at approximately 6.5 Hz intervals (as may be seen by feeding numbers into the formula), a characteristic attributed to the atmosphere's spherical geometry. The peaks exhibit a spectral width of approximately 20% due to the damping of the respective modes in the dissipative cavity. Observations of Schumann resonances have been used to track global lightning activity. Owing to the connection between lightning activity and the Earth's climate it has been suggested that they may be used to monitor global temperature variations and variations of water vapor in the upper troposphere. Schumann resonances have been used to study the lower ionosphere on Earth and it has been suggested as one way to explore the lower ionosphere on celestial bodies. Some have proposed that lightning on other planets might be detectable and studied by means of Schumann resonance signatures of those planets. Effects on Schumann resonances have been reported following geomagnetic and ionospheric disturbances. More recently, discrete Schumann resonance excitation has been linked to transient luminous events — sprites, ELVES, jets, and other upper-atmospheric lightning. A new field of interest using Schumann resonances is related to short-term earthquake prediction. Interest in Schumann resonances renewed in 1993 when E.R. Williams showed a correlation between the resonance frequency and tropical air temperatures, suggesting the resonance could be used to monitor global warming. In geophysical survey, Schumann resonances are used to locate offshore hydrocarbon deposits. History In 1893, George Francis FitzGerald noted that the upper layers of the atmosphere must be fairly good conductors. Assuming that the height of these layers is about 100 km above ground, he estimated that oscillations (in this case the lowest mode of the Schumann resonances) would have a period of 0.1 second. Because of this contribution, it has been suggested to rename these resonances "Schumann–FitzGerald resonances". 
However, FitzGerald's findings were not widely known, as they were only presented at a meeting of the British Association for the Advancement of Science, followed by a brief mention in a column in Nature. The first suggestion that an ionosphere existed, capable of trapping electromagnetic waves, is attributed to Heaviside and Kennelly (1902). It took another twenty years before Edward Appleton and Barnett in 1925 were able to prove experimentally the existence of the ionosphere. Although some of the most important mathematical tools for dealing with spherical waveguides were developed by G. N. Watson in 1918, it was Winfried Otto Schumann who first studied the theoretical aspects of the global resonances of the Earth–ionosphere waveguide system, known today as the Schumann resonances. In 1952–1954 Schumann, together with H. L. König, attempted to measure the resonant frequencies. However, it was not until measurements made by Balser and Wagner in 1960–1963 that adequate analysis techniques were available to extract the resonance information from the background noise. Since then there has been an increasing interest in Schumann resonances in a wide variety of fields. Basic theory Lightning discharges are considered to be the primary natural source of Schumann resonance excitation; lightning channels behave like huge antennas that radiate electromagnetic energy at frequencies below about 100 kHz. These signals are very weak at large distances from the lightning source, but the Earth–ionosphere waveguide behaves like a resonator at extremely low resonance frequencies. In an ideal cavity, the resonant frequency of the n-th mode is determined by the Earth radius a and the speed of light c. The real Earth–ionosphere waveguide is not a perfect electromagnetic resonant cavity. Losses due to finite ionosphere electrical conductivity lower the propagation speed of electromagnetic signals in the cavity, resulting in a resonance frequency that is lower than would be expected in an ideal case, and the observed peaks are wide. In addition, there are a number of horizontal asymmetries (the day-night difference in the height of the ionosphere, latitudinal changes in the Earth's magnetic field, sudden ionospheric disturbances, polar cap absorption, variation in the Earth radius of ±11 km from equator to geographic poles, etc.) that produce other effects in the Schumann resonance power spectra. Measurements Today Schumann resonances are recorded at many separate research stations around the world. The sensors used to measure Schumann resonances typically consist of two horizontal magnetic inductive coils for measuring the north-south and east-west components of the magnetic field, and a vertical electric dipole antenna for measuring the vertical component of the electric field. A typical passband of the instruments is 3–100 Hz. The Schumann resonance electric field amplitude (~300 microvolts per meter) is much smaller than the static fair-weather electric field (~150 V/m) in the atmosphere. Similarly, the amplitude of the Schumann resonance magnetic field (~1 picotesla) is many orders of magnitude smaller than the Earth's magnetic field (~30–50 microteslas). Specialized receivers and antennas are needed to detect and record Schumann resonances. The electric component is commonly measured with a ball antenna, suggested by Ogawa et al. in 1966, connected to a high-impedance amplifier. 
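The Basic theory passage above notes that in an ideal cavity the resonant frequency of the n-th mode is fixed by the Earth radius a and the speed of light c; the expression itself does not appear in the text, but the standard textbook relation is f_n = (c / 2πa)·√(n(n+1)). The short sketch below evaluates that relation and compares it with the observed peaks quoted earlier in the article, as an illustration of the statement that losses in the real cavity pull the observed frequencies below the ideal values. The comparison is illustrative only and is not taken from the article.

```python
import math

C = 299_792_458.0  # speed of light, m/s
A = 6.371e6        # mean Earth radius, m

def ideal_schumann_frequency(n: int) -> float:
    """Lossless-cavity frequency of the n-th Schumann mode, in Hz."""
    return (C / (2 * math.pi * A)) * math.sqrt(n * (n + 1))

observed = [7.83, 14.3, 20.8, 27.3, 33.8]  # peaks quoted in the article, Hz

for n, f_obs in enumerate(observed, start=1):
    f_ideal = ideal_schumann_frequency(n)
    print(f"mode {n}: ideal {f_ideal:5.1f} Hz, observed {f_obs:5.1f} Hz")

# The ideal values (~10.6, 18.3, 25.9, 33.5, 41.0 Hz) sit above the observed
# peaks, consistent with the text: finite ionospheric conductivity lowers the
# effective propagation speed and hence the resonance frequencies.
```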
The magnetic induction coils typically consist of tens- to hundreds-of-thousands of turns of wire wound around a core of very high magnetic permeability. Dependence on global lightning activity From the very beginning of Schumann resonance studies, it was known that they could be used to monitor global lightning activity. At any given time there are about 2000 thunderstorms around the globe. Producing approximately 50 lightning events per second, these thunderstorms are directly linked to the background Schumann resonance signal. Determining the spatial lightning distribution from Schumann resonance records is a complex problem. To estimate the lightning intensity from Schumann resonance records it is necessary to account for both the distance to lightning sources and the wave propagation between the source and the observer. A common approach is to make a preliminary assumption on the spatial lightning distribution, based on the known properties of lightning climatology. An alternative approach is placing the receiver at the North or South Pole, which remain approximately equidistant from the main thunderstorm centers during the day. One method not requiring preliminary assumptions on the lightning distribution is based on the decomposition of the average background Schumann resonance spectra, utilizing ratios between the average electric and magnetic spectra and between their linear combination. This technique assumes the cavity is spherically symmetric and therefore does not include known cavity asymmetries that are believed to affect the resonance and propagation properties of electromagnetic waves in the system. Diurnal variations The best documented and the most debated features of the Schumann resonance phenomenon are the diurnal variations of the background Schumann resonance power spectrum. A characteristic Schumann resonance diurnal record reflects the properties of both global lightning activity and the state of the Earth–ionosphere cavity between the source region and the observer. The vertical electric field is independent of the direction of the source relative to the observer, and is therefore a measure of global lightning. The diurnal behavior of the vertical electric field shows three distinct maxima, associated with the three "hot spots" of planetary lightning activity: one at 9 UT (Universal Time) linked to the daily peak of thunderstorm activity from Southeast Asia; one at 14 UT linked to the peak of African lightning activity; and one at 20 UT linked to the peak of South American lightning activity. The time and amplitude of the peaks vary throughout the year, linked to seasonal changes in lightning activity. "Chimney" ranking In general, the African peak is the strongest, reflecting the major contribution of the African "chimney" to global lightning activity. The ranking of the two other peaks—Asian and American—is the subject of a vigorous dispute among Schumann resonance scientists. Schumann resonance observations made from Europe show a greater contribution from Asia than from South America, while observations made from North America indicate the dominant contribution comes from South America. Williams and Sátori suggest that in order to obtain "correct" Asia-America chimney ranking, it is necessary to remove the influence of the day/night variations in the ionospheric conductivity (day-night asymmetry influence) from the Schumann resonance records. The "corrected" records presented in the work by Sátori, et al. 
show that even after the removal of the day-night asymmetry influence from Schumann resonance records, the Asian contribution remains greater than the American one. Similar results were obtained by Pechony et al., who calculated Schumann resonance fields from satellite lightning data. It was assumed that the distribution of lightning in the satellite maps was a good proxy for Schumann excitation sources, even though satellite observations predominantly measure in-cloud lightning rather than the cloud-to-ground lightning that is the primary exciter of the resonances. Both simulations—those neglecting the day-night asymmetry, and those taking this asymmetry into account—showed the same Asia-America chimney ranking. On the other hand, some optical satellite and climatological lightning data suggest the South American thunderstorm center is stronger than the Asian center. The reason for the disparity among rankings of the Asian and American chimneys in Schumann resonance records remains unclear, and is the subject of further research. Influence of the day-night asymmetry In the early literature the observed diurnal variations of Schumann resonance power were explained by the variations in the source-receiver (lightning-observer) geometry. It was concluded that no particular systematic variations of the ionosphere (which serves as the upper waveguide boundary) are needed to explain these variations. Subsequent theoretical studies supported the early estimations of the small influence of the ionosphere day-night asymmetry (the difference between day-side and night-side ionosphere conductivity) on the observed variations in Schumann resonance field intensities. Interest in the influence of the day-night asymmetry in the ionosphere conductivity on Schumann resonances gained new strength in the 1990s, after publication of a work by Sentman and Fraser, who developed a technique to separate the global and the local contributions to the observed field power variations using records obtained simultaneously at two stations that were widely separated in longitude. They interpreted the diurnal variations observed at each station in terms of a combination of a diurnally varying global excitation modulated by the local ionosphere height. Their work, which combined both observations and energy conservation arguments, convinced many scientists of the importance of the ionospheric day-night asymmetry and inspired numerous experimental studies. Recently it was shown that the results obtained by Sentman and Fraser can be approximately simulated with a uniform model (without taking into account the ionospheric day-night variation) and therefore cannot be uniquely interpreted solely in terms of ionosphere height variation. Schumann resonance amplitude records show significant diurnal and seasonal variations which generally coincide in time with the times of the day-night transition (the terminator). This time-matching seems to support the suggestion of a significant influence of the day-night ionosphere asymmetry on Schumann resonance amplitudes. There are records showing almost clock-like accuracy of the diurnal amplitude changes. On the other hand, there are numerous days when Schumann resonance amplitudes do not increase at sunrise or do not decrease at sunset. There are studies showing that the general behavior of Schumann resonance amplitude records can be recreated from diurnal and seasonal thunderstorm migration, without invoking ionospheric variations. 
Two recent independent theoretical studies have shown that the variations in Schumann resonance power related to the day-night transition are much smaller than those associated with the peaks of the global lightning activity, and therefore the global lightning activity plays a more important role in the variation of the Schumann resonance power. It is generally acknowledged that source-observer effects are the dominant source of the observed diurnal variations, but there remains considerable controversy about the degree to which day-night signatures are present in the data. Part of this controversy stems from the fact that the Schumann resonance parameters extractable from observations provide only a limited amount of information about the coupled lightning source-ionospheric system geometry. The problem of inverting observations to simultaneously infer both the lightning source function and ionospheric structure is therefore extremely underdetermined, leading to the possibility of non-unique interpretations. "Inverse problem" One of the interesting problems in Schumann resonances studies is determining the lightning source characteristics (the "inverse problem"). Temporally resolving each individual flash is impossible because the mean rate of excitation by lightning, ~50 lightning events per second globally, mixes up the individual contributions together. However, occasionally extremely large lightning flashes occur which produce distinctive signatures that stand out from the background signals. Called "Q-bursts", they are produced by intense lightning strikes that transfer large amounts of charge from clouds to the ground and often carry high peak current. Q-bursts can exceed the amplitude of the background signal level by a factor of 10 or more and appear with intervals of ~10 s, which allows them to be considered as isolated events and determine the source lightning location. The source location is determined with either multi-station or single-station techniques and requires assuming a model for the Earth–ionosphere cavity. The multi-station techniques are more accurate, but require more complicated and expensive facilities. Transient luminous events research It is now believed that many of the Schumann resonances transients (Q bursts) are related to the transient luminous events (TLEs). In 1995, Boccippio et al. showed that sprites, the most common TLE, are produced by positive cloud-to-ground lightning occurring in the stratiform region of a thunderstorm system, and are accompanied by Q-burst in the Schumann resonances band. Recent observations reveal that occurrences of sprites and Q bursts are highly correlated and Schumann resonances data can possibly be used to estimate the global occurrence rate of sprites. Global temperature Williams [1992] suggested that global temperature may be monitored with the Schumann resonances. The link between Schumann resonance and temperature is lightning flash rate, which increases nonlinearly with temperature. The nonlinearity of the lightning-to-temperature relation provides a natural amplifier of the temperature changes and makes Schumann resonance a sensitive "thermometer". Moreover, the ice particles that are believed to participate in the electrification processes which result in a lightning discharge have an important role in the radiative feedback effects that influence the atmosphere temperature. Schumann resonances may therefore help us to understand these feedback effects. 
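The "natural amplifier" argument above can be made concrete with a toy model. Assuming, purely for illustration, that the global flash rate scales with a temperature anomaly as F = F0·exp(αΔT) with a sensitivity α of a few tens of percent per kelvin (both the functional form and the value of α are assumptions, not taken from this article), a small temperature change produces a proportionally much larger change in flash rate, and hence in Schumann resonance intensity:

```python
import math

def flash_rate_change(delta_t_kelvin: float, alpha_per_kelvin: float = 0.3) -> float:
    """Fractional change in lightning flash rate for a temperature anomaly.

    Toy exponential model F = F0 * exp(alpha * dT); alpha = 0.3 per kelvin is
    an assumed, illustrative sensitivity, not a value from the article.
    """
    return math.exp(alpha_per_kelvin * delta_t_kelvin) - 1.0

for dt in (0.1, 0.5, 1.0):
    print(f"dT = {dt:+.1f} K  ->  flash rate change {flash_rate_change(dt):+.0%}")

# In this toy model a 0.5 K anomaly already changes the flash rate by ~16%,
# which is the sense in which the nonlinear lightning response "amplifies"
# small temperature variations in Schumann resonance records.
```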
A paper was published in 2006 linking Schumann resonance to global surface temperature, which was followed up with a 2009 study. Upper tropospheric water vapor Tropospheric water vapor is a key element of the Earth's climate, which has direct effects as a greenhouse gas, as well as indirect effects through interaction with clouds, aerosols and tropospheric chemistry. Upper tropospheric water vapor (UTWV) has a much greater impact on the greenhouse effect than water vapor in the lower atmosphere, but whether this impact is a positive or a negative feedback is still uncertain. The main challenge in addressing this question is the difficulty in monitoring UTWV globally over long timescales. Continental deep-convective thunderstorms produce most of the lightning discharges on Earth. In addition, they transport large amount of water vapor into the upper troposphere, dominating the variations of global UTWV. Price [2000] suggested that changes in the UTWV can be derived from records of Schumann resonances. On other planets and moons The existence of Schumann-like resonances is conditioned primarily by two factors: A closed, planetary-sized and ellipsoidial cavity, consisting of conducting lower and upper boundaries separated by an insulating medium. For the earth the conducting lower boundary is its surface, and the upper boundary is the ionosphere. Other planets may have similar electrical conductivity geometry, so it is speculated that they should possess similar resonant behavior. A source of electrical excitation of electromagnetic waves in the extremely low frequency range. Within the Solar System there are five candidates for Schumann resonance detection besides the Earth: Venus, Mars, Jupiter, Saturn, and Saturn's biggest moon Titan. Modeling Schumann resonances on the planets and moons of the Solar System is complicated by the lack of knowledge of the waveguide parameters. No in situ capability exists today to validate the results. Venus The strongest evidence for lightning on Venus comes from the electromagnetic waves first detected by Venera 11 and 12 landers. Theoretical calculations of the Schumann resonances at Venus were reported by Nickolaenko and Rabinowicz [1982] and Pechony and Price [2004]. Both studies yielded very close results, indicating that Schumann resonances should be easily detectable on that planet given a lightning source of excitation and a suitably located sensor. Mars In the case of Mars there have been terrestrial observations of radio emission spectra that have been associated with Schumann resonances. The reported radio emissions are not of the primary electromagnetic Schumann modes, but rather of secondary modulations of the nonthermal microwave emissions from the planet at approximately the expected Schumann frequencies, and have not been independently confirmed to be associated with lightning activity on Mars. There is the possibility that future lander missions could carry in situ instrumentation to perform the necessary measurements. Theoretical studies are primarily directed to parameterizing the problem for future planetary explorers. Detection of lightning activity on Mars has been reported by Ruf et al. [2009]. The evidence is indirect and in the form of modulations of the nonthermal microwave spectrum at approximately the expected Schumann resonance frequencies. It has not been independently confirmed that these are associated with electrical discharges on Mars. 
In the event confirmation is made by direct, in situ observations, it would verify the suggestion of the possibility of charge separation and lightning strokes in the Martian dust storms made by Eden and Vonnegut [1973] and Renno et al. [2003]. Martian global resonances were modeled by Sukhorukov [1991], Pechony and Price [2004], and Molina-Cuberos et al. [2006]. The results of the three studies are somewhat different, but it seems that at least the first two Schumann resonance modes should be detectable. Evidence of the first three Schumann resonance modes is present in the spectra of radio emission from the lightning detected in Martian dust storms. Titan It was long ago suggested that lightning discharges may occur on Titan, but recent data from Cassini–Huygens seems to indicate that there is no lightning activity on this largest satellite of Saturn. Due to the recent interest in Titan, associated with the Cassini–Huygens mission, its ionosphere is perhaps the most thoroughly modeled today. Schumann resonances on Titan have received more attention than on any other celestial body, in works by Besser et al. [2002], Morente et al. [2003], Molina-Cuberos et al. [2004], Nickolaenko et al. [2003], and Pechony and Price [2004]. It appears that only the first Schumann resonance mode might be detectable on Titan. Since the landing of the Huygens probe on Titan's surface in January 2005, there have been many reports on observations and theory of an atypical Schumann resonance on Titan. After several tens of fly-bys by Cassini, neither lightning nor thunderstorms were detected in Titan's atmosphere. Scientists therefore proposed another source of electrical excitation: induction of ionospheric currents by Saturn's co-rotating magnetosphere. All data and theoretical models comply with a Schumann resonance, the second eigenmode of which was observed by the Huygens probe. The most important result of this is the proof of existence of a buried liquid water-ammonia ocean under a few tens of km of the icy subsurface crust. Jupiter and Saturn Lightning activity has been optically detected on Jupiter. Existence of lightning activity on that planet was predicted by Bar-Nun [1975] and it is now supported by data from Galileo, Voyagers 1 and 2, Pioneers 10 and 11, and Cassini. Saturn is also confirmed to have lightning activity. Though three visiting spacecraft (Pioneer 11 in 1979, Voyager 1 in 1980, and Voyager 2 in 1981) failed to provide any convincing evidence from optical observations, in July 2012 the Cassini spacecraft detected visible lightning flashes, and electromagnetic sensors aboard the spacecraft detected signatures that are characteristic of lightning. Little is known about the electrical parameters of the interior of Jupiter or Saturn. Even the question of what should serve as the lower waveguide boundary is a non-trivial one in case of the gaseous planets. There seem to be no works dedicated to Schumann resonances on Saturn. To date there has been only one attempt to model Schumann resonances on Jupiter. Here, the electrical conductivity profile within the gaseous atmosphere of Jupiter was calculated using methods similar to those used to model stellar interiors, and it was pointed out that the same methods could be easily extended to the other gas giants Saturn, Uranus and Neptune. Given the intense lightning activity at Jupiter, the Schumann resonances should be easily detectable with a sensor suitably positioned within the planetary-ionospheric cavity. 
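The planetary discussion above turns on cavity size and on poorly constrained waveguide parameters. As a purely first-order illustration, the same lossless-cavity relation used for Earth can be scaled to other bodies using their mean radii (standard values, not taken from this article). Real resonances depend strongly on each body's ionosphere and, for the gas giants, on where the lower boundary effectively lies, so these are rough ideal-cavity estimates rather than predictions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Approximate mean radii in metres (standard values, assumed for illustration).
RADII = {
    "Venus":   6.052e6,
    "Earth":   6.371e6,
    "Mars":    3.390e6,
    "Titan":   2.575e6,
    "Jupiter": 6.9911e7,
}

def ideal_fundamental(radius_m: float) -> float:
    """Lossless-cavity estimate of the fundamental (n = 1) mode, in Hz."""
    return (C / (2 * math.pi * radius_m)) * math.sqrt(2)

for body, r in RADII.items():
    print(f"{body:8s}: ideal f1 = {ideal_fundamental(r):5.1f} Hz")

# These are zeroth-order numbers only: real cavities have lossy, poorly known
# boundaries, so published model frequencies differ from these estimates.
```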
See also Cymatics The Hum Plasma (physics) Radiant energy Telluric current Citations External articles and references General references Articles on the NASA ADS Database: Full list | Full text Sprite research video: The A.C. global circuit Schumann resonances oscillate at only eight cycles per second Websites "Construction and Deployment of an ULF Receiver for the Study of Schumann Resonance in Iowa" by Anton Kruger—Well illustrated study from the University of Iowa Global Coherence Initiative (Spectrogram Calendar) Schumann resonance live data Space Observing System Schumann resonance live data The Discovery of Schumann Resonance Animation Schumann resonance animation from NASA Goddard Space Flight Center Articles containing video clips Atmospheric electricity Electromagnetic radiation Ionosphere
Schumann resonances
[ "Physics" ]
4,757
[ "Physical phenomena", "Electromagnetic radiation", "Atmospheric electricity", "Radiation", "Electrical phenomena" ]
266,430
https://en.wikipedia.org/wiki/Shaper
In machining, a shaper is a type of machine tool that uses linear relative motion between the workpiece and a single-point cutting tool to machine a linear toolpath. Its cut is analogous to that of a lathe, except that it is (archetypally) linear instead of helical. A wood shaper is a functionally different woodworking tool, typically with a powered rotating cutting head and manually fed workpiece, usually known simply as a shaper in North America and spindle moulder in the UK. A metalworking shaper is somewhat analogous to a metalworking planer, with the cutter riding a ram that moves relative to a stationary workpiece, rather than the workpiece moving beneath the cutter. The ram is typically actuated by a mechanical crank inside the column, though hydraulically actuated shapers are increasingly used. Adding axes of motion to a shaper can yield helical tool paths, as also done in helical planing. Process A single-point cutting tool is rigidly held in the tool holder, which is mounted on the ram. The work piece is rigidly held in a vise or clamped directly on the table. The table may be supported at the outer end. The ram reciprocates and the cutting tool, held in the tool holder, moves forwards and backwards over the work piece. In a standard shaper, cutting of material takes place during the forward stroke of the ram and the return stroke remains idle. The return is governed by a quick return mechanism. The depth of the cut increments by moving the workpiece, and the workpiece is fed by a pawl and ratchet mechanism. Types Shapers are mainly classified as standard, draw-cut, horizontal, universal, vertical, geared, crank, hydraulic, contour and traveling head, with a horizontal arrangement most common. Vertical shapers are generally fitted with a rotary table to enable curved surfaces to be machined (same idea as in helical planing). The vertical shaper is essentially the same thing as a slotter (slotting machine), although technically a distinction can be made if one defines a true vertical shaper as a machine whose slide can be moved from the vertical. A slotter is fixed in the vertical plane Operation The workpiece mounts on a rigid, box-shaped table in front of the machine. The height of the table can be adjusted to suit this workpiece, and the table can traverse sideways underneath the reciprocating tool, which is mounted on the ram. Table motion may be controlled manually, but is usually advanced by an automatic feed mechanism acting on the feedscrew. The ram slides back and forth above the work. At the front end of the ram is a vertical tool slide that may be adjusted to either side of the vertical plane along the stroke axis. This tool-slide holds the clapper box and tool post, from which the tool can be positioned to cut a straight, flat surface on the top of the workpiece. The tool-slide permits feeding the tool downwards to deepen a cut. This flexibility, coupled with the use of specialized cutters and toolholders, enables the operator to cut internal and external gear teeth. The ram is adjustable for stroke and, due to the geometry of the linkage, it moves faster on the return (non-cutting) stroke than on the forward, cutting stroke. This return stroke is governed by a quick return mechanism. Uses The most common use is to machine straight, flat surfaces, but with ingenuity and some accessories a wide range of work can be done. Other examples of its use are: Keyways in the hub of a pulley or gear can be machined without resorting to a dedicated broaching setup. 
Dovetail slides Internal splines and gear teeth. Keyway, spline, and gear tooth cutting in blind holes Cam drums with toolpaths of the type that in CNC milling terms would require 4- or 5-axis contouring or turn-mill cylindrical interpolation It is even possible to obviate wire EDM work in some cases. Starting from a drilled or cored hole, a shaper with a boring-bar type tool can cut internal features that do not lend themselves to milling or boring (such as irregularly shaped holes with tight corners). Smoothing of a rough surface History Samuel Bentham developed a shaper between 1791 and 1793. However, Roe (1916) credits James Nasmyth with the invention of the shaper in 1836. Shapers were very common in industrial production from the mid-19th century through the mid-20th. In current industrial practice, shapers have been largely superseded by other machine tools (especially of the CNC type), including milling machines, grinding machines, and broaching machines. But the basic function of a shaper is still sound; tooling for them is minimal and very cheap to reproduce; and they are simple and robust in construction, making their repair and upkeep easily achievable. Thus, they are still popular in many machine shops, from jobbing shops or repair shops to tool and die shops, where only one or a few pieces are required to be produced, and the alternative methods are cost- or tooling-intensive. They also have considerable retro appeal to many hobbyist machinists, who are happy to obtain a used shaper or, in some cases, even to build a new one from scratch. See also Planer (metalworking) References Bibliography External links Lathes.co.uk information archive on hand-powered shapers YouTube video of shaper mechanism YouTube video of a vintage shaper in action YouTube video of a newly built hobbyist shaper in action Various Types of Shaper Tools Machine tools Metalworking tools
Shaper
[ "Engineering" ]
1,181
[ "Machine tools", "Industrial machinery" ]
266,443
https://en.wikipedia.org/wiki/Metalworking
Metalworking is the process of shaping and reshaping metals in order to create useful objects, parts, assemblies, and large scale structures. As a term, it covers a wide and diverse range of processes, skills, and tools for producing objects on every scale: from huge ships, buildings, and bridges, down to precise engine parts and delicate jewelry. The historical roots of metalworking predate recorded history; its use spans cultures, civilizations and millennia. It has evolved from shaping soft, native metals like gold with simple hand tools, through the smelting of ores and hot forging of harder metals like iron, up to and including highly technical modern processes such as machining and welding. It has been used as an industry, a driver of trade, individual hobbies, and in the creation of art; it can be regarded as both a science and a craft. Modern metalworking processes, though diverse and specialized, can be categorized into one of three broad areas known as forming, cutting, or joining processes. Modern metalworking workshops, typically known as machine shops, hold a wide variety of specialized or general-use machine tools capable of creating highly precise, useful products. Many simpler metalworking techniques, such as blacksmithing, are no longer economically competitive on a large scale in developed countries; some of them are still in use in less developed countries, for artisanal or hobby work, or for historical reenactment. Prehistory The oldest archaeological evidence of copper mining and working was the discovery of a copper pendant in northern Iraq from 8,700 BCE. The earliest substantiated and dated evidence of metalworking in the Americas was the processing of copper in Wisconsin, near Lake Michigan. Copper was hammered until it became brittle, then heated so it could be worked further. In America, this technology is dated to about 4000–5000 BCE. The oldest gold artifacts in the world come from the Bulgarian Varna Necropolis and date from 4450 BCE. Not all metal required fire to obtain it or work it. Isaac Asimov speculated that gold was the "first metal". His reasoning being, that, by its chemistry, it is found in nature as nuggets of pure gold. In other words, gold, as rare as it is, is sometimes found in nature as a native metal. Some metals can also be found in meteors. Almost all other metals are found in ores, a mineral-bearing rock, that require heat or some other process to liberate the metal. Another feature of gold is that it is workable as it is found, meaning that no technology beyond a stone hammer and anvil is needed to work the metal. This is a result of gold's properties of malleability and ductility. The earliest tools were stone, bone, wood, and sinew, all of which sufficed to work gold. At some unknown time, the process of liberating metals from rock by heat became known, and rocks rich in copper, tin, and lead came into demand. These ores were mined wherever they were recognized. Remnants of such ancient mines have been found all over Southwestern Asia. Metalworking was being carried out by the South Asian inhabitants of Mehrgarh between 7000 and 3300 BCE. The end of the beginning of metalworking occurs sometime around 6000 BCE when copper smelting became common in Southwestern Asia. Ancient civilisations knew of seven metals. Here they are arranged in order of their oxidation potential (in volts): Iron +0.44 V, Tin +0.14 V Lead +0.13 V Copper −0.34 V Mercury −0.79 V Silver −0.80 V Gold −1.50 V. 
The oxidation potential is important because it is one indicator of how tightly bound to the ore the metal is likely to be. As can be seen, iron is significantly higher than the other six metals while gold is dramatically lower than the six above it. Gold's low oxidation is one of the main reasons that gold is found in nuggets. These nuggets are relatively pure gold and are workable as they are found. Copper ore, being relatively abundant, and tin ore became the next important substances in the story of metalworking. Using heat to smelt copper from ore, a great deal of copper was produced. It was used for both jewelry and simple tools. However, copper by itself was too soft for tools requiring edges and stiffness. At some point tin was added into the molten copper and bronze was developed thereby. Bronze is an alloy of copper and tin. Bronze was an important advance because it had the edge-durability and stiffness that pure copper lacked. Until the advent of iron, bronze was the most advanced metal for tools and weapons in common use (see Bronze Age for more detail). Outside Southwestern Asia, these same advances and materials were being discovered and used around the world. People in China and Great Britain began using bronze with little time being devoted to copper. Japanese began the use of bronze and iron almost simultaneously. In the Americas it was different. Although the peoples of the Americas knew of metals, it was not until the European colonisation that metalworking for tools and weapons became common. Jewelry and art were the principal uses of metals in the Americas prior to European influence. About 2700 BCE, production of bronze was common in locales where the necessary materials could be assembled for smelting, heating, and working the metal. Iron was beginning to be smelted and began its emergence as an important metal for tools and weapons. The period that followed became known as the Iron Age. History By the historical periods of the Pharaohs in Egypt, the Vedic Kings in India, the Tribes of Israel, and the Maya civilization in North America, among other ancient populations, precious metals began to have value attached to them. In some cases rules for ownership, distribution, and trade were created, enforced, and agreed upon by the respective peoples. By the above periods metalworkers were very skilled at creating objects of adornment, religious artifacts, and trade instruments of precious metals (non-ferrous), as well as weaponry usually of ferrous metals and/or alloys. These skills were well executed. The techniques were practiced by artisans, blacksmiths, atharvavedic practitioners, alchemists, and other categories of metalworkers around the globe. For example, the granulation technique was employed by numerous ancient cultures before the historic record shows people traveled to far regions to share this process. Metalsmiths today still use this and many other ancient techniques. As time progressed, metal objects became more common, and ever more complex. The need to further acquire and work metals grew in importance. Skills related to extracting metal ores from the earth began to evolve, and metalsmiths became more knowledgeable. Metalsmiths became important members of society. Fates and economies of entire civilizations were greatly affected by the availability of metals and metalsmiths. 
The metalworker depends on the extraction of precious metals to make jewelry, build more efficient electronics, and for industrial and technological applications from construction to shipping containers to rail, and air transport. Without metals, goods and services would cease to move around the globe on the scale we know today. General processes Metalworking generally is divided into three categories: forming, cutting, and joining. Most metal cutting is done by high speed steel tools or carbide tools. Each of these categories contains various processes. Prior to most operations, the metal must be marked out and/or measured, depending on the desired finished product. Marking out (also known as layout) is the process of transferring a design or pattern to a workpiece and is the first step in the handcraft of metalworking. It is performed in many industries or hobbies, although in industry, the repetition eliminates the need to mark out every individual piece. In the metal trades area, marking out consists of transferring the engineer's plan to the workpiece in preparation for the next step, machining or manufacture. Calipers are hand tools designed to precisely measure the distance between two points. Most calipers have two sets of flat, parallel edges used for inner or outer diameter measurements. These calipers can be accurate to within one-thousandth of an inch (25.4 μm). Different types of calipers have different mechanisms for displaying the distance measured. Where larger objects need to be measured with less precision, a tape measure is often used. Casting Casting achieves a specific form by pouring molten metal into a mold and allowing it to cool, with no mechanical force. Forms of casting include: Investment casting (called lost wax casting in art) Centrifugal casting Die casting Sand casting Shell casting Spin casting Forming processes These forming processes modify metal or workpiece by deforming the object, that is, without removing any material. Forming is done with a system of mechanical forces and, especially for bulk metal forming, with heat. Bulk forming processes Plastic deformation involves using heat or pressure to make a workpiece more conductive to mechanical force. Historically, this and casting were done by blacksmiths, though today the process has been industrialized. In bulk metal forming, the workpiece is generally heated up. Cold sizing Extrusion Drawing Forging Powder metallurgy Friction drilling Rolling Burnishing Sheet (and tube) forming processes These types of forming process involve the application of mechanical force at room temperature. However, some recent developments involve the heating of dies and/or parts. Advancements in automated metalworking technology have made progressive die stamping possible which is a method that can encompass punching, coining, bending and several other ways below that modify metal at less cost while resulting in less scrap. Bending Coining Decambering Deep drawing (DD) Foldforming Hydroforming (HF) Hot metal gas forming Hot press hardening Incremental forming (IF) Spinning, Shear forming or Flowforming Planishing Raising Roll forming Roll bending Repoussé and chasing Rubber pad forming Shearing Stamping Superplastic forming (SPF) Wheeling using an English wheel (wheeling machine) Cutting processes Cutting is a collection of processes wherein material is brought to a specified geometry by removing excess material using various kinds of tooling to leave a finished part that meets specifications. 
The net result of cutting is two products, the waste or excess material, and the finished part. In woodworking, the waste would be sawdust and excess wood. In cutting metals the waste is chips or swarf and excess metal. Cutting processes fall into one of three major categories: Chip producing processes most commonly known as machining Burning, a set of processes wherein the metal is cut by oxidizing a kerf to separate pieces of metal Miscellaneous specialty process, not falling easily into either of the above categories Drilling a hole in a metal part is the most common example of a chip producing process. Using an oxy-fuel cutting torch to separate a plate of steel into smaller pieces is an example of burning. Chemical milling is an example of a specialty process that removes excess material by the use of etching chemicals and masking chemicals. There are many technologies available to cut metal, including: Manual technologies: saw, chisel, shear or snips Machine technologies: turning, milling, drilling, grinding, sawing Welding/burning technologies: burning by laser, oxy-fuel burning, and plasma Erosion technologies: by water jet, electric discharge, or abrasive flow machining. Chemical technologies: Photochemical machining Cutting fluid or coolant is used where there is significant friction and heat at the cutting interface between a cutter such as a drill or an end mill and the workpiece. Coolant is generally introduced by a spray across the face of the tool and workpiece to decrease friction and temperature at the cutting tool/workpiece interface to prevent excessive tool wear. In practice there are many methods of delivering coolant. Health effects The use of an angle grinder in cutting is not preferred as large amounts of harmful sparks and fumes (and particulates) are generated when compared with using reciprocating saw or band saw. Angle grinders produce sparks when cutting ferrous metals. They also produce shards cutting other materials. Milling Milling is the complex shaping of metal or other materials by removing material to form the final shape. It is generally done on a milling machine, a power-driven machine that in its basic form consists of a milling cutter that rotates about the spindle axis (like a drill), and a worktable that can move in multiple directions (usually two dimensions [x and y axis] relative to the workpiece). The spindle usually moves in the z axis. It is possible to raise the table (where the workpiece rests). Milling machines may be operated manually or under computer numerical control (CNC), and can perform a vast number of complex operations, such as slot cutting, planing, drilling and threading, rabbeting, routing, etc. Two common types of mills are the horizontal mill and vertical mill. The pieces produced are usually complex 3D objects that are converted into x, y, and z coordinates that are then fed into the CNC machine and allow it to complete the tasks required. The milling machine can produce most parts in 3D, but some require the objects to be rotated around the x, y, or z coordinate axis (depending on the need). Tolerances come in a variety of standards, depending on the locale. In countries still using the imperial system, this is usually in the thousandths of an inch (unit known as thou), depending on the specific machine. In many other European countries, standards following the ISO are used instead. In order to keep both the bit and material cool, a high temperature coolant is used. 
In most cases the coolant is sprayed from a hose directly onto the bit and material. This coolant can either be machine or user controlled, depending on the machine. Materials that can be milled range from aluminum to stainless steel and almost everything in between. Each material requires a different speed on the milling tool and varies in the amount of material that can be removed in one pass of the tool. Harder materials are usually milled at slower speeds with small amounts of material removed. Softer materials vary, but usually are milled with a high bit speed. The use of a milling machine adds costs that are factored into the manufacturing process. Each time the machine is used coolant is also used, which must be periodically added in order to prevent breaking bits. A milling bit must also be changed as needed in order to prevent damage to the material. Time is the biggest factor for costs. Complex parts can require hours to complete, while very simple parts take only minutes. This in turn varies the production time as well, as each part will require different amounts of time. Safety is key with these machines. The bits travel at high speeds and remove pieces of usually scalding-hot metal. The advantage of having a CNC milling machine is that it protects the machine operator. Turning Turning is a metal cutting process for producing a cylindrical surface with a single point tool. The workpiece is rotated on a spindle and the cutting tool is fed into it radially, axially or both. Producing surfaces perpendicular to the workpiece axis is called facing. Producing surfaces using both radial and axial feeds is called profiling. A lathe is a machine tool which spins a block or cylinder of material so that when abrasive, cutting, or deformation tools are applied to the workpiece, it can be shaped to produce an object which has rotational symmetry about an axis of rotation. Examples of objects that can be produced on a lathe include candlestick holders, crankshafts, camshafts, and bearing mounts. Lathes have four main components: the bed, the headstock, the carriage, and the tailstock. The bed is a precise and very strong base on which all of the other components rest for alignment. The headstock's spindle secures the workpiece with a chuck, whose jaws (usually three or four) are tightened around the piece. The spindle rotates at high speed, providing the energy to cut the material. While historically lathes were powered by belts from a line shaft, modern examples use electric motors. The workpiece extends out of the spindle along the axis of rotation above the flat bed. The carriage is a platform that can be moved precisely and independently, parallel and perpendicular to the axis of rotation. A hardened cutting tool is held at the desired height (usually the middle of the workpiece) by the toolpost. The carriage is then moved around the rotating workpiece, and the cutting tool gradually removes material from the workpiece. The tailstock can be slid along the axis of rotation and then locked in place as necessary. It may hold centers to further secure the workpiece, or cutting tools driven into the end of the workpiece. Other operations that can be performed with a single point tool on a lathe are: Chamfering: Cutting an angle on the corner of a cylinder. Parting: The tool is fed radially into the workpiece to cut off the end of a part. Threading: A tool is fed along and across the outside or inside surface of rotating parts to produce external or internal threads. 
Boring: A single-point tool is fed linearly and parallel to the axis of rotation to create a round hole. Drilling: Feeding the drill into the workpiece axially. Knurling: Uses a tool to produce a rough surface texture on the work piece. Frequently used to allow grip by hand on a metal part. Modern computer numerical control (CNC) lathes and (CNC) machining centres can do secondary operations like milling by using driven tools. When driven tools are used the work piece stops rotating and the driven tool executes the machining operation with a rotating cutting tool. The CNC machines use x, y, and z coordinates in order to control the turning tools and produce the product. Most modern day CNC lathes are able to produce most turned objects in 3D. Nearly all types of metal can be turned, although more time and specialist cutting tools are needed for harder workpieces. Threading There are many threading processes including: cutting threads with a tap or die, thread milling, single-point thread cutting, thread rolling, cold root rolling and forming, and thread grinding. A tap is used to cut a female thread on the inside surface of a pre-drilled hole, while a die cuts a male thread on a preformed cylindrical rod. Grinding Grinding uses an abrasive process to remove material from the workpiece. A grinding machine is a machine tool used for producing very fine finishes, making very light cuts, or high precision forms using an abrasive wheel as the cutting device. This wheel can be made up of various sizes and types of stones, diamonds or inorganic materials. The simplest grinder is a bench grinder or a hand-held angle grinder, for deburring parts or cutting metal with a zip-disc. Grinders have increased in size and complexity with advances in time and technology. From the old days of a manual toolroom grinder sharpening endmills for a production shop, to today's 30,000 RPM CNC auto-loading manufacturing cell producing jet turbines, grinding processes vary greatly. Grinders need to be very rigid machines to produce the required finish. Some grinders are even used to produce glass scales for positioning CNC machine axes. The common rule is that the machines used to produce scales should be 10 times more accurate than the machines for which the parts are produced. In the past grinders were used for finishing operations only because of limitations of tooling. Modern grinding wheel materials and the use of industrial diamonds or other man-made coatings (cubic boron nitride) on wheel forms have allowed grinders to achieve excellent results in production environments instead of being relegated to the back of the shop. Modern technology has advanced grinding operations to include CNC controls, high material removal rates with high precision, lending itself well to aerospace applications and high volume production runs of precision components. Filing Filing is a combination of grinding and saw-tooth cutting using a file. Prior to the development of modern machining equipment it provided a relatively accurate means for the production of small parts, especially those with flat surfaces. The skilled use of a file allowed a machinist to work to fine tolerances and was the hallmark of the craft. Today filing is rarely used as a production technique in industry, though it remains as a common method of deburring. Other Broaching is a machining operation used to cut keyways into shafts. 
Electron beam machining (EBM) is a machining process where high-velocity electrons are directed toward a work piece, creating heat and vaporizing the material. Ultrasonic machining uses ultrasonic vibrations to machine very hard or brittle materials. Joining processes Welding Welding is a fabrication process that joins materials, usually metals or thermoplastics, by causing coalescence. This is often done by melting the workpieces and adding a filler material to form a pool of molten material that cools to become a strong joint, but sometimes pressure is used in conjunction with heat, or by itself, to produce the weld. Many different energy sources can be used for welding, including a gas flame, an electric arc, a laser, an electron beam, friction, and ultrasound. While often an industrial process, welding can be done in many different environments, including open air, underwater and in space. Regardless of location, however, welding remains dangerous, and precautions must be taken to avoid burns, electric shock, poisonous fumes, and overexposure to ultraviolet light. Brazing Brazing is a joining process in which a filler metal is melted and drawn into a capillary formed by the assembly of two or more work pieces. The filler metal reacts metallurgically with the workpieces and solidifies in the capillary, forming a strong joint. Unlike welding, the work piece is not melted. Brazing is similar to soldering, but occurs at temperatures in excess of about 450 °C (842 °F). Brazing has the advantage of producing lower thermal stresses than welding, and brazed assemblies tend to be more ductile than weldments because alloying elements cannot segregate and precipitate. Brazing techniques include flame brazing, resistance brazing, furnace brazing, diffusion brazing, inductive brazing and vacuum brazing. Soldering Soldering is a joining process that occurs at temperatures below about 450 °C (842 °F). It is similar to brazing in the way that a filler is melted and drawn into a capillary to form a joint, although at a lower temperature. Because of this lower temperature and different alloys used as fillers, the metallurgical reaction between filler and work piece is minimal, resulting in a weaker joint. Riveting Riveting is one of the most ancient metalwork joining processes. Its use declined markedly during the second half of the 20th century, but it still retains important uses in industry and construction, and in artisan crafts such as jewellery, medieval armouring and metal couture in the early 21st century. The earlier use of rivets is being superseded by improvements in welding and component fabrication techniques. A rivet is essentially a two-headed and unthreaded bolt which holds two other pieces of metal together. Holes are drilled or punched through the two pieces of metal to be joined. With the holes aligned, a rivet is passed through them and permanent heads are formed onto the ends of the rivet using hammers and forming dies (by either cold working or hot working). Rivets are commonly purchased with one head already formed. When it is necessary to remove rivets, one of the rivet's heads is sheared off with a cold chisel. The rivet is then driven out with a hammer and punch. Mechanical fixings These include screws as well as bolts. They are often used because they require relatively little specialist equipment, and are therefore common in flat-pack furniture. They can also be used when a metal is joined to another material (such as wood) or when a particular metal does not weld well (such as aluminum). 
This can be done to directly join metals, or with an intermediate material such as nylon. While often weaker than other methods such as welding or brazing, the metal can easily be removed and therefore reused or recycled. It can also be done in conjunction with an epoxy or glue, although this negates those ecological benefits. Associated processes While these processes are not primary metalworking processes, they are often performed before or after metalworking processes. Heat treatment Metals can be heat treated to alter the properties of strength, ductility, toughness, hardness or resistance to corrosion. Common heat treatment processes include annealing, precipitation hardening, quenching, and tempering: Annealing softens the metal by allowing recovery of cold work and grain growth. Quenching can be used to harden alloy steels, or in precipitation hardenable alloys, to trap dissolved solute atoms in solution. Tempering will cause the dissolved alloying elements to precipitate, or in the case of quenched steels, improve impact strength and ductile properties. Often, mechanical and thermal treatments are combined in what is known as thermo-mechanical treatments for better properties and more efficient processing of materials. These processes are common to high alloy special steels, super alloys and titanium alloys. Plating Electroplating is a common surface-treatment technique. It involves bonding a thin layer of another metal such as gold, silver, chromium or zinc to the surface of the product by electrolysis. It is used to reduce corrosion, create abrasion resistance and improve the product's aesthetic appearance. Plating can even change the properties of the original part including conductivity, heat dissipation or structural integrity. There are four main electroplating methods to ensure proper coating and cost effectiveness per product: mass plating, rack plating, continuous plating and line plating. Thermal spraying Thermal spraying techniques are another popular finishing option, and often have better high temperature properties than electroplated coatings due to the thicker coating. The four main thermal spray processes include electric wire arc spray, flame (oxy acetylene combustion) spray, plasma spray and high velocity oxy fuel (HVOF) spray. See also Bronze and brass ornamental work Chip formation Heavy metals Lead poisoning List of metalworking occupations Metal swarf Metal testing Metalworking hand tool Occupational dust exposure Particulates Power tool Stone mould General: List of manufacturing processes Timeline of materials technology References External links What's the Best Way to Cut Thick Steel? Schneider, George. "Chapter 1: Cutting Tool Materials", American Machinist, October, 2009 Schneider, George. "Cutting Tool Applications: Chapter 2 Metal Removal Methods", American Machinist, November, 2009 Videos about metalworking published by Institut für den Wissenschaftlichen Film. Available in the AV-Portal of the German National Library of Science and Technology. Evidences of Metalworking History Reference Metal industry 9th-millennium BC establishments Articles containing video clips
Metalworking
[ "Physics", "Chemistry", "Engineering" ]
5,570
[ "Chemical engineering", "nan", "Applied and interdisciplinary physics", "Mechanical engineering" ]
266,449
https://en.wikipedia.org/wiki/Coilgun
A coilgun is a type of mass driver consisting of one or more coils used as electromagnets in the configuration of a linear motor that accelerate a ferromagnetic or conducting projectile to high velocity. In almost all coilgun configurations, the coils and the gun barrel are arranged on a common axis. A coilgun is not a rifle as the barrel is smoothbore (not rifled). Coilguns generally consist of one or more coils arranged along a barrel, so the path of the accelerating projectile lies along the central axis of the coils. The coils are switched on and off in a precisely timed sequence, causing the projectile to be accelerated quickly along the barrel via magnetic forces. Coilguns are distinct from railguns, as the direction of acceleration in a railgun is at right angles to the central axis of the current loop formed by the conducting rails. In addition, railguns usually require the use of sliding contacts to pass a large current through the projectile or sabot, but coilguns do not necessarily require sliding contacts. While some simple coilgun concepts can use ferromagnetic projectiles or even permanent magnet projectiles, most designs for high velocities actually incorporate a coupled coil as part of the projectile. Coilguns are also distinct from Gauss guns, although many works of science fiction have erroneously confused the two. A coil gun uses electromagnetic acceleration, whereas Gauss guns predate the idea of coil guns and instead consist of ferromagnets using a configuration similar to a Newton's Cradle to impart acceleration. History The oldest electromagnetic gun came in the form of the coilgun, the first of which was invented by Norwegian scientist Kristian Birkeland at the University of Kristiania (today Oslo). The invention was officially patented in 1904, although its development reportedly started as early as 1845. According to his accounts, Birkeland accelerated a 500-gram projectile to approximately . In 1933, Texan inventor Virgil Rigsby developed a stationary coil gun that was designed to be used similarly to a machine gun. It was powered by a large electrical motor and generator. It appeared in many contemporary science publications, but never piqued the interest of any armed forces. Construction There are two main types or setups of a coilgun: single-stage and multistage. A single-stage coilgun uses one electromagnetic coil to propel a projectile. A multistage coilgun uses several electromagnetic coils in succession to progressively increase the speed of the projectile. Ferromagnetic projectiles For ferromagnetic projectiles, a single-stage coilgun can be formed by a coil of wire, an electromagnet, with a ferromagnetic projectile placed at one of its ends. This type of coilgun is formed like the solenoid used in an electromechanical relay, i.e. a current-carrying coil which will draw a ferromagnetic object through its center. A large current is pulsed through the coil of wire and a strong magnetic field forms, pulling the projectile to the center of the coil. When the projectile nears this point the electromagnet must be switched off, to prevent the projectile from becoming arrested at the center of the electromagnet. In a multistage design, further electromagnets are then used to repeat this process, progressively accelerating the projectile. In common coilgun designs, the "barrel" of the gun is made up of a track that the projectile rides on, with the driving magnetic coils arranged around the track. 
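The energize-then-switch-off behaviour described above can be illustrated with a toy one-dimensional model. This is only a rough sketch under assumed values; the force profile, projectile mass and coil position below are illustrative choices, not figures from this article or from any real design.

# Toy single-stage coilgun model (illustrative assumptions only): the projectile
# is attracted toward the coil centre while the coil is energized, and the
# current is cut once the projectile reaches the centre to avoid "suck-back".
import math

def simulate_single_stage(mass=0.005, coil_pos=0.05, peak_force=40.0,
                          dt=1e-5, t_max=0.05):
    """Integrate 1-D motion under an assumed bell-shaped attractive force."""
    x, v, t = 0.0, 0.0, 0.0
    coil_on = True
    while t < t_max:
        if coil_on and x >= coil_pos:
            coil_on = False  # switch off at the coil centre
        if coil_on:
            # assumed force profile: strongest near the coil, always pulling toward coil_pos
            magnitude = peak_force * math.exp(-((x - coil_pos) / 0.02) ** 2)
            force = math.copysign(magnitude, coil_pos - x)
        else:
            force = 0.0
        v += force / mass * dt
        x += v * dt
        t += dt
    return v  # velocity after the single stage

print(f"exit velocity of the toy model: {simulate_single_stage():.1f} m/s")

In a multistage design the same logic would be repeated for each coil in turn, with each stage switched on only while the projectile is approaching it.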
Power is supplied to the electromagnet from some sort of fast discharge storage device, typically a battery, or capacitors (one per electromagnet), designed for fast energy discharge. A diode is used to protect polarity sensitive components (such as semiconductors or electrolytic capacitors) from damage due to inverse polarity of the voltage after turning off the coil. Many hobbyists use low-cost rudimentary designs to experiment with coilguns, for example using photoflash capacitors from a disposable camera, or a capacitor from a standard cathode-ray tube television as the energy source, and a low inductance coil to propel the projectile forward. Non-ferromagnetic projectiles Some designs have non-ferromagnetic projectiles, of materials such as aluminium or copper, with the armature of the projectile acting as an electromagnet with internal current induced by pulses of the acceleration coils. A superconducting coilgun called a quench gun could be created by successively quenching a line of adjacent coaxial superconducting coils forming a gun barrel, generating a wave of magnetic field gradient traveling at any desired speed. A traveling superconducting coil might be made to ride this wave like a surfboard. The device would be a mass driver or linear synchronous motor with the propulsion energy stored directly in the drive coils. Another method would have non-superconducting acceleration coils and propulsion energy stored outside them but a projectile with superconducting magnets. Though the cost of power switching and other factors can limit projectile energy, a notable benefit of some coilgun designs over simpler railguns is avoiding an intrinsic velocity limit from hypervelocity physical contact and erosion. By having the projectile pulled towards or levitated within the center of the coils as it is accelerated, no physical friction with the walls of the bore occurs. If the bore is a total vacuum (such as a tube with a plasma window), there is no friction at all, which helps prolong the period of reusability. Switching One main obstacle in coilgun design is switching the power through the coils. There are several common solutions—the simplest (and probably least effective) is the spark gap, which releases the stored energy through the coil when the voltage reaches a certain threshold. A better option is to use solid-state switches; these include IGBTs or power MOSFETs (which can be switched off mid-pulse) and SCRs (which release all stored energy before turning off). A quick-and-dirty method for switching, especially for those using a flash camera for the main components, is to use the flash tube itself as a switch. By wiring it in series with the coil, it can silently and non-destructively (assuming that the energy in the capacitor is kept below the tube's safe operating limits) allow a large amount of current to pass through to the coil. Like any flash tube, ionizing the gas in the tube with a high voltage triggers it. However, a large amount of the energy will be dissipated as heat and light, and, because of the tube being a spark gap, the tube will stop conducting once the voltage across it drops sufficiently, leaving some charge remaining on the capacitor. Resistance The electrical resistance of the coils and the equivalent series resistance (ESR) of the current source dissipate considerable power. At low speeds the heating of the coils dominates the efficiency of the coilgun, giving exceptionally low efficiency. 
However, as speeds climb, mechanical power grows proportional to the square of the speed, but, correctly switched, the resistive losses are largely unaffected, and thus these resistive losses become much smaller in percentage terms. Magnetic circuit Ideally, 100% of the magnetic flux generated by the coil would be delivered to and act on the projectile; in reality this is impossible due to energy losses always present in a real system, which cannot be eliminated. With a simple air-cored solenoid, the majority of the magnetic flux is not coupled into the projectile because of the magnetic circuit's high reluctance. The uncoupled flux generates a magnetic field that stores energy in the surrounding air. The energy that is stored in this field does not simply disappear from the magnetic circuit once the capacitor finishes discharging, instead returning to the coilgun's electric circuit. Because the coilgun's electric circuit is inherently analogous to an LC oscillator, the unused energy returns in the reverse direction ('ringing'), which can seriously damage polarized capacitors such as electrolytic capacitors. Reverse charging can be prevented by a diode connected in reverse-parallel across the capacitor terminals; as a result, the current keeps flowing until the diode and the coil's resistance dissipate the field energy as heat. While this is a simple and frequently utilized solution, it requires an additional expensive high-power diode and a well-designed coil with enough thermal mass and heat dissipation capability in order to prevent component failure. Some designs attempt to recover the energy stored in the magnetic field by using a pair of diodes. These diodes, instead of being forced to dissipate the remaining energy, recharge the capacitors with the right polarity for the next discharge cycle. This will also avoid the need to fully recharge the capacitors, thus significantly reducing charge times. However, the practicality of this solution is limited by the resulting high recharge current through the equivalent series resistance (ESR) of the capacitors; the ESR will dissipate some of the recharge current, generating heat within the capacitors and potentially shortening their lifetime. To reduce component size, weight, durability requirements, and most importantly, cost, the magnetic circuit must be optimized to deliver more energy to the projectile for a given energy input. This has been addressed to some extent by the use of back iron and end iron, which are pieces of magnetic material that enclose the coil and create paths of lower reluctance in order to improve the amount of magnetic flux coupled into the projectile. Results can vary widely, depending on the materials used; hobbyist designs may use, for example, materials ranging anywhere from magnetic steel (more effective, lower reluctance) to video tape (little improvement in reluctance). Moreover, the additional pieces of magnetic material in the magnetic circuit can potentially exacerbate the possibility of flux saturation and other magnetic losses. Ferromagnetic projectile saturation Another significant limitation of the coilgun is the occurrence of magnetic saturation in the ferromagnetic projectile. When the flux in the projectile lies in the linear portion of its material's B(H) curve, the force applied to the core is proportional to the square of coil current (I)—the field (H) is linearly dependent on I, B is linearly dependent on H and force is linearly dependent on the product BI. 
This relationship continues until the core is saturated; once this happens B will only increase marginally with H (and thus with I), so force gain is linear. Since losses are proportional to I², increasing current beyond this point eventually decreases efficiency although it may increase the force. This puts an absolute limit on how much a given projectile can be accelerated with a single stage at acceptable efficiency. Projectile magnetization and reaction time Apart from saturation, the B(H) dependency often contains a hysteresis loop and the reaction time of the projectile material may be significant. The hysteresis means that the projectile becomes permanently magnetized and some energy will be lost as a permanent magnetic field of the projectile. The projectile reaction time, on the other hand, makes the projectile reluctant to respond to abrupt B changes; the flux will not rise as fast as desired while current is applied and a B tail will occur after the coil field has disappeared. This delay decreases the force, which would be maximized if the H and B were in phase. Induction coilguns Most of the work to develop coilguns as hyper-velocity launchers has used "air-cored" systems to get around the limitations associated with ferromagnetic projectiles. In these systems, the projectile is accelerated by a moving coil "armature". If the armature is configured as one or more "shorted turns" then induced currents will result as a consequence of the time variation of the current in the static launcher coil (or coils). In principle, coilguns can also be constructed in which the moving coils are fed with current via sliding contacts. However, the practical construction of such arrangements requires the provision of reliable high speed sliding contacts. Although feeding current to a multi-turn coil armature might not require currents as large as those required in a railgun, the elimination of the need for high speed sliding contacts is an obvious potential advantage of the induction coilgun relative to the railgun. Air cored systems also introduce the penalty that much higher currents may be needed than in an "iron cored" system. Ultimately though, subject to the provision of appropriately rated power supplies, air cored systems can operate with much greater magnetic field strengths than "iron cored" systems, so that much higher accelerations and forces should be possible. Formula for exit velocity of coilgun projectile An approximate result for the exit velocity of a projectile having been accelerated by a single-stage coilgun can be obtained from a simple energy-balance equation (see the sketch below), where: m is the mass of the projectile, in kg; V is the volume of the projectile, in m³; μ0 is the vacuum permeability, defined in SI units as 4π × 10⁻⁷ V·s/(A·m); χm is the magnetic susceptibility of the projectile, a dimensionless proportionality constant indicating the degree of magnetization in a material in response to applied magnetic fields (this often must be determined experimentally, and tables containing susceptibility values for certain materials may be found in the CRC Handbook of Chemistry and Physics as well as the Wikipedia article for magnetic susceptibility); n is the number of coil turns per unit length of the coil, in m⁻¹; and I is the current passing through the coil, in A. While this approximation is useful for quickly defining the upper limit of velocity in a coilgun system, more accurate and non-linear second order differential equations do exist. 
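One plausible form of this approximation, reconstructed here by equating the magnetic potential energy of a fully inserted, linearly magnetizable projectile with its kinetic energy, is

v \approx \sqrt{\frac{\mu_0\,\chi_m\,n^2 I^2\,V}{m}}

where the symbols are as listed above. The exact numerical prefactor depends on the modelling assumptions, so this should be read as a dimensional sketch rather than the article's own expression. Evaluated for an arbitrary set of illustrative values (not taken from the article):

import math

mu0 = 4 * math.pi * 1e-7                 # vacuum permeability, V*s/(A*m)
chi_m, n, current = 5.0, 1000.0, 100.0   # susceptibility, turns per metre, amperes (illustrative)
volume, mass = 1e-6, 0.008               # projectile volume in m^3 and mass in kg (illustrative)

v = math.sqrt(mu0 * chi_m * n**2 * current**2 * volume / mass)
print(f"upper-bound exit velocity: {v:.1f} m/s")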
The issues with this formula are that it assumes the projectile lies completely within a uniform magnetic field, that the current dies out instantly once the projectile reaches the center of the coil (eliminating the possibility of coil suck-back), that all potential energy is transferred into kinetic energy (whereas most would go into frictional forces), and that the wires of the coil are infinitely thin and do not stack on one another; all of these assumptions inflate the expected exit velocity. Uses Small coilguns are recreationally made by hobbyists, typically with projectile energies of several joules up to tens of joules (the latter comparable to a typical air gun and an order of magnitude less than a firearm) while ranging from under one percent to several percent efficiency. In 2018, the Los Angeles-based company Arcflash Labs offered the first coilgun for sale to the general public, the EMG-01A. It fired 6-gram steel slugs at 45 m/s with a muzzle energy of approximately 5 joules. In 2021, they developed a larger model, the GR-1 Gauss rifle, which fired 30-gram steel slugs at up to 75 m/s with a muzzle energy of approximately 85 joules, comparable to a PCP air rifle. In 2022 Northshore Sports Club, an American gun club in Lake Forest, Illinois, began distributing the CS/LW21, also referred to as the "E-Shotgun", a compact, 15 joule magazine fed coil gun, manufactured by the China North Industries Group Corp. They project distribution to reach 5000 units per year in the US, and the manufacturer has also unveiled plans to supply the Chinese police and military with units for "non-lethal riot control". Much higher efficiency and energy can be obtained with designs of greater expense and sophistication. In 1978, Bondaletov in the USSR achieved record acceleration with a single stage by sending a 2-gram ring to 5000 m/s in 1 cm of length, but the most efficient modern designs tend to involve many stages. It is estimated that greater than 90% efficiency will be required for vastly larger superconducting systems for space launch. An experimental 45-stage, 2.1 m long DARPA coilgun mortar design is 22% efficient, with 1.6 megajoules kinetic energy delivered to a round. Though they face the challenge of competitiveness versus conventional guns (and sometimes railgun alternatives), coilguns are being researched for weaponry. The DARPA Electromagnetic Mortar program is one example, if practical challenges like sufficiently low weight can be overcome. The coilgun would be relatively silent with no smoke giving away its position, though a supersonic projectile would still create a sonic boom. Adjustable, smooth acceleration of the projectile along the barrel length would allow higher velocity, with a predicted range increase of 30% for a 120mm EM mortar over the conventional version of similar length. With no separate propellant charges to load, the researchers envision the firing rate approximately doubling. In 2006, a 120mm prototype was under construction for evaluation, though a tentative time for deployment was then estimated to be 5 to 10+ years by Sandia National Laboratories. In 2011, development was proposed for an 81mm coilgun mortar to operate with a hybrid-electric version of the future Joint Light Tactical Vehicle. Electromagnetic aircraft catapults are planned, including on board future U.S. Gerald R. Ford class aircraft carriers. An experimental induction coilgun version of an Electromagnetic Missile Launcher (EMML) has been tested for launching Tomahawk missiles. 
A coilgun-based active defense system for tanks is under development at HIT in China. Coilgun potential has been perceived as extending beyond military applications. Few entities could overcome the challenges and corresponding capital investment to fund gigantic coilguns with projectile mass and velocity on the scale of gigajoules of kinetic energy (as opposed to megajoules or less). Such systems have been proposed as Earth or Moon launchers: An ambitious lunar-base proposal considered in a 1975 NASA study would have involved a 4000-ton coilgun sending 10 million tons of lunar material over several years to L5 in support of massive space colonization, utilizing a large 9900-ton power plant. A 1992 NASA study calculated that a 330-ton lunar quench gun (superconducting coilgun) could launch 4400 projectiles annually, each 1.5 tons and mostly liquid oxygen payload, using a relatively small amount of power, 350 kW average. After a NASA Ames study of aerothermal requirements for heat shields with terrestrial surface launch, Sandia National Laboratories investigated electromagnetic launchers for spacecraft and researched other EML applications using both railguns and coilguns. In 1990 a kilometer-long coilgun was proposed for launch of small satellites. Later investigations at Sandia included a 2005 proposal of the StarTram concept for an extremely long coilgun, one version conceived as launching passengers to orbit with survivable acceleration. A mass driver is essentially a coilgun that magnetically accelerates a package consisting of a magnetizable container with a payload. Once accelerated, the container and payload separate; the container is slowed and recycled to receive another payload. See also Electromagnetic propulsion Carl Friedrich Gauss Helical railgun Light-gas gun Mass driver Edwin Fitch Northrup Plasma railgun Railgun Ram accelerator Solenoid Tubular linear motor References External links Prototype of coil gun. CG42- Full auto coil gun Artillery by type Electric motors Magnetic propulsion devices Non-rocket spacelaunch Norwegian inventions
Coilgun
[ "Technology", "Engineering" ]
4,022
[ "Electrical engineering", "Engines", "Electric motors" ]
796,893
https://en.wikipedia.org/wiki/Haag%27s%20theorem
While working on the mathematical physics of an interacting, relativistic, quantum field theory, Rudolf Haag developed an argument against the existence of the interaction picture, a result now commonly known as Haag’s theorem. Haag’s original proof relied on the specific form of then-common field theories, but the result was subsequently generalized by a number of authors, notably Dick Hall and Arthur Wightman, who concluded that no single, universal Hilbert space representation can describe both free and interacting fields. A generalization due to Michael C. Reed and Barry Simon shows that the theorem applies to free neutral scalar fields of different masses, which implies that the interaction picture is always inconsistent, even in the case of a free field. Introduction Traditionally, describing a quantum field theory requires describing a set of operators satisfying the canonical (anti)commutation relations, and a Hilbert space on which those operators act. Equivalently, one should give a representation of the free algebra on those operators, modulo the canonical commutation relations (the CCR/CAR algebra); in the latter perspective, the underlying algebra of operators is the same, but different field theories correspond to different (i.e., unitarily inequivalent) representations. Philosophically, the action of the CCR algebra should be irreducible, for otherwise the theory can be written as the combined effects of two separate fields. That principle implies the existence of a cyclic vacuum state. Importantly, a vacuum uniquely determines the algebra representation, because it is cyclic. Two different specifications of the vacuum are common: the minimum-energy eigenvector of the field Hamiltonian, or the state annihilated by the number operator. When these specifications describe different vectors, the vacuum is said to polarize, after the physical interpretation in the case of quantum electrodynamics. Haag's result shows that the same quantum field theory must treat the vacuum very differently in the interacting case than in the free case. Formal description In its modern form, the Haag theorem has two parts: If a quantum field is free and Euclidean-invariant in the spatial dimensions, then that field's vacuum does not polarize. If two Poincaré-invariant quantum fields share the same vacuum, then their first four Wightman functions coincide. Moreover, if one such field is free, then the other must also be a free field of the same mass. This state of affairs is in stark contrast to ordinary non-relativistic quantum mechanics, where there is always a unitary equivalence between the free and interacting representations. That fact is used in constructing the interaction picture, where operators are evolved using a free field representation, while states evolve using the interacting field representation. Within the formalism of quantum field theory (QFT) such a picture generally does not exist, because these two representations are unitarily inequivalent. Thus the quantum field theorist is confronted with the so-called choice problem: One must choose the ‘right’ representation among an uncountably-infinite set of representations which are not equivalent. Physical / heuristic point of view As was already noticed by Haag in his original work, vacuum polarization lies at the core of Haag’s theorem. Any interacting quantum field (or non-interacting fields of different masses) polarizes the vacuum, and as a consequence the vacuum state lies inside a renormalized Hilbert space that differs from the Hilbert space of the free field. 
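To make the role of unitary inequivalence concrete, a standard textbook-style sketch (not a formula taken from the article) goes as follows. A scalar field φ and its conjugate momentum π are required to satisfy the equal-time canonical commutation relations

[\varphi(\mathbf{x}),\, \pi(\mathbf{y})] = i\,\delta^{(3)}(\mathbf{x}-\mathbf{y}), \qquad [\varphi(\mathbf{x}),\, \varphi(\mathbf{y})] = [\pi(\mathbf{x}),\, \pi(\mathbf{y})] = 0 .

For a system with finitely many degrees of freedom, the Stone–von Neumann theorem guarantees that all regular representations of such relations are unitarily equivalent; for a field, which has infinitely many degrees of freedom, that uniqueness fails. The representations built on the free vacuum and on an interacting (polarized) vacuum therefore need not be related by any unitary map, and it is exactly such a map that the interaction-picture construction presupposes.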
Although an isomorphism could always be found that maps one Hilbert space into the other, Haag’s theorem implies that no such mapping could deliver unitarily equivalent representations of the corresponding canonical commutation relations, i.e. unambiguous physical results. Work-arounds Among the assumptions that lead to Haag’s theorem is translation invariance of the system. Consequently, systems that can be set up inside a box with periodic boundary conditions or that interact with suitable external potentials escape the conclusions of the theorem. Haag (1958) and David Ruelle (1962) have presented the Haag–Ruelle scattering theory, which deals with asymptotic free states and thereby serves to formalize some of the assumptions needed for the LSZ reduction formula. These techniques, however, cannot be applied to massless particles and have unsolved issues with bound states. Quantum field theorists’ conflicting reactions While some physicists and philosophers of physics have repeatedly emphasized how seriously Haag’s theorem undermines the foundations of QFT, the majority of practicing quantum field theorists simply dismiss the issue. Most quantum field theory texts geared to practical appreciation of the Standard Model of elementary particle interactions do not even mention it, implicitly assuming that some rigorous set of definitions and procedures may be found to firm up the powerful and well-confirmed heuristic results they report on. For example, asymptotic structure (cf. QCD jets) is a specific calculation in strong agreement with experiment, but nevertheless should fail by dint of Haag’s theorem. The general feeling is that this is not some calculation that was merely stumbled upon, but rather that it embodies a physical truth. The practical calculations and tools are motivated and justified by an appeal to a grand mathematical formalism called QFT. Haag’s theorem suggests that the formalism is not well-founded, yet the practical calculations are sufficiently distant from the abstract formalism that any weaknesses there do not affect (or invalidate) practical results. As was pointed out by Teller (1997): Everyone must agree that as a piece of mathematics Haag’s theorem is a valid result that at least appears to call into question the mathematical foundation of interacting quantum field theory, and agree that at the same time the theory has proved astonishingly successful in application to experimental results. Tracy Lupher (2005) suggested that the wide range of conflicting reactions to Haag’s theorem may partly be caused by the fact that the theorem exists in different formulations, which in turn were proved within different formulations of QFT such as Wightman’s axiomatic approach or the LSZ formula. According to Lupher, The few who mention it tend to regard it as something important that someone (else) should investigate thoroughly. Lawrence Sklar (2000) further pointed out: There may be a presence within a theory of conceptual problems that appear to be the result of mathematical artifacts. These seem to the theoretician to be not fundamental problems rooted in some deep physical mistake in the theory, but, rather, the consequence of some misfortune in the way in which the theory has been expressed. Haag’s theorem is, perhaps, a difficulty of this kind. David Wallace (2011) has compared the merits of conventional QFT with those of algebraic quantum field theory (AQFT) and observed that... 
algebraic quantum field theory has unitarily inequivalent representations even on spatially finite regions, but this lack of unitary equivalence only manifests itself with respect to expectation values on arbitrary small spacetime regions, and these are exactly those expectation values which don’t convey real information about the world. He justifies the latter claim with the insights gained from modern renormalization group theory, namely the fact that... we can absorb all our ignorance of how the cutoff [i.e., the short-range cutoff required to carry out the renormalization procedure] is implemented, into the values of finitely many coefficients which can be measured empirically. Concerning the consequences of Haag’s theorem, Wallace’s observation implies that since QFT does not attempt to predict fundamental parameters, such as particle masses or coupling constants, potentially harmful effects arising from unitarily non-equivalent representations remain absorbed inside the empirical values that stem from measurements of these parameters (at a given length scale) and that are readily imported into QFT. Thus they remain invisible to quantum field theorists, in practice. References Further reading Axiomatic quantum field theory Theorems in quantum mechanics No-go theorems
Haag's theorem
[ "Physics", "Mathematics" ]
1,625
[ "Theorems in quantum mechanics", "No-go theorems", "Equations of physics", "Quantum mechanics", "Theorems in mathematical physics", "Physics theorems" ]
796,928
https://en.wikipedia.org/wiki/Consistent%20histories
In quantum mechanics, the consistent histories or simply "consistent quantum theory" interpretation generalizes the complementarity aspect of the conventional Copenhagen interpretation. The approach is sometimes called decoherent histories, and in other work decoherent histories are more specialized. First proposed by Robert Griffiths in 1984, this interpretation of quantum mechanics is based on a consistency criterion that then allows probabilities to be assigned to various alternative histories of a system such that the probabilities for each history obey the rules of classical probability while being consistent with the Schrödinger equation. In contrast to some interpretations of quantum mechanics, the framework does not include "wavefunction collapse" as a relevant description of any physical process, and emphasizes that measurement theory is not a fundamental ingredient of quantum mechanics. Consistent histories allows predictions related to the state of the universe needed for quantum cosmology. Key assumptions The interpretation rests on three assumptions: states in Hilbert space describe physical objects, quantum predictions are not deterministic, and physical systems have no single unique description. The third assumption generalizes complementarity and this assumption separates consistent histories from other quantum theory interpretations. Formalism Histories A homogeneous history H_i (here the index i labels different histories) is a sequence of propositions P_{i,1}, P_{i,2}, ..., P_{i,n} specified at different moments of time t_1 < t_2 < ... < t_n (here the index j labels the times). We write this as: H_i = (P_{i,1}, P_{i,2}, ..., P_{i,n}) and read it as "the proposition P_{i,1} is true at time t_1 and then the proposition P_{i,2} is true at time t_2 and then ...". The times t_1, ..., t_n are strictly ordered and called the temporal support of the history. Inhomogeneous histories are multiple-time propositions which cannot be represented by a homogeneous history. An example is the logical OR of two homogeneous histories: H_i OR H_j. These propositions can correspond to any set of questions that include all possibilities. Examples might be the three propositions meaning "the electron went through the left slit", "the electron went through the right slit" and "the electron didn't go through either slit". One of the aims of the approach is to show that classical questions such as, "where are my keys?" are consistent. In this case one might use a large number of propositions each one specifying the location of the keys in some small region of space. Each single-time proposition P_{i,j} can be represented by a projection operator P̂_{i,j}(t_j) acting on the system's Hilbert space (we use "hats" to denote operators). It is then useful to represent homogeneous histories by the time-ordered product of their single-time projection operators. This is the history projection operator (HPO) formalism developed by Christopher Isham and naturally encodes the logical structure of the history propositions. Consistency An important construction in the consistent histories approach is the class operator for a homogeneous history: Ĉ_{H_i} = T ∏_j P̂_{i,j}(t_j) = P̂_{i,n}(t_n) ⋯ P̂_{i,2}(t_2) P̂_{i,1}(t_1). The symbol T indicates that the factors in the product are ordered chronologically according to their values of t_j: the "past" operators with smaller values of t appear on the right side, and the "future" operators with greater values of t appear on the left side. This definition can be extended to inhomogeneous histories as well. Central to the consistent histories is the notion of consistency. A set of histories is consistent (or strongly consistent) if Tr(Ĉ_{H_i} ρ Ĉ†_{H_j}) = 0 for all i ≠ j. Here ρ represents the initial density matrix, and the operators are expressed in the Heisenberg picture. 
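As a minimal worked illustration (my own example in the same notation, not taken from the article), take the double-slit propositions mentioned above and form two two-time histories, "went through the left slit and was later detected at point D" and "went through the right slit and was later detected at point D":

H_L = \bigl(P_L(t_1),\, P_D(t_2)\bigr), \qquad H_R = \bigl(P_R(t_1),\, P_D(t_2)\bigr),
\hat{C}_{H_L} = \hat{P}_D(t_2)\,\hat{P}_L(t_1), \qquad \hat{C}_{H_R} = \hat{P}_D(t_2)\,\hat{P}_R(t_1),
\operatorname{Tr}\!\bigl(\hat{C}_{H_L}\,\rho\,\hat{C}_{H_R}^{\dagger}\bigr) \neq 0 \quad \text{in general.}

When interference is present the off-diagonal term does not vanish, so this family of histories is not consistent and no classical probability can be assigned to "which slit" together with the detection point; with a which-path measurement (or decoherence) included, the term vanishes and the assignment becomes legitimate.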
The set of histories is weakly consistent if Re Tr(Ĉ_{H_i} ρ Ĉ†_{H_j}) = 0 for all i ≠ j. Probabilities If a set of histories is consistent then probabilities can be assigned to them in a consistent way. We postulate that the probability of history H_i is simply Pr(H_i) = Tr(Ĉ_{H_i} ρ Ĉ†_{H_i}), which obeys the axioms of probability if the histories come from the same (strongly) consistent set. As an example, this means the probability of "H_i OR H_j" equals the probability of "H_i" plus the probability of "H_j" minus the probability of "H_i AND H_j", and so forth. Interpretation The interpretation based on consistent histories is used in combination with the insights about quantum decoherence. Quantum decoherence implies that irreversible macroscopic phenomena (hence, all classical measurements) render histories automatically consistent, which allows one to recover classical reasoning and "common sense" when applied to the outcomes of these measurements. More precise analysis of decoherence allows (in principle) a quantitative calculation of the boundary between the classical domain and the quantum domain. According to Roland Omnès, In order to obtain a complete theory, the formal rules above must be supplemented with a particular Hilbert space and rules that govern dynamics, for example a Hamiltonian. In the opinion of others this still does not make a complete theory as no predictions are possible about which set of consistent histories will actually occur. In other words, the rules of consistent histories, the Hilbert space, and the Hamiltonian must be supplemented by a set selection rule. However, Robert B. Griffiths holds the opinion that asking the question of which set of histories will "actually occur" is a misinterpretation of the theory; histories are a tool for description of reality, not separate alternate realities. Proponents of this consistent histories interpretation—such as Murray Gell-Mann, James Hartle, Roland Omnès and Robert B. Griffiths—argue that their interpretation clarifies the fundamental disadvantages of the old Copenhagen interpretation, and can be used as a complete interpretational framework for quantum mechanics. In Quantum Philosophy, Roland Omnès provides a less mathematical way of understanding this same formalism. The consistent histories approach can be interpreted as a way of understanding which sets of classical questions can be consistently asked of a single quantum system, and which sets of questions are fundamentally inconsistent, and thus meaningless when asked together. It thus becomes possible to demonstrate formally why it is that the questions which Einstein, Podolsky and Rosen assumed could be asked together, of a single quantum system, simply cannot be asked together. On the other hand, it also becomes possible to demonstrate that classical, logical reasoning often does apply, even to quantum experiments – but we can now be mathematically exact about the limits of classical logic. See also HPO formalism References External links The Consistent Histories Approach to Quantum Mechanics – Stanford Encyclopedia of Philosophy Interpretations of quantum mechanics Quantum measurement
Consistent histories
[ "Physics" ]
1,214
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
797,238
https://en.wikipedia.org/wiki/Clifford%E2%80%93Klein%20form
In mathematics, a Clifford–Klein form is a double coset space Γ\G/H, where G is a reductive Lie group, H a closed subgroup of G, and Γ a discrete subgroup of G that acts properly discontinuously on the homogeneous space G/H. A suitable discrete subgroup Γ may or may not exist, for a given G and H. If Γ exists, there is the question of whether Γ\G/H can be taken to be a compact space, called a compact Clifford–Klein form. When H is itself compact, classical results show that a compact Clifford–Klein form exists. Otherwise it may not, and there are a number of negative results. History According to Moritz Epple, the Clifford-Klein forms began when W. K. Clifford used quaternions to twist their space. "Every twist possessed a space-filling family of invariant lines", the Clifford parallels. They formed "a particular structure embedded in elliptic 3-space", the Clifford surface, which demonstrated that "the same local geometry may be tied to spaces that are globally different." Wilhelm Killing thought that for free mobility of rigid bodies there are four spaces: Euclidean, hyperbolic, elliptic and spherical. They are spaces of constant curvature, but constant curvature differs from free mobility: constant curvature is a local property, while free mobility is both local and global. Killing's contribution to Clifford-Klein space forms involved formulation in terms of groups, finding new classes of examples, and consideration of the scientific relevance of spaces of constant curvature. He took up the task of developing physical theories of CK space forms. Karl Schwarzschild wrote “The admissible measure of the curvature of space”, and noted in an appendix that physical space may actually be a non-standard space of constant curvature. See also Killing-Hopf theorem Space form References Moritz Epple (2003) From Quaternions to Cosmology: Spaces of Constant Curvature ca. 1873 — 1925, invited address to International Congress of Mathematicians Lie groups Homogeneous spaces
Clifford–Klein form
[ "Physics", "Mathematics" ]
399
[ "Lie groups", "Mathematical structures", "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Algebraic structures", "Geometry", "Symmetry" ]
798,007
https://en.wikipedia.org/wiki/Ribitol
Ribitol, or adonitol, is a crystalline pentose alcohol (C5H12O5) formed by the reduction of ribose. It occurs naturally in the plant Adonis vernalis as well as in the cell walls of some Gram-positive bacteria, in the form of ribitol phosphate, in teichoic acids. It also forms part of the chemical structure of riboflavin and flavin mononucleotide (FMN), which is a nucleotide coenzyme used by many enzymes, the so-called flavoproteins. References External links GMD MS Spectrum Safety MSDS data Biological Magnetic Resonance Data Bank Sugar alcohols Orphan drugs
Ribitol
[ "Chemistry" ]
146
[ "Carbohydrates", "Sugar alcohols" ]
798,278
https://en.wikipedia.org/wiki/Leiden%20Observatory
Leiden Observatory is an astronomical institute of Leiden University, in the Netherlands. Established in 1633 to house the quadrant of Willebrord Snellius, it is the oldest operating university observatory in the world, the only older observatory still in existence being the Vatican Observatory. The observatory was initially located on the university building in the centre of Leiden before a new observatory building and dome were constructed in the university's botanical garden in 1860. It remained there until 1974 when the department moved to the science campus north-west of the city. Notable astronomers that have worked at or directed the observatory include Willem de Sitter, Ejnar Hertzsprung and Jan Oort. History 1633–1860 Leiden University established the observatory in 1633; astronomy had been on the curriculum for a long time, and due to possession of a large quadrant built by Rudolph Snellius, Jacobus Golius requested an observatory in which to use it. The observatory was one of the first purpose-built observatories in Europe. Though Golius used the observatory regularly, no publications resulted from his use of it. It is not known whether Golius had any instrumentation other than Snellius' quadrant at the observatory. In 1682 Burchardus de Volder became professor of mathematics at the university and thus took over responsibility for the observatory. During his tenure, the observatory was enlarged, including a second turret to house a brass sextant which he purchased, and the rebuilding of the old turret. Both turrets had rotating roofs. Upon retiring in 1705, de Volder handed over a catalogue of instruments which showed that the observatory owned two other quadrants, a 12-inch telescope, two objectives, and several smaller telescopes. For the next two years, Lotharius Zumbach de Coesfeld ran the observatory until his appointment as professor of mathematics in Kassel in 1708. Between then and 1717 the observatory went without a director until Willem 's Gravesande was appointed director. During his time at the observatory, Gravesande purchased a number of new instruments including new telescopes and tools, before his death in 1742. Gravesande's successor was Johan Lulofs, who used the observatory to observe Halley's comet in 1759 and solar transits of Mercury (in 1743 and 1753) and Venus (in 1761). In November 1768, when Lulofs died, Dionysius van de Wijnpersse took over responsibility for the observatory until Pieter Nieuwland became its director in 1793 for a year until he died in 1794. For a number of years the curators attempted to find a suitable astronomer to look after the observatory, eventually employing Jan Frederik van Beeck Calkoen in 1799, who left in 1805. In 1817 the observatory towers were pulled down and rebuilt. Frederik Kaiser was appointed lecturer of astronomy and director of the observatory in 1837, and again renovated the observatory, providing the towers with rotatable roofs with full shutters, and reinforcing the north-western tower. Kaiser also acquired a number of new instruments and telescopes with which he made observations, including those of comets, planets, and binary stars. As a result of the increased interest in astronomy brought about by Kaiser's popular writings and teachings, a commission was founded in 1853 to fund a new observatory. From 1859 to 1909 the Netherlands civil time was set according to the local civil time at the observatory, communicated using the telegraphic network. 1860–1974 By 1860 the new observatory building was completed. 
The new building was constructed in a quiet side of the city inside the university's botanical gardens. It consisted of a number of offices, living quarters for astronomers, and a number of observing domes containing telescopes. In 1873 two new rooms were added to the building in order to house the tools required to verify nautical instruments; tools used to test compasses, sextants and other instruments. Two of the domes were rebuilt, one in 1875 and the other in 1889. More new buildings were constructed before the end of the 19th century including the Western tower in 1878, one to the East in 1898, and another small building to house a gas engine in the same year (used for electricity until the observatory was connected to the city grid). In 1896 the observatory purchased their first photographic telescope, with a dome being built to house it between then and 1898. In 1923 the observatory formed a research agreement with Union Observatory to allow researchers the use of both facilities. The first visitor from Leiden was Ejnar Hertzsprung. In 1954 the telescopes were moved to Hartbeespoort. The collaboration lasted until 1972. The old Observatory building of this period was restored from 2008 to 2012, and since the 2010s has housed a visitor center and offered tours. 1974–present The astronomy department moved to the science campus north-west of the city centre in 1974. Although professional astronomical observations are no longer carried out from Leiden itself, the department still calls itself Leiden Observatory. In much of astronomy, the data come from elsewhere and can be analyzed and studied on the campus; for example in modern times the instruments may even be located in space, with data transmitted back to Earth and then studied on a computer display. (An example of this was the Astronomical Netherlands Satellite, launched in 1974.) The archive of the Leiden Observatory is available at Leiden University Library and digitally accessible through Digital Collections Einstein's Chair in the Ten-inch dome Einstein's Chair is an astronomical observing chair at the Leiden Observatory. This chair, made in 1861, is the only piece of furniture in the observatory that dates from that time. The chair gets its name from the fact that it was used by Albert Einstein on several occasions during his visits to the observatory. Einstein was a frequent visitor to the building during his professorship at Leiden University, due to his good friendship with the director, Willem de Sitter. The chair can be found in the largest dome of the observatory, the so-called 10-inch dome (named after the 10-inch telescope that is placed inside). The chair is still used by observers and is a popular attraction at the observatory. On 21 October 2015, Einstein's Chair was featured in a short segment on the Dutch astronomy program Heel Nederland Kijkt Sterren. During this segment the science populariser Govert Schilling and the science historian David Baneke talked about its origins. Einstein is noted for his visits to Leiden Observatory during World War I. Restoration The old Observatory building facilities (from the 1860s) were restored in the 2010s. While no longer the academic base of the modern Leiden Observatory, the building retains the historical astronomical items of the facility. Also, a solar telescope was crowd-funded to provide live optically transmitted images of the Sun to the Visitor center, which is also known to have offered tours. 
Archive The archives of the Leiden Observatory and its successive directors, 1829-1992 are held at Leiden University Libraries and are digitally available. Gallery Directors Instruments Examples: Zes-Duims Merz Refractor (ch objective with wooden telescope tube dating to 1830s) Ten-inch Repsold Refractor ( ch Alvin and Clark objective lens on Repsold und Söhne, since 1885) Photographic Double Refractor (since 1897) Zunderman Reflector (46 cm diameter mirror (~18.1 inches), since 1947) Heliostat See also Timeline of telescopes, observatories, and observing technology List of largest optical telescopes in the 19th century References Further reading Telescopes from Leiden Observatory and other collections. 1656 - 1859 (.pdf) From attics to domes: Four centuries of history of Leiden Observatory External links Leiden Observatory web site History of Leiden Observatory (in Dutch) Leiden Observatory Papers Archives of the Leiden Observatory and its successive directors, 1829-1992 Leiden University Libraries The old observatory on GoogleMaps The current observatory on GoogleMaps Astronomical observatories in the Netherlands Leiden University Astronomy institutes and departments Buildings and structures in Leiden 1633 establishments in the Dutch Republic Science and technology in the Dutch Republic Astronomy in the Dutch Republic
Leiden Observatory
[ "Astronomy" ]
1,624
[ "Astronomy organizations", "Astronomy institutes and departments" ]
798,527
https://en.wikipedia.org/wiki/Peak%20ground%20acceleration
Peak ground acceleration (PGA) is equal to the maximum ground acceleration that occurred during earthquake shaking at a location. PGA is equal to the amplitude of the largest absolute acceleration recorded on an accelerogram at a site during a particular earthquake. Earthquake shaking generally occurs in all three directions. Therefore, PGA is often split into the horizontal and vertical components. Horizontal PGAs are generally larger than those in the vertical direction, but this is not always true, especially close to large earthquakes. PGA is an important parameter (also known as an intensity measure) for earthquake engineering; the design basis earthquake ground motion (DBEGM) is often defined in terms of PGA. Unlike the Richter and moment magnitude scales, it is not a measure of the total energy (magnitude, or size) of an earthquake, but rather of how much the earth shakes at a given geographic point. The Mercalli intensity scale uses personal reports and observations to measure earthquake intensity, but PGA is measured by instruments, such as accelerographs. It can be correlated to macroseismic intensities on the Mercalli scale, but these correlations are associated with large uncertainty. The peak horizontal acceleration (PHA) is the most commonly used type of ground acceleration in engineering applications. It is often used within earthquake engineering (including seismic building codes) and it is commonly plotted on seismic hazard maps. In an earthquake, damage to buildings and infrastructure is related more closely to ground motion, of which PGA is a measure, than to the magnitude of the earthquake itself. For moderate earthquakes, PGA is a reasonably good determinant of damage; in severe earthquakes, damage is more often correlated with peak ground velocity. Geophysics Earthquake energy is dispersed in waves from the hypocentre, causing ground movement omnidirectionally but typically modelled horizontally (in two directions) and vertically. PGA records the acceleration (rate of change of speed) of these movements, while peak ground velocity is the greatest speed (rate of movement) reached by the ground, and peak displacement is the distance moved. These values vary in different earthquakes, and at differing sites within one earthquake event, depending on a number of factors. These include the length of the fault, the magnitude, the depth of the quake, the distance from the epicentre, the duration (length of the shake cycle), and the geology of the ground (subsurface). Shallow-focused earthquakes generate stronger shaking (acceleration) than intermediate and deep quakes, since the energy is released closer to the surface. Peak ground acceleration can be expressed in fractions of g (the standard acceleration due to Earth's gravity, equivalent to g-force) as either a decimal or percentage; in m/s² (1 g = 9.81 m/s²); or in multiples of Gal, where 1 Gal is equal to 0.01 m/s² (1 g = 981 Gal). The ground type can significantly influence ground acceleration, so PGA values can display extreme variability over distances of a few kilometres, particularly with moderate to large earthquakes. The varying PGA results from an earthquake can be displayed on a shake map. Due to the complex conditions affecting PGA, earthquakes of similar magnitude can offer disparate results, with many moderate-magnitude earthquakes generating significantly larger PGA values than larger-magnitude quakes. 
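As a quick numerical illustration of the unit relationships just described, here is a minimal Python sketch (the function name and example value are purely illustrative) that converts a PGA value between fractions of g, m/s², and Gal:

```python
# Minimal sketch: converting a peak ground acceleration value between the
# units mentioned above (fractions of g, m/s^2, and Gal). The constants
# follow the definitions in the text: 1 g = 9.81 m/s^2 = 981 Gal.

G_MS2 = 9.81      # standard gravity in m/s^2
GAL_MS2 = 0.01    # 1 Gal = 0.01 m/s^2

def pga_in_all_units(pga_g: float) -> dict:
    """Express a PGA given as a fraction of g in m/s^2 and Gal."""
    ms2 = pga_g * G_MS2
    gal = ms2 / GAL_MS2
    return {"g": pga_g, "m/s^2": ms2, "Gal": gal}

# A PGA of 0.5 g corresponds to roughly 4.9 m/s^2, or about 490 Gal.
print(pga_in_all_units(0.5))
```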
During an earthquake, ground acceleration is measured in three directions: vertically (V or UD, for up-down) and in two perpendicular horizontal directions (H1 and H2), often north–south (NS) and east–west (EW). The peak acceleration in each of these directions is recorded, with the highest individual value often reported. Alternatively, a combined value for a given station can be noted. The peak horizontal ground acceleration (PHA or PHGA) can be obtained by selecting the higher individual recording, taking the mean of the two values, or calculating a vector sum of the two components. A three-component value can also be obtained by taking the vertical component into consideration as well. In seismic engineering, the effective peak acceleration (EPA, the maximum ground acceleration to which a building responds) is often used, which tends to be ⅔–¾ of the PGA. Seismic risk and engineering Study of geographic areas combined with an assessment of historical earthquakes allows geologists to determine seismic risk and to create seismic hazard maps, which show the likely PGA values to be experienced in a region during an earthquake, with a probability of exceedance (PE). Seismic engineers and government planning departments use these values to determine the appropriate earthquake loading for buildings in each zone, with key identified structures (such as hospitals, bridges, and power plants) needing to survive the maximum considered earthquake (MCE). Damage to buildings is related to both peak ground velocity (PGV) and the duration of the earthquake – the longer high-level shaking persists, the greater the likelihood of damage. Comparison of instrumental and felt intensity Peak ground acceleration provides a measurement of instrumental intensity, that is, ground shaking recorded by seismic instruments. Other intensity scales measure felt intensity, based on eyewitness reports, felt shaking, and observed damage. There is correlation between these scales, but not always absolute agreement, since experiences and damage can be affected by many other factors, including the quality of earthquake engineering. Generally speaking: 0.001 g (0.01 m/s²) is perceptible by people; 0.02 g (0.2 m/s²) causes people to lose their balance; and 0.50 g (5 m/s²) is very high – well-designed buildings can survive if the duration is short. Correlation with the Mercalli scale The United States Geological Survey developed an Instrumental Intensity scale, which maps peak ground acceleration and peak ground velocity on an intensity scale similar to the felt Mercalli scale. These values are used to create shake maps by seismologists around the world. Other intensity scales In the 7-class Japan Meteorological Agency seismic intensity scale, the highest intensity, Shindo 7, covers accelerations greater than 4 m/s² (0.41 g). PGA hazard risks worldwide In India, areas with expected PGA values higher than 0.36 g are classed as "Zone 5", or "Very High Damage Risk Zone". Notable earthquakes See also Earthquake simulation Japan Meteorological Agency seismic intensity scale Spectral acceleration References Bibliography Seismology Earthquake engineering Acceleration
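To make the component-combination options described at the start of this section concrete, the following sketch (with made-up accelerogram samples, not real data) computes a peak horizontal acceleration three ways – higher individual peak, mean of the two peaks, and peak of the vector sum; the vector-sum reading here (peak of the instantaneous resultant) is one reasonable interpretation of that option:

```python
import numpy as np

# Illustrative sketch: reducing two horizontal component records (H1, H2) to a
# single peak horizontal acceleration, in the three ways described above.
# The arrays below are made-up accelerogram samples in units of g.

h1 = np.array([0.01, -0.12, 0.30, -0.25, 0.08])   # e.g. north-south component
h2 = np.array([0.02,  0.18, -0.22, 0.27, -0.05])  # e.g. east-west component

peak_h1 = np.max(np.abs(h1))
peak_h2 = np.max(np.abs(h2))

pha_max_component = max(peak_h1, peak_h2)          # higher individual recording
pha_mean = (peak_h1 + peak_h2) / 2                 # mean of the two peaks
pha_vector = np.max(np.hypot(h1, h2))              # peak of the vector sum

print(pha_max_component, pha_mean, pha_vector)
```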
Peak ground acceleration
[ "Physics", "Mathematics", "Engineering" ]
1,284
[ "Structural engineering", "Physical quantities", "Acceleration", "Quantity", "Civil engineering", "Earthquake engineering", "Wikipedia categories named after physical quantities" ]
799,521
https://en.wikipedia.org/wiki/Voltage%20multiplier
A voltage multiplier is an electrical circuit that converts AC electrical power from a lower voltage to a higher DC voltage, typically using a network of capacitors and diodes. Voltage multipliers can be used to generate a few volts for electronic appliances, to millions of volts for purposes such as high-energy physics experiments and lightning safety testing. The most common type of voltage multiplier is the half-wave series multiplier, also called the Villard cascade (but actually invented by Heinrich Greinacher). Operation Assuming that the peak voltage of the AC source is +Us, and that the C values are sufficiently high to allow, when charged, that a current flows with no significant change in voltage, then the (simplified) working of the cascade is as follows: going from positive peak (+Us) to negative peak (−Us): The C1 capacitor is charged through diode D1 to Us V (potential difference between left and right plate of the capacitor is Us). going from negative peak to positive peak: The voltage of C1 adds with that of the source, thus charging C2 to 2Us through D2 and discharging C1 in the process. positive to negative peak: Voltage of C1 has dropped to 0 V by the end of the previous step, thus allowing C3 to be charged through D3 to 2Us. negative to positive peak: Voltage of C2 rises to 2Us (analogously to step 2), also charging C4 to 2Us. The output voltage (the sum of voltages of C2 and C4) rises until 4Us is reached. Adding an additional stage will increase the output voltage by twice the peak AC source voltage (minus losses due to the diodes ‒ see the next paragraph). In reality, more cycles are required for C4 to reach the full voltage, and the voltage of each capacitor is lowered by the forward voltage drop () of each diode on the path to that capacitor. For example, the voltage of C4 in the example would be at most since there are 4 diodes between its positive terminal and the source. The total output voltage would be . In a cascade with stages of two diodes and two capacitors, the output voltage is equal to . The term represents the sum of voltage losses caused by diodes, over all capacitors on the output side (i.e. on the right side in the example ‒ C2 and C4). For example if we have 2 stages like in the example, the total loss is times . An additional stage will increase the output voltage by twice the source voltage, minus the forward voltage drop over diodes: . Voltage doubler and tripler A voltage doubler uses two stages to approximately double the DC voltage that would have been obtained from a single-stage rectifier. An example of a voltage doubler is found in the input stage of switch mode power supplies containing a SPDT switch to select either 120 V or 240 V supply. In the 120 V position the input is typically configured as a full-wave voltage doubler by opening one AC connection point of a bridge rectifier, and connecting the input to the junction of two series-connected filter capacitors. For 240 V operation, the switch configures the system as a full-wave bridge, re-connecting the capacitor center-tap wire to the open AC terminal of a bridge rectifier system. This allows 120 or 240 V operation with the addition of a simple SPDT switch. A voltage tripler is a three-stage voltage multiplier. A tripler is a popular type of voltage multiplier. 
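Returning to the cascade described under Operation, a small Python sketch of the diode-loss bookkeeping may help: each output-side capacitor C2k charges to at most 2·Us minus one forward drop VF for each of the 2k diodes on its path from the source, and the no-load output is the sum of those capacitor voltages, which works out to 2n·Us − n(n+1)·VF for n stages. The numeric values below are illustrative only, not figures from the text:

```python
# Sketch of the loss bookkeeping for a half-wave series (Villard/Greinacher)
# cascade, following the reasoning above: the output-side capacitor C_{2k}
# charges to at most 2*Us minus one forward drop V_F per diode on its path,
# and the no-load output is the sum of those capacitor voltages.

def cascade_output(us: float, vf: float, stages: int) -> float:
    """Approximate no-load output of an n-stage cascade (2 diodes + 2 caps per stage)."""
    total = 0.0
    for k in range(1, stages + 1):
        total += 2 * us - 2 * k * vf   # C_{2k} sees 2k diode drops
    return total                       # equals 2*n*us - n*(n+1)*vf

# Two-stage example with an assumed 10 V peak source and ~0.7 V silicon diodes.
print(cascade_output(us=10.0, vf=0.7, stages=2))   # 4*10 - 6*0.7 = 35.8 V
```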
The output voltage of a tripler is in practice below three times the peak input voltage due to their high impedance, caused in part by the fact that as each capacitor in the chain supplies power to the next, it partially discharges, losing voltage doing so. Triplers were commonly used in color television receivers to provide the high voltage for the cathode-ray tube (CRT, picture tube). Triplers are still used in high voltage supplies such as copiers, laser printers, bug zappers and electroshock weapons. Breakdown voltage While the multiplier can be used to produce thousands of volts of output, the individual components do not need to be rated to withstand the entire voltage range. Each component only needs to be concerned with the relative voltage differences directly across its own terminals and of the components immediately adjacent to it. Typically a voltage multiplier will be physically arranged like a ladder, so that the progressively increasing voltage potential is not given the opportunity to arc across to the much lower potential sections of the circuit. Note that some safety margin is needed across the relative range of voltage differences in the multiplier, so that the ladder can survive the shorted failure of at least one diode or capacitor component. Otherwise a single-point shorting failure could successively over-voltage and destroy each next component in the multiplier, potentially destroying the entire multiplier chain. Other circuit topologies Stacking An even number of diode-capacitor cells is used in any column so that the cascade ends on a smoothing cell. If it were odd and ended on a clamping cell the ripple voltage would be very large. Larger capacitors in the connecting column also reduce ripple but at the expense of charging time and increased diode current. Dickson charge pump The Dickson charge pump, or Dickson multiplier, is a modification of the Greinacher/Cockcroft–Walton multiplier. There are, however, several important differences: The Dickson multiplier takes a DC supply as its input so is a form of DC-to-DC converter. In addition to the DC input, the circuit requires a feed of two clock pulse trains with an amplitude swinging between the DC supply rails. These pulse trains are in antiphase. The Dickson multiplier is intended for low-voltage applications, unlike Greinacher/Cockcroft–Walton which is commonly used in high-voltage applications. This is because the final capacitor has to hold the entire output voltage, whereas in the Greinacher/Cockcroft–Walton multiplier, each capacitor holds at most twice the input voltage (thus easily allowing multiplication by a factor of 10 or more). To describe the ideal operation of the circuit, number the diodes D1, D2 etc. from left to right and the capacitors C1, C2 etc. When the clock is low, D1 will charge C1 to Vin. When goes high the top plate of C1 is pushed up to 2Vin. D1 is then turned off and D2 turned on and C2 begins to charge to 2Vin. On the next clock cycle again goes low and now goes high pushing the top plate of C2 to 3Vin. D2 switches off and D3 switches on, charging C3 to 3Vin and so on with charge passing up the chain, hence the name charge pump. The final diode-capacitor cell in the cascade is connected to ground rather than a clock phase and hence is not a multiplier; it is a peak detector which merely provides smoothing. There are a number of factors which reduce the output from the ideal case of nVin. 
One of these is the threshold voltage, VT of the switching device, that is, the voltage required to turn it on. The output will be reduced by at least nVT due to the volt drops across the switches. Schottky diodes are commonly used in Dickson multipliers for their low forward voltage drop, amongst other reasons. Another difficulty is that there are parasitic capacitances to ground at each node. These parasitic capacitances act as voltage dividers with the circuit's storage capacitors reducing the output voltage still further. Up to a point, a higher clock frequency is beneficial: the ripple is reduced and the high frequency makes the remaining ripple easier to filter. Also the size of capacitors needed is reduced since less charge needs to be stored per cycle. However, losses through stray capacitance increase with increasing clock frequency and a practical limit is around a few hundred kilohertz. Dickson multipliers are frequently found in integrated circuits (ICs) where they are used to increase a low-voltage battery supply to the voltage needed by the IC. It is advantageous to the IC designer and manufacturer to be able to use the same technology and the same basic device throughout the IC. For this reason, in the popular CMOS technology ICs the transistor which forms the basic building block of circuits is the MOSFET. Consequently, the diodes in the Dickson multiplier are often replaced with MOSFETs wired to behave as diodes. The diode-wired MOSFET version of the Dickson multiplier does not work very well at very low voltages because of the large drain-source volt drops of the MOSFETs. Frequently, a more complex circuit is used to overcome this problem. One solution is to connect in parallel with the switching MOSFET another MOSFET biased into its linear region. This second MOSFET has a lower drain-source voltage than the switching MOSFET would have on its own (because the switching MOSFET is driven hard on) and consequently the output voltage is increased. The gate of the linear biased MOSFET is connected to the output of the next stage so that it is turned off while the next stage is charging from the previous stage's capacitor. That is, the linear-biased transistor is turned off at the same time as the switching transistor. An ideal 4-stage Dickson multiplier (5× multiplier) with an input of would have an output of . However, a diode-wired MOSFET 4-stage multiplier might only have an output of . Adding parallel MOSFETs in the linear region improves this to around . More complex circuits still can achieve an output much closer to the ideal case. Many other variations and improvements to the basic Dickson circuit exist. Some attempt to reduce the switching threshold voltage such as the Mandal-Sarpeshkar multiplier or the Wu multiplier. Other circuits cancel out the threshold voltage: the Umeda multiplier does it with an externally provided voltage and the Nakamoto multiplier does it with internally generated voltage. The Bergeret multiplier concentrates on maximising power efficiency. Modification for RF power In CMOS integrated circuits clock signals are readily available, or else easily generated. This is not always the case in RF integrated circuits, but often a source of RF power will be available. The standard Dickson multiplier circuit can be modified to meet this requirement by simply grounding the normal input and one of the clock inputs. RF power is injected into the other clock input, which then becomes the circuit input. 
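As a rough illustration of the threshold-voltage losses discussed for the basic Dickson chain above, the sketch below compares the ideal output (n+1)·Vin of an n-stage multiplier with a common first-order estimate for the diode-wired MOSFET version that simply subtracts one VT per diode-connected device. It ignores parasitic capacitance, body effect and load current, and the numbers are illustrative rather than the figures from the example above:

```python
# Hedged first-order sketch (illustrative values only): an ideal n-stage
# Dickson multiplier gives (n+1)*Vin; a crude estimate for the diode-wired
# MOSFET version subtracts one threshold voltage VT per diode-connected
# device in the chain (n+1 of them). Real circuits lose more, as noted above.

def dickson_ideal(vin: float, stages: int) -> float:
    return (stages + 1) * vin

def dickson_diode_wired(vin: float, vt: float, stages: int) -> float:
    return (stages + 1) * (vin - vt)

vin, vt, stages = 1.2, 0.5, 4
print(dickson_ideal(vin, stages))             # 6.0 V for the ideal 5x multiplier
print(dickson_diode_wired(vin, vt, stages))   # 3.5 V with this crude model
```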
The RF signal is effectively the clock as well as the source of power. However, since the clock is injected only into every other node the circuit only achieves a stage of multiplication for every second diode-capacitor cell. The other diode-capacitor cells are merely acting as peak detectors and smoothing the ripple without increasing the multiplication. Cross-coupled switched capacitor A voltage multiplier may be formed of a cascade of voltage doublers of the cross-coupled switched capacitor type. This type of circuit is typically used instead of a Dickson multiplier when the source voltage is or less. Dickson multipliers have increasingly poor power conversion efficiency as the input voltage drops because the voltage drop across the diode-wired transistors becomes much more significant compared to the output voltage. Since the transistors in the cross-coupled circuit are not diode-wired the volt-drop problem is not so serious. The circuit works by alternately switching the output of each stage between a voltage doubler driven by and one driven by . This behaviour leads to another advantage over the Dickson multiplier: reduced ripple voltage at double the frequency. The increase in ripple frequency is advantageous because it is easier to remove by filtering. Each stage (in an ideal circuit) raises the output voltage by the peak clock voltage. Assuming that this is the same level as the DC input voltage then an n stage multiplier will (ideally) output nVin. The chief cause of losses in the cross-coupled circuit is parasitic capacitance rather than switching threshold voltage. The losses occur because some of the energy has to go into charging up the parasitic capacitances on each cycle. Applications The high-voltage supplies for cathode-ray tubes (CRTs) in TVs often use voltage multipliers with the final-stage smoothing capacitor formed by the interior and exterior aquadag coatings on the CRT itself. CRTs were formerly a common component in television sets. Voltage multipliers can still be found in modern TVs, photocopiers, and bug zappers. High voltage multipliers are used in spray painting equipment, most commonly found in automotive manufacturing facilities. A voltage multiplier with an output of about 100kV is used in the nozzle of the paint sprayer to electrically charge the atomized paint particles which then get attracted to the oppositely charged metal surfaces to be painted. This helps reduce the volume of paint used and helps in spreading an even coat of paint. A common type of voltage multiplier used in high-energy physics is the Cockcroft–Walton generator (which was designed by John Douglas Cockcroft and Ernest Thomas Sinton Walton for a particle accelerator for use in research that won them the Nobel Prize in Physics in 1951). See also Marx generator (a device that uses spark gaps instead of diodes as the switching elements and can deliver higher peak currents than diodes can). Boost converter (a DC-to-DC power converter that steps up voltage, frequently using an inductor) Notes Bibliography Campardo, Giovanni; Micheloni, Rino; Novosel, David VLSI-design of Non-volatile Memories, Springer, 2005 . Lin, Yu-Shiang Low Power Circuits for Miniature Sensor Systems, Publisher ProQuest, 2008 . Liu, Mingliang Demystifying Switched Capacitor Circuits, Newnes, 2006 . McGowan, Kevin, Semiconductors: From Book to Breadboard, Cengage Learning, 2012 . Peluso, Vincenzo; Steyaert, Michiel; Sansen, Willy M. C. Design of Low-voltage Low-power CMOS Delta-Sigma A/D Converters, Springer, 1999 . 
Yuan, Fei CMOS Circuits for Passive Wireless Microsystems, Springer, 2010 . Zumbahlen, Hank Linear Circuit Design Handbook, Newnes, 2008 . External links Basic multiplier circuits Cockcroft Walton multipliers Schematic of Kadette brand (International Radio Corp.) model 1019. A 1937 radio with a vacuum tube (25Z5) voltage multiplier rectifier. Electrical circuits Electric power conversion Rectifiers
Voltage multiplier
[ "Engineering" ]
3,110
[ "Electrical engineering", "Electronic engineering", "Electrical circuits" ]
799,760
https://en.wikipedia.org/wiki/Mahalanobis%20distance
The Mahalanobis distance is a measure of the distance between a point and a distribution , introduced by P. C. Mahalanobis in 1936. The mathematical details of Mahalanobis distance first appeared in the Journal of The Asiatic Society of Bengal in 1936. Mahalanobis's definition was prompted by the problem of identifying the similarities of skulls based on measurements (the earliest work related to similarities of skulls are from 1922 and another later work is from 1927). R.C. Bose later obtained the sampling distribution of Mahalanobis distance, under the assumption of equal dispersion. It is a multivariate generalization of the square of the standard score : how many standard deviations away is from the mean of . This distance is zero for at the mean of and grows as moves away from the mean along each principal component axis. If each of these axes is re-scaled to have unit variance, then the Mahalanobis distance corresponds to standard Euclidean distance in the transformed space. The Mahalanobis distance is thus unitless, scale-invariant, and takes into account the correlations of the data set. Definition Given a probability distribution on , with mean and positive semi-definite covariance matrix , the Mahalanobis distance of a point from is Given two points and in , the Mahalanobis distance between them with respect to iswhich means that . Since is positive semi-definite, so is , thus the square roots are always defined. We can find useful decompositions of the squared Mahalanobis distance that help to explain some reasons for the outlyingness of multivariate observations and also provide a graphical tool for identifying outliers. By the spectral theorem, can be decomposed as for some real matrix. One choice for is the symmetric square root of , which is the standard deviation matrix. This gives us the equivalent definitionwhere is the Euclidean norm. That is, the Mahalanobis distance is the Euclidean distance after a whitening transformation. The existence of is guaranteed by the spectral theorem, but it is not unique. Different choices have different theoretical and practical advantages. In practice, the distribution is usually the sample distribution from a set of IID samples from an underlying unknown distribution, so is the sample mean, and is the covariance matrix of the samples. When the affine span of the samples is not the entire , the covariance matrix would not be positive-definite, which means the above definition would not work. However, in general, the Mahalanobis distance is preserved under any full-rank affine transformation of the affine span of the samples. So in case the affine span is not the entire , the samples can be first orthogonally projected to , where is the dimension of the affine span of the samples, then the Mahalanobis distance can be computed as usual. Intuitive explanation Consider the problem of estimating the probability that a test point in N-dimensional Euclidean space belongs to a set, where we are given sample points that definitely belong to that set. Our first step would be to find the centroid or center of mass of the sample points. Intuitively, the closer the point in question is to this center of mass, the more likely it is to belong to the set. However, we also need to know if the set is spread out over a large range or a small range, so that we can decide whether a given distance from the center is noteworthy or not. The simplistic approach is to estimate the standard deviation of the distances of the sample points from the center of mass. 
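Going back to the Definition section, a minimal Python/NumPy sketch of the squared-form computation sqrt((x − μ)ᵀ S⁻¹ (x − μ)) may be useful; the sample data are synthetic, and where SciPy is available, scipy.spatial.distance.mahalanobis offers an equivalent helper:

```python
import numpy as np

# Minimal sketch of the definition above: the Mahalanobis distance of a point
# x from a sample distribution with mean mu and covariance S is
# sqrt((x - mu)^T S^{-1} (x - mu)). The data here are synthetic.

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0.0, 0.0],
                                  cov=[[2.0, 0.6], [0.6, 1.0]],
                                  size=500)

mu = samples.mean(axis=0)                 # sample mean
S = np.cov(samples, rowvar=False)         # sample covariance matrix
S_inv = np.linalg.inv(S)                  # assumes S is positive definite

def mahalanobis(x, mu, S_inv):
    d = x - mu
    return float(np.sqrt(d @ S_inv @ d))

print(mahalanobis(np.array([1.0, 2.0]), mu, S_inv))
```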
If the distance between the test point and the center of mass is less than one standard deviation, then we might conclude that it is highly probable that the test point belongs to the set. The further away it is, the more likely that the test point should not be classified as belonging to the set. This intuitive approach can be made quantitative by defining the normalized distance between the test point and the set to be , which reads: . By plugging this into the normal distribution, we can derive the probability of the test point belonging to the set. The drawback of the above approach was that we assumed that the sample points are distributed about the center of mass in a spherical manner. Were the distribution to be decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of the test point belonging to the set to depend not only on the distance from the center of mass, but also on the direction. In those directions where the ellipsoid has a short axis the test point must be closer, while in those where the axis is long the test point can be further away from the center. Putting this on a mathematical basis, the ellipsoid that best represents the set's probability distribution can be estimated by building the covariance matrix of the samples. The Mahalanobis distance is the distance of the test point from the center of mass divided by the width of the ellipsoid in the direction of the test point. Normal distributions For a normal distribution in any number of dimensions, the probability density of an observation is uniquely determined by the Mahalanobis distance : Specifically, follows the chi-squared distribution with degrees of freedom, where is the number of dimensions of the normal distribution. If the number of dimensions is 2, for example, the probability of a particular calculated being less than some threshold is . To determine a threshold to achieve a particular probability, , use , for 2 dimensions. For number of dimensions other than 2, the cumulative chi-squared distribution should be consulted. In a normal distribution, the region where the Mahalanobis distance is less than one (i.e. the region inside the ellipsoid at distance one) is exactly the region where the probability distribution is concave. The Mahalanobis distance is proportional, for a normal distribution, to the square root of the negative log-likelihood (after adding a constant so the minimum is at zero). Other forms of multivariate location and scatter The sample mean and covariance matrix can be quite sensitive to outliers, therefore other approaches for calculating the multivariate location and scatter of data are also commonly used when calculating the Mahalanobis distance. The Minimum Covariance Determinant approach estimates multivariate location and scatter from a subset numbering data points that has the smallest variance-covariance matrix determinant. The Minimum Volume Ellipsoid approach is similar to the Minimum Covariance Determinant approach in that it works with a subset of size data points, but the Minimum Volume Ellipsoid estimates multivariate location and scatter from the ellipsoid of minimal volume that encapsulates the data points. Each method varies in its definition of the distribution of the data, and therefore produces different Mahalanobis distances. 
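Returning to the chi-squared relationship in the Normal distributions section, the following sketch uses SciPy's chi-squared quantile function to turn a desired enclosed probability into a Mahalanobis-distance threshold, and checks the two-dimensional closed form against it:

```python
import numpy as np
from scipy.stats import chi2

# Sketch of the chi-squared relationship described above: for a d-dimensional
# normal distribution, the squared Mahalanobis distance follows a chi-squared
# distribution with d degrees of freedom, so a distance threshold enclosing a
# chosen probability mass p can be read off the chi-squared quantile function.

def mahalanobis_threshold(p: float, dims: int) -> float:
    """Distance t such that P(D_M < t) = p for a d-dimensional normal."""
    return float(np.sqrt(chi2.ppf(p, df=dims)))

# In 2 dimensions this reduces to t = sqrt(-2 * ln(1 - p)).
print(mahalanobis_threshold(0.95, dims=2))   # about 2.45
print(np.sqrt(-2 * np.log(1 - 0.95)))        # same value from the closed form
```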
The Minimum Covariance Determinant and Minimum Volume Ellipsoid approaches are more robust to samples that contain outliers, while the sample mean and covariance matrix tends to be more reliable with small and biased data sets. Relationship to normal random variables In general, given a normal (Gaussian) random variable with variance and mean , any other normal random variable (with mean and variance ) can be defined in terms of by the equation Conversely, to recover a normalized random variable from any normal random variable, one can typically solve for . If we square both sides, and take the square-root, we will get an equation for a metric that looks a lot like the Mahalanobis distance: The resulting magnitude is always non-negative and varies with the distance of the data from the mean, attributes that are convenient when trying to define a model for the data. Relationship to leverage Mahalanobis distance is closely related to the leverage statistic, , but has a different scale: Applications Mahalanobis distance is widely used in cluster analysis and classification techniques. It is closely related to Hotelling's T-square distribution used for multivariate statistical testing and Fisher's linear discriminant analysis that is used for supervised classification. In order to use the Mahalanobis distance to classify a test point as belonging to one of N classes, one first estimates the covariance matrix of each class, usually based on samples known to belong to each class. Then, given a test sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to that class for which the Mahalanobis distance is minimal. Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear regression models. A point that has a greater Mahalanobis distance from the rest of the sample population of points is said to have higher leverage since it has a greater influence on the slope or coefficients of the regression equation. Mahalanobis distance is also used to determine multivariate outliers. Regression techniques can be used to determine if a specific case within a sample population is an outlier via the combination of two or more variable scores. Even for normal distributions, a point can be a multivariate outlier even if it is not a univariate outlier for any variable (consider a probability density concentrated along the line , for example), making Mahalanobis distance a more sensitive measure than checking dimensions individually. Mahalanobis distance has also been used in ecological niche modelling, as the convex elliptical shape of the distances relates well to the concept of the fundamental niche. Another example of usage is in finance, where Mahalanobis distance has been used to compute an indicator called the "turbulence index", which is a statistical measure of financial markets abnormal behaviour. An implementation as a Web API of this indicator is available online. Software implementations Many programming languages and statistical packages, such as R, Python, etc., include implementations of Mahalanobis distance. 
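As an example of such an implementation, here is a short sketch of the minimum-distance classification rule described above, using synthetic two-class data; the class means, covariances, and test point are illustrative only:

```python
import numpy as np

# Sketch of the classification rule described above: estimate a mean and
# covariance per class from labelled samples, then assign a test point to the
# class with the smallest Mahalanobis distance. Class data are synthetic.

rng = np.random.default_rng(1)
classes = {
    "A": rng.multivariate_normal([0, 0], [[1.0, 0.2], [0.2, 1.0]], size=200),
    "B": rng.multivariate_normal([4, 4], [[1.5, -0.4], [-0.4, 0.8]], size=200),
}

params = {
    name: (pts.mean(axis=0), np.linalg.inv(np.cov(pts, rowvar=False)))
    for name, pts in classes.items()
}

def classify(x):
    def dist(name):
        mu, S_inv = params[name]
        d = x - mu
        return np.sqrt(d @ S_inv @ d)
    return min(params, key=dist)

print(classify(np.array([3.5, 3.0])))   # expected to pick class "B"
```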
See also Bregman divergence (the Mahalanobis distance is an example of a Bregman divergence) Bhattacharyya distance – related, for measuring similarity between data sets (and not between a point and a data set) Hamming distance – identifies the bit-by-bit difference between two strings Hellinger distance – also a measure of distance between data sets Similarity learning – for other approaches to learning a distance metric from examples References External links Mahalanobis distance tutorial – interactive online program and spreadsheet computation Mahalanobis distance (Nov-17-2006) – overview of Mahalanobis distance, including MATLAB code What is Mahalanobis distance? – intuitive, illustrated explanation, from Rick Wicklin on blogs.sas.com Statistical distance Multivariate statistics Distance
Mahalanobis distance
[ "Physics", "Mathematics" ]
2,165
[ "Distance", "Physical quantities", "Statistical distance", "Quantity", "Size", "Space", "Spacetime", "Wikipedia categories named after physical quantities" ]
799,876
https://en.wikipedia.org/wiki/Electric%20susceptibility
In electricity (electromagnetism), the electric susceptibility (; Latin: susceptibilis "receptive") is a dimensionless proportionality constant that indicates the degree of polarization of a dielectric material in response to an applied electric field. The greater the electric susceptibility, the greater the ability of a material to polarize in response to the field, and thereby reduce the total electric field inside the material (and store energy). It is in this way that the electric susceptibility influences the electric permittivity of the material and thus influences many other phenomena in that medium, from the capacitance of capacitors to the speed of light. Definition for linear dielectrics If a dielectric material is a linear dielectric, then electric susceptibility is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that where is the polarization density; is the electric permittivity of free space (electric constant); is the electric susceptibility; is the electric field. In materials where susceptibility is anisotropic (different depending on direction), susceptibility is represented as a tensor known as the susceptibility tensor. Many linear dielectrics are isotropic, but it is possible nevertheless for a material to display behavior that is both linear and anisotropic, or for a material to be non-linear but isotropic. Anisotropic but linear susceptibility is common in many crystals. The susceptibility is related to its relative permittivity (dielectric constant) by so in the case of a vacuum, At the same time, the electric displacement D is related to the polarization density P by the following relation: where Molecular polarizability A similar parameter exists to relate the magnitude of the induced dipole moment p of an individual molecule to the local electric field E that induced the dipole. This parameter is the molecular polarizability (α), and the dipole moment resulting from the local electric field Elocal is given by: This introduces a complication however, as locally the field can differ significantly from the overall applied field. We have: where P is the polarization per unit volume, and N is the number of molecules per unit volume contributing to the polarization. Thus, if the local electric field is parallel to the ambient electric field, we have: Thus only if the local field equals the ambient field can we write: Otherwise, one should find a relation between the local and the macroscopic field. In some materials, the Clausius–Mossotti relation holds and reads Ambiguity in the definition The definition of the molecular polarizability depends on the author. In the above definition, and are in SI units and the molecular polarizability has the dimension of a volume (m3). Another definition would be to keep SI units and to integrate into : In this second definition, the polarizability would have the SI unit of C.m2/V. Yet another definition exists where and are expressed in the cgs system and is still defined as Using the cgs units gives the dimension of a volume, as in the first definition, but with a value that is lower. Nonlinear susceptibility In many materials the polarizability starts to saturate at high values of electric field. This saturation can be modelled by a nonlinear susceptibility. 
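For the linear, isotropic case defined earlier in this article, the relations χe = εr − 1, P = ε0 χe E and D = ε0 E + P can be captured in a few lines; the relative permittivity and field strength used below are illustrative values, not material data from the text:

```python
# Minimal sketch of the linear, isotropic relations described above:
# chi_e = eps_r - 1, P = eps_0 * chi_e * E, and D = eps_0 * E + P.
# The relative permittivity and field values are illustrative.

EPS_0 = 8.854e-12   # vacuum permittivity, F/m

def susceptibility(eps_r: float) -> float:
    return eps_r - 1.0

def polarization_density(eps_r: float, e_field: float) -> float:
    """P in C/m^2 for a linear isotropic dielectric in a field E (V/m)."""
    return EPS_0 * susceptibility(eps_r) * e_field

def displacement_field(eps_r: float, e_field: float) -> float:
    return EPS_0 * e_field + polarization_density(eps_r, e_field)

# e.g. a dielectric with eps_r ~ 4 in a 1 kV/mm (1e6 V/m) field
print(susceptibility(4.0))
print(polarization_density(4.0, 1e6))
print(displacement_field(4.0, 1e6))
```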
These susceptibilities are important in nonlinear optics and lead to effects such as second-harmonic generation (such as used to convert infrared light into visible light, in green laser pointers). The standard definition of nonlinear susceptibilities in SI units is via a Taylor expansion of the polarization's reaction to electric field: (Except in ferroelectric materials, the built-in polarization is zero, .) The first susceptibility term, , corresponds to the linear susceptibility described above. While this first term is dimensionless, the subsequent nonlinear susceptibilities have units of . The nonlinear susceptibilities can be generalized to anisotropic materials in which the susceptibility is not uniform in every direction. In these materials, each susceptibility becomes an ()-degree tensor. Dispersion and causality In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is That is, the polarization is a convolution of the electric field at previous times with time-dependent susceptibility given by . The upper limit of this integral can be extended to infinity as well if one defines for . An instantaneous response corresponds to Dirac delta function susceptibility . It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a product, This has a similar form to the Clausius–Mossotti relation: This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material. Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. for ), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility . See also Application of tensor theory in physics Magnetic susceptibility Maxwell's equations Clausius–Mossotti relation Linear response function Green–Kubo relations References Electric and magnetic fields in matter Physical quantities
Electric susceptibility
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,185
[ "Physical phenomena", "Physical quantities", "Quantity", "Electric and magnetic fields in matter", "Materials science", "Condensed matter physics", "Physical properties" ]
799,986
https://en.wikipedia.org/wiki/Fraunhofer%20diffraction
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when plane waves are incident on a diffracting object, and the diffraction pattern is viewed at a sufficiently long distance (a distance satisfying Fraunhofer condition) from the object (in the far-field region), and also when it is viewed at the focal plane of an imaging lens. In contrast, the diffraction pattern created near the diffracting object and (in the near field region) is given by the Fresnel diffraction equation. The equation was named in honor of Joseph von Fraunhofer although he was not actually involved in the development of the theory. This article explains where the Fraunhofer equation can be applied, and shows Fraunhofer diffraction patterns for various apertures. A detailed mathematical treatment of Fraunhofer diffraction is given in Fraunhofer diffraction equation. Equation When a beam of light is partly blocked by an obstacle, some of the light is scattered around the object, light and dark bands are often seen at the edge of the shadow – this effect is known as diffraction. These effects can be modelled using the Huygens–Fresnel principle; Huygens postulated that every point on a wavefront acts as a source of spherical secondary wavelets and the sum of these secondary wavelets determines the form of the proceeding wave at any subsequent time, while Fresnel developed an equation using the Huygens wavelets together with the principle of superposition of waves, which models these diffraction effects quite well. It is generally not straightforward to calculate the wave amplitude given by the sum of the secondary wavelets (The wave sum is also a wave.), each of which has its own amplitude, phase, and oscillation direction (polarization), since this involves addition of many waves of varying amplitude, phase, and polarization. When two light waves as electromagnetic fields are added together (vector sum), the amplitude of the wave sum depends on the amplitudes, the phases, and even the polarizations of individual waves. On a certain direction where electromagnetic wave fields are projected (or considering a situation where two waves have the same polarization), two waves of equal (projected) amplitude which are in phase (same phase) give the amplitude of the resultant wave sum as double the individual wave amplitudes, while two waves of equal amplitude which are in opposite phases give the zero amplitude of the resultant wave as they cancel out each other. Generally, a two-dimensional integral over complex variables has to be solved and in many cases, an analytic solution is not available. The Fraunhofer diffraction equation is a simplified version of Kirchhoff's diffraction formula and it can be used to model light diffraction when both a light source and a viewing plane (a plane of observation where the diffracted wave is observed) are effectively infinitely distant from a diffracting aperture. With a sufficiently distant light source from a diffracting aperture, the incident light to the aperture is effectively a plane wave so that the phase of the light at each point on the aperture is the same. At a sufficiently distant plane of observation from the aperture, the phase of the wave coming from each point on the aperture varies linearly with the point position on the aperture, making the calculation of the sum of the waves at an observation point on the plane of observation relatively straightforward in many cases. 
Even the amplitudes of the secondary waves coming from the aperture at the observation point can be treated as same or constant for a simple diffraction wave calculation in this case. Diffraction in such a geometrical requirement is called Fraunhofer diffraction, and the condition where Fraunhofer diffraction is valid is called Fraunhofer condition, as shown in the right box. A diffracted wave is often called Far field if it at least partially satisfies Fraunhofer condition such that the distance between the aperture and the observation plane is . For example, if a 0.5 mm diameter circular hole is illuminated by a laser light with 0.6 μm wavelength, then Fraunhofer diffraction occurs if the viewing distance is greater than 1000 mm. Derivation of Fraunhofer condition The derivation of Fraunhofer condition here is based on the geometry described in the right box. The diffracted wave path r2 can be expressed in terms of another diffracted wave path r1 and the distance b between two diffracting points by using the law of cosines; This can be expanded by calculating the expression's Taylor series to second order with respect to , The phase difference between waves propagating along the paths r2 and r1 are, with the wavenumber where λ is the light wavelength, If so , then the phase difference is . The geometrical implication from this expression is that the paths r2 and r1 are approximately parallel with each other. Since there can be a diffraction - observation plane, the diffracted wave path whose angle with respect to a straight line parallel to the optical axis is close to 0, this approximation condition can be further simplified as where L is the distance between two planes along the optical axis. Due to the fact that an incident wave on a diffracting plane is effectively a plane wave if where L is the distance between the diffracting plane and the point wave source is satisfied, Fraunhofer condition is where L is the smaller of the two distances, one is between the diffracting plane and the plane of observation and the other is between the diffracting plane and the point wave source. Focal plane of a positive lens as the far field plane In the far field, propagation paths for wavelets from every point on an aperture to a point of observation are approximately parallel, and a positive lens (focusing lens) focuses parallel rays toward the lens to a point on the focal plane (the focus point position on the focal plane depends on the angle of the parallel rays with respect to the optical axis). So, if a positive lens with a sufficiently long focal length (so that differences between electric field orientations for wavelets can be ignored at the focus) is placed after an aperture, then the lens practically makes the Fraunhofer diffraction pattern of the aperture on its focal plane as the parallel rays meet each other at the focus. Examples In each of these examples, the aperture is illuminated by a monochromatic plane wave at normal incidence. Diffraction by a narrow rectangular slit The width of the slit is . The Fraunhofer diffraction pattern is shown in the image together with a plot of the intensity vs. angle . The pattern has maximum intensity at , and a series of peaks of decreasing intensity. Most of the diffracted light falls between the first minima. The angle, , subtended by these two minima is given by: Thus, the smaller the aperture, the larger the angle subtended by the diffraction bands. 
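Putting the Fraunhofer condition discussed earlier in this section into code, and assuming its standard form a²/(λL) ≪ 1 with a the aperture size, the sketch below computes the characteristic distance a²/λ and checks that a proposed viewing distance comfortably exceeds it; the factor-of-two margin is an arbitrary illustrative choice:

```python
# Sketch of the Fraunhofer (far-field) condition in its usual form: the
# viewing distance L should satisfy a**2 / (wavelength * L) << 1, i.e. L
# should be much larger than the characteristic distance a**2 / wavelength.

def fraunhofer_characteristic_distance(aperture_m: float, wavelength_m: float) -> float:
    return aperture_m ** 2 / wavelength_m

def is_far_field(distance_m: float, aperture_m: float, wavelength_m: float,
                 margin: float = 2.0) -> bool:
    """Rough check that L exceeds the characteristic distance by some margin."""
    return distance_m > margin * fraunhofer_characteristic_distance(aperture_m, wavelength_m)

# The example above: a 0.5 mm hole and 0.6 um light give a characteristic
# distance of about 0.42 m, so a viewing distance beyond 1000 mm is
# comfortably in the far field.
print(fraunhofer_characteristic_distance(0.5e-3, 0.6e-6))   # ~0.417 m
print(is_far_field(1.0, 0.5e-3, 0.6e-6))                    # True
```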
The size of the central band at a distance is given by For example, when a slit of width 0.5 mm is illuminated by light of wavelength 0.6 μm, and viewed at a distance of 1000 mm, the width of the central band in the diffraction pattern is 2.4 mm. The fringes extend to infinity in the direction since the slit and illumination also extend to infinity. If , the intensity of the diffracted light does not fall to zero, and if , the diffracted wave is cylindrical. Semi-quantitative analysis of single-slit diffraction We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. Consider the light diffracted at an angle where the distance is equal to the wavelength of the illuminating light. The width of the slit is the distance . The component of the wavelet emitted from the point A which is travelling in the direction is in anti-phase with the wave from the point at middle of the slit, so that the net contribution at the angle from these two waves is zero. The same applies to the points just below and , and so on. Therefore, the amplitude of the total wave travelling in the direction is zero. We have: The angle subtended by the first minima on either side of the centre is then, as above: There is no such simple argument to enable us to find the maxima of the diffraction pattern. Single-slit diffraction using Huygens' principle We can develop an expression for the far field of a continuous array of point sources of uniform amplitude and of the same phase. Let the array of length a be parallel to the y axis with its center at the origin as indicated in the figure to the right. Then the differential field is: where . However and integrating from to , where . Integrating we then get Letting where the array length in radians is , then, Diffraction by a rectangular aperture The form of the diffraction pattern given by a rectangular aperture is shown in the figure on the right (or above, in tablet format). There is a central semi-rectangular peak, with a series of horizontal and vertical fringes. The dimensions of the central band are related to the dimensions of the slit by the same relationship as for a single slit so that the larger dimension in the diffracted image corresponds to the smaller dimension in the slit. The spacing of the fringes is also inversely proportional to the slit dimension. If the illuminating beam does not illuminate the whole vertical length of the slit, the spacing of the vertical fringes is determined by the dimensions of the illuminating beam. Close examination of the double-slit diffraction pattern below shows that there are very fine horizontal diffraction fringes above and below the main spot, as well as the more obvious horizontal fringes. Diffraction by a circular aperture The diffraction pattern given by a circular aperture is shown in the figure on the right. This is known as the Airy diffraction pattern. It can be seen that most of the light is in the central disk. The angle subtended by this disk, known as the Airy disk, is where is the diameter of the aperture. The Airy disk can be an important parameter in limiting the ability of an imaging system to resolve closely located objects. Diffraction by an aperture with a Gaussian profile The diffraction pattern obtained given by an aperture with a Gaussian profile, for example, a photographic slide whose transmissivity has a Gaussian variation is also a Gaussian function. 
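The rectangular-slit and circular-aperture relations discussed above are easy to evaluate numerically. This sketch uses the standard small-angle forms – 2λ/W for the angle between the first minima of a slit of width W, 2λz/W for the central-band width at distance z, and the usual 1.22λ/D factor for the first Airy minimum – and reproduces the 2.4 mm figure from the slit example:

```python
# Sketch of the single-slit and circular-aperture quantities discussed above,
# using the standard small-angle relations.

def slit_first_minima_angle(wavelength: float, slit_width: float) -> float:
    """Angle (radians) between the two first minima of a slit of width W."""
    return 2 * wavelength / slit_width

def central_band_width(wavelength: float, slit_width: float, distance: float) -> float:
    """Width of the central diffraction band on a screen at the given distance."""
    return 2 * wavelength * distance / slit_width

def airy_first_minimum_angle(wavelength: float, aperture_diameter: float) -> float:
    """Angle from the pattern centre to the first dark ring of the Airy pattern."""
    return 1.22 * wavelength / aperture_diameter

# The slit example above: 0.5 mm slit, 0.6 um light, viewed at 1000 mm.
print(central_band_width(0.6e-6, 0.5e-3, 1.0))   # 2.4e-3 m, i.e. 2.4 mm
```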
The form of the function is plotted on the right (above, for a tablet), and it can be seen that, unlike the diffraction patterns produced by rectangular or circular apertures, it has no secondary rings. This technique can be used in a process called apodization—the aperture is covered by a Gaussian filter, giving a diffraction pattern with no secondary rings. The output profile of a single mode laser beam may have a Gaussian intensity profile and the diffraction equation can be used to show that it maintains that profile however far away it propagates from the source. Diffraction by a double slit In the double-slit experiment, the two slits are illuminated by a single light beam. If the width of the slits is small enough (less than the wavelength of the light), the slits diffract the light into cylindrical waves. These two cylindrical wavefronts are superimposed, and the amplitude, and therefore the intensity, at any point in the combined wavefronts depends on both the magnitude and the phase of the two wavefronts. These fringes are often known as Young's fringes. The angular spacing of the fringes is given by The spacing of the fringes at a distance from the slits is given by where is the separation of the slits. The fringes in the picture were obtained using the yellow light from a sodium light (wavelength = 589 nm), with slits separated by 0.25 mm, and projected directly onto the image plane of a digital camera. Double-slit interference fringes can be observed by cutting two slits in a piece of card, illuminating with a laser pointer, and observing the diffracted light at a distance of 1 m. If the slit separation is 0.5 mm, and the wavelength of the laser is 600 nm, then the spacing of the fringes viewed at a distance of 1 m would be 1.2 mm. Semi-quantitative explanation of double-slit fringes The difference in phase between the two waves is determined by the difference in the distance travelled by the two waves. If the viewing distance is large compared with the separation of the slits (the far field), the phase difference can be found using the geometry shown in the figure. The path difference between two waves travelling at an angle is given by When the two waves are in phase, i.e. the path difference is equal to an integral number of wavelengths, the summed amplitude, and therefore the summed intensity is maximal, and when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., then the two waves cancel, and the summed intensity is zero. This effect is known as interference. The interference fringe maxima occur at angles where is the wavelength of the light. The angular spacing of the fringes is given by When the distance between the slits and the viewing plane is , the spacing of the fringes is equal to and is the same as above: Diffraction by a grating A grating is defined in Born and Wolf as "any arrangement which imposes on an incident wave a periodic variation of amplitude or phase, or both". A grating whose elements are separated by diffracts a normally incident beam of light into a set of beams, at angles given by: This is known as the grating equation. The finer the grating spacing, the greater the angular separation of the diffracted beams. 
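The double-slit spacing relations above translate directly into code; this sketch repeats the laser-pointer example (600 nm light, 0.5 mm slit separation, viewed at 1 m):

```python
# Sketch of the double-slit relations above: fringe maxima where
# d*sin(theta) = n*lambda, angular fringe spacing ~ lambda/d for small
# angles, and a fringe spacing of lambda*z/d on a screen at distance z.

def fringe_spacing(wavelength: float, slit_separation: float, distance: float) -> float:
    return wavelength * distance / slit_separation

# The laser-pointer example from the text: 600 nm, 0.5 mm separation, 1 m away.
print(fringe_spacing(600e-9, 0.5e-3, 1.0))   # 1.2e-3 m, i.e. 1.2 mm
```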
If the light is incident at an angle , the grating equation is: The detailed structure of the repeating pattern determines the form of the individual diffracted beams, as well as their relative intensity while the grating spacing always determines the angles of the diffracted beams. The image on the right shows a laser beam diffracted by a grating into = 0, and ±1 beams. The angles of the first order beams are about 20°; if we assume the wavelength of the laser beam is 600 nm, we can infer that the grating spacing is about 1.8 μm. Semi-quantitative explanation A simple grating consists of a series of slits in a screen. If the light travelling at an angle from each slit has a path difference of one wavelength with respect to the adjacent slit, all these waves will add together, so that the maximum intensity of the diffracted light is obtained when: This is the same relationship that is given above. See also Fraunhofer diffraction equation Diffraction Huygens–Fresnel principle Kirchhoff's diffraction formula Fresnel diffraction Airy disc Fourier optics References Sources External links Fraunhofer diffraction on ScienceWorld Fraunhofer diffraction on HyperPhysics Diffraction Physical optics
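As a closing worked example for the grating section, the sketch below rearranges the normal-incidence grating equation d·sin θn = nλ to infer the grating spacing from the measured first-order angle, assuming the 600 nm wavelength mentioned in the text:

```python
import numpy as np

# Sketch of the normal-incidence grating equation d*sin(theta_n) = n*lambda,
# rearranged to infer the grating spacing from a measured diffraction angle.

def grating_spacing(wavelength: float, order: int, angle_deg: float) -> float:
    return order * wavelength / np.sin(np.radians(angle_deg))

def diffraction_angle(wavelength: float, spacing: float, order: int) -> float:
    return float(np.degrees(np.arcsin(order * wavelength / spacing)))

# First-order beams at about 20 degrees with an assumed 600 nm laser imply a
# spacing of roughly 1.8 um, matching the estimate in the text.
print(grating_spacing(600e-9, 1, 20.0))      # ~1.75e-6 m
print(diffraction_angle(600e-9, 1.8e-6, 1))  # ~19.5 degrees
```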
Fraunhofer diffraction
[ "Physics", "Chemistry", "Materials_science" ]
3,208
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
800,155
https://en.wikipedia.org/wiki/Lambertian%20reflectance
Lambertian reflectance is the property that defines an ideal "matte" or diffusely reflecting surface. The apparent brightness of a Lambertian surface to an observer is the same regardless of the observer's angle of view. More precisely, the reflected radiant intensity obeys Lambert's cosine law, which makes the reflected radiance the same in all directions. Lambertian reflectance is named after Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book Photometria. Examples Unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane does not, since the glossy coating creates specular highlights. Though not all rough surfaces are Lambertian, this is often a good approximation, and is frequently used when the characteristics of the surface are unknown. Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance. Use in computer graphics In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all closed polygons (such as a triangle within a 3D mesh) to reflect light equally in all directions when rendered. The reflection decreases when the surface is tilted away from being perpendicular to the light source, however, because the area is illuminated by a smaller fraction of the incident radiation. The reflection is calculated by taking the dot product of the surface's unit normal vector, , and a normalized light-direction vector, , pointing from the surface to the light source. This number is then multiplied by the color of the surface and the intensity of the light hitting the surface: , where is the brightness of the diffusely reflected light, is the color and is the intensity of the incoming light. Because , where is the angle between the directions of the two vectors, the brightness will be highest if the surface is perpendicular to the light vector, and lowest if the light vector intersects the surface at a grazing angle. Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply. Other waves While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian reflectance. See also List of common shading algorithms Gamma correction References Radiometry Photometry Scattering, absorption and radiative transfer (optics) Shading
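As a footnote to the computer-graphics section above, here is a minimal sketch of the Lambertian shading rule ID = (N·L)·C·IL; clamping the dot product to zero for surfaces facing away from the light is a common convention rather than something stated in the text, and the colour and vectors are illustrative:

```python
import numpy as np

# Minimal sketch of Lambertian diffuse shading: brightness is proportional to
# the dot product of the unit surface normal N and the normalized direction L
# from the surface towards the light, times the surface colour and light
# intensity. Negative dot products (back-facing surfaces) are clamped to zero.

def lambert_diffuse(normal, light_dir, surface_color, light_intensity):
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    cos_theta = max(float(np.dot(n, l)), 0.0)
    return cos_theta * np.asarray(surface_color, dtype=float) * light_intensity

# A surface tilted 60 degrees away from the light reflects half as much as one
# facing it directly, since cos(60 deg) = 0.5.
print(lambert_diffuse([0, 0, 1], [0, 0, 1], [0.8, 0.2, 0.2], 1.0))
print(lambert_diffuse([0, 0, 1],
                      [0, np.sin(np.radians(60)), np.cos(np.radians(60))],
                      [0.8, 0.2, 0.2], 1.0))
```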
Lambertian reflectance
[ "Chemistry", "Engineering" ]
538
[ "Telecommunications engineering", "Scattering", " absorption and radiative transfer (optics)", "Radiometry" ]
800,260
https://en.wikipedia.org/wiki/Janine%20Benyus
Janine M. Benyus (born 1958) is an American natural sciences writer, innovation consultant, and author. After writing books on wildlife and animal behavior, she coined the term Biomimicry to describe intentional problem-solving design inspired by nature. Her book Biomimicry (1997) attracted widespread attention from businesspeople in design, architecture, and engineering as well as from scientists. Benyus argues that by following biomimetic approaches, designers can develop products that will perform better, be less expensive, use less energy, and leave companies less open to legal risk. Life Born in New Jersey, Benyus graduated summa cum laude from Rutgers University with degrees in natural resource management and English literature/writing. Benyus has taught interpretive writing and lectured at the University of Montana, and worked towards restoring and protecting wild lands. She serves on a number of land use committees in her rural county, and is president of Living Education, a nonprofit dedicated to place-based living and learning. Benyus lives in Stevensville, Montana. Biomimicry Benyus has written a number of books on animals and their behavior, but is best known for Biomimicry: Innovation Inspired by Nature (1997). In this book she develops the basic thesis that human beings should consciously emulate nature's genius in their designs. She encourages people to ask "What would Nature do?" and to look at natural forms, processes, and ecosystems in nature to see what works and what lasts. Benyus articulates an approach that strongly emphasizes sustainability within biomimicry practice, sometimes referred to as Conditions Conducive to Life (CCL). Benyus has described the development of sustainable solutions in terms of "Life’s Principles", emphasizing that organisms in nature have evolved methods of working that are not destructive of themselves and their environment. “Nature runs on sunlight, uses only the energy it needs, fits form to function, recycles everything, rewards cooperation, banks on diversity, demands local expertise, curbs excess from within and taps the power of limits”. In 1998, Benyus and Dayna Baumeister co-founded the Biomimicry Guild as an innovation consultancy. Their goal was to help innovators learn from and emulate natural models in order to design sustainable products, processes, and policies that create conditions conducive to life. In 2006, Benyus co-founded The Biomimicry Institute with Dayna Baumeister and Bryony Schwan. Benyus is President of the non-profit organization, whose mission is to naturalize biomimicry in the culture by promoting the transfer of ideas, designs, and strategies from biology to sustainable human systems design. In 2008 the Biomimicry Institute launched AskNature.org, "an encyclopedia of nature's solutions to common design problems". The Biomimicry Institute has become a key communicator in the field of biomimetics, connecting 12,576 member practitioners and organizations in 36 regional networks and 21 countries through its Biomimicry Global Network as of 2020. In 2010, Benyus, Dayna Baumeister, Bryony Schwan, and Chris Allen formed Biomimicry 3.8, connecting their for-profit and nonprofit work by creating a benefit corporation. Biomimicry 3.8, which achieved B-corp certification, offers consultancy, professional training, development for educators, and "inspirational speaking". Among its more than 250 clients are Nike, Kohler, Seventh Generation, and C40 Cities. 
By 2013, over 100 universities had joined the Biomimicry Educator’s Network, offering training in biomimetics. In 2014, the profit and non-profit aspects again became separate entities, with Biomimicry 3.8 engaging in for-profit consultancy and the Biomimicry Institute as a non-profit organization. Benyus has served on various boards, including the Board of Directors for the U.S. Green Building Council and the advisory boards of the Ray C. Anderson Foundation and Project Drawdown. Benyus is an affiliate faculty member in The Biomimicry Center at Arizona State University. Benyus' work has been used as the basis for films including the two-part film Biomimicry: Learning from Nature (2002), directed by Paul Lang and David Springbett for CBC's The Nature of Things and presented by David Suzuki. She was one of the experts in the film Dirt! The Movie (2009) which was voiced by Jamie Lee Curtis. Authored works Illustrated by Juan Carlos Barberis. Illustrated by Juan Carlos Barberis. Awards and honors 2020, Trailblazer Award at Verdical Group's Net Zero Conference 2019, Fellow, American Society of Interior Designers (ASID) 2015, Edward O. Wilson Biodiversity Technology Pioneer Award 2013, Gothenburg Award for Sustainable Development 2012, Design Mind Award, Smithsonian’s Cooper-Hewitt National Design Museum 2011, Heinz Award, with special focus on the environment 2009, Champion of the Earth for Science and Technology, United Nations Environment Programme. 2007, Hero of the Environment, Time International 2006, Women of Discovery Award, WINGS WorldQuest 2004, Rachel Carson Lecture on Environmental Ethics 2003, Lud Browman Award for Science Writing, Friends of the Mansfield Library, University of Montana See also Biomimicry References External links AskNature.org Biomimicry Toolbox 1958 births Living people Rutgers University alumni University of Montana faculty American sustainability advocates People from Stevensville, Montana National Design Award winners American nature writers Biomimetics
Janine Benyus
[ "Engineering", "Biology" ]
1,153
[ "Bioinformatics", "Bionics", "Biological engineering", "Biomimetics" ]
800,623
https://en.wikipedia.org/wiki/Phase%20center
In antenna design theory, the phase center is the point from which the electromagnetic radiation spreads spherically outward, with the phase of the signal being equal at any point on the sphere. Apparent phase center is used to describe the phase center in a limited section of the radiation pattern. If the term is used in the context of an antenna array, one has to define a reference point to which the basis vectors of the individual elements are referred. The phase center may vary, depending on the beamforming algorithm, which produces a weight vector that may vary over time depending on the geometrical setup as well as the receiving conditions. References J. D. Dyson, “Determination of the Phase Center and Phase Patterns of Antennas,” in Radio Antennas for Aircraft and Aerospace Vehicles, W. T. Blackband (ed.), AGARD Conference Proceedings, No. 15, Slough, England: Technivision Services, 1967. Y. Y. Hu, “A Method of Determining Phase Centers and Its Applications to Electromagnetic Horns,” Journal of the Franklin Institute, Vol. 271, pp. 31–39, January 1961. Antennas
Phase center
[ "Engineering" ]
230
[ "Antennas", "Telecommunications engineering" ]
801,420
https://en.wikipedia.org/wiki/Atmospheric%20physics
Within the atmospheric sciences, atmospheric physics is the application of physics to the study of the atmosphere. Atmospheric physicists attempt to model Earth's atmosphere and the atmospheres of the other planets using fluid flow equations, radiation budget, and energy transfer processes in the atmosphere (as well as how these tie into boundary systems such as the oceans). In order to model weather systems, atmospheric physicists employ elements of scattering theory, wave propagation models, cloud physics, statistical mechanics and spatial statistics which are highly mathematical and related to physics. It has close links to meteorology and climatology and also covers the design and construction of instruments for studying the atmosphere and the interpretation of the data they provide, including remote sensing instruments. At the dawn of the space age and the introduction of sounding rockets, aeronomy became a subdiscipline concerning the upper layers of the atmosphere, where dissociation and ionization are important. Remote sensing Remote sensing is the small or large-scale acquisition of information about an object or phenomenon, by the use of either recording or real-time sensing devices that are not in physical or intimate contact with the object (such as by way of aircraft, spacecraft, satellite, buoy, or ship). In practice, remote sensing is the stand-off collection through the use of a variety of devices for gathering information on a given object or area which gives more information than sensors at individual sites might convey. Thus, Earth observation or weather satellite collection platforms, ocean and atmospheric observing weather buoy platforms, monitoring of a pregnancy via ultrasound, magnetic resonance imaging (MRI), positron-emission tomography (PET), and space probes are all examples of remote sensing. In modern usage, the term generally refers to the use of imaging sensor technologies including but not limited to the use of instruments aboard aircraft and spacecraft, and is distinct from other imaging-related fields such as medical imaging. There are two kinds of remote sensing. Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding area being observed. Reflected sunlight is the most common source of radiation measured by passive sensors. Examples of passive remote sensors include film photography, infrared, charge-coupled devices, and radiometers. Active collection, on the other hand, emits energy in order to scan objects and areas whereupon a sensor then detects and measures the radiation that is reflected or backscattered from the target. Radar, lidar, and SODAR are examples of active remote sensing techniques used in atmospheric physics where the time delay between emission and return is measured, establishing the location, height, speed and direction of an object. Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote sensing applications include monitoring deforestation in areas such as the Amazon Basin, the effects of climate change on glaciers and Arctic and Antarctic regions, and depth sounding of coastal and ocean depths. Military collection during the Cold War made use of stand-off collection of data about dangerous border areas. Remote sensing also replaces costly and slow data collection on the ground, ensuring in the process that areas or objects are not disturbed. 
Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum, which, in conjunction with larger scale aerial or ground-based sensing and analysis, provides researchers with enough information to monitor trends such as El Niño and other natural long and short term phenomena. Other uses include different areas of the earth sciences such as natural resource management, agricultural fields such as land usage and conservation, and national security and overhead, ground-based and stand-off collection on border areas. Radiation Atmospheric physicists typically divide radiation into solar radiation (emitted by the sun) and terrestrial radiation (emitted by Earth's surface and atmosphere). Solar radiation contains a variety of wavelengths. Visible light has wavelengths between 0.4 and 0.7 micrometers. Shorter wavelengths are known as the ultraviolet (UV) part of the spectrum, while longer wavelengths are grouped into the infrared portion of the spectrum. Ozone is most effective in absorbing radiation around 0.25 micrometers, where UV-C rays lie in the spectrum. This increases the temperature of the nearby stratosphere. Snow reflects 88% of UV rays, while sand reflects 12%, and water reflects only 4% of incoming UV radiation. The more glancing the angle is between the atmosphere and the sun's rays, the more likely that energy will be reflected or absorbed by the atmosphere. Terrestrial radiation is emitted at much longer wavelengths than solar radiation. This is because Earth is much colder than the sun. Radiation is emitted by Earth across a range of wavelengths, as formalized in Planck's law. The wavelength of maximum energy is around 10 micrometers. Cloud physics Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of clouds. Clouds are composed of microscopic droplets of water (warm clouds), tiny crystals of ice, or both (mixed phase clouds). Under suitable conditions, the droplets combine to form precipitation, where they may fall to the earth. The precise mechanics of how a cloud forms and grows is not completely understood, but scientists have developed theories explaining the structure of clouds by studying the microphysics of individual droplets. Advances in radar and satellite technology have also allowed the precise study of clouds on a large scale. Atmospheric electricity Atmospheric electricity is the term given to the electrostatics and electrodynamics of the atmosphere (or, more broadly, the atmosphere of any planet). The Earth's surface, the ionosphere, and the atmosphere together are known as the global atmospheric electrical circuit. Lightning discharges 30,000 amperes, at up to 100 million volts, and emits light, radio waves, X-rays and even gamma rays. Plasma temperatures in lightning can approach 28,000 kelvins and electron densities may exceed 10²⁴/m³. Atmospheric tide The largest-amplitude atmospheric tides are mostly generated in the troposphere and stratosphere when the atmosphere is periodically heated as water vapour and ozone absorb solar radiation during the day. The tides generated are then able to propagate away from these source regions and ascend into the mesosphere and thermosphere. Atmospheric tides can be measured as regular fluctuations in wind, temperature, density and pressure. 
Although atmospheric tides share much in common with ocean tides they have two key distinguishing features: i) Atmospheric tides are primarily excited by the Sun's heating of the atmosphere whereas ocean tides are primarily excited by the Moon's gravitational field. This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day whereas ocean tides have longer periods of oscillation related to the lunar day (time between successive lunar transits) of about 24 hours 51 minutes. ii) Atmospheric tides propagate in an atmosphere where density varies significantly with height. A consequence of this is that their amplitudes naturally increase exponentially as the tide ascends into progressively more rarefied regions of the atmosphere (for an explanation of this phenomenon, see below). In contrast, the density of the oceans varies only slightly with depth and so there the tides do not necessarily vary in amplitude with depth. Note that although solar heating is responsible for the largest-amplitude atmospheric tides, the gravitational fields of the Sun and Moon also raise tides in the atmosphere, with the lunar gravitational atmospheric tidal effect being significantly greater than its solar counterpart. At ground level, atmospheric tides can be detected as regular but small oscillations in surface pressure with periods of 24 and 12 hours. Daily pressure maxima occur at 10 a.m. and 10 p.m. local time, while minima occur at 4 a.m. and 4 p.m. local time. The absolute maximum occurs at 10 a.m. while the absolute minimum occurs at 4 p.m. However, at greater heights the amplitudes of the tides can become very large. In the mesosphere (heights of ~ 50 – 100 km) atmospheric tides can reach amplitudes of more than 50 m/s and are often the most significant part of the motion of the atmosphere. Aeronomy Aeronomy is the science of the upper region of the atmosphere, where dissociation and ionization are important. The term aeronomy was introduced by Sydney Chapman in 1960. Today, the term also includes the science of the corresponding regions of the atmospheres of other planets. Research in aeronomy requires access to balloons, satellites, and sounding rockets which provide valuable data about this region of the atmosphere. Atmospheric tides play an important role in interacting with both the lower and upper atmosphere. Amongst the phenomena studied are upper-atmospheric lightning discharges, such as luminous events called red sprites, sprite halos, blue jets, and elves. Centers of research In the UK, atmospheric studies are underpinned by the Met Office, the Natural Environment Research Council and the Science and Technology Facilities Council. Divisions of the U.S. National Oceanic and Atmospheric Administration (NOAA) oversee research projects and weather modeling involving atmospheric physics. The US National Astronomy and Ionosphere Center also carries out studies of the high atmosphere. In Belgium, the Belgian Institute for Space Aeronomy studies the atmosphere and outer space. In France, there are several public or private entities researching the atmosphere, as an example météo-France (Météo-France), several laboratories in the national scientific research center (such as the laboratories in the IPSL group). 
See also Adiabatic lapse rate Atmospheric thermodynamics Baroclinic instability Barotropic vorticity equation Convective instability Coriolis effect Euler equations Exometeorology FluxNet Geostrophic wind Gravity wave Hydrostatic balance Kelvin–Helmholtz instability Madden–Julian oscillation Navier–Stokes equations Potential vorticity Pressure-gradient force Primitive equations Rossby number Rossby radius of deformation Space weather Space physics Thermal wind Vorticity equation References Further reading J. V. Iribarne, H. R. Cho, Atmospheric Physics, D. Reidel Publishing Company, 1980. External links Branches of meteorology Fluid dynamics Applied and interdisciplinary physics
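The radiation section of this article states that terrestrial emission peaks near 10 micrometers. As a quick, hedged check (not part of the original article), the sketch below applies Wien's displacement law, assuming typical textbook temperatures of roughly 288 K for Earth's surface and 5778 K for the Sun; the constant value and function name are illustrative:

```python
# Quick check of the emission peaks mentioned in the radiation section above,
# using Wien's displacement law: lambda_max = b / T.
WIEN_B_UM_K = 2897.8  # Wien's displacement constant in micrometre-kelvins

def peak_wavelength_um(temperature_k):
    """Wavelength of maximum blackbody emission, in micrometres."""
    return WIEN_B_UM_K / temperature_k

print("Earth (~288 K): %.1f um" % peak_wavelength_um(288))   # ~10 um, terrestrial infrared
print("Sun (~5778 K):  %.2f um" % peak_wavelength_um(5778))  # ~0.50 um, visible light
```

The result of about 10 μm for Earth agrees with the figure quoted in the text, while the solar peak falls in the visible band, consistent with the solar/terrestrial split described there.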
Atmospheric physics
[ "Physics", "Chemistry", "Engineering" ]
2,056
[ "Applied and interdisciplinary physics", "Chemical engineering", "Atmospheric physics", "Piping", "Fluid dynamics" ]
801,463
https://en.wikipedia.org/wiki/Cyclosarin
Cyclosarin or GF (cyclohexyl methylphosphonofluoridate) is an extremely toxic substance used as a chemical weapon. It is a member of the G-series family of nerve agents, a group of chemical weapons discovered and synthesized by a German team led by Gerhard Schrader. The major nerve gases are the G agents, sarin (GB), soman (GD), tabun (GA), and the V agents such as VX. The original agent, tabun, was discovered in Germany in 1936 in the process of work on organophosphorus insecticides. Next came sarin, soman and finally, cyclosarin, a product of commercial insecticide laboratories prior to World War II. As a chemical weapon, it is classified as a weapon of mass destruction by the United Nations. Pursuant to UN Resolution 687 its production and stockpiling was outlawed globally by the Chemical Weapons Convention (CWC) of 1993, although Egypt, Israel, North Korea and South Sudan have not ratified the CWC (thus not outlawing their own stockpiling of chemical weapons). Chemical characteristics Like its predecessor sarin, cyclosarin is a liquid organophosphate nerve agent. Its physical characteristics are, however, quite different from those of sarin. At room temperature, cyclosarin is a colorless liquid whose odor has been variously described as sweet and musty, or resembling peaches or shellac. Unlike sarin, cyclosarin is a persistent liquid, meaning that it has a low vapor pressure and therefore evaporates relatively slowly, at only about 1/69th the rate of sarin and 1/20th that of water. Also unlike sarin, cyclosarin is flammable, with a flash point of 94 °C (201 °F). History First synthesized during World War II as part of Nazi Germany's chemical weapons research on organophosphate compounds after their military potential was recognized, cyclosarin was also studied later in the United States and Great Britain in the early 1950s as part of a systematic study of potential nerve agents. It was never selected for mass production, however, due to its precursors being more expensive than those of other G-series nerve agents such as sarin (GB). To date, Iraq is the only nation known to have manufactured significant quantities of cyclosarin for use as a chemical agent and to deploy it in battle. During the Iran–Iraq War (1980–1988), the Iraqis used sarin and cyclosarin together as a mixture. This was likely done to obtain a more persistent chemical agent as well as in response to an existing embargo placed on alcohol precursors for sarin. Munitions Binary weapons Like other nerve agents, cyclosarin can be shipped in binary munitions. A cyclosarin binary weapon would most likely contain methylphosphonyl difluoride in one capsule, with the other capsule containing a mixture of cyclohexylamine and cyclohexanol. GB-GF mixtures Iraq fielded munitions filled with a mixture of GB (sarin) and GF (cyclosarin). Tests on mice indicated that GB-GF mixtures have a relative toxicity between GF and GB. References United States Central Intelligence Agency. (July 15, 1996). Stability of Iraq's Chemical Weapon Stockpile Retrieved October 30, 2004 Office of the Special Assistant for Gulf War Illnesses. (Oct. 19, 2004). Chemical Properties of Sarin and Cyclosarin Retrieved October 30, 2004 Press release from Centcom confirming that the chemical munitions found by the Poles dated back to before the 1991 Gulf War, and, thus, could not represent a threat. Acetylcholinesterase inhibitors Methylphosphonofluoridates G-series nerve agents German inventions of the Nazi period Cyclohexyl compounds
Cyclosarin
[ "Chemistry" ]
809
[ "Highly-toxic chemical substances", "Harmful chemical substances" ]
801,682
https://en.wikipedia.org/wiki/Long-distance%20relationship
A long-distance relationship (LDR) or long-distance romantic relationship is an intimate relationship between partners who are geographically separated from one another. Partners in LDRs face geographic separation and lack of face-to-face contact. LDRs are particularly prevalent among college students, constituting 25% to 50% of all relationships. Even though scholars have reported a significant number of LDRs in undergraduate populations, long-distance relationships continue to be an understudied phenomenon. Also, a romantic long-distance relationship (rLDR) refers to a relationship in which communication opportunities are limited due to geographical factors, and the individuals involved in the relationship have expectations of maintaining a close connection. Characteristics LDRs are qualitatively different from geographically close relationships; that is, relationships in which the partners are able to see each other, face-to-face, most days. Rohlfing (1995) suggests the following unique challenges for those in long-distance relationships: Increased financial burdens to maintain relationships Difficulty maintaining geographically close friendships while in long-distance romantic relationships Difficulty judging the state of a relationship from a distance High expectations by partners for the quality of limited face-to-face meetings in the relationship LDRs with friends and family Not all long-distance relationships are romantic. When individuals go away to school, their relationships with family and friends also become long-distance. Pew Internet (2004) asserts that 79% of adult respondents from the United States reported using the Internet for communication with family and friends. Also, Pew Internet (2002a) states that because of new technologies, college students will have greater social ties with their friends than their family members. Therefore, examining email among college students helps explore how the Internet is affecting college students emotionally and socially. Under the great influence of globalization, together with the advancement in transportation and communication technologies, migration has gradually become a feature of contemporary society. As a result, transnational families have become increasingly common, in which family members live in different regions and countries, yet hold a sense of collective unity across national borders. For instance, children choose to leave home to study abroad, parents decide to leave home for better prospects and salaries, or siblings pursue different life paths around the world. Sustaining family relationships A qualitative study that conducted 50 interviews with adult migrant children in Australia and their parents in Italy, Ireland, and the Netherlands found that geographically separated family members generally exchanged all types of care and support that proximate families did, including financial, practical, personal, accommodation, and emotional or moral support. According to Loretta Baldassar, a closely related ethnographic analysis of 30 transnational families between grown-up migrant children living in Australia and their parents in Italy from the 1950s to 2000s illustrated that the exchange of emotional and moral support between parents and children was the fundamental factor for sustaining and staying committed to family relationships in transnational families. 
The prevalence of Internet technologies has facilitated remote family members’ emotional exchange, and provided them with the opportunity of accessible and affordable long-distance communication on a daily basis for sustaining relationships. Cao (2013) conducted a series of interviews with 14 individuals who constantly communicated with family members living in different time zones, namely the UK, US, Canada, and China. Analysis revealed that among a variety of communication methods, including synchronous means such as telephone and Internet audio/video call (e.g., Skype) and asynchronous methods such as email or text messaging, remote family members relied heavily on synchronous methods for virtual contact. The real-time interactivity from synchronous communication provides a sense of presence, connectedness, and dedication between family members, which is regarded by Cao as an essential component of emotional support. However, it is worth noting that Internet technologies have not replaced the use of older forms of communication: transnational families still use letters, cards, gifts, and photographs, among other means, to show their care and love. Research has shown that people sustain close relationships using different communication patterns with different family members. While people usually communicate heavily with immediate family members such as parents or children, they tend to communicate less frequently and regularly with other family members including siblings across time zones. It is suggested that siblings feel less obligated to communicate as dedicatedly with each other, especially among the younger generation, and they prefer ad hoc communication, such as through instant messages, to update each other's status. The effects of geographical separation on children’s well-being Globally, there is a considerable number of parents who travel to another country in search of work, leaving behind their children in their home country. These parents hope to provide their children with better future life chances. The impacts of parents’ migration for work on left-behind children's growth are mixed, depending on various factors, and the outcomes of transnational living arrangements for children's well-being vary. For instance, through surveying a sample of 755 Mexican households with at least one family member who had migrated to the US, researchers reported that left-behind children might benefit economically from the remittances their parents sent home while suffering emotionally from long-term separation. Similar results were found in a correlational study by Lahaie, Hayes, Piper, and Heymann (2009), which investigated the relationship between parental migration and children's mental health outcomes using a representative sample of transnational families in Mexico and the US. In addition, whether the mother or father migrates for work also plays a role. Based on the interviews and observations with Filipina transnational families, children tended to experience more emotional problems from transnational motherhood than fatherhood, taking the traditional family gender roles into account. The impacts of parental migration on children's psychological well-being are also distinctive in different countries. 
With reference to the data collected from the cross-sectional baseline study of Children Health and Migrant Parents in Southeast Asia (CHMPSEA), Graham and Jordan (2011) showed that children of migrant fathers in Indonesia and Thailand were more likely to suffer from poor psychological health when compared to children in non-migrant families, while the findings did not replicate in children from the Philippines and Vietnam. Special care arrangements for left-behind children, such as asking extended family members for help to take on caregiving tasks, affect children's growth substantially. Lahaie et al. (2009) revealed that children who took care of themselves were more likely to exhibit behavioral and academic problems than other children with care arrangements. The feeling of being abandoned by parents is proposed to be one of the reasons that children engage in undesirable behaviors such as quitting school or gang involvement as retaliation. Formats of LDR Long-distance relationships (LDRs) exhibit considerable variation based on their specific contexts. For example, Smith-Osborne and Jani (2014) note that military LDRs are influenced by military culture, where mission priorities often overshadow personal relationships. Similarly, Nickels (2019) investigates relationships involving incarcerated partners, highlighting challenges such as restricted communication, societal stigma, and significant financial burdens. In commuter relationships, where partners reside in different cities, regular visits and communication help maintain their connection. Research by Rhodes (2002) indicates that these relationships can endure through a mix of in-person meetings and digital communication, providing flexibility despite the physical distance. Transnational relationships involve partners from different countries, requiring them to navigate cultural differences alongside geographical separation. These relationships heavily depend on technology for communication, which facilitates bridging cultural divides and enhancing mutual understanding between partners. Challenges According to research by Tara Suwinyattichaiporn (2017), there are some common issues in long-distance relationships. Idealization of partners Idealization is a major concern in LDRs that involves unrealistic positive impressions of the partner and the relationship. Additionally, Stafford (2007) concluded that idealization during separation corresponds with postreunion instability. The study concluded that LDR partners are more likely to break up after getting back together than while they are separated. Reunion stability was positively related to face-to-face contact during the time apart but was negatively related to CMC and mail contact. Also, idealization during separation has a negative effect on stability after changing the status of the relationship from separation to proximity. The absence of close contact and the use of computer technologies in communication lead to positive impression management. Although idealization can maintain LDRs by enhancing relational satisfaction and stability, it becomes problematic when couples move to physical proximity because reality tends to differ from the idealized images. Relational uncertainty LDRs are characterised by physical separation of the partners and, therefore, the partners are bound to experience higher levels of uncertainty. Types of uncertainty Self-uncertainty, including skepticism regarding one’s own emotions. 
Partner uncertainty, a feeling of insecurity about the partner’s affection and devotion. Relationship uncertainty, regarding the general status of the relationship. LDRs are also characterised by limited contact and fewer visits, hence the element of uncertainty is more pronounced. It is most commonly linked to decreased satisfaction. Jealousy Jealousy occurs more frequently in LDRs due to relational uncertainty or use of social media and other similar platforms. Communication challenges LDRs are especially dependent on technology, and while they allow for high-quality interactions, they can also exacerbate such concerns as idealization. LDR partners claim their communication is more purposeful but have higher levels of conflict when they move to live close to each other. However, Tara Suwinyattichaiporn (2017) mentioned one positive aspect of long-distance relationships (LDRs) that gives them an advantage over geographically close relationships (GCRs): the opportunity to have higher quality interactions during the time that is spent with each other. According to the survey, LDR couples said that the time they spent communicating or meeting is more meaningful, focused and purposeful than that of GCR couples. Such intentionality may result in more profound emotional entwinement and less daily conflict between them during their communications. Moreover, according to Stafford (2010), given the lack of physical proximity, conversations that are often considered taboo with a partner who lives nearby are normal and beneficial for LDRs. Military long-distance relationship The partners of military personnel deployed abroad experience a significant amount of stress, before and during the deployment. The difference between a military LDR and a regular LDR is that, while a regular LDR allows more frequent communication, communication in a military LDR is unpredictable, controlled by military regulations, and often limited by a lack of time to talk. Because of the communication restrictions and the overall process of deployment, the partner back home is often left feeling lonely and stressed about how to keep a strong relationship moving forward. Other stressors that add to the emotional situation are the realization that the service member is being deployed to a combat zone where their life is threatened. Through all the stages of the deployment the partner may exhibit many emotional problems, such as anxiety, loss, denial, anger, depression, and acceptance. This type of long-distance relationship is very hard to maintain. That is why there are some ways of adjustment that partners follow to stay together and prevent problems from arising. Independence of the non-deployed partner: First is independence of the non-deployed partner and rejection of traditional gender roles. Because the deployed partner is busy with their mission, the non-deployed partner should understand that all responsibility for making decisions about the family and household, handling finances and caring for children falls to them alone. Flexibility and adjustment to expectations: Both partners should be flexible and accept that traditional expectations, like consistent communication and shared parenting, may not always be feasible. Moreover, non-deployed partners should accept “military culture”, which posits that mission and military duties take priority over personal relationships and love. These adjustments help to manage expectations and make both partners comfortable together. 
Limitations of research Most of the knowledge regarding changes in military long-distance relationships is based on research carried out in a particular cultural and geographical setting, namely the United States. These findings may not capture the differences in military cultures and norms of societies in other countries. Incarcerated Long-Distance Relationships Incarcerated long-distance relationships have their own features that are similar to military ones but with some differences that make them harder to maintain. Stigma and lack of support The non-incarcerated partner can face a lack of support from those around them for staying in a relationship with an incarcerated partner and, as a result, feel more lonely, isolated and unmotivated to maintain the relationship. Barriers to maintaining the relationship These barriers include limited intimacy due to prison restrictions, high communication costs, physical distance, and stress about the future with an incarcerated partner. Behaviors for maintaining incarcerated LDRs A study identified five key behaviors that support the maintenance of incarcerated LDRs: Positivity: behaving cheerfully and optimistically during interactions. Openness: sharing thoughts, feelings, and discussing the quality of the relationship. Assurances: expressing affection and commitment to the relationship. Sharing tasks: collaborating on shared responsibilities where possible. Social network involvement: spending time with mutual friends and involving family in activities. Limitations of research This research is a quantitative cross-sectional study done at a national level and can be assumed to reflect the results of the country’s cultural and institutional environment. It may not hold true in every country because the structure of prison systems, cultural differences, and financial situations can greatly affect these kinds of relationships. Statistics in the US In 2005 a survey suggested that in the United States, 14 to 15 million people considered themselves to be in a long-distance relationship. By 2015, this number remained at about 14 million. About 32.5% of college relationships are long-distance. The average amount of distance in a long-distance relationship is 125 miles. Couples in a long-distance relationship call each other every 2.7 days. On average, couples in long-distance relationships will visit each other 1.5 times a month. Couples in long-distance relationships also expect to live together around 14 months into the relationship. About 40% of couples in long-distance relationships break up; around 4.5 months into the relationship is the time when couples most commonly start having problems. 70% of couples in a long-distance relationship break up due to unplanned circumstances and events. 75% of all engaged couples have, at some point, been in a long distance relationship, and around 10% of couples continue to maintain a long-distance relationship after marriage. About 3.75 million married couples are in a long-distance relationship in the US alone. Means of staying in contact New communication technologies such as cellular phone plans make communication among individuals at a distance easier than in the past. Before the popularity of internet dating, long-distance relationships were not as common, as the primary forms of communication between romantic partners usually involved either telephone conversations or corresponding via mail. 
According to Pew Internet, when American citizens were asked how often they used the Internet on a typical day, 56% reported sending or reading email, 10% reported sending instant messages, and 9% reported using an online social network such as Facebook or Twitter. However, with the advent of the Internet, long-distance relationships have exploded in popularity as they become less challenging to sustain with the use of modern technology. Ultimately, communicating and setting realistic goals can help prevent disconnection and the loss of touch. The increase in long-distance relationships is matched by an increasing number of technologies designed specifically to support intimate couples living apart. In particular there have been a host of devices which have attempted to mimic co-located behaviors at a distance, including hugging and even kissing. The success of these technologies has, so far, been limited. Couples who have routine, strategic relational maintenance behaviors, and take advantage of social media can help maintain a long-distance relationship. Having positivity (making interactions cheerful and pleasant), openness (directly discussing the relationship and one's feelings), assurances (reassuring the partner about the relationship and the future), network (relying on support and love of others), shared tasks (performing common tasks) and conflict management (giving the partner advice) are some of the routine and strategic relational maintenance behaviors. The differentiation of social media usage during LDRs consists of several actions, including video calls (Instagram, WhatsApp, Snapchat, Viber, Skype, Discord, Telegram, etc.), phone calls, and everyday chatting. Research by Abel, Machin and Brownlow (2020) discusses the main themes regarding interactions in a social media context. Families utilize various social media platforms for functional and transactional tasks, engaging in bonding activities through audiovisual calls that are often used for casual conversations, while group chats help maintain bonds without intruding on each other's time. However, barriers such as limited internet access, lower socioeconomic status, and digital literacy can hinder these interactions. Families also demonstrate resilience by recreating face-to-face rituals online, maintaining traditions despite geographic separation. According to Morgan R. Kuske (2020), long-distance partners use social media such as Snapchat, Facebook, Instagram, WhatsApp, and Twitter to keep in contact. It is the most convenient way to have private communication. Shared online activities are also methods of staying in contact. These include playing online games together (on Discord and other platforms), watching movies together (made possible by the technological advance of streaming services), and sharing short videos on TikTok or Instagram Reels. Couples can organize virtual dates where they participate in the same activities, like preparing a meal together or watching a film at the same time. Engaging in multiplayer games offers them an interactive way to connect through friendly competition and teamwork. Additionally, utilizing technology to discover new locations together, such as through virtual museum tours or live-streamed travel experiences, can strengthen their sense of partnership. Participating in creative activities, like painting portraits of one another during a video call, can be both enjoyable and insightful. Giving and receiving gifts is also an important factor in how long LDRs last. 
Gift exchange across distances can disrupt the solitude of distance, becoming a way of exchanging experiences between people who are geographically apart. This research points out that such gifts are most fulfilling because the recipient gets to receive much more than the physical item; it is a message of the self that may be difficult to put into words. Some sites on the Internet consist of different gifts that could be shared with one’s partner. Continuing, when a person lacks money for delivery or transportation, there is an opportunity to make various types of DIYs, so-called “hand-made” gifts. Attention should also be paid to the reliability of pages and sites, so as not to ruin the relationship because of strangers’ blogs. The lack of physical touch is not disappearing. However, scientists, consumers and psychologists already have devices that try to fill the gap that the lack of touch causes. From sensory bracelets and heartbeat rings to adult toys, the variety of choice is increasing every day. The research conducted by Saadatian and Samani (2014) presents the idea of "kiss messaging," a digital technology created to replicate the sensation of kissing. This innovation seeks to close the emotional gap caused by physical separation by offering a way for partners to show affection. Kiss messaging acts as a substitute for physical contact, enabling couples to perform intimate actions that would be unattainable due to distance. Relationship maintenance behaviors Intimate relationship partners constantly work to improve their relationship. There are many ways in which they can make their partner happy and strengthen the overall relationship. The ways in which individuals behave have a major effect on the satisfaction and the durability of the relationship. Researchers have found systems of maintenance behaviors between intimate partners. Maintenance behaviors can be separated into seven categories: assurances in relation to love and commitment in the relationship, openness in sharing their feelings, conflict management, positive interactions, sharing tasks, giving advice to their partner, and using social networks for support (Dainton, 2000; Stafford, Dainton, & Haas, 2000). Dindia and Emmers-Sommer (2006) identified three categories of maintenance behaviors that are used by partners to deal with separation. "Prospective behaviors, such as telling the partner goodbye, which addresses anticipated separation; introspective behaviors, which is communication when the partners are apart; and retrospective behaviors which are basically talking to each other face to face, which reaffirms connection after separation." (Dindia, & Emmers-Sommer, 2006). These are known as the relationship continuity constructional units (RCCUs). Maintenance behaviors as well as the RCCUs are correlated with an increase in relationship satisfaction, as well as commitment (Pistole et al., 2010). Research In a study of jealousy experience and expression in LDRs, 114 individuals who were in LDRs indicated how much face-to-face contact they had in a typical week. Thirty-three percent of participants reported no face-to-face contact, whereas 67% reported periodic face-to-face contact with a mean of one to two days. The researchers compared LDRs to GCRs (geographically close relationships) and discovered that those in LDRs with no face-to-face contact experience more jealousy than those with periodic face-to-face contact or those in GCRs. 
Furthermore, those without periodic face-to-face contact were more likely to use the internet to communicate with their partner. They found that the presence of periodic face-to-face contact is a crucial factor in the satisfaction, commitment, and trust of LDR partners. Those who do not experience periodic face-to-face contact reported significantly lower levels of satisfaction, commitment, and trust. Another study generated a sample of 335 undergraduate students who were in LDRs and became geographically close. Of the reunited couples, 66 individuals terminated their relationships after moving to the same location, whereas 114 continued their relationship. A study done by Stafford, Merolla, and Castle (2006) reported that the transition from being separated geographically to proximal increased partner interference. Based on the Communicative Interdependence Perspective, it was found that when partners switched from technologically mediated communication (TMC) to face-to-face (FtF) or vice versa, they experienced certain levels of discomfort. The transition from FtF to TMC communication can make it difficult to express one's emotions and can more easily cause miscommunication. It is believed to be plausible that such transitions can be a risk factor for long-distance dating relationships. Based on the analysis of the open-ended responses, 97% of respondents noted some type of relationship change associated with the LD-GC (geographically close) transition. When the respondents were asked about having more face-to-face time once geographically close and about the enjoyment of increased time spent together, most comments were positive. For example, "We finally got to do all the 'little' things we'd been wanting to do for so long; we get to hold each other, wake up next to each other, eat together, etc." Many individuals reported a loss of autonomy following reunion. For example, many individuals liked and missed the "freedom" or "privacy" the distance allowed. Reports of "nagging", demanding or expecting "too much" were also frequent responses. Several individuals reported more conflict and "fighting" in their relationship after it became geographically close. Many said they felt the conflict in their relationship was not only more frequent but also more difficult to resolve. For example, one individual stated that, when his/her relationship was long-distance, they "fought less and if we did fight, problems were solved in a shorter amount of time." For some individuals, living in the same location led to increased feelings of jealousy. After witnessing their partner's behavior, some participants said that they became increasingly concerned that their partners were currently "cheating" on them or had "cheated on them in the past." Reunion also allowed the discovery of positive as well as negative characteristics about their partner, with some feeling that the partner had changed in some way since the relationship was long-distance. See also Living apart together Dual-career commuter couples References Bibliography Chris Bell, Kate Brauer-Bell, The Long-Distance Relationship Survival Guide (New York: Ten Speed Press, 2006) Dindia, K., & Emmers-Sommer, E. M., (2006). What partner do to maintain their close relationships. In P. Noller & J. A. Feeney (Eds.) Close relationships: Functions, forms, and processes (pp. 302–324). New York: Psychology Press. Seetha Narayan, The Complete Idiot's Guide to Long-Distance Relationships (Alpha Books: 2005) Rohlfing, M. E. (1995). Doesn't anyone stay in one place anymore? 
An exploration of the understudied phenomenon of long-distance relationships. In J. Woods & S. Duck (Eds.), Understudied relationships: Off the beaten track (pp. 173–196). Thousand Oaks, CA: Sage Marnocha, Suzanne. "Military Wives' Transition and Coping: Deployment and the Return Home." Military Wives' Transition and Coping: Deployment and the Return Home. International Scholarly Research Notices, 5 Mar. 2012. Web. 3 Nov. 2015. Pew Internet & American Life Project. (2002a). The Internet goes to college: How students are living in the future with today's technology. Retrieved October 15, 2005 Pew Internet & American Life Project (2004). The Internet and daily life: Many Americans use the Internet in everyday activities, but traditional offline habits still dominate. Retrieved October 11, 2007 Suwinyattichaiporn, T., Fontana, J., Shaknitz, L., & Linder, K. (2017). Maintaining long-distance romantic relationships: The college students' perspective. Kentucky Journal of Communication, 36(1), 67–89. Smith-Osborne, A., & Jani, J. (2014). Long-distance military and civilian relationships: Women’s perceptions of the impact of communication technology and military culture. Military Behavioral Health, 2(4), 293–303. https://doi.org/10.1080/21635781.2014.963759 Nickels, B. M. (2019). Love locked up: An exploration of relationship maintenance and perceived barriers for women who have incarcerated partners. Journal of Family Communication, 20(1), 36–50. https://doi.org/10.1080/15267431.2019.1674853 Rhodes, A. R. (2002). Long-Distance Relationships in Dual-Career Commuter Couples: A Review of Counseling Issues. The Family Journal, 10(4), 398–404. doi:10.1177/106648002236758 Janning, M., Gao, W., & Snyder, E. (2017). Constructing Shared “Space”: Meaningfulness in Long-Distance Romantic Relationship Communication Formats. Journal of Family Issues, 39(5), 1281–1303. doi:10.1177/0192513x1769872 Chien, W.-C., & Hassenzahl, M. (2017). Technology-Mediated Relationship Maintenance in Romantic Long-Distance Relationships: An Autoethnographical Research through Design. Human–Computer Interaction, 1–48. doi:10.1080/07370024.2017.1401927 Stafford, L., & Merolla, A. J. (2007). Idealization, reunions, and stability in long-distance dating relationships. Journal of Social and Personal Relationships, 24(1), 37–54. doi:10.1177/0265407507072578 Stafford, L. (2010). Geographic distance and communication during courtship. Journal of Communication, 60(1), 161–183. https://doi.org/10.1111/j.1460-2466.2009.01473.x Abel, S., Machin, T., & Brownlow, C. (2020). Social media, rituals, and long-distance family relationship maintenance: A mixed-methods systematic review. New Media & Society, 146144482095871. doi:10.1177/1461444820958717 Kuske, M. R., Leahy, R. (Sponsor). (2020). Social media use in the maintenance of long-distance romantic relationships in college. UWL Journal of Undergraduate Research, 23, 1–6. Latinytė, R. (2023). Communication Through Shared Experience: Gifts that Overcome Distance. LITUANUS. Saadatian, E., & Samani, R. (2016). Bridging intimacy in long-distance relationships: The advent of sensory technology. Journal of Digital Interactions, 12(3), 45–62. Interpersonal relationships Marriage
Long-distance relationship
[ "Biology" ]
5,970
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
802,190
https://en.wikipedia.org/wiki/Bushing%20%28isolator%29
A bushing or rubber bushing is a type of vibration isolator. It provides an interface between two parts, damping the energy transmitted through the bushing. A common application is in vehicle suspension systems, where a bushing made of rubber (or, more often, synthetic rubber or polyurethane) separates the faces of two metal objects while allowing a certain amount of movement. This movement allows the suspension parts to move freely, for example, when traveling over a large bump, while minimizing transmission of noise and small vibrations through to the chassis of the vehicle. A rubber bushing may also be described as a flexible mounting or antivibration mounting. These bushings often take the form of an annular cylinder of flexible material inside a metallic casing or outer tube. They might also feature an internal crush tube which protects the bushing from being crushed by the fixings which hold it onto a threaded spigot. Many different types of bushing designs exist. An important difference compared with plain bearings is that the relative motion between the two connected parts is accommodated by strain in the rubber, rather than by shear or friction at the interface. Some rubber bushings, such as the D block for a sway bar, do allow sliding at the interface between one part and the rubber. History Charles E. Sorensen credits Walter Chrysler as being a leader in encouraging the adoption of rubber vibration-isolating mounts. In his memoir (1956), he says that, on March 10, 1932, Chrysler called at Ford headquarters to show off a new Plymouth model. "The most radical feature of his car was the novel suspension of its six-cylinder engine so as to cut down vibration. The engine was supported on three points and rested on rubber mounts. Noise and vibration were much less. There was still a lot of movement of the engine when idling, but under a load it settled down. Although it was a great success in the Plymouth, Henry Ford did not like it. For no given reason, he just didn't like it, and that was that. I told Walter that I felt it was a step in the right direction, that it would smooth out all noises and would adapt itself to axles and springs and steering-gear mounts, which would stop the transfer of road noises into the body. Today rubber mounts are used on all cars. They are also found on electric-motor mounts, in refrigerators, radios, television sets—wherever mechanical noises are apparent, rubber is used to eliminate them. We can thank Walter Chrysler for a quieter way of life. Mr. Ford could have installed this new mount at once in the V-8, but he missed the value of it. Later Edsel and I persuaded him. Rubber mounts are now found also in doors, hinges, windshields, fenders, spring hangers, shackles, and lamps—all with the idea of eliminating squeaks and rattles." Lee Iacocca credits Chrysler's chief of engineering during that era, Frederick Zeder, with leading the effort. Iacocca said that Zeder "was the first man to figure out how to get the vibrations out of cars. His solution? He mounted their engines on a rubber base." In Vehicles A bushing is a type of bearing that is used in the suspension system of a vehicle. It is typically used to connect moving parts such as control arms and sway bars to the frame of the vehicle, and also to isolate these parts from each other and from the frame. 
The main function of a bushing is to reduce the transmission of vibrations and shocks from the road to the rest of the vehicle, which helps to improve the overall ride comfort and reduce noise and harshness inside the vehicle. See also Shock mount References Bibliography DeSilva, C. W., "Vibration and Shock Handbook", CRC, 2005, Harris, C. M., and Peirsol, A. G. "Shock and Vibration Handbook", 2001, McGraw Hill, Hardware (mechanical) Mechanical engineering Mechanical vibrations Skateboarding equipment
Bushing (isolator)
[ "Physics", "Technology", "Engineering" ]
824
[ "Structural engineering", "Machines", "Applied and interdisciplinary physics", "Physical systems", "Construction", "Mechanics", "Mechanical engineering", "Mechanical vibrations", "Hardware (mechanical)" ]
12,893,837
https://en.wikipedia.org/wiki/Energetic%20material
Energetic materials are a class of materials with a high amount of stored chemical energy that can be released. Typical classes of energetic materials include explosives, pyrotechnic compositions, propellants (e.g. smokeless gunpowders and rocket fuels), and fuels (e.g. diesel fuel and gasoline). References External links
Energetic material
[ "Physics" ]
73
[ "Materials stubs", "Materials", "Matter" ]
12,893,886
https://en.wikipedia.org/wiki/Security%20convergence
Security convergence refers to the convergence of two historically distinct security functions – physical security and information security – within enterprises; both are integral parts of a coherent risk management program. Security convergence is motivated by the recognition that corporate assets are increasingly information-based. In the past, physical assets demanded the bulk of protection efforts, whereas information assets are demanding increasing attention. Although generally used in relation to cyber-physical convergence, security convergence can also refer to the convergence of security with related risk and resilience disciplines, including business continuity planning and emergency management. Security convergence is often referred to as 'converged security'. Definitions According to the United States Cybersecurity and Infrastructure Security Agency, security convergence is the "formal collaboration between previously disjointed security functions." Survey participants in an ASIS Foundation study The State of Security Convergence in the United States, Europe, and India define security convergence as "getting security/risk management functions to work together seamlessly, closing the gaps and vulnerabilities that exist in the space between functions." In his book Security Convergence: Managing Enterprise Security Risk, Dave Tyson defines security convergence as "the integration of the cumulative security resources of an organization in order to deliver enterprise-wide benefits through enhanced risk mitigation, increased operational effectiveness and efficiency, and cost savings." Background The concept of security convergence has gained currency within the context of the Fourth Industrial Revolution, which, according to founder and Executive Chairman of the World Economic Forum (WEF) Klaus Schwab, "is characterised by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres." Key results of this fusion include developments in cyber-physical systems (CPS) and the growth of the Internet of Things (ioT), which have seen a proliferation in the number and types of internet connected physical objects. In 2017, Gartner predicted that there would be 20 billion internet-connected things by 2020. Security convergence was endorsed as early as 2007 by three leading international organizations for security professionals – ASIS International, ISACA and ISSA – which together co-founded the Alliance for Enterprise Security Risk Management to, in part, promote the concept. Types of convergence Cyber-physical convergence Risk convergence In the context of the Internet of Things, cyber threats more readily translate into physical consequences, and physical security breaches can also extend an organisation's cyber threat surface. According to the United States Cybersecurity and Infrastructure Security Agency, "The adoption and integration of Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices has led to an increasingly interconnected mesh of cyber-physical systems (CPS), which expands the attack surface and blurs the once clear functions of cybersecurity and physical security." According to the WEF Global Risks Report 2020, "Operational technologies are at increased risk because cyberattacks could cause more traditional, kinetic impacts as technology is being extended into the physical world, creating a cyber-physical system". 
According to the United States Department of Homeland Security, "The consequences of unintentional faults or malicious attacks [on cyber-physical systems] could have severe impact on human lives and the environment." Notable examples of attacks on internet connected facilities include the 2010 Stuxnet attack on Iran's Natanz nuclear facilities and the December 2015 Ukraine power grid cyberattack. “Today’s threats are a result of hybrid and blended attacks utilizing Information Technology (IT), physical infrastructure, and Operational Technology (OT) as the enemy avenue of approach," notes former CISA Assistant Director for Infrastructure Security Brian Harrell. "Highlighting this future threat landscape will ensure better situational awareness and a more rapid response.” Organisational convergence Traditionally distinct, or 'siloed', approaches to physical security and cyber security are viewed by proponents of security convergence as unable to adequately protect an organisation from attacks involving both cyber and physical (cyber-physical) dimensions. The organisational aspect of security convergence focuses on the extent to which an organisation's internal structure is capable of adequately addressing converged security risks. According to the Cybersecurity and Infrastructure Security Agency, "physical security and cybersecurity divisions are often still treated as separate entities. When security leaders operate in these siloes, they lack a holistic view of security threats targeting their enterprise. As a result, attacks are more likely to occur". "Many of the conventional physical and information security risks are viewed in isolation," states a PricewaterhouseCoopers document Convergence of Security Risks. "These risks may converge or overlap at specific points during the risk lifecycle, and as such, could become a blind spot to the organisation or individuals responsible for risk management." In a survey of more than 1,000 senior physical security, cybersecurity, disaster management, and business continuity professionals, the ASIS Foundation study The State of Security Convergence in the United States, Europe, and India found that despite “years of predictions about the inevitability of security convergence, just 24 percent of respondents have converged their physical and cybersecurity functions.” The survey also found that 96 percent of organisations that had converged two or more security functions reported positive results from convergence, with 72 percent reporting that convergence strengthened their overall security. Overall, 78 percent of those surveyed believed that convergence would strengthen their overall security function. Citing the work of Jay Wright Forrester on systems thinking, Optic Security Group CEO Jason Cherrington argues that a system of systems approach provides a useful lens to understanding how security sub-groups within an organisation contribute to an organisation's overall security goals. "In an ideal SoS world, organisations would see their security as a collection of task-oriented or dedicated systems that pool their resources and capabilities together as part of an overall system offering more functionality and performance than the sum of its parts. Importantly, oversight of the overall system would ensure that any gaps between its component systems are identified and failures avoided." 
Solutions convergence (unified security) The increasing prevalence of hybridised cyber-physical security threats has driven the parallel emergence of a range of converged security solutions that cover both cyber and physical domains. According to Jason Cherrington, "in contemporary security threats we’re seeing a convergence of physical and digital vectors; and that protection against these hybridised threats requires a hybridised approach." According to the United States Cybersecurity and Infrastructure Security Agency: "Organizations with converged cybersecurity and physical security functions are more resilient and better prepared to identify, prevent, mitigate, and respond to threats. Convergence also encourages information sharing and developing unified security policies across security divisions." Bibliography Anderson, K., "Convergence: A Holistic Approach to Risk Management", Network Security, Elsevier, Ltd., Volume 2007, Issue 5, May 2007. Anderson, K., "IT Security Professionals Must Evolve for Changing Market", SC Magazine, October 12, 2006. References External links Alliance for Enterprise Security Risk Management Security Data security Physical security
Security convergence
[ "Engineering" ]
1,428
[ "Cybersecurity engineering", "Data security" ]
12,896,417
https://en.wikipedia.org/wiki/Material%20take%20off
Material take off (MTO) is a term used in engineering and construction, and refers to a list of materials with quantities and types (such as specific grades of steel) that are required to build a designed structure or item. This list is generated by analysis of a blueprint or other design document. The list of required materials for construction is sometimes referred to as the material take off list (MTOL). A material take off covers not only the quantities of required material but also the weights of the items taken off. This is important when dealing with larger structures, allowing the company that does the take off to determine the total weight of the item and how best to move it (if necessary) when construction is completed. Definition used by the ISA (International Society of Automation) A material take off (MTO) is the process of analyzing the drawings and determining all the materials required to accomplish the design. Thereafter, the material take off is used to create a bill of materials (BOM). Procurement and requisition are activities that occur after the bill of materials is complete, and are distinct from inspection. References Whitt, Successful Instrument and Control System Design, ISA Press, 2004. Engineering concepts The final stage of the MTO is typically reflected in the general arrangement drawing (GAD) for specific equipment. The MTO sheet is an important project document because it presents details such as the list of all materials, their quantities, weights, material types, and material codes.
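As a rough illustration of turning a take-off into quantity and weight totals, the sketch below tallies items by material grade. It is a minimal, assumed example: the item descriptions, grades, quantities and unit weights are hypothetical and do not come from any real drawing.

```python
# Each take-off line: (description, material grade, quantity, unit weight in kg)
take_off = [
    ("Beam, 6 m",        "S355 steel", 12, 282.0),
    ("Plate, 2 m x 1 m", "S275 steel", 30,  78.5),
    ("Anchor bolt M24",  "8.8 steel",  96,   0.9),
]

totals = {}
for desc, grade, qty, unit_wt in take_off:
    qty_sum, wt_sum = totals.get(grade, (0, 0.0))
    totals[grade] = (qty_sum + qty, wt_sum + qty * unit_wt)

for grade, (qty, weight) in totals.items():
    print(f"{grade:12s}  items: {qty:4d}  total weight: {weight:8.1f} kg")
print("Overall weight:", round(sum(w for _, w in totals.values()), 1), "kg")
```

The per-grade and overall weight figures are exactly the kind of information an MTO sheet carries forward into the bill of materials and into planning how a finished item will be lifted or transported.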
Material take off
[ "Engineering" ]
305
[ "nan" ]
1,258,217
https://en.wikipedia.org/wiki/Transposase
A transposase is any of a class of enzymes capable of binding to the end of a transposon and catalysing its movement to another part of a genome, typically by a cut-and-paste mechanism or a replicative mechanism, in a process known as transposition. The word "transposase" was first coined by the individuals who cloned the enzyme required for transposition of the Tn3 transposon. The existence of transposons was postulated in the late 1940s by Barbara McClintock, who was studying the inheritance of maize, but the actual molecular basis for transposition was described by later groups. McClintock discovered that some segments of chromosomes changed their position, jumping between different loci or from one chromosome to another. The repositioning of these transposons (which coded for color) allowed other genes for pigment to be expressed. Transposition in maize causes changes in color; however, in other organisms, such as bacteria, it can cause antibiotic resistance. Transposition is also important in creating genetic diversity within species and generating adaptability to changing living conditions. Transposases are classified under EC number EC 2.7.7. Genes encoding transposases are widespread in the genomes of most organisms and are the most abundant genes known. During the course of human evolution, as much as 40% of the human genome has moved around via methods such as transposition of transposons. Transposase Tn5 Transposase (Tnp) Tn5 is a member of the RNase superfamily of proteins which includes retroviral integrases. Tn5 can be found in Shewanella and Escherichia bacteria. The transposon codes for antibiotic resistance to kanamycin and other aminoglycoside antibiotics. Tn5 and other transposases are notably inactive. Because DNA transposition events are inherently mutagenic, the low activity of transposases is necessary to reduce the risk of causing a fatal mutation in the host, and thus eliminating the transposable element. One of the reasons Tn5 is so unreactive is because the N- and C-termini are located in relatively close proximity to one another and tend to inhibit each other. This was elucidated by the characterization of several mutations which resulted in hyperactive forms of transposases. One such mutation, L372P, is a mutation of amino acid 372 in the Tn5 transposase. This amino acid is generally a leucine residue in the middle of an alpha helix. When this leucine is replaced with a proline residue the alpha helix is broken, introducing a conformational change to the C-terminal domain, separating it from the N-terminal domain enough to promote higher activity of the protein. The transposition of a transposon often needs only three pieces: the transposon, the transposase enzyme, and the target DNA for the insertion of the transposon. This is the case with Tn5, which uses a cut-and-paste mechanism for moving around transposons. Tn5 and most other transposases contain a DDE motif, which is the active site that catalyzes the movement of the transposon. Aspartate-97, aspartate-188, and glutamate-326 make up the active site, which is a triad of acidic residues. The DDE motif is said to coordinate divalent metal ions, most often magnesium and manganese, which are important in the catalytic reaction. Because transposase is incredibly inactive, the DDE region is mutated so that the transposase becomes hyperactive and catalyzes the movement of the transposon. The glutamate is transformed into an aspartate and the two aspartates into glutamates. 
Through this mutation, the study of Tn5 becomes possible, but some steps in the catalytic process are lost as a result. There are several steps which catalyze the movement of the transposon, including Tnp binding, synapsis (the creation of a synaptic complex), cleavage, target capture, and strand transfer. Transposase then binds to the DNA strand and creates a clamp over the transposon end of the DNA and inserts into the active site. Once the transposase binds to the transposon, it produces a synaptic complex in which two transposases are bound in a cis/trans relationship with the transposon. In cleavage, the magnesium ions activate oxygen from water molecules and expose them to nucleophilic attack. This allows the water molecules to nick the 3' strands on both ends and create a hairpin formation, which separates the transposon from the donor DNA. Next, the transposase moves the transposon to a suitable location. Not much is known about the target capture, although there is a sequence bias which has not yet been determined. After target capture, the transposase attacks the target DNA nine base pairs apart, resulting in the integration of the transposon into the target DNA. As mentioned before, due to the mutations of the DDE motif, some steps of the process are lost—for example, when this experiment is performed in vitro, SDS heat treatment denatures the transposase. However, it is still uncertain what happens to the transposase in vivo. The study of transposase Tn5 is of general importance because of its similarities to HIV-1 and other retroviral diseases. By studying Tn5, much can also be discovered about other transposases and their activities. Since 2010, Tn5 has been utilized in genome sequencing to append sequencing adaptors and fragment the DNA in a single enzymatic reaction, reducing the time and input requirements compared with traditional next-generation sequencing library preparation. The Tn5-based strategy can simplify the library preparation protocol significantly and can even be incorporated into direct colony PCR for large numbers of bacterial isolates with no obvious coverage bias. The main disadvantages are less control of fragment size compared to enzymatic fragmentation and mechanical fragmentation, and a bias toward high G-C content. This means of library preparation is also used in the ATAC-seq technique. Sleeping Beauty transposase The Sleeping Beauty (SB) transposase is the recombinase that drives the Sleeping Beauty transposon system. SB transposase belongs to the DD[E/D] family of transposases, which in turn belong to a large superfamily of polynucleotidyl transferases that includes RNase H, RuvC Holliday resolvase, RAG proteins, and retroviral integrases. The SB system is used primarily in vertebrate animals for gene transfer, including gene therapy, and gene discovery. The engineered SB100X is an enzyme that directs high levels of transposon integration. Tn7 transposon The Tn7 transposon is a mobile genetic element found in many prokaryotes such as Escherichia coli (E. coli), and was first discovered as a DNA sequence in bacterial chromosomes and naturally occurring plasmids that encoded resistance to the antibiotics trimethoprim and streptomycin. Specifically classified as a transposable element (transposon), the sequence can duplicate and move itself within a genome by utilizing a self-encoded recombinase enzyme called a transposase, resulting in effects such as creating or reversing mutations and changing genome size.
The Tn7 transposon has developed two mechanisms to promote its propagation among prokaryotes. Like many other bacterial transposons, Tn7 transposes at low-frequency and inserts into many different sites with little to no site-selectivity. Through this first pathway, Tn7 is preferentially directed into conjugable plasmids, which can be replicated and distributed between bacteria. However, Tn7 is unique in that it also transposes at high-frequency into a single specific site in bacterial chromosomes called attTn7. This specific sequence is an essential and highly conserved gene found in many strains of bacteria. However, the recombination is not deleterious to the host bacterium as Tn7 actually transposes downstream of the gene after recognizing it, resulting in a safe way to propagate the transposon without killing the host. This highly evolved and sophisticated target-site selection pathway suggests this pathway evolved to promote coexistence between the transposon and it host, as well as Tn7's successful transmission into future generations of bacterium. The Tn7 transposon is 14 kb long and codes for five enzymes. The ends of the DNA sequence consists of two segments that the Tn7 transposase interacts with during recombination. The left segment (Tn7-L) is 150 bp long and the right sequence (Tn7-R) is 90 bp long. Both ends of the transposon contain a series of 22 bp binding sites that the Tn7 transposase recognizes and binds to. Within the transposon are five discrete genes encoding for proteins that make up the transposition machinery. In addition, the transposon contains an integron, a DNA segment containing several cassettes of genes encoding for antibiotic-resistance. The Tn7 transposon codes for five proteins: TnsA, TnsB, TnsC, TnsD, and TnsE. TnsA and TnsB interact together to form the Tn7 transposase enzyme TnsAB. The enzyme specifically recognizes and binds to the ends of the DNA sequence of the transposon, and excises it by introducing double-stranded DNA breaks to each end. The excised sequence is then inserted to another target DNA site. Much like other characterized transposons, the mechanism for Tn7 transposition involves cleavage of the 3' ends from the donating DNA by the TnsA protein of the TnsAB transposase. However, Tn7 is also uniquely cleaved near the 5' ends, about 5 bp from the 5' end towards the Tn7 transposon, by the TnsB protein of TnsAB. After the insertion of the transposon into the target DNA site, the 3' ends are covalently linked to the target DNA, but the 5 bp gaps are still present at the 5' ends. As a result, repair of these gaps leads to a further 5 bp duplication at the target site. The TnsC protein interacts with the transposase enzyme and the target DNA to promote the excision and insertion processes. The ability of TnsC to activate the transposase depends on its interaction with a target DNA along with its appropriate targeting protein, TnsD or TnsE. The TnsD and TnsE proteins are alternative target selectors that are also DNA binding activators that promote excision and insertion of Tn7. Their ability to interact with a particular target DNA is key to the target-site selection of Tn7. The proteins TnsA, TnsB, and TnsC thus form the core machinery of Tn7: TnsA and TnsB interact together to form the transposase, while TnsC functions as a regulator of the transposase's activity, communicating between the transposase and TnsD and TnsE. 
When the TnsE protein interacts with the TnsABC core machinery, Tn7 preferentially directs insertions into conjugable plasmids. When the TnsD protein interacts with TnsABC, Tn7 preferentially directs insertions downstream into a single essential and highly conserved site in the bacterial chromosome. This site, attTn7, is specifically recognized by TnsD. References External links EC 2.7.7 Molecular biology Mobile genetic elements
Transposase
[ "Chemistry", "Biology" ]
2,469
[ "Biochemistry", "Molecular genetics", "Mobile genetic elements", "Molecular biology" ]
1,258,371
https://en.wikipedia.org/wiki/Scott%20core%20theorem
In mathematics, the Scott core theorem is a theorem about the finite presentability of fundamental groups of 3-manifolds due to G. Peter Scott. The precise statement is as follows: Given a 3-manifold (not necessarily compact) with finitely generated fundamental group, there is a compact three-dimensional submanifold, called the compact core or Scott core, such that its inclusion map induces an isomorphism on fundamental groups. In particular, this means a finitely generated 3-manifold group is finitely presentable. A simplified proof, as well as a stronger uniqueness statement, has since been given. References 3-manifolds Theorems in group theory Theorems in topology
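Stated symbolically, in the notation of the paragraph above, the theorem can be summarised as follows. This is a sketch of the statement only (assuming the amsmath package); the symbol names are chosen here for illustration.

```latex
% M: a 3-manifold, possibly non-compact; \pi_1 denotes the fundamental group.
\[
  \pi_1(M)\ \text{finitely generated}
  \;\Longrightarrow\;
  \exists\, C \subseteq M\ \text{compact submanifold with}\
  \iota_* \colon \pi_1(C) \xrightarrow{\;\cong\;} \pi_1(M),
\]
% where \iota : C \hookrightarrow M is the inclusion map;
% in particular, \pi_1(M) is finitely presentable.
```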
Scott core theorem
[ "Mathematics" ]
140
[ "Theorems in topology", "Topology stubs", "Topology", "Mathematical problems", "Mathematical theorems" ]
1,258,607
https://en.wikipedia.org/wiki/Proof%20assistant
In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human–machine collaboration. This involves some sort of interactive proof editor, or other interface, with which a human can guide the search for proofs, the details of which are stored in, and some steps provided by, a computer. A recent effort within this field is making these tools use artificial intelligence to automate the formalization of ordinary mathematics. System comparison ACL2 – a programming language, a first-order logical theory, and a theorem prover (with both interactive and automatic modes) in the Boyer–Moore tradition. Coq – Allows the expression of mathematical assertions, mechanically checks proofs of these assertions, helps to find formal proofs, and extracts a certified program from the constructive proof of its formal specification. HOL theorem provers – A family of tools ultimately derived from the LCF theorem prover. In these systems the logical core is a library of their programming language. Theorems represent new elements of the language and can only be introduced via "strategies" which guarantee logical correctness. Strategy composition gives users the ability to produce significant proofs with relatively few interactions with the system. Members of the family include: HOL4 – The "primary descendant", still under active development. Support for both Moscow ML and Poly/ML. Has a BSD-style license. HOL Light – A thriving "minimalist fork". OCaml based. ProofPower – Went proprietary, then returned to open source. Based on Standard ML. IMPS, An Interactive Mathematical Proof System. Isabelle is an interactive theorem prover, successor of HOL. The main code-base is BSD-licensed, but the Isabelle distribution bundles many add-on tools with different licenses. Jape – Java based. Lean LEGO Matita – A light system based on the Calculus of Inductive Constructions. MINLOG – A proof assistant based on first-order minimal logic. Mizar – A proof assistant based on first-order logic, in a natural deduction style, and Tarski–Grothendieck set theory. PhoX – A proof assistant based on higher-order logic which is eXtensible. Prototype Verification System (PVS) – a proof language and system based on higher-order logic. TPS and ETPS – Interactive theorem provers also based on simply-typed lambda calculus, but based on an independent formulation of the logical theory and independent implementation. User interfaces A popular front-end for proof assistants is the Emacs-based Proof General, developed at the University of Edinburgh. Coq includes CoqIDE, which is based on OCaml/Gtk. Isabelle includes Isabelle/jEdit, which is based on jEdit and the Isabelle/Scala infrastructure for document-oriented proof processing. More recently, Visual Studio Code extensions have been developed for Coq, Isabelle by Makarius Wenzel, and for Lean 4 by the leanprover developers. Formalization extent Freek Wiedijk has been keeping a ranking of proof assistants by the amount of formalized theorems out of a list of 100 well-known theorems. As of September 2023, only five systems have formalized proofs of more than 70% of the theorems, namely Isabelle, HOL Light, Coq, Lean, and Metamath. Notable formalized proofs The following is a list of notable proofs that have been formalized within proof assistants. 
See also Prover9 – an automated theorem prover for first-order and equational logic Notes References External links Theorem Prover Museum "Introduction" in Certified Programming with Dependent Types. Introduction to the Coq Proof Assistant (with a general introduction to interactive theorem proving) Interactive Theorem Proving for Agda Users A list of theorem proving tools Catalogues Digital Math by Category: Tactic Provers Automated Deduction Systems and Groups Theorem Proving and Automated Reasoning Systems Database of Existing Mechanized Reasoning Systems NuPRL: Other Systems (by Frank Pfenning). DMOZ: Science: Math: Logic and Foundations: Computational Logic: Logical Frameworks Argument technology Automated theorem proving de:Maschinengestütztes Beweisen
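To illustrate the interactive, tactic-driven style described above, here is a minimal proof script in Lean 4. It is an illustrative sketch only; the theorem and its name are chosen for this example and do not come from the article.

```lean
-- The user states a goal and discharges it with tactics; the kernel checks each step.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p := by
  intro h                  -- assume the hypothesis p ∧ q
  exact ⟨h.right, h.left⟩  -- rebuild the conjunction with the parts swapped
```

In an interactive session the editor shows the remaining goal after each tactic, which is the human–machine collaboration the introduction describes.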
Proof assistant
[ "Mathematics" ]
865
[ "Mathematical logic", "Computational mathematics", "Automated theorem proving" ]
1,258,983
https://en.wikipedia.org/wiki/Dry%20quicksand
Dry quicksand is loose sand whose bulk density is reduced by blowing air through it and which yields easily to weight or pressure. It acts similarly to normal quicksand, but it does not contain any water and does not operate on the same principle. Dry quicksand can also be a resulting phenomenon of contractive dilatancy. Historically, the existence of dry quicksand was doubted, and the reports of humans and complete caravans being lost in dry quicksand were considered to be folklore. In 2004, it was created in the laboratory, but it is still not clear what its actual prevalence in nature is. Scientific research Writing in Nature, physicist Detlef Lohse and coworkers of University of Twente in Enschede, Netherlands allowed air to flow through very fine sand (typical grain diameter was about 40 micrometers) in a container with a perforated base. They then turned the air stream off before the start of the experiment and allowed the sand to settle: the packing fraction of this sand was only 41% (compared to 55–60% for untreated sand). Lohse found that a weighted table tennis ball (radius 2 cm, mass 133 g), when released from just above the surface of the sand, would sink to about five diameters. Lohse also observed a "straight jet of sand [shooting] violently into the air after about 100 ms". Objects are known to make a splash when they hit sand, but this type of jet had never been described before. Lohse concluded that: In nature, dry quicksands may evolve from the sedimentation of very fine sand after it has been blown into the air and, if large enough, might be a threat to humans. Indeed, reports that travellers and whole vehicles have been swallowed instantly may even turn out to be credible in the light of our results. During the planning of the Project Apollo Moon missions, dry quicksand on the Moon was considered as a potential danger to the missions. The successful landings of the unmanned Surveyor probes a few years earlier and their observations of a solid, rocky surface largely discounted this possibility, however. The large plates at the end of legs of the Apollo Lunar Module were designed to reduce this danger, but the astronauts did not encounter dry quicksand. See also Fech fech Fluidization Dilatancy (granular material) Kekexili: Mountain Patrol (film that features dry quicksand) References External links Pictures of the quicksand experiment by Lohse et al. . Links to video of the quicksand experiment by Lohse et al. . Sediments Geological hazards Granularity of materials Soil mechanics ru:Зыбучий песок
Dry quicksand
[ "Physics", "Chemistry" ]
564
[ "Applied and interdisciplinary physics", "Soil mechanics", "Materials", "Particle technology", "Granularity of materials", "Matter" ]
1,260,054
https://en.wikipedia.org/wiki/Ectogenesis
Ectogenesis (from the Greek ἐκτός, "outside", and genesis) is the growth of an organism in an artificial environment, outside the body in which it would normally be found, such as the growth of an embryo or fetus outside the mother's body, or the growth of bacteria outside the body of a host. The term was coined by British scientist J. B. S. Haldane in 1924. Human embryos and fetuses Ectogenesis of human embryos and fetuses would require an artificial uterus. An artificial uterus would have to be supplied with nutrients and oxygen from some source to nurture the fetus, as well as dispose of waste material. There would likely be a need for an interface between such a supply and the fetus, filling the function of the placenta. As a replacement organ, an artificial uterus could be used to assist women with damaged, diseased or removed uteri to allow the fetus to be carried to term. It also has the potential to move the threshold of fetal viability to a much earlier stage of pregnancy. This would have implications for the ongoing controversy regarding human reproductive rights. Ectogenesis could also be a means by which homosexual, impotent, disabled, and single men and women could have genetic offspring without the use of surrogate pregnancy or a sperm donor, and allow women to have children without going through the pregnancy cycle. Synthetic embryo In 2022, Jacob Hanna and his team at the Weizmann Institute of Science created early "embryo-like structures" from mouse stem cells. Their research was published by Cell on 1 August 2022. The world's first synthetic embryo did not require sperm, eggs, or fertilization, and was grown from embryonic stem cells (ESCs) alone or from stem cells other than ESCs. The structure had an intestinal tract, an early brain, a beating heart, and a placenta with a yolk sac around the embryo. The researchers said it could lead to a better understanding of organ and tissue development and new sources of cells and tissues for human transplantation, although human synthetic embryos are a long way off. Also in August 2022, a study described how researchers at the University of Cambridge, alongside the same Weizmann Institute of Science scientists, created a synthetic embryo with a brain and a beating heart by using stem cells (including some stem cells other than ESCs). No human eggs or sperm were used. The embryos showed natural-like development, and some survived until day 8.5, when early organogenesis, including the formation of the foundations of a brain, occurs. Scientists hope the approach can be used to create synthetic human organs for transplantation. The embryos grew in vitro and subsequently ex utero in an artificial womb described the year before by the Hanna team in Nature and used in both studies. Potential applications include "uncovering the role of different genes in birth defects or developmental disorders", gaining "direct insight into the origins of a new life", "understand[ing] why some pregnancies fail", and developing sources "of organs and tissues for people who need them". The term "synthetic embryo" in the title of the second study was later changed to the alternative term "embryo model". On 6 September 2023, Nature published research in which the Weizmann Institute team created the first complete human day-14 post-implantation embryo models, using naïve ES cells expanded in special naïve conditions developed by the same team in 2021. The approach also uses reprogrammed, genetically unmodified naïve stem cells that can become any type of body tissue.
The embryo model (termed and abbreviated as SEM) mimics all the key structures like a "textbook image" of a human day-14 embryo. Bioethical considerations The development of artificial uteri and ectogenesis raises a few bioethical and legal considerations, and also has important implications for reproductive rights and the abortion debate. Artificial uteri may expand the range of fetal viability, raising questions about the role that fetal viability plays within abortion law. For example, within severance theory, abortion rights only include the right to remove the fetus, and do not always extend to the termination of the fetus. In the abortion debate, the death of the fetus has historically been considered an unavoidable side effect rather than the primary goal of an abortion. If transferring the fetus from a woman's womb to an artificial uterus becomes possible, then the choice to terminate a pregnancy in this way could result in a living child. Thus, the pregnancy could be aborted at any point, which respects the woman's right to bodily autonomy, without impinging on the moral status of the embryo or fetus. There are theoretical concerns that children who develop in an artificial uterus may lack "some essential bond with their mothers that other children have", a secondary issue to woman's rights over their own body. In the 1970 book The Dialectic of Sex, feminist Shulamith Firestone wrote that differences in biological reproductive roles are a source of gender inequality. Firestone singled out pregnancy and childbirth, making the argument that an artificial womb would free "women from the tyranny of their reproductive biology." See also Amniotic fluid Apheresis Ectopic pregnancy Endometrium Extracorporeal membrane oxygenation Hemodialysis In vitro fertilization Liver dialysis Placenta Tissue engineering Total parenteral nutrition Uterus References Further reading Developmental biology
Ectogenesis
[ "Biology" ]
1,117
[ "Behavior", "Developmental biology", "Reproduction" ]
1,260,106
https://en.wikipedia.org/wiki/Flexure
A flexure is a flexible element (or combination of elements) engineered to be compliant in specific degrees of freedom. Flexures are a design feature used by design engineers (usually mechanical engineers) for providing adjustment or compliance in a design. Flexure types Most compound flexure designs are composed of three fundamental types of flexure: Pin flexure - a thin bar or cylinder of material, constrains three degrees of freedom when geometry matches a notch cutout Blade flexure - thin sheet of material, constrains three degrees of freedom Notch flexure - thin cutout on both sides of a thick piece of material, constrains five degrees of freedom Since single flexure features are limited both in travel capability and degrees of freedom available, compound flexure systems are designed using combinations of these component features. Using compound flexures, complex motion profiles with specific degrees of freedom and relatively long travel distances are possible. Design aspects In the field of precision engineering (especially high-precision motion control), flexures have several key advantages. High precision alignment tasks might not be possible when friction or stiction are present. Additionally, conventional bearings or linear slides often exhibit positioning hysteresis due to backlash and friction. Flexures are able to achieve much lower resolution limits (in some cases measured in the nanometer scale), because they depend on bending and/or torsion of flexible elements, rather than surface interaction of many parts (as with a ball bearing). This makes flexures a critical design feature used in optical instrumentation such as interferometers. Due to their mode of action, flexures are used for limited range motions and cannot replace long-travel or continuous-rotation adjustments. Additionally, special care must be taken to design the flexure to avoid material yielding or fatigue, both of which are potential failure modes in a flexure design. Design examples Living hinge: Flexure which acts as a hinge. Preferred for their simplicity, as they can be included as a feature in a single piece of material (as in a Tic Tac box's lid). Leaf spring: Leaf Springs are commonly used in vehicle suspensions. Leaf springs are an example of a flexure system with one compliant degree of freedom. Flex Pivot: Frictionless pivoting component, for use in precision alignment applications. NASA's Mars Exploration Rovers and the Mars Science Laboratory rover Curiosity have engineered flexures in their wheels which act as vibration isolation and suspension for the rovers. See also Flexure bearing Compliant mechanism References Mechanical engineering
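To give a feel for how a blade flexure trades stiffness between directions, its bending stiffness along the compliant axis can be estimated with ordinary cantilever beam theory. This is a sketch under simplifying assumptions (small deflections, one end fixed, point load at the free end); the dimensions and elastic modulus below are hypothetical values, not taken from the article.

```python
def blade_flexure_stiffness(E, length, width, thickness):
    """Approximate end-load bending stiffness k = 3*E*I/L^3 of a blade flexure,
    treating it as a cantilever of rectangular cross-section (I = w*t^3/12)."""
    I = width * thickness ** 3 / 12.0
    return 3.0 * E * I / length ** 3

# Hypothetical steel blade: E = 200 GPa, 30 mm long, 10 mm wide, 0.3 mm thick.
k_compliant = blade_flexure_stiffness(200e9, 0.030, 0.010, 0.0003)
# In-plane (stiff) direction: the roles of width and thickness swap.
k_stiff = blade_flexure_stiffness(200e9, 0.030, 0.0003, 0.010)
print(f"compliant direction: {k_compliant:12.1f} N/m")
print(f"stiff direction:     {k_stiff:12.1f} N/m  (~{k_stiff / k_compliant:.0f}x stiffer)")
```

The roughly thousandfold stiffness ratio between the two directions is what lets a compound flexure constrain some degrees of freedom while remaining compliant in others, with no sliding surfaces and hence no friction or backlash.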
Flexure
[ "Physics", "Engineering" ]
510
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
1,260,390
https://en.wikipedia.org/wiki/Splash%20%28fluid%20mechanics%29
In fluid mechanics, a splash is a sudden disturbance to the otherwise quiescent free surface of a liquid (usually water). The disturbance is typically caused by a solid object suddenly hitting the surface, although splashes can occur in which moving liquid supplies the energy. This use of the word is onomatopoeic; in the past, the term "plash" has also been used. Splash also happens when a liquid droplet impacts on a liquid or a solid surface; in this case, a symmetric corona (resembling a coronet) is usually formed as shown in Harold Edgerton's famous milk splash photography, as milk is opaque. Historically, Worthington (1908) was the first one who systematically investigated the splash dynamics using photographs. Splashes are characterized by transient ballistic flow, and are governed by the Reynolds number and the Weber number. In the image of a brick splashing into water, one can identify freely moving airborne water droplets, a phenomenon typical of high Reynolds number flows; the intricate non-spherical shapes of the droplets show that the Weber number is high. Also seen are entrained air bubbles in the body of the water, and an expanding ring of disturbance propagating away from the impact site. Sand is said to splash if hit sufficiently hard (see dry quicksand) and sometimes the impact of a meteorite is referred to as splashing, if small bits of ejecta are formed. Physicist Lei Xu and coworkers at the University of Chicago discovered that the splash due to the impact of a small drop of ethanol onto a dry solid surface could be suppressed by reducing the pressure below a specific threshold. For drops of diameter 3.4 mm falling through air, this pressure was about 20 kilopascals, or 0.2 atmosphere. A plate made of a hard material on which a stream of liquid is designed to fall is called a "splash plate". It may serve to protect the ground from erosion by falling water, such as beneath an artificial waterfall or water outlet in soft ground. Splash plates are also part of spray nozzles, such as in irrigation sprinkler systems. See also Drop impact Slosh, other free surface phenomenon References Fluid dynamics
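The two dimensionless groups mentioned above can be evaluated for a falling drop. The sketch below is illustrative only: the drop size and impact speed are assumed values, and the fluid properties are round-number figures for water at room temperature.

```python
def reynolds(rho, velocity, diameter, viscosity):
    """Reynolds number: ratio of inertial to viscous forces."""
    return rho * velocity * diameter / viscosity

def weber(rho, velocity, diameter, surface_tension):
    """Weber number: ratio of inertial forces to surface tension."""
    return rho * velocity ** 2 * diameter / surface_tension

# Water drop, 3 mm diameter, hitting the surface at 4 m/s.
rho, mu, sigma = 1000.0, 1.0e-3, 0.072   # kg/m^3, Pa*s, N/m
d, v = 3e-3, 4.0
print("Re =", reynolds(rho, v, d, mu))       # ~12,000: inertia dominates viscosity
print("We =", weber(rho, v, d, sigma))       # ~670: inertia dominates surface tension
```

Large values of both numbers correspond to the regime described above: freely flying droplets (high Reynolds number) with strongly deformed, non-spherical shapes (high Weber number).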
Splash (fluid mechanics)
[ "Chemistry", "Engineering" ]
450
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
1,261,603
https://en.wikipedia.org/wiki/Cold%20trap
In vacuum applications, a cold trap is a device that condenses all vapors except the permanent gases (hydrogen, oxygen, and nitrogen) into a liquid or solid. The most common objective is to prevent vapors being evacuated from an experiment from entering a vacuum pump where they would condense and contaminate it. Particularly large cold traps are necessary when removing large amounts of liquid as in freeze drying. Cold traps also refer to the application of cooled surfaces or baffles to prevent oil vapours from flowing from a pump and into a chamber. In such a case, a baffle or a section of pipe containing a number of cooled vanes, will be attached to the inlet of an existing pumping system. By cooling the baffle, either with a cryogen such as a dry ice mixture, or by use of an electrically driven Peltier element, oil vapour molecules that strike the baffle vanes will condense and thus be removed from the pumped cavity. Applications Pumps that use oil either as their working fluid (diffusion pumps), or as their lubricant (mechanical rotary pumps), are often the sources of contamination in vacuum systems. Placing a cold trap at the mouth of such a pump greatly lowers the risk that oil vapours will backstream into the cavity. Cold traps can also be used for experiments involving vacuum lines such as small-scale very low temperature distillations/condensations. This is accomplished through the use of a coolant such as liquid nitrogen or a freezing mixture of dry ice in acetone or a similar solvent with a low melting point. Liquid nitrogen is only used when dry ice or other cryogenic approaches will not condense the desired gasses since liquid nitrogen will also condense oxygen. Any oxygen gas content in the vacuum line or any leak in the vacuum line will result in liquid oxygen mixed with the target vapors, often with explosive results. When performed on a larger scale, this technique is called freeze-drying, and the cold trap is referred to as the condenser. Cold traps are also used in cryopump systems to generate hard vacua by condensing the major constituents of the atmosphere (nitrogen, oxygen, carbon dioxide and water) into their liquid or solid forms. An igloo or other snow bivouac may exploit the same principle – confinement of denser cool air within an impermeable lower volume – to reduce cold air reaching the occupants through use of a sump or cill around a raised sleeping platform within. Hazards Care should be taken when using a cold trap not to condense oxygen gas into the cold trap, visible as light blue liquid. Liquid oxygen is potentially explosive, and this is especially true if the trap has been used to trap solvent. Oxygen can be condensed into a cold trap if a pump has sucked air through the trap when the trap is very cold, e.g. when cooled with liquid nitrogen. Besides oxygen, many hazardous gases emitted in reactions, e.g. sulfur dioxide, chloromethane, condense into cold traps. Construction Cold traps (C in the figure) usually consist of two parts: The bottom is a large, thick round tube with ground-glass joints (B in the figure), and the second is a cap (A in the figure), also with ground-glass connections. The length of the tube is usually selected so that, when assembled, the total reached is about half the length of the tube. Arrangement Cold traps should be assembled such that the down tube is connected to the source of gas whilst the cap is connected to the source of vacuum. 
Reversing this, connecting the down tube to the source of vacuum, places the inlet of the vacuum directly above the condensate, increasing the chances of vapour phase condensate moving up the (uncooled) down tube (towards the pump) or, should the trap begin to fill to an appreciable volume, liquid phase condensate being pulled into the pump. See also Sublimation Cold finger References Laboratory glassware Vacuum systems Gases Gas technologies
Cold trap
[ "Physics", "Chemistry", "Engineering" ]
834
[ "Matter", "Vacuum systems", "Phases of matter", "Vacuum", "Statistical mechanics", "Gases" ]
1,262,296
https://en.wikipedia.org/wiki/Trouton%27s%20rule
In thermodynamics, Trouton's rule states that the (molar) entropy of vaporization is almost the same value, about 85–88 J/(K·mol), for various kinds of liquids at their boiling points. The entropy of vaporization is defined as the ratio between the enthalpy of vaporization and the boiling temperature. It is named after Frederick Thomas Trouton. Expressed in terms of the gas constant R, the rule reads ΔSvap = ΔHvap/Tb ≈ 10.5R. A similar way of stating this (Trouton's ratio) is that the latent heat is connected to the boiling point roughly as Lvap/Tb ≈ 85–88 J/(K·mol). Trouton's rule can be explained by applying Boltzmann's definition of entropy to the relative change in free volume (that is, space available for movement) between the liquid and vapour phases. It is valid for many liquids; for instance, the entropy of vaporization of toluene is 87.30 J/(K·mol), that of benzene is 89.45 J/(K·mol), and that of chloroform is 87.92 J/(K·mol). Because of its convenience, the rule is used to estimate the enthalpy of vaporization of liquids whose boiling points are known. The rule, however, has some exceptions. For example, the entropies of vaporization of water, ethanol, formic acid and hydrogen fluoride are far from the predicted values. Entropies of vaporization at the boiling point as high as 136.9 J/(K·mol) are known. The characteristic of those liquids to which Trouton's rule cannot be applied is their special interaction between molecules, such as hydrogen bonding. The entropies of vaporization of water and ethanol show positive deviance from the rule; this is because the hydrogen bonding in the liquid phase lessens the entropy of the phase. In contrast, the entropy of vaporization of formic acid has negative deviance. This fact indicates the existence of an orderly structure in the gas phase; it is known that formic acid forms a dimer structure even in the gas phase. Negative deviance can also occur as a result of a small gas-phase entropy owing to a low population of excited rotational states in the gas phase, particularly in small molecules such as methane, whose small moment of inertia gives rise to a large rotational constant B, with correspondingly widely separated rotational energy levels and, according to the Maxwell–Boltzmann distribution, a small population of excited rotational states, and hence a low rotational entropy. The validity of Trouton's rule can be increased by considering the temperature-dependent form ΔSvap ≈ 4.5R + R ln(T/K); here, if T ≈ 400 K, the right-hand side of the equation equals roughly 10.5R, and we recover the original formulation of Trouton's rule. References Further reading - Publication of Trouton's rule Atkins, Peter (1978). Physical Chemistry, Oxford University Press. Chemistry theories Thermodynamic properties
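A quick way to apply the rule numerically is sketched below. It is a minimal example, not from the article: benzene's boiling point is used as the input, and the experimental enthalpy is quoted only for comparison.

```python
R = 8.314  # gas constant, J/(K*mol)

def trouton_enthalpy_of_vaporization(boiling_point_K, entropy_J_per_K_mol=10.5 * R):
    """Estimate the molar enthalpy of vaporization from the boiling point,
    assuming Trouton's rule (delta_S_vap ~ 85-88 J/(K*mol), i.e. about 10.5 R)."""
    return entropy_J_per_K_mol * boiling_point_K

# Benzene boils at about 353.2 K; Trouton's rule predicts roughly 30.8 kJ/mol,
# close to the experimental enthalpy of vaporization of about 30.7 kJ/mol.
print(trouton_enthalpy_of_vaporization(353.2) / 1000, "kJ/mol")
```

This is exactly the convenience noted above: a single measured boiling point gives a serviceable estimate of the enthalpy of vaporization, except for strongly hydrogen-bonded or otherwise anomalous liquids.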
Trouton's rule
[ "Physics", "Chemistry", "Mathematics" ]
598
[ "Thermodynamic properties", "Physical quantities", "Quantity", "Thermodynamics", "nan" ]
1,262,398
https://en.wikipedia.org/wiki/Dephasing
In physics, dephasing is a mechanism that recovers classical behaviour from a quantum system. It refers to the ways in which coherence caused by perturbation decays over time, and the system returns to the state before perturbation. It is an important effect in molecular and atomic spectroscopy, and in the condensed matter physics of mesoscopic devices. The reason can be understood by describing the conduction in metals as a classical phenomenon with quantum effects all embedded into an effective mass that can be computed quantum mechanically, as also happens to resistance that can be seen as a scattering effect of conduction electrons. When the temperature is lowered and the dimensions of the device are meaningfully reduced, this classical behaviour should disappear and the laws of quantum mechanics should govern the behavior of conducting electrons seen as waves that move ballistically inside the conductor without any kind of dissipation. Most of the time this is what one observes. But it appeared as a surprise to uncover that the so-called dephasing time, that is the time it takes for the conducting electrons to lose their quantum behavior, becomes finite rather than infinite when the temperature approaches zero in mesoscopic devices violating the expectations of the theory of Boris Altshuler, Arkady Aronov and David E. Khmelnitskii. This kind of saturation of the dephasing time at low temperatures is an open problem even as several proposals have been put forward. The coherence of a sample is explained by the off-diagonal elements of a density matrix. An external electric or magnetic field can create coherences between two quantum states in a sample if the frequency corresponds to the energy gap between the two states. The coherence terms decay with the dephasing time or spin–spin relaxation, T2. After coherence is created in a sample by light, the sample emits a polarization wave, the frequency of which is equal to and the phase of which is inverted from the incident light. In addition, the sample is excited by the incident light and a population of molecules in the excited state is generated. The light passing through the sample is absorbed because of these two processes, and it is expressed by an absorption spectrum. The coherence decays with the time constant, T2, and the intensity of the polarization wave is reduced. The population of the excited state also decays with the time constant of the longitudinal relaxation, T1. The time constant T2 is usually much smaller than T1, and the bandwidth of the absorption spectrum is related to these time constants by the Fourier transform, so the time constant T2 is a main contributor to the bandwidth. The time constant T2 has been measured with ultrafast time-resolved spectroscopy directly, such as in photon echo experiments. What is the dephasing rate of a particle that has an energy E if it is subject to a fluctuating environment that has a temperature T? In particular what is the dephasing rate close to equilibrium (E~T), and what happens in the zero temperature limit? This question has fascinated the mesoscopic community during the last two decades (see references below). See also Dephasing rate SP formula References Other (And references therein.) Wave mechanics Quantum optics Quantum information science Mesoscopic physics
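The link between T2 and the absorption bandwidth noted above can be made concrete: a coherence decaying as exp(−t/T2) Fourier-transforms to a Lorentzian line whose full width at half maximum is 1/(πT2). The sketch below assumes that standard relation; the T2 value is illustrative only and not taken from the article.

```python
import math

def lorentzian_fwhm_hz(T2_seconds):
    """FWHM of the Lorentzian line produced by a coherence decaying as exp(-t/T2)."""
    return 1.0 / (math.pi * T2_seconds)

def coherence(t, T2):
    """Magnitude of the off-diagonal density-matrix element at time t."""
    return math.exp(-t / T2)

T2 = 100e-15  # 100 fs, an assumed order of magnitude for fast optical dephasing
print("linewidth:", lorentzian_fwhm_hz(T2) / 1e12, "THz")
print("coherence remaining after 50 fs:", coherence(50e-15, T2))
```

A shorter T2 therefore means a broader absorption line, which is why T2, rather than the slower population decay T1, usually dominates the bandwidth.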
Dephasing
[ "Physics", "Materials_science" ]
674
[ "Physical phenomena", "Quantum optics", "Classical mechanics", "Quantum mechanics", "Waves", "Wave mechanics", "Condensed matter physics", "Mesoscopic physics" ]
1,262,460
https://en.wikipedia.org/wiki/Bond-dissociation%20energy
The bond-dissociation energy (BDE, D0, or DH°) is one measure of the strength of a chemical bond A−B. It can be defined as the standard enthalpy change when A−B is cleaved by homolysis to give fragments A and B, which are usually radical species. The enthalpy change is temperature-dependent, and the bond-dissociation energy is often defined to be the enthalpy change of the homolysis at 0 K (absolute zero), although the enthalpy change at 298 K (standard conditions) is also a frequently encountered parameter. As a typical example, the bond-dissociation energy for one of the C−H bonds in ethane (C2H6) is defined as the standard enthalpy change of the process CH3CH2−H → CH3CH2• + H•, DH°298(CH3CH2−H) = ΔH° = 101.1(4) kcal/mol = 423.0 ± 1.7 kJ/mol = 4.40(2) eV (per bond). To convert a molar BDE to the energy needed to dissociate the bond per molecule, the conversion factor 23.060 kcal/mol (96.485 kJ/mol) for each eV can be used. A variety of experimental techniques, including spectrometric determination of energy levels, generation of radicals by pyrolysis or photolysis, measurements of chemical kinetics and equilibrium, and various calorimetric and electrochemical methods have been used to measure bond dissociation energy values. Nevertheless, bond dissociation energy measurements are challenging and are subject to considerable error. The majority of currently known values are accurate to within ±1 or 2 kcal/mol (4–10 kJ/mol). Moreover, values measured in the past, especially before the 1970s, can be especially unreliable and have been subject to revisions on the order of 10 kcal/mol (e.g., benzene C–H bonds, from 103 kcal/mol in 1965 to the modern accepted value of 112.9(5) kcal/mol). Even in modern times (between 1990 and 2004), the O−H bond of phenol has been reported to be anywhere from 85.8 to 91.0 kcal/mol. On the other hand, the bond dissociation energy of H2 at 298 K has been measured to high precision and accuracy: DH°298(H−H) = 104.1539(1) kcal/mol or 435.780 kJ/mol. Definitions and related parameters The term bond-dissociation energy is similar to the related notion of bond-dissociation enthalpy (or bond enthalpy), which is sometimes used interchangeably. However, some authors make the distinction that the bond-dissociation energy (D0) refers to the enthalpy change at 0 K, while the term bond-dissociation enthalpy is used for the enthalpy change at 298 K (unambiguously denoted DH°298). The former parameter tends to be favored in theoretical and computational work, while the latter is more convenient for thermochemical studies. For typical chemical systems, the numerical difference between the quantities is small, and the distinction can often be ignored. For a hydrocarbon RH, where R is significantly larger than H, for instance, the relationship D0(R−H) ≈ DH°298(R−H) − 1.5 kcal/mol is a good approximation. Some textbooks ignore the temperature dependence, while others have defined the bond-dissociation energy to be the reaction enthalpy of homolysis at 298 K. The bond dissociation energy is related to but slightly different from the depth of the associated potential energy well of the bond, De, known as the electronic energy. This is due to the existence of a zero-point energy ε0 for the vibrational ground state, which reduces the amount of energy needed to reach the dissociation limit. Thus, D0 is slightly less than De, and the relationship D0 = De − ε0 holds.
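The unit conversions used throughout this topic (kcal/mol, kJ/mol, and eV per bond) follow directly from the 23.060 kcal/mol-per-eV factor quoted above. A minimal sketch, using the ethane C−H value from the text as input; small differences from the article's quoted figures reflect rounding of the inputs.

```python
KCAL_TO_KJ = 4.184      # kJ per (thermochemical) kcal
KCAL_PER_EV = 23.060    # kcal/mol corresponding to 1 eV per bond, as quoted above

def bde_conversions(kcal_per_mol):
    """Express a bond-dissociation energy in kJ/mol and in eV per bond."""
    return kcal_per_mol * KCAL_TO_KJ, kcal_per_mol / KCAL_PER_EV

kj, ev = bde_conversions(101.1)   # the ethane C-H BDE from the text
print(f"101.1 kcal/mol = {kj:.1f} kJ/mol = {ev:.2f} eV per bond")
```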
The bond dissociation energy is an enthalpy change of a particular chemical process, namely homolytic bond cleavage, and "bond strength" as measured by the BDE should not be regarded as an intrinsic property of a particular bond type but rather as an energy change that depends on the chemical context. For instance, Blanksby and Ellison cites the example of ketene (H2C=CO), which has a C=C bond dissociation energy of 79 kcal/mol, while ethylene (H2C=CH2) has a bond dissociation energy of 174 kcal/mol. This vast difference is accounted for by the thermodynamic stability of carbon monoxide (CO), formed upon the C=C bond cleavage of ketene. The difference in availability of spin states upon fragmentation further complicates the use of BDE as a measure of bond strength for head-to-head comparisons, and force constants have been suggested as an alternative. Historically, the vast majority of tabulated bond energy values are bond enthalpies. More recently, however, the free energy analogue of bond-dissociation enthalpy, known as the bond-dissociation free energy (BDFE), has become more prevalent in the chemical literature. The BDFE of a bond A–B can be defined in the same way as the BDE as the standard free energy change (ΔG°) accompanying homolytic dissociation of AB into A and B. However, it is often thought of and computed stepwise as the sum of the free-energy changes of heterolytic bond dissociation (A–B → A+ + :B−), followed by one-electron reduction of A (A+ + e− → A•) and one-electron oxidation of B (:B− → •B + e−). In contrast to the BDE, which is usually defined and measured in the gas phase, the BDFE is often determined in the solution phase with respect to a solvent like DMSO, since the free-energy changes for the aforementioned thermochemical steps can be determined from parameters like acid dissociation constants (pKa) and standard redox potentials (ε°) that are measured in solution. Bond energy Except for diatomic molecules, the bond-dissociation energy differs from the bond energy. While the bond-dissociation energy is the energy of a single chemical bond, the bond energy is the average of all the bond-dissociation energies of the bonds of the same type for a given molecule. For a homoleptic compound EXn, the E–X bond energy is (1/n) multiplied by the enthalpy change of the reaction EXn → E + nX. Average bond energies given in tables are the average values of the bond energies of a collection of species containing "typical" examples of the bond in question. For example, dissociation of HO−H bond of a water molecule (H2O) requires 118.8 kcal/mol (497.1 kJ/mol). The dissociation of the remaining hydroxyl radical requires 101.8 kcal/mol (425.9 kJ/mol). The bond energy of the covalent O−H bonds in water is said to be 110.3 kcal/mol (461.5 kJ/mol), the average of these values. In the same way, for removing successive hydrogen atoms from methane the bond-dissociation energies are 105 kcal/mol (439 kJ/mol) for D(CH3−H), 110 kcal/mol (460 kJ/mol) for D(CH2−H), 101 kcal/mol (423 kJ/mol) for D(CH−H) and finally 81 kcal/mol (339 kJ/mol) for D(C−H). The bond energy is, thus, 99 kcal/mol, or 414 kJ/mol (the average of the bond-dissociation energies). None of the individual bond-dissociation energies equals the bond energy of 99 kcal/mol. Strongest bonds and weakest bonds According to BDE data, the strongest single bonds are Si−F bonds. The BDE for H3Si−F is 152 kcal/mol, almost 50% stronger than the H3C−F bond (110 kcal/mol). The BDE for F3Si−F is even larger, at 166 kcal/mol. 
One consequence of these data is that many processes generate silicon fluorides, such as glass etching, deprotection in organic synthesis, and volcanic emissions. The strength of the bond is attributed to the substantial electronegativity difference between silicon and fluorine, which leads to a substantial contribution from both ionic and covalent bonding to the overall strength of the bond. The C−C single bond of diacetylene (HC≡C−C≡CH) linking two sp-hybridized carbon atoms is also among the strongest, at 160 kcal/mol. The strongest bond for a neutral compound, including multiple bonds, is found in carbon monoxide at 257 kcal/mol. The protonated forms of CO, HCN and N2 are said to have even stronger bonds, although another study argues that the use of BDE as a measure of bond strength in these cases is misleading. On the other end of the scale, there is no clear boundary between a very weak covalent bond and an intermolecular interaction. Lewis acid–base complexes between transition metal fragments and noble gases are among the weakest of bonds with substantial covalent character, with (CO)5W:Ar having a W–Ar bond dissociation energy of less than 3.0 kcal/mol. Held together entirely by the van der Waals force, helium dimer, He2, has the lowest measured bond dissociation energy of only 0.022 kcal/mol. Homolytic versus heterolytic dissociation Bonds can be broken symmetrically or asymmetrically. The former is called homolysis and is the basis of the usual BDEs. Asymmetric scission of a bond is called heterolysis. For molecular hydrogen, the alternatives are: symmetric, H2 → 2 H•, ΔH° = 104.2 kcal/mol (see table below); asymmetric (gas phase), H2 → H+ + H−, ΔH° = 400.4 kcal/mol; asymmetric (in water), H2 → H+ + H−, ΔG° = 34.2 kcal/mol (pKa(aq) = 25.1). In the gas phase, the enthalpy of heterolysis is larger than that of homolysis, due to the need to separate unlike charges. However, this value is lowered substantially in the presence of a solvent. Representative bond enthalpies The data tabulated below show how bond strengths vary over the periodic table. There is great interest, especially in organic chemistry, concerning relative strengths of bonds within a given group of compounds, and representative bond dissociation energies for common organic compounds are shown below. See also Bond energy Electronegativity Ionization energy Electron affinity Lattice energy References Dissociation energy Binding energy
Bond-dissociation energy
[ "Chemistry" ]
2,469
[ "Chemical bond properties" ]
1,262,556
https://en.wikipedia.org/wiki/Methylamine
Methylamine, also known as methanamine, is an organic compound with a formula of . This colorless gas is a derivative of ammonia, but with one hydrogen atom being replaced by a methyl group. It is the simplest primary amine. Methylamine is sold as a solution in methanol, ethanol, tetrahydrofuran, or water, or as the anhydrous gas in pressurized metal containers. Industrially, methylamine is transported in its anhydrous form in pressurized railcars and tank trailers. It has a strong odor similar to rotten fish. Methylamine is used as a building block for the synthesis of numerous other commercially available compounds. Industrial production Methylamine has been produced industrially since the 1920s (originally by Commercial Solvents Corporation for dehairing of animal skins). This was made possible by and his wife Eugenia who discovered amination of alcohols, including methanol, on alumina or kaolin catalyst after WWI, filed two patent applications in 1919 and published an article in 1921. It is now prepared commercially by the reaction of ammonia with methanol in the presence of an aluminosilicate catalyst. Dimethylamine and trimethylamine are co-produced; the reaction kinetics and reactant ratios determine the ratio of the three products. The product most favored by the reaction kinetics is trimethylamine. In this way, an estimated 115,000 tons were produced in 2005. Laboratory methods Methylamine was first prepared in 1849 by Charles-Adolphe Wurtz via the hydrolysis of methyl isocyanate and related compounds. An example of this process includes the use of the Hofmann rearrangement, to yield methylamine from acetamide and bromine. In the laboratory, methylamine hydrochloride is readily prepared by various other methods. One method entails treating formaldehyde with ammonium chloride. The colorless hydrochloride salt can be converted to an amine by the addition of a strong base, such as sodium hydroxide (NaOH): Another method entails reducing nitromethane with zinc and hydrochloric acid. Another method of methylamine production is spontaneous decarboxylation of glycine with a strong base in water. Reactivity and applications Methylamine is a good nucleophile as it is an unhindered amine. As an amine it is considered a weak base. Its use in organic chemistry is pervasive. Some reactions involving simple reagents include: with phosgene to methyl isocyanate, with carbon disulfide and sodium hydroxide to the sodium methyldithiocarbamate, with chloroform and base to methyl isocyanide and with ethylene oxide to methylethanolamines. Liquid methylamine has solvent properties analogous to those of liquid ammonia. Representative commercially significant chemicals produced from methylamine include the pharmaceuticals ephedrine and theophylline, the pesticides carbofuran, carbaryl, and metham sodium, and the solvents N-methylformamide and N-methylpyrrolidone. The preparation of some surfactants and photographic developers require methylamine as a building block. Biological chemistry Methylamine arises as a result of putrefaction and is a substrate for methanogenesis. Additionally, methylamine is produced during PADI4-dependent arginine demethylation. Safety The LD50 (mouse, s.c.) is 2.5 g/kg. The Occupational Safety and Health Administration (OSHA) and National Institute for Occupational Safety and Health (NIOSH) have set occupational exposure limits at 10 ppm or 12 mg/m3 over an eight-hour time-weighted average. 
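For reference, two of the conversions described in the production sections above can be written as balanced equations; they are supplied here in conventional chemical notation because the rendered formulas are not preserved in the text, and are representative rather than a complete list of the routes mentioned.

```latex
% Industrial amination of methanol over an aluminosilicate catalyst
% (dimethylamine and trimethylamine arise from further methylation):
\mathrm{CH_3OH + NH_3 \longrightarrow CH_3NH_2 + H_2O}
% Liberation of the free amine from methylamine hydrochloride with sodium hydroxide:
\mathrm{CH_3NH_3Cl + NaOH \longrightarrow CH_3NH_2 + NaCl + H_2O}
```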
Regulation In the United States, methylamine is controlled as a List 1 precursor chemical by the Drug Enforcement Administration due to its use in the illicit production of methamphetamine. In popular culture Fictional characters Walter White and Jesse Pinkman use aqueous methylamine as part of a process to synthesize methamphetamine in the AMC series Breaking Bad. See also Methylammonium halide References Alkylamines 1849 introductions Organic compounds with 1 carbon atom Substances discovered in the 19th century Methyl compounds
Methylamine
[ "Chemistry" ]
879
[ "Organic compounds", "Organic compounds with 1 carbon atom" ]
23,694,557
https://en.wikipedia.org/wiki/Calculus%20of%20functors
In algebraic topology, a branch of mathematics, the calculus of functors or Goodwillie calculus is a technique for studying functors by approximating them by a sequence of simpler functors; it generalizes the sheafification of a presheaf. This sequence of approximations is formally similar to the Taylor series of a smooth function, hence the term "calculus of functors". Many objects of central interest in algebraic topology can be seen as functors, which are difficult to analyze directly, so the idea is to replace them with simpler functors which are sufficiently good approximations for certain purposes. The calculus of functors was developed by Thomas Goodwillie in a series of three papers in the 1990s and 2000s, and has since been expanded and applied in a number of areas. Examples A motivational example, of central interest in geometric topology, is the functor of embeddings of one manifold M into another manifold N, whose first derivative in the sense of calculus of functors is the functor of immersions. As every embedding is an immersion, one obtains an inclusion of functors – in this case the map from a functor to an approximation is an inclusion, but in general it is simply a map. As this example illustrates, the linear approximation of a functor (on a topological space) is its sheafification, thinking of the functor as a presheaf on the space (formally, as a functor on the category of open subsets of the space), and sheaves are the linear functors. This example was studied by Goodwillie and Michael Weiss. Definition Here is an analogy: with the Taylor series method from calculus, you can approximate the shape of a smooth function f around a point x by using a sequence of increasingly accurate polynomial functions. In a similar way, with the calculus of functors method, you can approximate the behavior of certain kind of functor F at a particular object X by using a sequence of increasingly accurate polynomial functors. To be specific, let M be a smooth manifold and let O(M) be the category of open subspaces of M, i.e., the category where the objects are the open subspaces of M, and the morphisms are inclusion maps. Let F be a contravariant functor from the category O(M) to the category Top of topological spaces with continuous morphisms. This kind of functor, called a Top-valued presheaf on M, is the kind of functor you can approximate using the calculus of functors method: for a particular open set X∈O(M), you may want to know what sort of a topological space F(X) is, so you can study the topology of the increasingly accurate approximations F0(X), F1(X), F2(X), and so on. In the calculus of functors method, the sequence of approximations consists of (1) functors , and so on, as well as (2) natural transformations for each integer k. These natural transforms are required to be compatible, meaning that the composition equals the map and thus form a tower and can be thought of as "successive approximations", just as in a Taylor series one can progressively discard higher order terms. The approximating functors are required to be "k-excisive" – such functors are called polynomial functors by analogy with Taylor polynomials – which is a simplifying condition, and roughly means that they are determined by their behavior around k points at a time, or more formally are sheaves on the configuration space of k points in the given space. The difference between the kth and st functors is a "homogeneous functor of degree k" (by analogy with homogeneous polynomials), which can be classified. 
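In standard notation (the article's own rendered symbols are not reproduced here), the tower described above can be sketched with approximating functors F_k, natural transformations to them, and connecting maps between consecutive stages; the labels η_k and q_k below are notation chosen for this sketch, and compatibility is the condition that the composite through stage k agrees with the map to stage k−1.

```latex
% Taylor tower of approximations under F (labels \eta_k and q_k are
% notation chosen for this sketch, not taken from the text):
F \xrightarrow{\ \eta_k\ } F_k \xrightarrow{\ q_k\ } F_{k-1}
  \longrightarrow \cdots \longrightarrow F_1 \longrightarrow F_0,
\qquad q_k \circ \eta_k = \eta_{k-1}.
```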
For the functors to be approximations to the original functor F, the resulting approximation maps must be n-connected for some number n, meaning that the approximating functor approximates the original functor "in dimension up to n"; this may not occur. Further, if one wishes to reconstruct the original functor, the resulting approximations must be n-connected for n increasing to infinity. One then calls F an analytic functor, and says that "the Taylor tower converges to the functor", in analogy with Taylor series of an analytic function. Branches There are three branches of the calculus of functors, developed in the order: manifold calculus, such as embeddings, homotopy calculus, and orthogonal calculus. Homotopy calculus has seen far wider application than the other branches. History The notion of a sheaf and sheafification of a presheaf date to early category theory, and can be seen as the linear form of the calculus of functors. The quadratic form can be seen in the work of André Haefliger on links of spheres in 1965, where he defined a "metastable range" in which the problem is simpler. This was identified as the quadratic approximation to the embeddings functor in Goodwillie and Weiss. References External links Thomas Goodwillie John Klein Michael S. Weiss Algebraic topology Functors
Calculus of functors
[ "Mathematics" ]
1,067
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Algebraic topology", "Fields of abstract algebra", "Topology", "Mathematical relations", "Functors", "Category theory" ]
23,695,030
https://en.wikipedia.org/wiki/Paneitz%20operator
In the mathematical field of differential geometry, the Paneitz operator is a fourth-order differential operator defined on a Riemannian manifold of dimension n. It is named after Stephen Paneitz, who discovered it in 1983, and whose preprint was later published posthumously in . In fact, the same operator was found earlier in the context of conformal supergravity by E. Fradkin and A. Tseytlin in 1982 (Phys Lett B 110 (1982) 117 and Nucl Phys B 1982 (1982) 157 ). It is given by the formula where Δ is the Laplace–Beltrami operator, d is the exterior derivative, δ is its formal adjoint, V is the Schouten tensor, J is the trace of the Schouten tensor, and the dot denotes tensor contraction on either index. Here Q is the scalar invariant where Δ is the positive Laplacian. In four dimensions this yields the Q-curvature. The operator is especially important in conformal geometry, because in a suitable sense it depends only on the conformal structure. Another operator of this kind is the conformal Laplacian. But, whereas the conformal Laplacian is second-order, with leading symbol a multiple of the Laplace–Beltrami operator, the Paneitz operator is fourth-order, with leading symbol the square of the Laplace–Beltrami operator. The Paneitz operator is conformally invariant in the sense that it sends conformal densities of weight to conformal densities of weight . Concretely, using the canonical trivialization of the density bundles in the presence of a metric, the Paneitz operator P can be represented in terms of a representative the Riemannian metric g as an ordinary operator on functions that transforms according under a conformal change according to the rule The operator was originally derived by working out specifically the lower-order correction terms in order to ensure conformal invariance. Subsequent investigations have situated the Paneitz operator into a hierarchy of analogous conformally invariant operators on densities: the GJMS operators. The Paneitz operator has been most thoroughly studied in dimension four where it appears naturally in connection with extremal problems for the functional determinant of the Laplacian (via the Polyakov formula; see ). In dimension four only, the Paneitz operator is the "critical" GJMS operator, meaning that there is a residual scalar piece (the Q curvature) that can only be recovered by asymptotic analysis. The Paneitz operator appears in extremal problems for the Moser–Trudinger inequality in dimension four as well CR Paneitz operator There is a close connection between 4 dimensional Conformal Geometry and 3 dimensional CR geometry associated with the study of CR manifolds. There is a naturally defined fourth order operator on CR manifolds introduced by C. Robin Graham and John Lee that has many properties similar to the classical Paneitz operator defined on 4 dimensional Riemannian manifolds. This operator in CR geometry is called the CR Paneitz operator. The operator defined by Graham and Lee though defined on all odd dimensional CR manifolds, is not known to be conformally covariant in real dimension 5 and higher. The conformal covariance of this operator has been established in real dimension 3 by Kengo Hirachi. It is always a non-negative operator in real dimension 5 and higher. Here unlike changing the metric by a conformal factor as in the Riemannian case discussed above, one changes the contact form on the CR 3 manifold by a conformal factor. Non-negativity of the CR Paneitz operator in dimension 3 is a CR invariant condition as proved below. 
This follows by the conformal covariant properties of the CR Paneitz operator first observed by Kengo Hirachi. Furthermore, the CR Paneitz operator plays an important role in obtaining the sharp eigenvalue lower bound for Kohn's Laplacian. This is a result of Sagun Chanillo, Hung-Lin Chiu and Paul C. Yang. This sharp eigenvalue lower bound is the exact analog in CR Geometry of the famous André Lichnerowicz lower bound for the Laplace–Beltrami operator on compact Riemannian manifolds. It allows one to globally embed, compact, strictly pseudoconvex, abstract CR manifolds into . More precisely, the conditions in [3] to embed a CR manifold into are phrased CR invariantly and non-perturbatively. There is also a partial converse of the above result where the authors, J. S. Case, S. Chanillo, P. Yang, obtain conditions that guarantee when embedded, compact CR manifolds have non-negative CR Paneitz operators. The formal definition of the CR Paneitz operator on CR manifolds of real dimension three is as follows( the subscript is to remind the reader that this is a fourth order operator) denotes the Kohn Laplacian which plays a fundamental role in CR Geometry and several complex variables and was introduced by Joseph J. Kohn. One may consult The tangential Cauchy–Riemann complex (Kohn Laplacian, Kohn–Rossi complex) for the definition of the Kohn Laplacian. Further, denotes the Webster-Tanaka Torsion tensor and the covariant derivative of the function with respect to the Webster-Tanaka connection. Accounts of the Webster-Tanaka, connection, Torsion and curvature tensor may be found in articles by John M. Lee and Sidney M. Webster. There is yet another way to view the CR Paneitz operator in dimension 3. John M. Lee constructed a third order operator which has the property that the kernel of consists of exactly the CR pluriharmonic functions (real parts of CR holomorphic functions). The Paneitz operator displayed above is exactly the divergence of this third order operator . The third order operator is defined as follows: Here is the Webster-Tanaka torsion tensor. The derivatives are taken using the Webster-Tanaka connection and is the dual 1-form to the CR-holomorphic tangent vector that defines the CR structure on the compact manifold. Thus sends functions to forms. The divergence of such an operator thus will take functions to functions. The third order operator constructed by J. Lee only characterizes CR pluriharmonic functions on CR manifolds of real dimension three. Hirachi's covariant transformation formula for on three dimensional CR manifolds is as follows. Let the CR manifold be , where is the contact form and the CR structure on the kernel of that is on the contact planes. Let us transform the background contact form by a conformal transformation to . Note this new contact form obtained by a conformal change of the old contact form or background contact form, has not changed the kernel of . That is and have the same kernel, i.e. the contact planes have remained unchanged. The CR structure has been kept unchanged. The CR Paneitz operator for the new contact form is now seen to be related to the CR Paneitz operator for the contact form by the formula of Hirachi: Next note the volume forms on the manifold satisfy Using the transformation formula of Hirachi, it follows that, Thus we easily conclude that: is a CR invariant. That is the integral displayed above has the same value for different contact forms describing the same CR structure . The operator is a real self-adjoint operator. 
On CR manifolds like where the Webster-Tanaka torsion tensor is zero, it is seen from the formula displayed above that only the leading terms involving the Kohn Laplacian survives. Next from the tensor commutation formulae given in [5], one can easily check that the operators commute when the Webster-Tanaka torsion tensor vanishes. More precisely one has where Thus are simultaneously diagonalizable under the zero torsion assumption. Next note that and have the same sequence of eigenvalues that are also perforce real. Thus we conclude from the formula for that CR structures having zero torsion have CR Paneitz operators that are non-negative. The article [4] among other things shows that real ellipsoids in carry a CR structure inherited from the complex structure of whose CR Paneitz operator is non-negative. This CR structure on ellipsoids has non-vanishing Webster-Tanaka torsion. Thus [4] provides the first examples of CR manifolds where the CR Paneitz operator is non-negative and the Torsion tensor too does not vanish. Since we have observed above that the CR Paneitz is the divergence of an operator whose kernel is the pluriharmonic functions, it also follows that the kernel of the CR Paneitz operator contains all CR Pluriharmonic functions. So the kernel of the CR Paneitz operator in sharp contrast to the Riemannian case, has an infinite dimensional kernel. Results on when the kernel is exactly the pluriharmonic functions, the nature and role of the supplementary space in the kernel etc., may be found in the article cited as [4] below. One of the principal applications of the CR Paneitz operator and the results in [3] are to the CR analog of the Positive Mass theorem due to Jih-Hsin Cheng, Andrea Malchiodi and Paul C. Yang. This allows one to obtain results on the CR Yamabe problem. More facts related to the role of the CR Paneitz operator in CR geometry can be obtained from the article CR manifold. See also Calabi conjecture Monge-Ampere equations Positive mass conjecture Yamabe conjecture References . . . Conformal geometry Differential geometry Differential operators
Paneitz operator
[ "Mathematics" ]
2,016
[ "Mathematical analysis", "Differential operators" ]
23,699,292
https://en.wikipedia.org/wiki/Distributed%20source%20coding
Distributed source coding (DSC) is an important problem in information theory and communication. DSC problems regard the compression of multiple correlated information sources that do not communicate with each other. By modeling the correlation between multiple sources at the decoder side together with channel codes, DSC is able to shift the computational complexity from encoder side to decoder side, therefore provide appropriate frameworks for applications with complexity-constrained sender, such as sensor networks and video/multimedia compression (see distributed video coding). One of the main properties of distributed source coding is that the computational burden in encoders is shifted to the joint decoder. History In 1973, David Slepian and Jack Keil Wolf proposed the information theoretical lossless compression bound on distributed compression of two correlated i.i.d. sources X and Y. After that, this bound was extended to cases with more than two sources by Thomas M. Cover in 1975, while the theoretical results in the lossy compression case are presented by Aaron D. Wyner and Jacob Ziv in 1976. Although the theorems on DSC were proposed on 1970s, it was after about 30 years that attempts were started for practical techniques, based on the idea that DSC is closely related to channel coding proposed in 1974 by Aaron D. Wyner. The asymmetric DSC problem was addressed by S. S. Pradhan and K. Ramchandran in 1999, which focused on statistically dependent binary and Gaussian sources and used scalar and trellis coset constructions to solve the problem. They further extended the work into the symmetric DSC case. Syndrome decoding technology was first used in distributed source coding by the DISCUS system of SS Pradhan and K Ramachandran (Distributed Source Coding Using Syndromes). They compress binary block data from one source into syndromes and transmit data from the other source uncompressed as side information. This kind of DSC scheme achieves asymmetric compression rates per source and results in asymmetric DSC. This asymmetric DSC scheme can be easily extended to the case of more than two correlated information sources. There are also some DSC schemes that use parity bits rather than syndrome bits. The correlation between two sources in DSC has been modeled as a virtual channel which is usually referred as a binary symmetric channel. Starting from DISCUS, DSC has attracted significant research activity and more sophisticated channel coding techniques have been adopted into DSC frameworks, such as Turbo Code, LDPC Code, and so on. Similar to the previous lossless coding framework based on Slepian–Wolf theorem, efforts have been taken on lossy cases based on the Wyner–Ziv theorem. Theoretical results on quantizer designs was provided by R. Zamir and S. Shamai, while different frameworks have been proposed based on this result, including a nested lattice quantizer and a trellis-coded quantizer. Moreover, DSC has been used in video compression for applications which require low complexity video encoding, such as sensor networks, multiview video camcorders, and so on. With deterministic and probabilistic discussions of correlation model of two correlated information sources, DSC schemes with more general compressed rates have been developed. In these non-asymmetric schemes, both of two correlated sources are compressed. 
Under a certain deterministic assumption of correlation between information sources, a DSC framework in which any number of information sources can be compressed in a distributed way has been demonstrated by X. Cao and M. Kuijper. This method performs non-asymmetric compression with flexible rates for each source, achieving the same overall compression rate as repeatedly applying asymmetric DSC for more than two sources. Then, by investigating the unique connection between syndromes and complementary codewords of linear codes, they have translated the major steps of DSC joint decoding into syndrome decoding followed by channel encoding via a linear block code and also via its complement code, which theoretically illustrated a method of assembling a DSC joint decoder from linear code encoders and decoders. Theoretical bounds The information theoretical lossless compression bound on DSC (the Slepian–Wolf bound) was first purposed by David Slepian and Jack Keil Wolf in terms of entropies of correlated information sources in 1973. They also showed that two isolated sources can compress data as efficiently as if they were communicating with each other. This bound has been extended to the case of more than two correlated sources by Thomas M. Cover in 1975. Similar results were obtained in 1976 by Aaron D. Wyner and Jacob Ziv with regard to lossy coding of joint Gaussian sources. Slepian–Wolf bound Distributed Coding is the coding of two or more dependent sources with separate encoders and joint decoder. Given two statistically dependent i.i.d. finite-alphabet random sequences X and Y, Slepian–Wolf theorem includes theoretical bound for the lossless coding rate for distributed coding of the two sources as below: If both the encoder and decoder of the two sources are independent, the lowest rate we can achieve for lossless compression is and for and respectively, where and are the entropies of and . However, with joint decoding, if vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that much better compression rate can be achieved. As long as the total rate of and is larger than their joint entropy and none of the sources is encoded with a rate larger than its entropy, distributed coding can achieve arbitrarily small error probability for long sequences. A special case of distributed coding is compression with decoder side information, where source is available at the decoder side but not accessible at the encoder side. This can be treated as the condition that has already been used to encode , while we intend to use to encode . The whole system is operating in an asymmetric way (compression rate for the two sources are asymmetric). Wyner–Ziv bound Shortly after Slepian–Wolf theorem on lossless distributed compression was published, the extension to lossy compression with decoder side information was proposed as Wyner–Ziv theorem. Similarly to lossless case, two statistically dependent i.i.d. sources and are given, where is available at the decoder side but not accessible at the encoder side. Instead of lossless compression in Slepian–Wolf theorem, Wyner–Ziv theorem looked into the lossy compression case. The Wyner–Ziv theorem presents the achievable lower bound for the bit rate of at given distortion . It was found that for Gaussian memoryless sources and mean-squared error distortion, the lower bound for the bit rate of remain the same no matter whether side information is available at the encoder or not. 
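In conventional notation, the Slepian–Wolf bound just described says that a rate pair (R_X, R_Y) is achievable for lossless distributed coding of X and Y (with vanishing error probability for long sequences) exactly when the following standard inequalities hold; this block is a sketch in the usual textbook symbols, added because the displayed formulas are missing from the extracted text.

```latex
% Slepian-Wolf rate region for two correlated sources X and Y:
R_X \ge H(X \mid Y), \qquad
R_Y \ge H(Y \mid X), \qquad
R_X + R_Y \ge H(X, Y).
```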
Virtual channel Deterministic model Probabilistic model Asymmetric DSC vs. symmetric DSC Asymmetric DSC means that, different bitrates are used in coding the input sources, while same bitrate is used in symmetric DSC. Taking a DSC design with two sources for example, in this example and are two discrete, memoryless, uniformly distributed sources which generate set of variables and of length 7 bits and the Hamming distance between and is at most one. The Slepian–Wolf bound for them is: This means, the theoretical bound is and symmetric DSC means 5 bits for each source. Other pairs with are asymmetric cases with different bit rate distributions between and , where , and , represent two extreme cases called decoding with side information. Practical distributed source coding Slepian–Wolf coding – lossless distributed coding It was understood that Slepian–Wolf coding is closely related to channel coding in 1974, and after about 30 years, practical DSC started to be implemented by different channel codes. The motivation behind the use of channel codes is from two sources case, the correlation between input sources can be modeled as a virtual channel which has input as source and output as source . The DISCUS system proposed by S. S. Pradhan and K. Ramchandran in 1999 implemented DSC with syndrome decoding, which worked for asymmetric case and was further extended to symmetric case. The basic framework of syndrome based DSC is that, for each source, its input space is partitioned into several cosets according to the particular channel coding method used. Every input of each source gets an output indicating which coset the input belongs to, and the joint decoder can decode all inputs by received coset indices and dependence between sources. The design of channel codes should consider the correlation between input sources. A group of codes can be used to generate coset partitions, such as trellis codes and lattice codes. Pradhan and Ramchandran designed rules for construction of sub-codes for each source, and presented result of trellis-based coset constructions in DSC, which is based on convolution code and set-partitioning rules as in Trellis modulation, as well as lattice code based DSC. After this, embedded trellis code was proposed for asymmetric coding as an improvement over their results. After DISCUS system was proposed, more sophisticated channel codes have been adapted to the DSC system, such as Turbo Code, LDPC Code and Iterative Channel Code. The encoders of these codes are usually simple and easy to implement, while the decoders have much higher computational complexity and are able to get good performance by utilizing source statistics. With sophisticated channel codes which have performance approaching the capacity of the correlation channel, corresponding DSC system can approach the Slepian–Wolf bound. Although most research focused on DSC with two dependent sources, Slepian–Wolf coding has been extended to more than two input sources case, and sub-codes generation methods from one channel code was proposed by V. Stankovic, A. D. Liveris, etc. given particular correlation models. General theorem of Slepian–Wolf coding with syndromes for two sources Theorem: Any pair of correlated uniformly distributed sources, , with , can be compressed separately at a rate pair such that , where and are integers, and . This can be achieved using an binary linear code. 
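A short Python sketch of the rate accounting for the 7-bit example above, assuming (as in the usual presentation of this example) that the difference pattern between the two words is uniform over its eight possibilities; the variable names are illustrative.

```python
# Rate accounting for the 7-bit example: X is uniform over 7-bit words, and Y
# differs from X in at most one position, with the difference pattern assumed
# uniform over the 8 possibilities (no flip, or a flip in one of 7 positions).
from math import log2

H_X = 7.0                 # bits, uniform over 2**7 words
H_Y_given_X = log2(8)     # 8 equally likely difference patterns -> 3 bits
H_XY = H_X + H_Y_given_X  # joint entropy = Slepian-Wolf sum-rate bound

print(f"H(X)   = {H_X} bits")
print(f"H(Y|X) = {H_Y_given_X} bits")
print(f"H(X,Y) = {H_XY} bits  (10 bits total: 7+3 asymmetric or 5+5 symmetric)")
```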
Proof: The Hamming bound for an binary linear code is , and we have Hamming code achieving this bound, therefore we have such a binary linear code with generator matrix . Next we will show how to construct syndrome encoding based on this linear code. Let and be formed by taking first rows from , while is formed using the remaining rows of . and are the subcodes of the Hamming code generated by and respectively, with and as their parity check matrices. For a pair of input , the encoder is given by and . That means, we can represent and as , , where are the representatives of the cosets of with regard to respectively. Since we have with . We can get , where , . Suppose there are two different input pairs with the same syndromes, that means there are two different strings , such that and . Thus we will have . Because minimum Hamming weight of the code is , the distance between and is . On the other hand, according to together with and , we will have and , which contradict with . Therefore, we cannot have more than one input pairs with the same syndromes. Therefore, we can successfully compress the two dependent sources with constructed subcodes from an binary linear code, with rate pair such that , where and are integers, and . Log indicates Log2. Slepian–Wolf coding example Take the same example as in the previous Asymmetric DSC vs. Symmetric DSC part, this part presents the corresponding DSC schemes with coset codes and syndromes including asymmetric case and symmetric case. The Slepian–Wolf bound for DSC design is shown in the previous part. Asymmetric case In the case where and , the length of an input variable from source is 7 bits, therefore it can be sent lossless with 7 bits independent of any other bits. Based on the knowledge that and have Hamming distance at most one, for input from source , since the receiver already has , the only possible are those with at most 1 distance from . If we model the correlation between two sources as a virtual channel, which has input and output , as long as we get , all we need to successfully "decode" is "parity bits" with particular error correction ability, taking the difference between and as channel error. We can also model the problem with cosets partition. That is, we want to find a channel code, which is able to partition the space of input into several cosets, where each coset has a unique syndrome associated with it. With a given coset and , there is only one that is possible to be the input given the correlation between two sources. In this example, we can use the binary Hamming Code , with parity check matrix . For an input from source , only the syndrome given by is transmitted, which is 3 bits. With received and , suppose there are two inputs and with same syndrome . That means , which is . Since the minimum Hamming weight of Hamming Code is 3, . Therefore, the input can be recovered since . Similarly, the bits distribution with , can be achieved by reversing the roles of and . Symmetric case In symmetric case, what we want is equal bitrate for the two sources: 5 bits each with separate encoder and joint decoder. We still use linear codes for this system, as we used for asymmetric case. The basic idea is similar, but in this case, we need to do coset partition for both sources, while for a pair of received syndromes (corresponds to one coset), only one pair of input variables are possible given the correlation between two sources. 
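A runnable Python sketch of the asymmetric scheme just described: X is transmitted in full (7 bits) and Y is reduced to its 3-bit syndrome under a (7,4) Hamming parity-check matrix, which the joint decoder combines with X. The particular matrix H and the bit ordering are choices made for this illustration, not values taken from the text.

```python
# Asymmetric syndrome-based DSC with a (7,4) Hamming code.
# Columns of H are the binary representations of 1..7 (LSB in the first row),
# so a weight-one error in position j produces the syndrome of the number j+1.
import itertools

H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(word):
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

def encode_y(y):
    """Encoder for source Y: transmit only the 3 syndrome bits."""
    return syndrome(y)

def decode_y(x, syn_y):
    """Joint decoder: recover y from the side information x and y's syndrome."""
    # Syndrome of the error pattern e = x XOR y, by linearity of the syndrome map.
    syn_e = tuple((a + b) % 2 for a, b in zip(syndrome(x), syn_y))
    if syn_e == (0, 0, 0):
        return list(x)                                   # x and y agree
    pos = syn_e[0] + 2 * syn_e[1] + 4 * syn_e[2] - 1     # position of the single flip
    y = list(x)
    y[pos] ^= 1
    return y

# Exhaustive check over every x and every y within Hamming distance 1 of x.
ok = all(decode_y(x, encode_y(y)) == list(y)
         for x in itertools.product([0, 1], repeat=7)
         for y in [list(x)] + [[b ^ (i == j) for j, b in enumerate(x)]
                               for i in range(7)])
print("all (x, y) pairs recovered:", ok)   # expected: True
```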
Suppose we have a pair of linear code and and an encoder-decoder pair based on linear codes which can achieve symmetric coding. The encoder output is given by: and . If there exists two pair of valid inputs and generating the same syndromes, i.e. and , we can get following( represents Hamming weight): , where , where Thus: where and . That means, as long as we have the minimum distance between the two codes larger than , we can achieve error-free decoding. The two codes and can be constructed as subcodes of the Hamming code and thus has minimum distance of . Given the generator matrix of the original Hamming code, the generator matrix for is constructed by taking any two rows from , and is constructed by the remaining two rows of . The corresponding parity-check matrix for each sub-code can be generated according to the generator matrix and used to generate syndrome bits. Wyner–Ziv coding – lossy distributed coding In general, a Wyner–Ziv coding scheme is obtained by adding a quantizer and a de-quantizer to the Slepian–Wolf coding scheme. Therefore, a Wyner–Ziv coder design could focus on the quantizer and corresponding reconstruction method design. Several quantizer designs have been proposed, such as a nested lattice quantizer, trellis code quantizer and Lloyd quantization method. Large scale distributed quantization Unfortunately, the above approaches do not scale (in design or operational complexity requirements) to sensor networks of large sizes, a scenario where distributed compression is most helpful. If there are N sources transmitting at R bits each (with some distributed coding scheme), the number of possible reconstructions scales . Even for moderate values of N and R (say N=10, R = 2), prior design schemes become impractical. Recently, an approach, using ideas borrowed from Fusion Coding of Correlated Sources, has been proposed where design and operational complexity are traded against decoder performance. This has allowed distributed quantizer design for network sizes reaching 60 sources, with substantial gains over traditional approaches. The central idea is the presence of a bit-subset selector which maintains a certain subset of the received (NR bits, in the above example) bits for each source. Let be the set of all subsets of the NR bits i.e. Then, we define the bit-subset selector mapping to be Note that each choice of the bit-subset selector imposes a storage requirement (C) that is exponential in the cardinality of the set of chosen bits. This allows a judicious choice of bits that minimize the distortion, given the constraints on decoder storage. Additional limitations on the set of allowable subsets are still needed. The effective cost function that needs to be minimized is a weighted sum of distortion and decoder storage The system design is performed by iteratively (and incrementally) optimizing the encoders, decoder and bit-subset selector till convergence. Non-asymmetric DSC Non-asymmetric DSC for more than two sources The syndrome approach can still be used for more than two sources. Consider binary sources of length- . Let be the corresponding coding matrices of sizes . Then the input binary sources are compressed into of total bits. Apparently, two source tuples cannot be recovered at the same time if they share the same syndrome. In other words, if all source tuples of interest have different syndromes, then one can recover them losslessly. General theoretical result does not seem to exist. 
However, for a restricted kind of source so-called Hamming source that only has at most one source different from the rest and at most one bit location not all identical, practical lossless DSC is shown to exist in some cases. For the case when there are more than two sources, the number of source tuple in a Hamming source is . Therefore, a packing bound that obviously has to satisfy. When the packing bound is satisfied with equality, we may call such code to be perfect (an analogous of perfect code in error correcting code). A simplest set of to satisfy the packing bound with equality is . However, it turns out that such syndrome code does not exist. The simplest (perfect) syndrome code with more than two sources have and . Let , and such that are any partition of . can compress a Hamming source (i.e., sources that have no more than one bit different will all have different syndromes). For example, for the symmetric case, a possible set of coding matrices are See also Linear code Syndrome decoding Low-density parity-check code Turbo Code References Information theory Coding theory Wireless sensor network Data transmission
Distributed source coding
[ "Mathematics", "Technology", "Engineering" ]
3,832
[ "Discrete mathematics", "Coding theory", "Telecommunications engineering", "Applied mathematics", "Wireless networking", "Wireless sensor network", "Computer science", "Information theory" ]
23,702,522
https://en.wikipedia.org/wiki/Joule%20thief
A joule thief is a minimalist self-oscillating voltage booster that is small, low-cost, and easy to build, typically used for driving small loads, such as driving an LED using a 1.5 volt battery. It can use nearly all of the energy in a single-cell electric battery, even far below the voltage where other circuits consider the battery fully discharged (or "dead"); hence the name, which suggests the notion that the circuit is "stealing" energy or "joules" from the source – the term is a pun on "jewel thief". The circuit is a variant of the blocking oscillator that forms an unregulated voltage boost converter. History The joule thief is not a new concept. Basically, it adds an LED to the output of a self-oscillating voltage booster, which was patented many decades ago. US Patent 1949383, filed in 1930, "Electronic device", describes a vacuum tube based oscillator circuit to convert a low voltage into a high voltage. US Patent 2211852, filed in 1937, "Blocking oscillator apparatus", describes a vacuum tube based blocking oscillator. US Patent 2745012, filed in 1951, "Transistor blocking oscillators", describes three versions of a transistor based blocking oscillator. US Patent 2780767, filed in 1955, "Circuit arrangement for converting a low voltage into a high direct voltage". US Patent 2881380, filed in 1956, "Voltage converter". US Patent 4734658, filed in 1987, "Low voltage driven oscillator circuit", describes a very low voltage driven oscillator circuit, capable of operating from as little as 0.1 volts (lower voltage than a joule thief will operate). This is achieved by using a JFET, which does not require the forward biasing of a PN junction for its operation, because it is used in the depletion mode. In other words, the drain–source already conducts, even when no bias voltage is applied. This patent was intended for use with thermoelectric power sources. In November 1999 issue of Everyday Practical Electronics (EPE) magazine, the "Ingenuity Unlimited" (reader ideas) section had a novel circuit idea entitled "One Volt LED - A Bright Light" by from Swindon, Wiltshire, UK. Three example circuits were shown for operating LEDs from supply voltages below 1.5 Volts. The basic circuits consisted of a transformer-feedback NPN transistor voltage converter based on the blocking oscillator. After testing three transistors (ZTX450 at 73% efficiency, ZTX650 at 79%, and BC550 at 57%), it was determined that a transistor with lower Vce(sat) yielded better efficiency results. Also, a resistor with lower resistance would yield a high current. Operation The circuit works by rapidly switching the transistor. Initially, current begins to flow through the resistor, secondary winding, and base-emitter junction (see diagram) which causes the transistor to begin conducting collector current through the primary winding. Since the two windings are connected in opposing directions, this induces a voltage in the secondary winding which is positive (due to the winding polarity, see dot convention) which turns the transistor on with higher bias. This self-stroking/positive-feedback process almost instantly turns the transistor on as hard as possible (putting it in the saturation region), making the collector-emitter path look like essentially a closed switch (since VCE will be only about 0.1 volts, assuming that the base current is high enough). With the primary winding effectively across the battery, the current increases at a rate proportional to the supply voltage divided by the inductance. 
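A back-of-the-envelope Python sketch of the ramp described above; the inductance, supply voltage and switch-off current used here are assumed, illustrative values rather than figures from the text.

```python
# Current ramps at roughly V/L while the transistor is saturated.
V_supply = 1.5        # volts, a single alkaline cell
L_primary = 100e-6    # henries, an assumed primary-winding inductance
I_peak = 0.2          # amperes, an assumed current at which the transistor pinches off

ramp_rate = V_supply / L_primary   # A/s during the on phase
t_on = I_peak / ramp_rate          # time to ramp from zero to I_peak

print(f"di/dt ≈ {ramp_rate:.0f} A/s")
print(f"on-time to reach {I_peak * 1000:.0f} mA ≈ {t_on * 1e6:.1f} µs")
```

With these assumed values the on-phase lasts on the order of ten microseconds, consistent with the circuit oscillating at tens of kilohertz.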
Transistor switch-off takes place by different mechanisms dependent upon supply voltage. The gain of a transistor is not linear with VCE. At low supply voltages (typically 0.75 V and below) the transistor requires a larger base current to maintain saturation as the collector current increases. Hence, when it reaches a critical collector current, the base drive available becomes insufficient and the transistor starts to pinch off and the previously described positive feedback action occurs turning it hard off. To summarize, once the current in the coils stops increasing for any reason, the transistor goes into the cutoff region (and opens the collector-emitter "switch"). The magnetic field collapses, inducing however much voltage is necessary to make the load conduct, or for the secondary-winding current to find some other path. When the field is back to zero, the whole sequence repeats; with the battery ramping-up the primary-winding current until the transistor switches on. If the load on the circuit is very small the rate of rise and ultimate voltage at the collector is limited only by stray capacitances, and may rise to more than 100 times the supply voltage. For this reason, it is imperative that a load is always connected so that the transistor is not damaged. Because VCE is mirrored back to the secondary, failure of the transistor due to a small load will occur through the reverse VBE limit for the transistor being exceeded (this occurs at a much lower value than VCEmax). The transistor dissipates very little energy, even at high oscillating frequencies, because it spends most of its time in the fully on or fully off state, so either voltage over or current through the transistor is zero, thus minimizing the . A simple modification of the previous schematic replaces the LED with three components to create a simple zener diode based voltage regulator. Diode D1 acts as a half-wave rectifier to allow capacitor C to charge up only when a higher voltage is available from the joule thief on the left side of diode D1. The Zener diode D2 limits the output voltage. As there is no regulation, any excess of energy not consumed by the load, will be dissipated as heat in the zener diode with consequent low efficiency of conversion. When a more constant output voltage is desired, the joule thief can be given a closed-loop control. In the example circuit, the Schottky diode D1 blocks the charge built up on capacitor C1 from flowing back to the switching transistor Q1 when it is turned on. A 5.6 Volt Zener diode D2 and transistor Q2 forms the feedback control: when the voltage across the capacitor C1 is higher than the threshold voltage formed by Zener voltage of D2 plus the base-emitter turn-on voltage of transistor Q2, transistor Q2 is turned on diverting the base current of the switching transistor Q1, impeding the oscillation and prevents the voltage across capacitor C1 from rising even further. When the voltage across C1 drops below the threshold voltage Q2 turns off, allowing the oscillation to happen again. This very simple circuit has the drawback of temperature-dependent output voltage due to BJT2 (Vbe), and a relatively high ripple, but can be filtered with a simple LC pi network with low losses. 
In the example circuit, a low-dropout regulator is included, which further regulates the output voltage and lowers the ripple, at the cost of lower conversion efficiency. See also References External links Joule Thief simulation - archived 2021-05-01 Simulation and efficiency comparison of different Joule Thief versions - archived 2017-10-30 Supercharged Joule Thief at Higher Efficiency, (Larger Schematic) Joule Thief - Modified Version Clive Mitchell on making his Joule thief Video make joule thief Electric power conversion Electronic oscillators Power electronics Power supplies

Joule thief
[ "Engineering" ]
1,664
[ "Electronic engineering", "Power electronics" ]
23,702,563
https://en.wikipedia.org/wiki/GSI%20anomaly
One of the experimental facilities at the German laboratory GSI Helmholtz Centre for Heavy Ion Research in Darmstadt is an Experimental Storage Ring (ESR) with electron cooling in which large numbers of highly charged radioactive ions can be stored for extended periods of time. This facility provides the means to make precise measurements of their decay modes. The absence of most or all of the electrons in the ions simplifies theoretical treatments of their influence on the decay. Also, such a high degree of ionization is typical in stellar environments where such decays play an important role in nucleosynthesis. In 2007 an ESR experiment reported the observation of unexpected modulation in time of the rate of electron capture decays of highly ionized heavy atoms — 140Pr58+, which have a lifetime of 3.39 min. Such findings were soon repeated by the same group, and were extended to include the decay of 142Pm60+ (lifetime 40.5 s). The oscillations in decay rate had time periods near to 7 s and amplitudes of about 20%. Such a phenomenon had not been previously observed, and was difficult to understand. The experimental group considered it very improbable that the appearance of the phenomenon is due to a technical artefact because they report that their detection technique provides—during the whole observation time—complete and uninterrupted information upon the status of each stored ion. As this type of weak decay involves the production of an electron neutrino, attempts at the time were made to relate the observed oscillations to neutrino oscillations, but this proposal was highly controversial. In 2013, a similar experimental group at the ESR now called the Two-Body-Weak-Decays Collaboration reported further observations of the phenomenon with measurements on 142Pm60+ with much higher precision in period and amplitude. The same period was observed, but the amplitude was only about a half of that previously seen. Moreover, a follow-up high-statistics study (2019) did not observe any time modulation: indicating the observed anomaly was purely statistical, with no physical origin. References maintains a collection of research papers on the GSI K-Capture Anomaly Nuclear physics Experimental particle physics
GSI anomaly
[ "Physics" ]
450
[ "Experimental particle physics", "Experimental physics", "Particle physics", "Nuclear physics" ]
23,703,637
https://en.wikipedia.org/wiki/Hyparrhenia%20hirta
Hyparrhenia hirta is a species of grass known by the common names common thatching grass and Coolatai grass. It is native to much of Africa and Eurasia, and it is known on other continents as an introduced species. In eastern Australia it is a tenacious noxious weed. In South Africa, where it is native, it is very common and one of the most widely used thatching grasses. It is also used for grazing livestock and weaving mats and baskets. This is a perennial grass forming clumps 30 centimetres to one metre tall with tough, dense bases sprouting from rhizomes. The inflorescence atop the wiry stem is a panicle of hairy spikelets with bent awns up to 3.5 centimetres long. The grass can grow in a variety of habitat types, in dry conditions, heavy, rocky, eroded soils, and disturbed areas. References External links Jepson Manual Treatment USDA Plants Profile Grass Manual Treatment Andropogoneae Flora of West Tropical Africa Flora of Southern Africa Flora of South Africa Flora of Lebanon Building materials
Hyparrhenia hirta
[ "Physics", "Engineering" ]
223
[ "Building engineering", "Architecture", "Construction", "Materials", "Matter", "Building materials" ]
10,656,445
https://en.wikipedia.org/wiki/Derivation%20of%20the%20Navier%E2%80%93Stokes%20equations
The derivation of the Navier–Stokes equations as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof explaining the properties and bounds of the equations, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics. Basic assumptions The Navier–Stokes equations are based on the assumption that the fluid, at the scale of interest, is a continuum – a continuous substance rather than discrete particles. Another necessary assumption is that all the fields of interest including pressure, flow velocity, density, and temperature are at least weakly differentiable. The equations are derived from the basic principles of continuity of mass, conservation of momentum, and conservation of energy. Sometimes it is necessary to consider a finite arbitrary volume, called a control volume, over which these principles can be applied. This finite volume is denoted by and its bounding surface . The control volume can remain fixed in space or can move with the fluid. The material derivative Changes in properties of a moving fluid can be measured in two different ways. One can measure a given property by either carrying out the measurement on a fixed point in space as particles of the fluid pass by, or by following a parcel of fluid along its streamline. The derivative of a field with respect to a fixed position in space is called the Eulerian derivative, while the derivative following a moving parcel is called the advective or material (or Lagrangian) derivative. The material derivative is defined as the linear operator: where is the flow velocity. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (the derivative on a fixed reference frame, representing changes at a point with respect to time) whereas the second term represents changes of a quantity with respect to position (see advection). This "special" derivative is in fact the ordinary derivative of a function of many variables along a path following the fluid motion; it may be derived through application of the chain rule in which all independent variables are checked for change along the path (which is to say, the total derivative). For example, the measurement of changes in wind velocity in the atmosphere can be obtained with the help of an anemometer in a weather station or by observing the movement of a weather balloon. The anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the flow. Continuity equations The Navier–Stokes equation is a special continuity equation. A continuity equation may be derived from conservation principles of: mass, momentum, energy. A continuity equation (or conservation law) is an integral relation stating that the rate of change of some integrated property defined over a control volume must be equal to the rate at which it is lost or gained through the boundaries of the volume plus the rate at which it is created or consumed by sources and sinks inside the volume. This is expressed by the following integral continuity equation: where is the flow velocity of the fluid, is the outward-pointing unit normal vector, and represents the sources and sinks in the flow, taking the sinks as positive. 
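Since the displayed equations are not preserved in the extracted text above, the standard forms they describe are given below in conventional notation, writing u for the flow velocity, φ for the quantity being tracked, Ω for the control volume with boundary ∂Ω and outward unit normal n, and Q for the sink density (sinks taken as positive); the symbol choices are this sketch's, not the article's.

```latex
% Material derivative of a field \varphi advected by the flow velocity \mathbf{u}:
\frac{\mathrm{D}\varphi}{\mathrm{D}t}
  \equiv \frac{\partial \varphi}{\partial t}
  + \mathbf{u}\cdot\nabla\varphi
% Integral continuity equation over a control volume \Omega with boundary
% \partial\Omega, outward unit normal \mathbf{n}, and sink density Q:
\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\varphi\,\mathrm{d}V
  = -\oint_{\partial\Omega}\varphi\,(\mathbf{u}\cdot\mathbf{n})\,\mathrm{d}A
  - \int_{\Omega} Q\,\mathrm{d}V
```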
The divergence theorem may be applied to the surface integral, changing it into a volume integral: Applying the Reynolds transport theorem to the integral on the left and then combining all of the integrals: The integral must be zero for any control volume; this can only be true if the integrand itself is zero, so that: From this valuable relation (a very generic continuity equation), three important concepts may be concisely written: conservation of mass, conservation of momentum, and conservation of energy. Validity is retained if is a vector, in which case the vector-vector product in the second term will be a dyad. Conservation of mass Mass may be considered also. When the intensive property is considered as the mass, by substitution into the general continuity equation, and taking (no sources or sinks of mass): where is the mass density (mass per unit volume), and is the flow velocity. This equation is called the mass continuity equation, or simply the continuity equation. This equation generally accompanies the Navier–Stokes equation. In the case of an incompressible fluid, (the density following the path of a fluid element is constant) and the equation reduces to: which is in fact a statement of the conservation of volume. Conservation of momentum A general momentum equation is obtained when the conservation relation is applied to momentum. When the intensive property is considered as the mass flux (also momentum density), that is, the product of mass density and flow velocity , by substitution into the general continuity equation: where is a dyad, a special case of tensor product, which results in a second rank tensor; the divergence of a second rank tensor is again a vector (a first-rank tensor). Using the formula for the divergence of a dyad, we then have Note that the gradient of a vector is a special case of the covariant derivative, the operation results in second rank tensors; except in Cartesian coordinates, it is important to understand that this is not simply an element by element gradient. Rearranging : The leftmost expression enclosed in parentheses is, by mass continuity (shown before), equal to zero. Noting that what remains on the left side of the equation is the material derivative of flow velocity: This appears to simply be an expression of Newton's second law () in terms of body forces instead of point forces. Each term in any case of the Navier–Stokes equations is a body force. A shorter though less rigorous way to arrive at this result would be the application of the chain rule to acceleration: where . The reason why this is "less rigorous" is that we haven't shown that the choice of is correct; however it does make sense since with that choice of path the derivative is "following" a fluid "particle", and in order for Newton's second law to work, forces must be summed following a particle. For this reason the convective derivative is also known as the particle derivative. Cauchy momentum equation The generic density of the momentum source seen previously is made specific first by breaking it up into two new terms, one to describe internal stresses and one for external forces, such as gravity. By examining the forces acting on a small cube in a fluid, it may be shown that where is the Cauchy stress tensor, and accounts for body forces present. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of any continuum that conserves mass. is a rank two symmetric tensor given by its covariant components. 
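Again in conventional notation, the three key relations referred to above are the mass continuity equation, its incompressible special case, and the Cauchy momentum equation; these are the standard forms, supplied here because the rendered mathematics is missing, with σ the Cauchy stress tensor and f the body force per unit volume.

```latex
% Mass continuity:
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0
% Incompressible special case (conservation of volume):
\nabla\cdot\mathbf{u} = 0
% Cauchy momentum equation:
\rho\,\frac{\mathrm{D}\mathbf{u}}{\mathrm{D}t}
  = \nabla\cdot\boldsymbol{\sigma} + \mathbf{f}
```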
In orthogonal coordinates in three dimensions it is represented as the 3 × 3 matrix: where the are normal stresses and shear stresses. This matrix is split up into two terms: where is the 3 × 3 identity matrix and is the deviatoric stress tensor. Note that the mechanical pressure is equal to the negative of the mean normal stress: The motivation for doing this is that pressure is typically a variable of interest, and also this simplifies application to specific fluid families later on since the rightmost tensor in the equation above must be zero for a fluid at rest. Note that is traceless. The Cauchy equation may now be written in another more explicit form: This equation is still incomplete. For completion, one must make hypotheses on the forms of and , that is, one needs a constitutive law for the stress tensor which can be obtained for specific fluid families and on the pressure. Some of these hypotheses lead to the Euler equations (fluid dynamics), other ones lead to the Navier–Stokes equations. Additionally, if the flow is assumed compressible an equation of state will be required, which will likely further require a conservation of energy formulation. Application to different fluids The general form of the equations of motion is not "ready for use", the stress tensor is still unknown so that more information is needed; this information is normally some knowledge of the viscous behavior of the fluid. For different types of fluid flow this results in specific forms of the Navier–Stokes equations. Newtonian fluid Compressible Newtonian fluid The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids, In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes: The stress tensor is a linear function of the strain rate tensor or equivalently the velocity gradient. The fluid is isotropic. For a fluid at rest, must be zero (so that hydrostatic pressure results). The above list states the classic argument that the shear strain rate tensor (the (symmetric) shear part of the velocity gradient) is a pure shear tensor and does not include any inflow/outflow part (any compression/expansion part). This means that its trace is zero, and this is achieved by subtracting in a symmetric way from the diagonal elements of the tensor. The compressional contribution to viscous stress is added as a separate diagonal tensor. Applying these assumptions will lead to : or in tensor form That is, the deviatoric of the deformation rate tensor is identified to the deviatoric of the stress tensor, up to a factor . is the Kronecker delta. and are proportionality constants associated with the assumption that stress depends on strain linearly; is called the first coefficient of viscosity or shear viscosity (usually just called "viscosity") and is the second coefficient of viscosity or volume viscosity (and it is related to bulk viscosity). The value of , which produces a viscous effect associated with volume change, is very difficult to determine, not even its sign is known with absolute certainty. Even in compressible flows, the term involving is often negligible; however it can occasionally be important even in nearly incompressible flows and is a matter of controversy. When taken nonzero, the most common approximation is . 
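One common way of writing the split of the stress tensor and the Newtonian constitutive law discussed here is sketched below, with μ the shear (first) viscosity and λ the second, or volume, viscosity; the pressure appearing in the constitutive relation coincides with the mean normal stress only under the Stokes hypothesis λ = −2μ/3.

```latex
% Split of the Cauchy stress into an isotropic pressure part and a deviatoric part:
\boldsymbol{\sigma} = -p\,\mathbf{I} + \boldsymbol{\tau},
\qquad p = -\tfrac{1}{3}\operatorname{tr}\boldsymbol{\sigma},
\qquad \operatorname{tr}\boldsymbol{\tau} = 0

% Compressible Newtonian constitutive law (linear in the velocity gradient, isotropic,
% and reducing to pure hydrostatic pressure for a fluid at rest):
\boldsymbol{\sigma} = -p\,\mathbf{I}
  + \mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}\right)
  + \lambda\,(\nabla\cdot\mathbf{u})\,\mathbf{I},
\qquad \lambda \approx -\tfrac{2}{3}\mu \quad \text{(common approximation)}
```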
A straightforward substitution of into the momentum conservation equation will yield the Navier–Stokes equations, describing a compressible Newtonian fluid: The body force has been decomposed into density and external acceleration, that is, . The associated mass continuity equation is: In addition to this equation, an equation of state and an equation for the conservation of energy is needed. The equation of state to use depends on context (often the ideal gas law), the conservation of energy will read: Here, is the specific enthalpy, is the temperature, and is a function representing the dissipation of energy due to viscous effects: With a good equation of state and good functions for the dependence of parameters (such as viscosity) on the variables, this system of equations seems to properly model the dynamics of all known gases and most liquids. Incompressible Newtonian fluid For the special (but very common) case of incompressible flow, the momentum equations simplify significantly. Using the following assumptions: Viscosity will now be a constant The second viscosity effect The simplified mass continuity equation This gives incompressible Navier-Stokes equations, describing incompressible Newtonian fluid: then looking at the viscous terms of the momentum equation for example we have: Similarly for the and momentum directions we have and . The above solution is key to deriving Navier–Stokes equations from the equation of motion in fluid dynamics when density and viscosity are constant. Non-Newtonian fluids A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Most commonly the viscosity of non-Newtonian fluids is a function of shear rate or shear rate history. However, there are some non-Newtonian fluids with shear-independent viscosity, that nonetheless exhibit normal stress-differences or other non-Newtonian behaviour. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, paint, blood, and shampoo. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different, and can even be time-dependent. The study of the non-Newtonian fluids is usually called rheology. A few examples are given here. Bingham fluid In Bingham fluids, the situation is slightly different: These are fluids capable of bearing some stress before they start flowing. Some common examples are toothpaste and clay. Power-law fluid A power law fluid is an idealised fluid for which the shear stress, , is given by This form is useful for approximating all sorts of general fluids, including shear thinning (such as latex paint) and shear thickening (such as corn starch water mixture). Stream function formulation In the analysis of a flow, it is often desirable to reduce the number of equations and/or the number of variables. The incompressible Navier–Stokes equation with mass continuity (four equations in four unknowns) can be reduced to a single equation with a single dependent variable in 2D, or one vector equation in 3D. This is enabled by two vector calculus identities: for any differentiable scalar and vector . 
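Collecting these results in the same notation, the momentum equations for the two Newtonian cases and the two vector calculus identities just mentioned (valid for any differentiable scalar field φ and vector field A; g is the external acceleration and ν = μ/ρ the kinematic viscosity) can be sketched as:

```latex
% Compressible Newtonian fluid, with the accompanying mass continuity equation:
\rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p
  + \nabla\cdot\!\left[\mu\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathsf T}\right)\right]
  + \nabla\left(\lambda\,\nabla\cdot\mathbf{u}\right)
  + \rho\,\mathbf{g},
\qquad
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0

% Incompressible Newtonian fluid (constant \rho and \mu):
\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{g},
\qquad \nabla\cdot\mathbf{u} = 0

% Vector calculus identities on which the stream function formulation rests:
\nabla\times(\nabla\phi) = \mathbf{0},
\qquad \nabla\cdot(\nabla\times\mathbf{A}) = 0
```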
The first identity implies that any term in the Navier–Stokes equation that may be represented as the gradient of a scalar will disappear when the curl of the equation is taken. Commonly, pressure and external acceleration will be eliminated, resulting in (this is true in 2D as well as 3D): where it is assumed that all body forces are describable as gradients (for example it is true for gravity), and density has been divided so that viscosity becomes kinematic viscosity. The second vector calculus identity above states that the divergence of the curl of a vector field is zero. Since the (incompressible) mass continuity equation specifies the divergence of flow velocity being zero, we can replace the flow velocity with the curl of some vector so that mass continuity is always satisfied: So, as long as flow velocity is represented through , mass continuity is unconditionally satisfied. With this new dependent vector variable, the Navier–Stokes equation (with curl taken as above) becomes a single fourth order vector equation, no longer containing the unknown pressure variable and no longer dependent on a separate mass continuity equation: Apart from containing fourth order derivatives, this equation is fairly complicated, and is thus uncommon. Note that if the cross differentiation is left out, the result is a third order vector equation containing an unknown vector field (the gradient of pressure) that may be determined from the same boundary conditions that one would apply to the fourth order equation above. 2D flow in orthogonal coordinates The true utility of this formulation is seen when the flow is two dimensional in nature and the equation is written in a general orthogonal coordinate system, in other words a system where the basis vectors are orthogonal. Note that this by no means limits application to Cartesian coordinates, in fact most of the common coordinates systems are orthogonal, including familiar ones like cylindrical and obscure ones like toroidal. The 3D flow velocity is expressed as (note that the discussion not used coordinates so far): where are basis vectors, not necessarily constant and not necessarily normalized, and are flow velocity components; let also the coordinates of space be . Now suppose that the flow is 2D. This does not mean the flow is in a plane, rather it means that the component of flow velocity in one direction is zero and the remaining components are independent of the same direction. In that case (take component 3 to be zero): The vector function is still defined via: but this must simplify in some way also since the flow is assumed 2D. If orthogonal coordinates are assumed, the curl takes on a fairly simple form, and the equation above expanded becomes: Examining this equation shows that we can set and retain equality with no loss of generality, so that: the significance here is that only one component of remains, so that 2D flow becomes a problem with only one dependent variable. The cross differentiated Navier–Stokes equation becomes two equations and one meaningful equation. The remaining component is called the stream function. The equation for can simplify since a variety of quantities will now equal zero, for example: if the scale factors and also are independent of . Also, from the definition of the vector Laplacian Manipulating the cross differentiated Navier–Stokes equation using the above two equations and a variety of identities will eventually yield the 1D scalar equation for the stream function: where is the biharmonic operator. 
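With the velocity written through a vector potential, the construction described above can be summarized as follows; the last line is the familiar Cartesian special case of the general orthogonal-coordinate result, with ψ the scalar stream function and ν the kinematic viscosity:

```latex
% Velocity expressed through a vector potential, so that mass continuity is automatic:
\mathbf{u} = \nabla\times\boldsymbol{\psi},
\qquad \nabla\cdot\mathbf{u} = \nabla\cdot(\nabla\times\boldsymbol{\psi}) = 0

% Scalar stream function equation in 2-D Cartesian coordinates
% (\nabla^{4} is the biharmonic operator):
\frac{\partial}{\partial t}\left(\nabla^{2}\psi\right)
  + \frac{\partial\psi}{\partial y}\,\frac{\partial}{\partial x}\left(\nabla^{2}\psi\right)
  - \frac{\partial\psi}{\partial x}\,\frac{\partial}{\partial y}\left(\nabla^{2}\psi\right)
  = \nu\,\nabla^{4}\psi
```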
This is very useful because it is a single self-contained scalar equation that describes both momentum and mass conservation in 2D. The only other equations that this partial differential equation needs are initial and boundary conditions. Derivation of the scalar stream function equation The curl is distributed over the terms, the curl of the curl is replaced with the Laplacian, and the convection and viscosity terms are expanded; in this step the curl of a gradient vanishes, and so does the divergence of a curl. After negating and expanding the curl of the cross product into four terms, only one of the four terms is nonzero: the second is zero because it is the dot product of orthogonal vectors, the third is zero because it contains the divergence of the flow velocity, and the fourth is zero because the divergence of a vector with only component three is zero (since it is assumed that nothing, except perhaps the pressure, depends on component three). This vector equation is therefore one meaningful scalar equation and two trivial 0 = 0 equations. The assumptions for the stream function equation are: The flow is incompressible and Newtonian. Coordinates are orthogonal. Flow is 2D: the third velocity component is zero and the remaining components do not depend on the third coordinate. The first two scale factors of the coordinate system are independent of the last coordinate; otherwise extra terms appear. The stream function has some useful properties: Since the flow velocity is the curl of the stream function vector, the vorticity of the flow is just the negative of the Laplacian of the stream function. The level curves of the stream function are streamlines. The stress tensor The derivation of the Navier–Stokes equation involves the consideration of forces acting on fluid elements, so that a quantity called the stress tensor appears naturally in the Cauchy momentum equation. Since the divergence of this tensor is taken, it is customary to write out the equation fully simplified, so that the original appearance of the stress tensor is lost. However, the stress tensor still has some important uses, especially in formulating boundary conditions at fluid interfaces. Recalling the split of the stress tensor into pressure and deviatoric parts, the stress tensor of a Newtonian fluid can be written out explicitly. If the fluid is assumed to be incompressible, the tensor simplifies significantly: in 3D Cartesian coordinates, for example, its viscous part is simply twice the shear viscosity times the strain rate tensor, which is defined as the symmetric part of the velocity gradient. See also Derivation of Navier–Stokes equation from discrete LBE First law of thermodynamics (fluid mechanics) References Surface Tension Module, by John W. M. Bush, at MIT OCW Galdi, An Introduction to the Mathematical Theory of the Navier–Stokes Equations: Steady-State Problems. Springer 2011 Equations of fluid dynamics Aerodynamics Partial differential equations
Derivation of the Navier–Stokes equations
[ "Physics", "Chemistry", "Engineering" ]
4,012
[ "Equations of fluid dynamics", "Equations of physics", "Aerodynamics", "Aerospace engineering", "Fluid dynamics" ]
19,721,482
https://en.wikipedia.org/wiki/Pumplinx
PumpLinx is a 3-D computational fluid dynamics (CFD) software package developed for the analysis of fluid pumps, motors, compressors, valves, propellers, hydraulic systems, and other fluid devices with rotating or sliding components. Features The software imports 3-D geometry from CAD data in the form of STL files. It has a geometry Conformal Adaptive Binary-Tree mesh generation tool which creates a 3-D grid from CAD surfaces. For liquid devices, PumpLinx has a cavitation model to account for the effects of liquid vapor, free/dissolved gas, and liquid compressibility. PumpLinx provides templates for different categories of devices, including: axial piston pumps, centrifugal pumps, gerotors, gear pumps, progressive cavity pumps, propellers, radial piston pumps, rotary vane pumps, submersible pumps, and valves. These templates create an initial grid for special rotors (for example, grids around the gears of a gear pump), re-mesh the grid for a moving simulation, and provide device-specific input and output. The output from the code includes velocities, pressures, temperatures, and gas volume fractions of the flow field, together with integrated engineering data such as loads and torques. PumpLinx uses a single graphical user interface (GUI) for grid generation, model set-up, execution, and post-processing. Market The software is used primarily by component and system engineers in the automotive, hydraulic, and aerospace industries as a virtual test bed to study efficiency, cavitation, pressure ripple, and noise for hydrodynamic pumps and fluid power equipment. References External links Simerics Website Cradle Consulting Software companies based in Alabama Fluid dynamics Computational fluid dynamics Computer-aided engineering software Companies based in Huntsville, Alabama Companies established in 2005 Software companies of the United States
Pumplinx
[ "Physics", "Chemistry", "Engineering" ]
375
[ "Computational fluid dynamics", "Chemical engineering", "Computational physics", "Piping", "Fluid dynamics" ]
19,725,090
https://en.wikipedia.org/wiki/Cold
Cold is the presence of low temperature, especially in the atmosphere. In common usage, cold is often a subjective perception. A lower bound to temperature is absolute zero, defined as 0 K on the Kelvin scale, an absolute thermodynamic temperature scale. This corresponds to −273.15° on the Celsius scale, −459.67° on the Fahrenheit scale, and 0° on the Rankine scale. Since temperature relates to the thermal energy held by an object or a sample of matter, which is the kinetic energy of the random motion of the particle constituents of matter, an object will have less thermal energy when it is colder and more when it is hotter. If it were possible to cool a system to absolute zero, all motion of the particles in a sample of matter would cease and they would be at complete rest in the classical sense. The object could be described as having zero thermal energy. Microscopically in the description of quantum mechanics, however, matter still has zero-point energy even at absolute zero, because of the uncertainty principle. Cooling Cooling refers to the process of becoming cold, or lowering in temperature. This could be accomplished by removing heat from a system, or exposing the system to an environment with a lower temperature. Coolants are fluids used to cool objects, prevent freezing and prevent erosion in machines. Air cooling is the process of cooling an object by exposing it to air. This will only work if the air is at a lower temperature than the object, and the process can be enhanced by increasing the surface area, increasing the coolant flow rate, or decreasing the mass of the object. Another common method of cooling is exposing an object to ice, dry ice, or liquid nitrogen. This works by conduction; the heat is transferred from the relatively warm object to the relatively cold coolant. Laser cooling and magnetic evaporative cooling are techniques used to reach very low temperatures. History Early history In ancient times, ice was not used for food preservation but for cooling wine, a practice the Romans also followed. According to Pliny, Emperor Nero invented the ice bucket to chill wine rather than adding ice directly to the wine, which would dilute it. Some time around 1700 BC, Zimri-Lim, king of the Mari Kingdom in what is now eastern Syria, created an "icehouse" called bit shurpin at a location close to his capital city on the banks of the Euphrates. By the 7th century BC the Chinese were using icehouses to preserve vegetables and fruits. A document from the period of Tang dynastic rule in China (618–907 AD) refers to the practice, in vogue during the Eastern Chou Dynasty (770–256 BC), of employing 94 workmen in an "Ice-Service" to freeze everything from wine to dead bodies. Shachtman says that in the 4th century AD, the brother of the Japanese emperor Nintoku gave him a gift of ice from a mountain. The Emperor was so happy with the gift that he named the first of June the "Day of Ice" and ceremoniously gave blocks of ice to his officials. Even in ancient times, Shachtman says, night cooling by evaporation of water and by heat radiation was practiced in Egypt and India, as was the use of salts to lower the freezing temperature of water. The ancient people of Rome and Greece were aware that previously boiled water cooled more quickly than ordinary water; the reason for this, that boiling drives off carbon dioxide and other dissolved gases which are deterrents to cooling, was not known until the 17th century.
From the 17th century Shachtman says that King James VI and I supported the work of Cornelis Drebbel as a magician to perform tricks such as producing thunder, lightning, lions, birds, trembling leaves and so forth. In 1620 he gave a demonstration in Westminster Abbey to the king and his courtiers on the power of cold. On a summer day, Shachtman says, Drebbel had created a chill (lowered the temperature by several degrees) in the hall of the Abbey, which made the king shiver and run out of the hall with his entourage. This was an incredible spectacle, says Shachtman. Several years before, Giambattista della Porta had demonstrated at the Abbey "ice fantasy gardens, intricate ice sculptures" and also iced drinks for banquets in Florence. The only reference to the artificial freezing created by Drebbel was by Francis Bacon. His demonstration was not taken seriously as it was considered one of his magic tricks, as there was no practical application then. Drebbel had not revealed his secrets. Shachtman says that Lord Chancellor Bacon, an advocate of experimental science, had tried in Novum Organum, published in the late 1620s, to explain the artificial freezing experiment at Westminster Abbey, though he was not present during the demonstration, as "Nitre (or rather its spirit) is very cold, and hence nitre or salt when added to snow or ice intensifies the cold of the latter, the nitre by adding to its own cold, but the salt by supplying activity to the cold snow." This explanation on the cold inducing aspects of nitre and salt was tried then by many scientists. Shachtman says it was the lack of scientific knowledge in physics and chemistry that had held back progress in the beneficial use of ice until a drastic change in religious opinions in the 17th century. The intellectual barrier was broken by Francis Bacon and Robert Boyle who followed him in this quest for knowledge of cold. Boyle did extensive experimentation during the 17th century in the discipline of cold, and his research on pressure and volume was the forerunner of research in the field of cold during the 19th century. He explained his approach as "Bacon's identification of heat and cold as the right and left hands of nature". Boyle also refuted some of the theories mooted by Aristotle on cold by experimenting on transmission of cold from one material to the other. He proved that water was not the only source of cold but gold, silver and crystal, which had no water content, could also change to severe cold condition. 19th century In the United States from about 1850 till end of 19th century export of ice was second only to cotton. The first ice box was developed by Thomas Moore, a farmer from Maryland in 1810 to carry butter in an oval shaped wooden tub. The tub was provided with a metal lining in its interior and surrounded by a packing of ice. A rabbit skin was used as insulation. Moore also developed an ice box for domestic use with the container built over a space of which was filled with ice. In 1825, ice harvesting by use of a horse drawn ice cutting device was invented by Nathaniel J. Wyeth. The cut blocks of uniform size ice was a cheap method of food preservation widely practiced in the United States. Also developed in 1855 was a steam powered device to haul 600 tons of ice per hour. More innovations ensued. Devices using compressed air as a refrigerants were invented. 20th century Iceboxes were in widespread use from the mid-19th century to the 1930s, when the refrigerator was introduced into the home. 
Most municipally consumed ice was harvested in winter from snow-packed areas or frozen lakes, stored in ice houses, and delivered domestically as iceboxes became more common. In 1913, refrigerators for home use were invented. In 1923 Frigidaire introduced the first self-contained unit. The introduction of Freon in the 1920s expanded the refrigerator market during the 1930s. Home freezers as separate compartments (larger than necessary just for ice cubes) were introduced in 1940. Frozen foods, previously a luxury item, became commonplace. Physiological effects Cold has numerous physiological and pathological effects on the human body, as well as on other organisms. Cold environments may promote certain psychological traits, as well as having direct effects on the ability to move. Shivering is one of the first physiological responses to cold. Even at low temperatures, the cold can massively disrupt blood circulation. Extracellular water freezes and tissue is destroyed. It affects fingers, toes, nose, ears and cheeks particularly often. They discolor, swell, blister, and bleed. The so-called frostnip leads to local frostbite or even to the death of entire body parts. Only temporary cold reactions of the skin are without consequences. As blood vessels contract, they become cool and pale, with less oxygen getting into the tissue. Warmth stimulates blood circulation again and is painful but harmless. Comprehensive protection against the cold is particularly important for children and for sports. Extreme cold temperatures may lead to frostbite, sepsis, and hypothermia, which in turn may result in death. Common myths A common, but false, statement states that cold weather itself can induce the identically named common cold. No scientific evidence of this has been found, although the disease, alongside influenza and others, does increase in prevalence with colder weather. Notable cold locations and objects The National Institute of Standards and Technology in Boulder, Colorado using a new technique, managed to chill a microscopic mechanical drum to 360 microkelvins, making it the coldest object on record. Theoretically, using this technique, an object could be cooled to absolute zero. The coldest known temperature ever achieved is a state of matter called the Bose–Einstein condensate which was first theorized to exist by Satyendra Nath Bose in 1924 and first created by Eric Cornell, Carl Wieman, and co-workers at JILA on 5 June 1995. They did this by cooling a dilute vapor consisting of approximately two thousand rubidium-87 atoms to below 170 nK (one nK or nanokelvin is a billionth (10−9) of a kelvin) using a combination of laser cooling (a technique that won its inventors Steven Chu, Claude Cohen-Tannoudji, and William D. Phillips the 1997 Nobel Prize in Physics) and magnetic evaporative cooling. 90377 Sedna is one of the coldest known objects within the Solar System. Orbiting at an average distance of 84 billion miles, Sedna has an average surface temperature of -400°F (-240°C). The lunar crater Hermite was described after a 2009 survey by NASA's Lunar Reconnaissance Orbiter as the "coldest known place in the Solar System", with temperatures at 26 kelvins (−413 °F, −247 °C). The Boomerang Nebula is the coldest known natural location in the universe, with a temperature that is estimated at 1 K (−272.15 °C, −457.87 °F). The Dwarf Planet Haumea is one of the coldest known objects in our solar system. 
Its surface temperature is about −241 °C (−401 °F). The Planck spacecraft's instruments are kept at 0.1 K (−273.05 °C, −459.49 °F) via passive and active cooling. Absent any other source of heat, the temperature of the Universe is roughly 2.725 kelvins, due to the cosmic microwave background radiation, a remnant of the Big Bang. Neptune's moon Triton has a surface temperature of 38.15 K (−235 °C, −391 °F). Uranus has a black-body temperature of 58.2 K (−215.0 °C, −354.9 °F). Saturn has a black-body temperature of 81.1 K (−192.0 °C, −313.7 °F). Mercury, despite being close to the Sun, is actually cold during its night, with a temperature of about 93.15 K (−180 °C, −290 °F); it is cold during its night because it has no atmosphere to trap heat from the Sun. Jupiter has a black-body temperature of 110.0 K (−163.2 °C, −261.67 °F). Mars has a black-body temperature of 210.1 K (−63.05 °C, −81.49 °F). The coldest continent on Earth is Antarctica. The coldest place on Earth is the Antarctic Plateau, a high-altitude area of Antarctica around the South Pole. The lowest reliably measured temperature on Earth, 183.9 K (−89.2 °C, −128.6 °F), was recorded there at Vostok Station on 21 July 1983. The Poles of Cold are the places in the Southern and Northern Hemispheres where the lowest air temperatures have been recorded (see List of weather records). The cold deserts of the North Pole, known as the tundra region, experience an annual snowfall of a few inches, and temperatures as low as 203.15 K (−70 °C, −94 °F) have been recorded there. Only a few small plants survive in the generally frozen ground, which thaws only for a short spell. Cold deserts of the Himalayas are a feature of a rain-shadow zone created by the peaks of the Himalaya range, which runs from the Pamir Knot to the southern border of the Tibetan plateau; this mountain range is, however, also the reason for the monsoon rainfall in the Indian subcontinent. This zone lies at an elevation of about 3,000 m and covers Ladakh, Lahaul, Spiti and Pooh. In addition, there are inner valleys within the main Himalayas, such as Chamoli, some areas of Kinnaur, Pithoragarh and northern Sikkim, which are also categorized as cold deserts. Mythology and culture Niflheim was a realm of primordial ice and cold with nine frozen rivers in Norse mythology. In Dante's Inferno, the deepest part of Hell is Cocytus, a frozen lake into which Virgil and Dante are lowered. See also References Bibliography External links Thermodynamics
Cold
[ "Physics", "Chemistry", "Mathematics" ]
2,839
[ "Thermodynamics", "Dynamical systems" ]
19,725,253
https://en.wikipedia.org/wiki/Interbasin%20transfer
Interbasin transfer or transbasin diversion are (often hyphenated) terms used to describe man-made conveyance schemes which move water from one river basin where it is available, to another basin where water is less available or could be utilized better for human development. The purpose of such water resource engineering schemes can be to alleviate water shortages in the receiving basin, to generate electricity, or both. Rarely, as in the case of the Glory River which diverted water from the Tigris to Euphrates River in modern Iraq, interbasin transfers have been undertaken for political purposes. While ancient water supply examples exist, the first modern developments were undertaken in the 19th century in Australia, India and the United States, feeding large cities such as Denver and Los Angeles. Since the 20th century many more similar projects have followed in other countries, including Israel and China, and contributions to the Green Revolution in India and hydropower development in Canada. Since conveyance of water between natural basins are described as both a subtraction at the source and as an addition at the destination, such projects may be controversial in some places and over time; they may also be seen as controversial due to their scale, costs and environmental or developmental impacts. In Texas, for example, a 2007 Texas Water Development Board report analyzed the costs and benefits of IBTs in Texas, concluding that while some are essential, barriers to IBT development include cost, resistance to new reservoir construction and environmental impacts. Despite the costs and other concerns involved, IBTs play an essential role in the state's 50-year water planning horizon. Of 44 recommended ground and surface water conveyance and transfer projects included in the 2012 Texas State Water Plan, 15 would rely on IBTs. While developed countries often have exploited the most economical sites already with large benefits, many large-scale diversion/transfer schemes have been proposed in developing countries such as Brazil, African countries, India and China. These more modern transfers have been justified because of their potential economic and social benefits in more heavily populated areas, stemming from increased water demand for irrigation, industrial and municipal water supply, and renewable energy needs. These projects are also justified because of possible climate change and a concern over decreased water availability in the future; in that light, these projects thus tend to hedge against ensuing droughts and increasing demand. Projects conveying water between basins economically are often large and expensive, and involve major public and/or private infrastructure planning and coordination. In some cases where desired flow is not provided by gravity alone, additional use of energy is required for pumping water to the destination. Projects of this type can also be complicated in legal terms, since water and riparian rights are affected; this is especially true if the basin of origin is a transnational river. Furthermore, these transfers can have significant environmental impacts on aquatic ecosystems at the source. In some cases water conservation measures at the destination can make such water transfers less immediately necessary to alleviate water scarcity, delay their need to be built, or reduce their initial size and cost. 
Existing transfers There are dozens of large inter-basin transfers around the world, most of them concentrated in Australia, Canada, China, India and the United States. The oldest interbasin transfers date back to the late 19th century, with an exceptionally old example being the Roman gold mine at Las Médulas in Spain. Their primary purpose usually is either to alleviate water scarcity or to generate hydropower. Primarily for the alleviation of water scarcity Africa From the Oum Er-Rbia River to supply Casablanca in Morocco with drinking water From the tributaries of Ichkeul Lake in Tunisia to supply Tunis with drinking water From Lake Nasser on the Nile to the New Valley Project in the Western Desert of Egypt The Lesotho Highlands Water Project to supply water to Gauteng in South Africa Americas The Los Angeles Aqueduct completed in 1913 transferring water from the Owens Valley to Los Angeles The Colorado River Aqueduct built in 1933–1941 to supply Southern California with water The All-American Canal built in the 1930s to bring water from the Colorado River to the Imperial Irrigation District in Southern California The California State Water Project built in stages in the 1960s and 1970s to transfer water from Northern to Southern California. It includes the California Aqueduct and the Edmonston Pumping Plant, which lifts water nearly up and over the Tehachapi Mountains through 10 miles of tunnels for municipal water supply in the Los Angeles Metropolitan area. The Cutzamala System built in stages from the late 1970s to the late 1990s to transfer water from the Cutzamala River to Mexico City for use as drinking water, lifting it over more than 1000 meters. It utilizes 7 reservoirs, a 127 km long aqueduct with 21 km of tunnels, 7.5 km open canal, and a water treatment plant. Its cost was US$1.3 billion. See also Water resources management in Mexico The Central Utah Project to supply the Wasatch Front with urban water and for irrigation The San Juan–Chama Project to bring water from the Colorado River basin into the Rio Grande basin for urban and agricultural purposes in northern New Mexico and municipal water supply for Santa Fe and Albuquerque The New Croton Aqueduct, completed in 1890, brings water from the New Croton Reservoir in Westchester and Putnam counties. The Catskill Aqueduct, completed in 1916, is significantly larger than New Croton and brings water from two reservoirs in the eastern Catskill Mountains. The Delaware Aqueduct, completed in 1945, taps tributaries of the Delaware River in the western Catskill Mountains and provides approximately half of New York City's water supply. The Colorado–Big Thompson Project, built between 1938 and 1957, diverts water from the upper Colorado River basin east underneath the Continental Divide to the South Platte basin. The Little Snake - Douglas Creek System, built in two stages between 1963 and 1988, moves water under the Continental Divide in southern Wyoming from the upper Colorado River basin to the North Platte basin. This is then traded for water from elsewhere in the North Platte basin, which is diverted to provide water for Cheyenne. Among other transfers, the Massachusetts Water Resources Authority moves water from the Quabbin Reservoir (completed 1939) and Ware River in the Connecticut River basin and the Wachusett Reservoir (completed 1908) in the Merrimack River basin, to provide drinking water to more densely populated areas in Eastern Massachusetts. Some of the flow is also used for hydropower. 
The Transfer of the São Francisco River in Brazil began in 2007, diverting water from the São Francisco River to the surrounding dry sertão region of four of the country's northeastern states. The Central Arizona Project (CAP) in the USA is not an interbasin transfer per se, although it shares many characteristics with interbasin transfers as it transports large amounts of water over a long distance and difference in altitude. The CAP transfers water from the Colorado River to Central Arizona for both agriculture and municipal water supply to substitute for depleted groundwater. However, the water remains within the watershed of the Colorado River, though transferred into the Gila sub-basin. Asia The Narmada Canal Project offtaking from Sardar Sarovar in western India transfers water from the Narmada Basin to areas coming under other river basins in Gujarat (Mahi, Sabarmati and other small river basins in North Gujarat, Saurashtra and Kutch) and Rajasthan (Luni and other basins of Jalore and Barmer districts) for irrigation, drinking water, industrial use, etc. The canal is designed to transfer water annually from the Narmada Basin to areas under other basins in Gujarat and Rajasthan. (9 MAF for Gujarat and 0.5 MAF for Rajasthan). The Periyar Project in Southern India from the Periyar River in Kerala to the Vaigai basin in Tamil Nadu. It consists of a dam and a tunnel with a discharging capacity of 40.75 cubic meters per second. The project was commissioned in 1895 and provides irrigation to 81,000 hectares, in addition to providing power through a plant with a capacity of 140 MW. The Parambikulam Aliyar project, also in Southern India, consists of seven streams, five flowing towards the west and two towards the east, which have been dammed and interlinked by tunnels. The project transfers water from the Chalakudy River basin to the Bharatapuzha and Cauvery basins for irrigation in Coimbatore district of Tamil Nadu and the Chittur area of Kerala states. It also serves for power generation with a capacity of 185 MW. The Kurnool Cudappah Canal in Southern India is a scheme started by a private company in 1863, transferring water from the Krishna River basin to the Pennar basin. It includes a 304 km long canal with a capacity of 84.9 cubic meters per second for irrigation. The Telugu Ganga project in Southern India. This project primarily meets the water supply needs of Chennai metropolitan area, but is also used for irrigation. It brings Krishna River water through 406 km of canals. The project, which was approved in 1977 and completed in 2004, involved the cooperation of four Indian States: Maharashtra, Karnataka, Andhra Pradesh and Tamil Nadu. The Indira Gandhi Canal (formerly known as the Rajasthan Canal) linking the Ravi River, the Beas River and the Sutlej River through a system of dams, hydropower plants, tunnels, canals and irrigation systems in Northern India built in the 1960s to irrigate the Thar Desert. The National Water Carrier in Israel, transferring water from the Sea of Galilee (Jordan River Basin) to the Mediterranean coast lifting water over 372 meters. Its water is used both in agriculture and for municipal water supply. The Mahaweli Ganga Project in Sri Lanka includes several inter basin transfers. The Irtysh–Karaganda Canal in central Kazakhstan is about 450 km long with a maximum capacity of 75 cubic meters per second. It was built between 1962 and 1974 and involves a lift of 14 to 22 m. 
The South–North Water Transfer Project in China, as well as other smaller-scale projects, such as the Irtysh–Karamay–Ürümqi Canal. Part of the water flowing northwards down Tung Chung River in northern Lantau is diverted across the mountain ridge to Shek Pik Reservoir in southern Lantau. The IRTS (Inter-Reservoirs Transfer Scheme) which transfers water from the Kowloon Byewash Reservoir to the Lower Shing Mun Reservoir, in length and in diameter.‌ Lingqu in Kwangsi Province Australia The 530 km-long Goldfields Water Supply Scheme of Western Australia built from 1896 to 1903 Europe Various transfers from the Ebro River in Spain, which flows to the Mediterranean, to basins draining to the Atlantic, such as Ebro-Besaya transfer of 1982 to supply the industrial area of Torrelavega, the Cerneja-Ordunte transfer to the Bilbao Metropolitan area of 1961, as well as the Zadorra-Arratia transfer that also supplies Bilbao through the Barazar waterfall (Source:Spanish Wikipedia article on the Ebro River. See Water supply and sanitation in Spain). The North Crimea Canal (Ukraine), transporting water from the Dniepr River to the Crimean Peninsula. Characteristics of major existing interbasin transfers and other large-scale water transfers to alleviate water scarcity For the generation of hydropower Africa The Drakensberg Pumped Storage Scheme from the Tugela River that flows into the Indian Ocean into the Vaal River in South Africa, which ultimately drains into the Orange River and the Atlantic Ocean. Its purpose is hydropower generation Australia The Snowy Mountains Scheme in Australia, built between 1949 and 1974 at the cost (at that time) of A$800 million; a dollar value equivalent in 1999 and 2004 to A$6 billion (US$4.5 billion). The Barnard River Scheme, also in Australia, constructed between 1983 and 1985. Canada In Canada, sixteen interbasin transfers have been implemented for hydropower development. The most important is the James Bay Project from the Caniapiscau River and the Eastmain River into the La Grande River, built in the 1970s. The water flow was reduced by 90% at the mouth of the Eastmain River, by 45% where the Caniapiscau River flows into the Koksoak River, and by 35% at the mouth of the Koksoak River. The water flow of the La Grande River, on the other hand, was doubled, increasing from 1,700 m³/s to 3,400 m³/s (and from 500 m³/s to 5,000 m³/s in winter) at the mouth of the La Grande River. Other interbasin transfers include: British Columbia Campbell–Heber Diversion Coquitlam–Buntzen Diversion Kemano hydroelectric power station diverting water from the Nechako River in British Columbia to the sea. 
Vernon Irrigation District Diversion Manitoba Churchill Diversion–Southern Indian Lake New Brunswick Saint John water supply Newfoundland and Labrador Bay d'Espoir Diversions Churchill Falls hydroelectric power station built between 1967 and 1971 Deer Lake Diversion Smallwood Reservoir–Julian Diversion Smallwood Reservoir–Kanairiktok Diversion Smallwood Reservoir–Naskaupi Diversion Northwest Territories Wellington Lake Hydro Project Diversion (with Saskatchewan) Nova Scotia Ingram Diversion Jordan Diversion Wreck Cove Diversions Ontario Long Lake Diversion Ogoki Diversion Opasatika Diversion Root River Diversion Quebec Barrière Diversion Boyd–Sakami Diversion Lac de la Frégate Diversion Laforge Diversion Manouane Diversion Mégiscane Diversion Rupert Diversion Sault aux Cochons Diversion Saskatchewan Cypress Lake Diversion (with Alberta) Pasquia Land Resettlement Diversion (with Manitoba) Qu'Appelle River Diversion at Lake Diefenbaker Swift Current Diversion Asia The Nam Theun II Project in Laos from the Nam Theun River to the Xe Bang Fai River, both tributaries of the Mekong River, completed in 2008. For other purposes The Chicago Sanitary and Ship Canal in the US, which serves to divert polluted water from Lake Michigan. Transfers under construction The Eastern and Central Routes of the South–North Water Transfer Project in China from the Yangtse River to the Yellow River and Beijing. Proposed transfers Nearly all proposed interbasin transfers are in developing countries. The objective of most transfers is the alleviation of water scarcity in the receiving basin(s). Unlike in the case of existing transfers, there are very few proposed transfers whose objective is the generation of hydropower. Africa From the Ubangi River in Congo to the Chari River which empties into Lake Chad. The plan was first proposed in the 1960s and again in the 1980s and 1990s by Nigerian engineer J. Umolu (ZCN Scheme) and Italian firm Bonifica (Transaqua Scheme). In 1994, the Lake Chad Basin Commission (LCBC) proposed a similar project and at a March, 2008 Summit, the Heads of State of the LCBC member countries committed to the diversion project. In April, 2008, the LCBC advertised a request for proposals for a World Bank-funded feasibility study. Americas The transfer of the São Francisco River from the São Francisco River to the dry sertão in the four northeastern states of Ceará, Rio Grande do Norte, Paraíba and Pernambuco in Brazil. The project is estimated to cost US$2 billion and was given the green light to go ahead by the Supreme Court of Brazil in December 2007. On a much smaller scale, the transfer of up to 36 million gallons of water per day (130,000 cubic meter/day) to Concord and Kannapolis from the Catawba River and the Yadkin River in North Carolina, USA. Shoal Creek Reservoir in north Georgia, from Dawson Forest (Etowah River) to the city of Atlanta (Chattahoochee River). Asia The so-called "Peninsular river component" of India's National Water Development Plan envisages to divert the Mahanadi River surplus to the Godavari and the surplus therefrom to the Krishna, Pennar and Cauvery, with "terminal dams" on the Mahanadi and the Godavari to enable irrigation. 
The Peninsular component also envisages three more transfers — (a) to divert a part of the waters of the west flowing rivers of Kerala to the arid east to meet the needs of Tamil Nadu; (b) to interlink the west flowing rivers north of Mumbai and south of Tapi to provide irrigation to areas in Saurashtra, Kachchh and coastal Maharashtra and to augment the drinking water supplies to Mumbai; and (c) to interlink the southern tributaries of the Yamuna and provide irrigation facilities in parts of Madhya Pradesh and Rajasthan. From the Chalakudy River to the Bharathapuzha River in Kerala, India 14 transfers in Northern India. The so-called "Himalayan river component" envisages transfers from the Kosi River, Gandak River and Ghaghara River to the west; a link between the Brahmaputra River to the Ganges River to augment the dry weather flows of the Ganges; and a link between the Ganges and the Yamuna River "to serve drought-prone areas of Haryana, Rajasthan, Gujarat as also south Uttar Pradesh and south Bihar". The Bheri Babai Diversion Multipurpose Project on the Ghaghara River in Nepal(Hydropower and irrigation) From Northern Russia and Siberia to Central Asia through the Northern river reversal. The proposal, originally dating to Joseph Stalin's and Nikita Khrushchev's eras, included a Western and Eastern route, in the European and Asian parts of the then Soviet Union respectively. The suggested Western route would be from the Pechora River to the Kama River, a tributary of the Volga, along the abandoned and uncompleted Pechora–Kama Canal. The Eastern route would be from the Tobol River, Ishim River and Irtysh River in the Ob basin to the desert plains of Kazakhastan and the Aral Sea basin. In 2006 Kazakh president Nursultan Nazarbayev said he wanted to resuscitate the scheme that had been abandoned by the Soviet Union in 1986. The cost of that route alone is estimated at upwards from US$40 billion, well beyond the means of Kazakhstan. The western route of the South–North Water Transfer Project in China, which foresees to divert water from the headwater of Yangtze (and possibly also the headwaters of Mekong or Salween downstream) into the headwater of Yellow River. If the Mekong and Salween rivers were included in the project this would affect the downstream riparian countries Burma, Thailand, Laos, Cambodia and Vietnam. Australia The Bradfield Scheme in Queensland, serving primarily for irrigation The Kimberley Pipeline Scheme to supply Perth with water through, proposed because of radical rainfall changes in Western Australia since the late 1960s Europe From the Ebro River in Spain to Barcelona in the Northeast and to various cities on the Mediterranean coast to the Southwest Ecological aspects Since rivers are home to a complex web of species and their interactions, the transfer of water from one basin to another can have a serious impact on species living therein. See also Water export Headwater Diversion Plan (Jordan River) References Further reading Fereidoun Ghassemi and Ian White: Inter-Basin Water Transfer, Case Studies from Australia, United States, Canada, China and India, Cambridge University Press, International Hydrology Series, 2007, Hydrology Water resources management Articles containing video clips
Interbasin transfer
[ "Chemistry", "Engineering", "Environmental_science" ]
3,973
[ "Hydrology", "Interbasin transfer", "Environmental engineering" ]
19,726,608
https://en.wikipedia.org/wiki/Optical%20properties%20of%20carbon%20nanotubes
The optical properties of carbon nanotubes are highly relevant for materials science. The way those materials interact with electromagnetic radiation is unique in many respects, as evidenced by their peculiar absorption, photoluminescence (fluorescence), and Raman spectra. Carbon nanotubes are unique "one-dimensional" materials, whose hollow fibers (tubes) have a unique and highly ordered atomic and electronic structure, and can be made in a wide range of dimension. The diameter typically varies from 0.4 to 40 nm (i.e., a range of ~100 times). However, the length can reach , implying a length-to-diameter ratio as high as 132,000,000:1; which is unequaled by any other material. Consequently, all the electronic, optical, electrochemical and mechanical properties of the carbon nanotubes are extremely anisotropic (directionally dependent) and tunable. Applications of carbon nanotubes in optics and photonics are still less developed than in other fields. Some properties that may lead to practical use include tuneability and wavelength selectivity. Potential applications that have been demonstrated include light emitting diodes (LEDs), bolometers and optoelectronic memory. Apart from direct applications, the optical properties of carbon nanotubes can be very useful in their manufacture and application to other fields. Spectroscopic methods offer the possibility of quick and non-destructive characterization of relatively large amounts of carbon nanotubes, yielding detailed measurements of non-tubular carbon content, tube type and chirality, structural defects, and many other properties that are relevant to those other applications. Geometric structure Chiral angle A single-walled carbon nanotubes (SWCNT) can be envisioned as strip of a graphene molecule (a single sheet of graphite) rolled and joined into a seamless cylinder. The structure of the nanotube can be characterized by the width of this hypothetical strip (that is, the circumference c or diameter d of the tube) and the angle α of the strip relative to the main symmetry axes of the hexagonal graphene lattice. This angle, which may vary from 0 to 30 degrees, is called the "chiral angle" of the tube. The (n,m) notation Alternatively, the structure can be described by two integer indices (n,m) that describe the width and direction of that hypothetical strip as coordinates in a fundamental reference frame of the graphene lattice. If the atoms around any 6-member ring of the graphene are numbered sequentially from 1 to 6, the two vectors u and v of that frame are the displacements from atom 1 to atoms 3 and 5, respectively. Those two vectors have the same length, and their directions are 60 degrees apart. The vector w = n u + m v is then interpreted as the circumference of the unrolled tube on the graphene lattice; it relates each point A1 on one edge of the strip to the point A2 on the other edge that will be identified with it as the strip is rolled up. The chiral angle α is then the angle between u and w. The pairs (n,m) that describe distinct tube structures are those with 0 ≤ m ≤ n and n > 0. All geometric properties of the tube, such as diameter, chiral angle, and symmetries, can be computed from these indices. The type also determines the electronic structure of the tube. Specifically, the tube behaves like a metal if |m–n| is a multiple of 3, and like a semiconductor otherwise. Zigzag and armchair tubes Tubes of type (n,m) with n=m (chiral angle = 30°) are called "armchair" and those with m=0 (chiral angle = 0°) "zigzag". 
These tubes have mirror symmetry, and can be viewed as stacks of simple closed paths ("zigzag" and "armchair" paths, respectively). Electronic structure The optical properties of carbon nanotubes are largely determined by their unique electronic structure. The rolling up of the graphene lattice affects that structure in ways that depend strongly on the geometric structure type (n,m). Van Hove singularities A characteristic feature of one-dimensional crystals is that their distribution of density of states (DOS) is not a continuous function of energy, but it descends gradually and then increases in a discontinuous spike. These sharp peaks are called Van Hove singularities. In contrast, three-dimensional materials have continuous DOS. Van Hove singularities result in the following remarkable optical properties of carbon nanotubes: Optical transitions occur between the v1 − c1, v2 − c2, etc., states of semiconducting or metallic nanotubes and are traditionally labeled as S11, S22, M11, etc., or, if the "conductivity" of the tube is unknown or unimportant, as E11, E22, etc. Crossover transitions c1 − v2, c2 − v1, etc., are dipole-forbidden and thus are extremely weak, but they were possibly observed using cross-polarized optical geometry. The energies between the Van Hove singularities depend on the nanotube structure. Thus by varying this structure, one can tune the optoelectronic properties of carbon nanotube. Such fine tuning has been experimentally demonstrated using UV illumination of polymer-dispersed CNTs. Optical transitions are rather sharp (~10 meV) and strong. Consequently, it is relatively easy to selectively excite nanotubes having certain (n, m) indices, as well as to detect optical signals from individual nanotubes. Kataura plot The band structure of carbon nanotubes having certain (n, m) indexes can be easily calculated. A theoretical graph based on these calculations was designed in 1999 by Hiromichi Kataura to rationalize experimental findings. A Kataura plot relates the nanotube diameter and its bandgap energies for all nanotubes in a diameter range. The oscillating shape of every branch of the Kataura plot reflects the intrinsic strong dependence of the SWNT properties on the (n, m) index rather than on its diameter. For example, (10, 1) and (8, 3) tubes have almost the same diameter, but very different properties: the former is a metal, but the latter is a semiconductor. Optical properties Optical absorption Optical absorption in carbon nanotubes differs from absorption in conventional 3D materials by presence of sharp peaks (1D nanotubes) instead of an absorption threshold followed by an absorption increase (most 3D solids). Absorption in nanotubes originates from electronic transitions from the v2 to c2 (energy E22) or v1 to c1 (E11) levels, etc. The transitions are relatively sharp and can be used to identify nanotube types. Note that the sharpness deteriorates with increasing energy, and that many nanotubes have very similar E22 or E11 energies, and thus significant overlap occurs in absorption spectra. This overlap is avoided in photoluminescence mapping measurements (see below), which instead of a combination of overlapped transitions identifies individual (E22, E11) pairs. Interactions between nanotubes, such as bundling, broaden optical lines. While bundling strongly affects photoluminescence, it has much weaker effect on optical absorption and Raman scattering. Consequently, sample preparation for the latter two techniques is relatively simple. 
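As an illustration of how the (n, m) indices fix the geometric and electronic character described in the preceding sections, the following short sketch computes the tube diameter, chiral angle, and metallic or semiconducting character; it assumes the usual graphene lattice constant of about 0.246 nm, and the function name is illustrative rather than taken from any library:

```python
import math

A_LATTICE_NM = 0.246  # graphene lattice constant in nm (assumed typical value)

def swcnt_properties(n: int, m: int) -> dict:
    """Geometric and electronic character of an (n, m) single-walled nanotube."""
    if not (0 <= m <= n and n > 0):
        raise ValueError("indices must satisfy 0 <= m <= n and n > 0")
    # Circumference |w| = a * sqrt(n^2 + n*m + m^2); diameter d = |w| / pi
    diameter_nm = A_LATTICE_NM * math.sqrt(n**2 + n * m + m**2) / math.pi
    # Chiral angle between the chiral vector and the zigzag direction (0 to 30 degrees)
    chiral_angle_deg = math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))
    # Tubes with n - m divisible by 3 behave as metals, the rest as semiconductors
    character = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return {"diameter_nm": round(diameter_nm, 3),
            "chiral_angle_deg": round(chiral_angle_deg, 1),
            "character": character}

# Example: (10, 1) and (8, 3) have nearly equal diameters but different character
print(swcnt_properties(10, 1))  # diameter ~ 0.825 nm, metallic
print(swcnt_properties(8, 3))   # diameter ~ 0.771 nm, semiconducting
```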
Optical absorption is routinely used to quantify quality of the carbon nanotube powders. The spectrum is analyzed in terms of intensities of nanotube-related peaks, background and pi-carbon peak; the latter two mostly originate from non-nanotube carbon in contaminated samples. However, it has been recently shown that by aggregating nearly single chirality semiconducting nanotubes into closely packed Van der Waals bundles the absorption background can be attributed to free carrier transition originating from intertube charge transfer. Carbon nanotubes as a black body An ideal black body should have emissivity or absorbance of 1.0, which is difficult to attain in practice, especially in a wide spectral range. Vertically aligned "forests" of single-wall carbon nanotubes can have absorbances of 0.98–0.99 from the far-ultraviolet (200 nm) to far-infrared (200 μm) wavelengths. These SWNT forests (buckypaper) were grown by the super-growth CVD method to about 10 μm height. Two factors could contribute to strong light absorption by these structures: (i) a distribution of CNT chiralities resulted in various bandgaps for individual CNTs. Thus a compound material was formed with broadband absorption. (ii) Light might be trapped in those forests due to multiple reflections. Luminescence Photoluminescence (fluorescence) Semiconducting single-walled carbon nanotubes emit near-infrared light upon photoexcitation, described interchangeably as fluorescence or photoluminescence (PL). The excitation of PL usually occurs as follows: an electron in a nanotube absorbs excitation light via S22 transition, creating an electron-hole pair (exciton). Both electron and hole rapidly relax (via phonon-assisted processes) from c2 to c1 and from v2 to v1 states, respectively. Then they recombine through a c1 − v1 transition resulting in light emission. No excitonic luminescence can be produced in metallic tubes. Their electrons can be excited, thus resulting in optical absorption, but the holes are immediately filled by other electrons out of the many available in the metal. Therefore, no excitons are produced. Salient properties Photoluminescence from SWNT, as well as optical absorption and Raman scattering, is linearly polarized along the tube axis. This allows monitoring of the SWNTs orientation without direct microscopic observation. PL is quick: relaxation typically occurs within 100 picoseconds. PL efficiency was first found to be low (~0.01%), but later studies measured much higher quantum yields. By improving the structural quality and isolation of nanotubes, emission efficiency increased. A quantum yield of 1% was reported in nanotubes sorted by diameter and length through gradient centrifugation, and it was further increased to 20% by optimizing the procedure of isolating individual nanotubes in solution. The spectral range of PL is rather wide. Emission wavelength can vary between 0.8 and 2.1 micrometers depending on the nanotube structure. Excitons are apparently delocalized over several nanotubes in single chirality bundles as the photoluminescence spectrum displays a splitting consistent with intertube exciton tunneling. Interaction between nanotubes or between a nanotube and another material may quench or increase PL. No PL is observed in multi-walled carbon nanotubes. PL from double-wall carbon nanotubes strongly depends on the preparation method: CVD grown DWCNTs show emission both from inner and outer shells. 
However, DWCNTs produced by encapsulating fullerenes into SWNTs and annealing show PL only from the outer shells. Isolated SWNTs lying on the substrate show extremely weak PL which has been detected in only a few studies. Detachment of the tubes from the substrate drastically increases PL. The position of the (S22, S11) PL peaks depends slightly (within 2%) on the nanotube environment (air, dispersant, etc.). However, the shift depends on the (n, m) index, and thus the whole PL map not only shifts, but also warps upon changing the CNT medium. Raman scattering Raman spectroscopy has good spatial resolution (~0.5 micrometers) and sensitivity (single nanotubes); it requires only minimal sample preparation and is rather informative. Consequently, Raman spectroscopy is probably the most popular technique of carbon nanotube characterization. Raman scattering in SWNTs is resonant, i.e., only those tubes are probed which have one of the bandgaps equal to the exciting laser energy. Several scattering modes dominate the SWNT spectrum, as discussed below. Similar to photoluminescence mapping, the energy of the excitation light can be scanned in Raman measurements, thus producing Raman maps. Those maps also contain oval-shaped features uniquely identifying (n, m) indices. Contrary to PL, Raman mapping detects not only semiconducting but also metallic tubes, and it is less sensitive to nanotube bundling than PL. However, the requirement of a tunable laser and a dedicated spectrometer is a strong technical impediment. Radial breathing mode The radial breathing mode (RBM) corresponds to radial expansion-contraction of the nanotube. Therefore, its frequency νRBM (in cm−1) depends on the nanotube diameter d (in nanometers) as νRBM = A/d + B, where A and B are constants that depend on the environment in which the nanotube is present (for example, B = 0 for individual nanotubes); slightly different parameterizations are used for SWNTs and DWNTs. This relation is very useful in deducing the CNT diameter from the RBM position. The typical RBM range is 100–350 cm−1. If the RBM intensity is particularly strong, its weak second overtone can be observed at double the frequency. Bundling mode The bundling mode is a special form of RBM supposedly originating from collective vibration in a bundle of SWNTs. G mode Another very important mode is the G mode (G from graphite). This mode corresponds to planar vibrations of carbon atoms and is present in most graphite-like materials. The G band in SWNTs is shifted to lower frequencies relative to graphite (1580 cm−1) and is split into several peaks. The splitting pattern and intensity depend on the tube structure and excitation energy; they can be used, though with much lower accuracy compared to the RBM mode, to estimate the tube diameter and whether the tube is metallic or semiconducting. D mode The D mode is present in all graphite-like carbons and originates from structural defects. Therefore, the ratio of the G/D modes is conventionally used to quantify the structural quality of carbon nanotubes. High-quality nanotubes have this ratio significantly higher than 100. At low levels of functionalisation of the nanotube, the G/D ratio remains almost unchanged. This ratio gives an indication of the degree of functionalisation of a nanotube. G' mode The name of this mode is misleading: it is given because in graphite, this mode is usually the second strongest after the G mode. However, it is actually the second overtone of the defect-induced D mode (and thus should logically be named D').
Its intensity is stronger than that of the D mode due to different selection rules. In particular, the D mode is forbidden in the ideal nanotube and requires a structural defect, providing a phonon of certain angular momentum, to be induced. In contrast, the G' mode involves a "self-annihilating" pair of phonons and thus does not require defects. The spectral position of the G' mode depends on the diameter, so it can be used roughly to estimate the SWNT diameter. In particular, the G' mode is a doublet in double-wall carbon nanotubes, but the doublet is often unresolved due to line broadening. Other overtones, such as a combination of the RBM+G mode at ~1750 cm−1, are frequently seen in CNT Raman spectra. However, they are less important and are not considered here. Anti-Stokes scattering All the above Raman modes can be observed both as Stokes and anti-Stokes scattering. As mentioned above, Raman scattering from CNTs is resonant in nature, i.e. only tubes whose band gap energy is similar to the laser energy are excited. The difference between those two energies, and thus the band gap of individual tubes, can be estimated from the intensity ratio of the Stokes/anti-Stokes lines. This estimate, however, relies on the temperature factor (Boltzmann factor), which is often miscalculated – a focused laser beam is used in the measurement, which can locally heat the nanotubes without changing the overall temperature of the studied sample. Rayleigh scattering Carbon nanotubes have a very large aspect ratio, i.e., their length is much larger than their diameter. Consequently, as expected from classical electromagnetic theory, elastic light scattering (or Rayleigh scattering) by straight CNTs has an anisotropic angular dependence, and from its spectrum, the band gaps of individual nanotubes can be deduced. Another manifestation of Rayleigh scattering is the "antenna effect": an array of nanotubes standing on a substrate has specific angular and spectral distributions of reflected light, and both those distributions depend on the nanotube length. Applications Light emitting diodes (LEDs) and photo-detectors based on a single nanotube have been produced in the lab. Their unique feature is not the efficiency, which is still relatively low, but the narrow selectivity in the wavelength of emission and detection of light and the possibility of its fine tuning through the nanotube structure. In addition, bolometer and optoelectronic memory devices have been realised on ensembles of single-walled carbon nanotubes. Photoluminescence is used for characterization purposes to measure the quantities of semiconducting nanotube species in a sample. Nanotubes are isolated (dispersed) using an appropriate chemical agent ("dispersant") to reduce the intertube quenching. Then PL is measured, scanning both the excitation and emission energies and thereby producing a PL map. The ovals in the map define (S22, S11) pairs, which uniquely identify the (n, m) index of a tube. The data of Weisman and Bachilo are conventionally used for the identification. Nanotube fluorescence has been investigated for the purposes of imaging and sensing in biomedical applications. Sensitization Optical properties, including the PL efficiency, can be modified by encapsulating organic dyes (carotene, lycopene, etc.) inside the tubes. Efficient energy transfer occurs between the encapsulated dye and nanotube — light is efficiently absorbed by the dye and without significant loss is transferred to the SWNT.
Thus potentially, the optical properties of a carbon nanotube can be controlled by encapsulating certain molecules inside it. In addition, encapsulation allows isolation and characterization of organic molecules which are unstable under ambient conditions. For example, Raman spectra are extremely difficult to measure from dyes because of their strong PL (efficiency close to 100%). However, encapsulation of dye molecules inside SWNTs completely quenches dye PL, thus allowing measurement and analysis of their Raman spectra. Cathodoluminescence Cathodoluminescence (CL) — light emission excited by an electron beam — is a process commonly observed in TV screens. An electron beam can be finely focused and scanned across the studied material. This technique is widely used to study defects in semiconductors and nanostructures with nanometer-scale spatial resolution. It would be beneficial to apply this technique to carbon nanotubes. However, no reliable CL, i.e. sharp peaks assignable to certain (n, m) indices, has been detected from carbon nanotubes yet. Electroluminescence If appropriate electrical contacts are attached to a nanotube, electron-hole pairs (excitons) can be generated by injecting electrons and holes from the contacts. Subsequent exciton recombination results in electroluminescence (EL). Electroluminescent devices have been produced from single nanotubes and their macroscopic assemblies. Recombination appears to proceed via triplet-triplet annihilation giving distinct peaks corresponding to E11 and E22 transitions. Multi-walled carbon nanotubes Multi-walled carbon nanotubes (MWNT) may consist of several nested single-walled tubes, or of a single graphene strip rolled up multiple times, like a scroll. They are difficult to study because their properties are determined by contributions and interactions of all individual shells, which have different structures. Moreover, the methods used to synthesize them are poorly selective and result in a higher incidence of defects. See also Allotropes of carbon Buckypaper Carbon nanotube Carbon nanotubes in photovoltaics Graphene Hiromichi Kataura Mechanical properties of carbon nanotubes Nanoflower Potential applications of carbon nanotubes Resonance Raman spectroscopy Selective chemistry of single-walled nanotubes Vantablack, a substance produced in 2014; one of the blackest substances known References External links Selection of free-download articles on carbon nanotubes (New Journal of Physics) Publications of H. Kataura — many of the older ones are downloadable Carbon Nanotube Black Body (AIST nano tech 2009) Carbon nanotubes Carbon nanotubes
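As a worked example of the radial breathing mode relation discussed above, the RBM position can be inverted to estimate a tube diameter. A rough illustrative sketch, assuming for the constants the representative values A ≈ 248 cm−1·nm and B ≈ 0 often quoted for individual nanotubes (actual values depend on the sample and environment and should be taken from the literature for quantitative work):

    def rbm_diameter_nm(rbm_cm1, A=248.0, B=0.0):
        # Invert nu_RBM = A/d + B to obtain the diameter d in nanometers
        return A / (rbm_cm1 - B)

    print(round(rbm_diameter_nm(165.0), 2))  # a 165 cm-1 RBM peak corresponds to roughly 1.5 nm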
Optical properties of carbon nanotubes
[ "Physics" ]
4,384
[ "Materials", "Optical materials", "Matter" ]
19,727,998
https://en.wikipedia.org/wiki/Aero%20Controls
Aero Controls, Inc. is an aerospace engineering company founded in October 1984 by John Titus, the CEO and President of the Company, headquartered in Seattle, Washington. History It is a minority-owned, FAA-certified repair station. The company overhauls, repairs, sells, exchanges and modifies components of aircraft. It opened a satellite location in Shelton, Washington in 1993. In 1995 Aero Controls was presented with the Federal Express Minority Supplier of the Year Award. In 1996 the Company acquired Aero Systems Aviation Corp. in Miami. In 2006 Aero Controls Avionics was merged with Aero Controls, Inc. In 2007 the company acquired Ft. Lauderdale-based Patriot Aviation Services, LLC. The company has approximately 250 employees and revenues of 50 million. It is now headquartered in Auburn, Washington. Community service Aero Controls Inc participates in community services such as: a food drive, assisting local food banks in feeding the homeless and underprivileged in the Auburn, Kent, Federal Way, Shelton and Miami, Florida areas; the Heart Walk, benefiting the American Heart Association; Relay for Life, benefiting the American Cancer Society; a Backpack Program, providing school supplies for needy children; the Aero Controls Golf Tournament, held annually to raise money for the community service fund; Adopt-A-Family; Highway Clean Up; and the Puget Sound Blood Mobile. References External links Aero Controls Companies based in Seattle Aerospace engineering organizations Technology companies established in 1984 1984 establishments in Washington (state)
Aero Controls
[ "Engineering" ]
285
[ "Aeronautics organizations", "Aerospace engineering organizations", "Aerospace engineering" ]
18,629,502
https://en.wikipedia.org/wiki/Context-sensitive%20user%20interface
A context-sensitive user interface offers the user options based on the state of the active program. Context sensitivity is ubiquitous in current graphical user interfaces, often in context menus. A user-interface may also provide context sensitive feedback, such as changing the appearance of the mouse pointer or cursor, changing the menu color, or with auditory or tactile feedback. Reasoning and advantages of context sensitivity The primary reason for introducing context sensitivity is to simplify the user interface. Advantages include: Reduced number of commands required to be known to the user for a given level of productivity. Reduced number of clicks or keystrokes required to carry out a given operation. Allows consistent behaviour to be pre-programmed or altered by the user. Reduces the number of options needed on screen at one time. Disadvantages Context sensitive actions may be perceived as dumbing down of the user interface, leaving the operator at a loss as to what to do when the computer decides to perform an unwanted action. Additionally non-automatic procedures may be hidden or obscured by the context sensitive interface causing an increase in user workload for operations the designers did not foresee. A poor implementation can be more annoying than helpful – a classic example of this is Office Assistant. Implementation At the simplest level each possible action is reduced to a single most likely action – the action performed is based on a single variable (such as file extension). In more complicated implementations multiple factors can be assessed such as the user's previous actions, the size of the file, the programs in current use, metadata etc. The method is not only limited to the response to imperative button presses and mouse clicks – pop-up menus can be pruned and/or altered, or a web search can focus results based on previous searches. At higher levels of implementation context sensitive actions require either larger amounts of meta-data, extensive case analysis based programming, or other artificial intelligence algorithms. In computer and video games Context sensitivity is important in video games, especially those controlled by a gamepad, joystick or computer mouse in which the number of buttons available is limited. It is primarily applied when the player is in a certain place and is used to interact with a person or object. For example, if the player is standing next to a non-player character, an option may come up allowing the player to talk with him/her. Implementations range from the embryonic 'Quick Time Event' to context sensitive sword combat in which the attack used depends on the position and orientation of both the player and opponent, as well as the virtual surroundings. A similar range of use is found in the 'action button' which, depending upon the in-game position of the player's character, may cause it to pick something up, open a door, grab a rope, punch a monster or opponent, or smash an object. The response does not have to be player activated – an on-screen device may only be shown in certain circumstances, e.g. 'targeting' cross hairs in a flight combat game may indicate the player should fire. An alternative implementation is to monitor the input from the player (e.g. level of button pressing activity) and use that to control the pace of the game in an attempt to maximize enjoyment or to control the excitement or ambience. 
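The single most-likely-action dispatch described under Implementation above can be illustrated with a small, purely hypothetical sketch of a game "action button", in which the same input resolves to different actions depending on what the player is near (the tags and actions are invented for illustration):

    def action_for_context(nearby):
        # 'nearby' is a hypothetical tag for the closest interactive object
        actions = {
            "npc": "talk",
            "item": "pick up",
            "door": "open",
            "rope": "grab",
            "enemy": "attack",
        }
        # Fall back to a harmless default when no context applies
        return actions.get(nearby, "do nothing")

    print(action_for_context("door"))  # open
    print(action_for_context("npc"))   # talk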
The method has become increasingly important as more complex games are designed for machines with few buttons (keyboard-less consoles). Bennet Ring commented (in 2006) that "Context-sensitive is the new lens flare". Context-sensitive help Context-sensitive help is a common implementation of context sensitivity: a single help button is actioned, and the help system opens a page, menu or topic specific to the user's current context. See also Autocomplete Autofill Autotype Combo box Context awareness DWIM "Do What I Mean" Principle of least astonishment (PLA/POLA) Quick time event (QTE) References Citations Sources Human–computer interaction Ergonomics Video game design Applications of artificial intelligence
Context-sensitive user interface
[ "Engineering" ]
806
[ "Human–computer interaction", "Human–machine interaction" ]
22,359,019
https://en.wikipedia.org/wiki/Oudemansiella%20australis
Oudemansiella australis is a species of gilled mushroom in the family Physalacriaceae. It is found in Australasia, where it grows on rotting wood. It produces fruit bodies that are white, with caps up to in diameter, attached to short, thick stems. Taxonomy and classification The species was reported as new to science by Greta Stevenson and G.M. Taylor in a 1964 publication, based on a specimen found in March, 1961. According to the 1986 arrangement of Pegler and Young, based largely on spore structure, Oudemansiella australis is classified in the section Oudemansiella of genus Oudemansiella, along with the species O. mucida, O. venesolamellata, and O. canarii. In a more recent classification proposed by Yang and colleagues, O. australis is in section Oudemansiella, which contains tropical to south temperate species, such as O. platensis, O. canarii and O. crassifolia. These species are characterised by having an ixotrichoderm cap cuticle, meaning it is made of gelatinized filamentous hyphae of different lengths arranged in roughly parallel fashion. These hyphae are often mixed with inflated cells that usually occur in chains. New Zealand mycologist Geoff Ridley has proposed the common name "porcelain slimecap" for the mushroom. Description Oudemansiella australis mushrooms have a cap that is in diameter, and initially white becoming a light yellowish brown (fawn) in age. It has a convex shape, but splits at the margins. The cap cuticle splits irregularly to reveal firm white flesh underneath. The gills are adnate, powdery white, and moderately distantly spaced. They are long and short intercalated, with deep with ribs at the base. The stem is long by thick, attached off-centre to the cap. It is white on the upper part, changing to fawn around the swollen base. The flesh is solid, white, and silky. The spore print is white. Spores are spherical or nearly so, measuring 24 by 21 μm, with thick walls (about 1 μm). They are non-amyloid, and have a prominent hilar appendix (a depression in the surface where the spore was once connected to the sterigmata). Habitat and distribution The fungus grows on rotting wood. The first recorded collection was made in the open near a forest in Wainui Valley, Wellington. It has since been found in Australia and Papua New Guinea. References Physalacriaceae Fungi described in 1964 Fungi of Australia Taxa named by Greta Stevenson Fungi of New Guinea Fungus species
Oudemansiella australis
[ "Biology" ]
564
[ "Fungi", "Fungus species" ]
22,359,420
https://en.wikipedia.org/wiki/C2H4N4
{{DISPLAYTITLE:C2H4N4}} The molecular formula C2H4N4 (molar mass: 84.08 g/mol, exact mass: 84.0436 u) may refer to: 3-Amino-1,2,4-triazole (3-AT), a herbicide 2-Cyanoguanidine
C2H4N4
[ "Chemistry" ]
78
[ "Isomerism", "Set index articles on molecular formulas" ]
22,359,636
https://en.wikipedia.org/wiki/Simplicial%20sphere
In geometry and combinatorics, a simplicial (or combinatorial) d-sphere is a simplicial complex homeomorphic to the d-dimensional sphere. Some simplicial spheres arise as the boundaries of convex polytopes; however, in higher dimensions most simplicial spheres cannot be obtained in this way. One important open problem in the field was the g-conjecture, formulated by Peter McMullen, which asks about possible numbers of faces of different dimensions of a simplicial sphere. In December 2018, the g-conjecture was proven by Karim Adiprasito in the more general context of rational homology spheres. Examples For any n ≥ 3, the simple n-cycle Cn is a simplicial circle, i.e. a simplicial sphere of dimension 1. This construction produces all simplicial circles. The boundary of a convex polyhedron in R3 with triangular faces, such as an octahedron or icosahedron, is a simplicial 2-sphere. More generally, the boundary of any (d+1)-dimensional compact (or bounded) simplicial convex polytope in Euclidean space is a simplicial d-sphere. Properties It follows from Euler's formula that any simplicial 2-sphere with n vertices has 3n − 6 edges and 2n − 4 faces. The case of n = 4 is realized by the tetrahedron. By repeatedly performing barycentric subdivision, it is easy to construct a simplicial sphere for any n ≥ 4. Moreover, Ernst Steinitz gave a characterization of 1-skeleta (or edge graphs) of convex polytopes in R3 implying that any simplicial 2-sphere is the boundary of a convex polytope. Branko Grünbaum constructed an example of a non-polytopal simplicial sphere (that is, a simplicial sphere that is not the boundary of a polytope). Gil Kalai proved that, in fact, "most" simplicial spheres are non-polytopal. The smallest example is of dimension d = 4 and has f0 = 8 vertices. The upper bound theorem gives upper bounds for the numbers fi of i-faces of any simplicial d-sphere with f0 = n vertices. This conjecture was proved for simplicial convex polytopes by Peter McMullen in 1970 and by Richard Stanley for general simplicial spheres in 1975. The g-conjecture, formulated by McMullen in 1970, asks for a complete characterization of f-vectors of simplicial d-spheres. In other words, what are the possible sequences of numbers of faces of each dimension for a simplicial d-sphere? In the case of polytopal spheres, the answer is given by the g-theorem, proved in 1979 by Billera and Lee (existence) and Stanley (necessity). It has been conjectured that the same conditions are necessary for general simplicial spheres. The conjecture was proved by Karim Adiprasito in December 2018. See also Dehn–Sommerville equations References Algebraic combinatorics Topology
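For reference, the edge and face counts quoted above follow from a short standard argument: for a simplicial 2-sphere with n vertices, E edges and F (triangular) faces, Euler's formula gives n − E + F = 2, and counting edge–face incidences gives 2E = 3F, since every face has three edges and every edge lies in exactly two faces. Eliminating F yields E = 3n − 6 and F = 2n − 4; for the tetrahedron (n = 4) this gives 6 edges and 4 faces, and for the icosahedron (n = 12) it gives 30 edges and 20 faces.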
Simplicial sphere
[ "Physics", "Mathematics" ]
675
[ "Combinatorics", "Fields of abstract algebra", "Topology", "Space", "Geometry", "Spacetime", "Algebraic combinatorics" ]
22,361,363
https://en.wikipedia.org/wiki/Tax1
Tax1 is a PDZ domain-containing oncoprotein encoded by HTLV-1. References Proteins
Tax1
[ "Chemistry" ]
25
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
22,366,849
https://en.wikipedia.org/wiki/Rio%20Grande%20Project
The Rio Grande Project is a United States Bureau of Reclamation irrigation, hydroelectricity, flood control, and interbasin water transfer project serving the upper Rio Grande basin in the southwestern United States. The project irrigates along the river in the states of New Mexico and Texas. Approximately 60 percent of this land is in New Mexico. Some water is also allotted to Mexico to irrigate some on the south side of the river. The project was authorized in 1905, but its final features were not implemented until the early 1950s. The project consists of two large storage dams, 6 small diversion dams, two flood-control dams, of canals and their branches and of drainage channels and pipes. A small hydroelectric plant at one of the project's dams also supplies electricity to the region. History Long before Texas was a state, the Pueblo Indians used the waters of the Rio Grande with simple irrigation systems that were noted by the Spanish in the 16th century while conducting expeditions from Mexico to North America. In the mid-19th century, American settlers began intensive irrigation development of the Rio Grande watershed. Small dikes, dams, canals, and other irrigation works were constructed along the Rio Grande and its tributaries. The river would take out some of these primitive structures in its annual floods, and a large, coordinated project would be needed to construct permanent replacements. However, investigations to begin this project did not begin until the early twentieth century. Like many rivers of the American Southwest, runoff in the Rio Grande basin is limited and varies widely from year to year. By the 1890s, water use in the upper basin was so great that the river's flow near El Paso, Texas, was reduced to a trickle in dry summers. To resolve these problems, plans were drafted up for a large storage dam at Elephant Butte, about downstream of Albuquerque, New Mexico. The Newlands Reclamation Act was passed in 1902, authorizing the Rio Grande Project as a Bureau of Reclamation undertaking. For the next two years, surveyors and engineers undertook a comprehensive feasibility study for the project's dams and reservoirs. The first elements of the project to be built were the Leasburg Diversion Dam and about of supporting canal, begun in 1906 and finished in 1908. Elephant Butte Dam, the largest dam on the Rio Grande, was authorized by the United States Congress on February 15, 1905. Construction began in 1908, when groundworks were laid. Conflicts over the lands to be submerged under the future reservoir bogged down the project for a while, but work resumed in 1912 and the reservoir began to fill by 1915. The Franklin Canal was an existing 1890 canal purchased by the Bureau of Reclamation in 1912 and rebuilt from 1914 to 1915. The Mesilla and Percha Diversion Dams, East Side Canal, West Side Canal, Rincon Valley Canal, and an extension of the Leasburg Canal were built in the period between 1914 and 1919. In the late 1910s, a problem developed with rising local groundwater levels caused by irrigation. In response, Reclamation began planning for the extensive drainage system of the Rio Grande Project in 1916. Contracts for the construction of these drainage systems, as well as distribution canals (laterals) were not awarded until the period from 1917 to 1918. Before 1929, the entire irrigation system would be overhauled. This involved repairing, rebuilding and extending old canals; and construction of new laterals. 
Work is still in progress, as agricultural development in the region continues to grow. The last major components of the project were constructed from the 1930s to the early 1950s. Caballo Dam, the second major storage facility of the project located 21 miles south of Truth or Consequences, New Mexico was built from 1936 to 1938. Caballo was built to provide flood protection for the projects downstream, stabilize outflows from Elephant Butte, and replace storage lost in Elephant Butte Reservoir due to sedimentation. With the benefit of flow regulation, a small hydroelectric plant was completed in 1940 at the base of Elephant Butte Dam. The construction of power transmission lines was begun in 1940, and was finally completed by 1952. The Elephant Butte Irrigation District is a historic district providing recognition and limited protection for the history of much of the system, which was listed on the National Register of Historic Places in 1997. The listing included three contributing buildings and 214 contributing structures. Noted as historic are the diversion dams and the unlined irrigation canals; most of the mechanical fixtures in the system have been routinely replaced and are non-historic. Components of the project Elephant Butte Dam The Elephant Butte Dam (also referred to as Elephant Butte Dike) is the main storage facility for the Rio Grande Project. It is a long concrete gravity dam standing above the river and high from its foundations. The dam is thick at the base and tapers to about thick at the crest. The dam took of material to construct. The full volume of Elephant Butte Reservoir is some , accounting for about 85% of the project's storage capacity. The outlet works of the dam can release , while the service spillway can release . The reservoir and dam receive water from a catchment of , about 16% of the Rio Grande's total drainage area. The Elephant Butte hydroelectric station is a base load power plant that draws water from the reservoir and has a capacity of 27.95 megawatts. Caballo Dam Caballo Dam is the second major storage dam of the Rio Grande Project, located about below Elephant Butte. The dam is high above the river, high from its foundations, and long. It forms the Caballo Reservoir, which can store up to of water. The outlet works can release cubic feet per second, while the spillway has a capacity of per second. The dam has no power generation facilities, although it has been proposed that a small hydroelectric plant be installed at its base for local irrigation districts. Percha Diversion Dam and Rincon Valley Main Canal Percha Diversion Dam lies downstream from and west of the Caballo Dam. It consists of a concrete overflow section flanked by earthen wing dikes totaling in length, standing high above the riverbed and above its foundations. . The dam diverts water into the Rincon Valley Main Canal, which is long and has a capacity of . Water from the canal irrigates of land in the Rincon Valley. Leasburg Diversion Dam and Canal Leasburg Diversion Dam is downstream and nearly identical in design to the Percha Diversion Dam. It is high above the river and high above its foundations. The dam and adjacent dikes total in length. The dam's spillway is a broad-crested weir about long with a capacity of . The dam diverts water into the Leasburg Canal, which irrigates of land in the upper Mesilla Valley. The canal has a capacity of per second. 
Pichacho North and South Dams Pichacho North and Pichacho South dams impound North Pichacho Arroyo and South Pichacho Arroyo, respectively, to provide flood protection for the Leasburg Canal. Both arroyos are ephemeral, and so the dams operate only during storm events. The dams were both built in the 1950s. Pichacho North is an earthfill dam high above the streambed, high above its foundations, and long. It has an uncontrolled crest spillway that is long. It controls floods from a drainage area of . Pichacho South stands high above the arroyo and from its foundations. The dam is long. Its spillway is of similar design to that of North Pichacho, and is long. The dam provides flood protection for an area of . Mesilla Diversion Dam and Canals The Mesilla Diversion Dam is located about upstream of El Paso and consists of a gated overflow structure. The dam is high above the Rio Grande, high above its foundations, and measures long. The spillway has a capacity of . The dam diverts water into the East Side Canal and West Side Canal, which provide irrigation water to of land in the lower Mesilla Valley. The East Side Canal is long, and has a capacity of . The West Side Canal is larger at long, and has a capacity of . Near its end, the West Side Canal crosses underneath the Rio Grande via the Montoya Siphon. American Diversion Dam and Canals The American Diversion Dam is a gated dam flanked by earthen dikes about northwest of El Paso and just above the Mexico–United States border. It is high above the riverbed, and from crest to foundation. The spillway is long and has a capacity of . The dam diverts water into the American Canal, which carries up to of water for to the beginning of the Franklin Canal. The Franklin Canal is long and takes water into the El Paso Valle, where it irrigates . Riverside Diversion Dam and Canals Riverside Diversion Dam is the lowermost dam of the Rio Grande Project. The dam is above the streambed, above its foundations, and long. Its service spillway consists of six x radial gates, and an uncontrolled overflow weir serves as an emergency spillway. The Riverside Canal carries water to the El Paso Valley, and has a capacity of about . The Tornillo Canal, with a capacity of , branches off the Riverside Canal. Excess waters from the canals are diverted to irrigate about in Hudspeth County, Texas. Effects Benefits The Rio Grande Project furnishes irrigation water year-round to a long, narrow area of in the Rio Grande Valley in south-central New Mexico and western Texas. Crops grown in the region include grain, pecans, alfalfa, cotton, and many types of vegetables. Power generated at the Elephant Butte power plant is distributed through an electrical grid totaling of 115-kilovolt transmission lines and 11 substations. Originally built by Reclamation, the power grid remained under its ownership until 1977, when it was sold to a local company. Caballo and Elephant Butte reservoirs are both popular recreational areas. Elephant Butte Reservoir, with of water at full pool, is popular for swimming, boating, and fishing. Cabins, fishing tackle, and boat rental services are available at the reservoir. Downstream Caballo Reservoir, with an area of , is also a popular site for picnicking, fishing and boating. Elephant Butte Lake State Park and Caballo Lake State Park serve the two reservoirs, respectively. Negative impacts Even before the Rio Grande Project, the waters of the Rio Grande were already overtaxed by human development in the region. 
At the end of the 19th century, there were some 925 diversions of the river in the state of Colorado alone. In 1896, it was affirmed by the United States Geological Survey (USGS) that the river's flow was decreasing by annually. The river has run dry many times since the 1950s at Big Bend National Park. At El Paso, Texas, the river is non-existent for much of the year. Tributaries of the river, both on the Mexican and American sides, have been diverted heavily for irrigation. The Rio Grande is said to be "one of the most stressed river basins in the world". In 2001, the river failed to reach the Gulf of Mexico but instead ended from the shore behind a sandbar, "not with a roar but with a whimper in the sand". The river's decreasing flow has posed problems for international security. In the past, the river was wide, deep and fast-flowing in its section through Texas, where it forms a large section of the Mexico–United States border. Illegal immigrants once had to swim across the river at the border, but with the river so low immigrants need only wade across for most of the year. Other than extensive diversions, exotic introduced, fast-growing and water-consuming plants, such as water hyacinth and hydrilla, are also leading to reduced flows. The United States government has recently attempted to slow or stop the progress of these weeds by introducing insects and fish that feed on the invasive plants. See also Colorado River Storage Project Rio Grande Rectification Project Rio Grande dams and diversions References External links Rio Grande Project History Allocation of the Rio Grande Rio Grande Dams in New Mexico Engineering projects History of the American West United States Bureau of Reclamation 1905 establishments in the United States
Rio Grande Project
[ "Engineering" ]
2,478
[ "nan" ]
22,367,141
https://en.wikipedia.org/wiki/Gunnar%20K%C3%A4ll%C3%A9n
Anders Olof Gunnar Källén (13 February 1926 – 13 October 1968) was a Swedish theoretical physicist and professor at Lund University, known for his work on correlation functions in quantum field theory. He died at the age of 42 as a result of a plane crash. Biography Anders Olof Gunnar Källén was born in 1926 in Kristianstad, Sweden. His father, Yngve Källén, was a teacher of physics and mathematics, and together they published a paper on the theory of relativity. Gunnar's brother was the embryologist . Källén earned his doctorate at Lund in 1950 working with Torsten Gustafson, who was in close correspondence with Wolfgang Pauli. He worked at CERN's theoretical division from 1952 to 1957, which, at that time, was situated at the Institute for Theoretical Physics of the University of Copenhagen, later to be named Niels Bohr Institute. Afterwards he worked at Nordita from 1957 to 1958, and then began a professorship at Lund University. Källén's research focused on quantum field theory and elementary particle physics. His developments included the so-called Källén–Lehmann spectral representation of correlation functions in quantum field theory, and he made contributions to quantum electrodynamics, especially in renormalization. He also worked with the axiomatic formulation of quantum field theory, which led to contributions to the theory of functions of several complex variables, and collaborated on the Pauli–Källén equation. The Källén function, as well as the Källén–Yang–Feldman formalism and the Källén-Sabry potentials are named after him. Plane crash Källén was an avid pilot, who, being fascinated by flying from his childhood on, started taking lessons in 1964. He was flying a Piper PA-28 Cherokee Arrow from Malmö to CERN when his plane crashed during an emergency landing in Hanover, Germany in 1968. His two passengers, one of them his wife, survived the crash. Many years after his death, Cecilia Jarlskog edited the book Portrait of Gunnar Källén: A Physics Shooting Star and Poet of Early Quantum Field Theory (Springer, 2013) with 9 invited contributors, all of whom had a personal acquaintance with Källén. The book consists mainly of testimonies by Källén's colleagues. Steven Weinberg, whose first published physics paper was motivated by Källén, wrote one of the book's chapters. The chapter deals with Källén's research and is the written version of a 2009 lecture by Weinberg. Bibliography G. Källén, Quantenelektrodynamik, Handbuch der Physik (Springer-Verlag, Berlin, 1958) G. Källén, Elementary Particle Physics (Addison-Wesley, Reading, Massachusetts, 1964) G. Källén, Quantum Electrodynamics (Springer-Verlag, Berlin, 1972); 2013 pbk reprint References Further reading A. S. Wightman. Gunnar Källén 1926–1968, Comm. Math. Phys. 11 (1968) 181–182 C. Jarlskog (ed.) "Portrait of Gunnar Källén : A physics shooting star and poet of early quantum field theory" (Springer Verlag, , 2014) 1926 births 1968 deaths Swedish physicists Theoretical physicists People associated with CERN Victims of aviation accidents or incidents in 1968 Members of the Royal Swedish Academy of Sciences Aviators killed in aviation accidents or incidents in Germany
Gunnar Källén
[ "Physics" ]
692
[ "Theoretical physics", "Theoretical physicists" ]
3,559,472
https://en.wikipedia.org/wiki/Axiomatic%20quantum%20field%20theory
Axiomatic quantum field theory is a mathematical discipline which aims to describe quantum field theory in terms of rigorous axioms. It is strongly associated with functional analysis and operator algebras, but has also been studied in recent years from a more geometric and functorial perspective. There are two main challenges in this discipline. First, one must propose a set of axioms which describe the general properties of any mathematical object that deserves to be called a "quantum field theory". Then, one gives rigorous mathematical constructions of examples satisfying these axioms. Analytic approaches Wightman axioms The first set of axioms for quantum field theories, known as the Wightman axioms, was proposed by Arthur Wightman in the early 1950s. These axioms attempt to describe QFTs on flat Minkowski spacetime by regarding quantum fields as operator-valued distributions acting on a Hilbert space. In practice, one often uses the Wightman reconstruction theorem, which guarantees that the operator-valued distributions and the Hilbert space can be recovered from the collection of correlation functions. Osterwalder–Schrader axioms The correlation functions of a QFT satisfying the Wightman axioms can often be analytically continued from Lorentz signature to Euclidean signature. (Crudely, one replaces the time variable with imaginary time; the resulting factors of i change the sign of the time-time components of the metric tensor.) The resulting functions are called Schwinger functions. For the Schwinger functions there is a list of conditions — analyticity, permutation symmetry, Euclidean covariance, and reflection positivity — which a set of functions defined on various powers of Euclidean space-time must satisfy in order to be the analytic continuation of the set of correlation functions of a QFT satisfying the Wightman axioms. Haag–Kastler axioms The Haag–Kastler axioms axiomatize QFT in terms of nets of algebras. Euclidean CFT axioms These axioms are used in the conformal bootstrap approach to conformal field theory. They are also referred to as Euclidean bootstrap axioms. See also Dirac–von Neumann axioms References Quantum field theory
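In the simplest convention, the analytic continuation mentioned above amounts to a Wick rotation: writing t = −iτ with τ real turns the Minkowski interval ds² = dt² − dx² (suppressing the other spatial directions) into −(dτ² + dx²), i.e. a Euclidean metric up to an overall sign, and the Schwinger functions are the Wightman correlation functions evaluated at these imaginary, suitably ordered times. Sign and ordering conventions vary between presentations.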
Axiomatic quantum field theory
[ "Physics" ]
450
[ "Quantum field theory", "Quantum mechanics" ]
3,559,509
https://en.wikipedia.org/wiki/Background%20field%20method
In theoretical physics, the background field method is a useful procedure for calculating the effective action of a quantum field theory by expanding a quantum field around a classical "background" value B: φ(x) = B(x) + η(x). After this is done, the Green's functions are evaluated as a function of the background. This approach has the advantage that the gauge invariance is manifestly preserved if the approach is applied to gauge theory. Method We typically want to calculate expressions like Z[J] = ∫ Dφ exp( i ∫ d^d x ( L[φ(x)] + J(x)φ(x) ) ), where J(x) is a source, L is the Lagrangian density of the system, d is the number of dimensions and φ is a field. In the background field method, one starts by splitting this field into a classical background field B(x) and a field η(x) containing additional quantum fluctuations: φ(x) = B(x) + η(x). Typically, B(x) will be a solution of the classical equations of motion δS/δφ(x) |φ=B = 0, where S is the action, i.e. the space integral of the Lagrangian density. Switching on a source J(x) will change the equations into δS/δφ(x) |φ=B = −J(x). Then the action is expanded around the background B(x): S[B+η] + ∫ d^d x J(B+η) = S[B] + ∫ d^d x J B + ∫ d^d x η(x) ( δS/δφ(x) |φ=B + J(x) ) + (1/2) ∫ d^d x d^d y η(x) η(y) δ²S/δφ(x)δφ(y) |φ=B + … . The second term in this expansion is zero by the equations of motion. The first term does not depend on any fluctuating fields, so that it can be brought out of the path integral. The result is Z[J] = exp( i S[B] + i ∫ d^d x J B ) ∫ Dη exp( (i/2) ∫ d^d x d^d y η(x) η(y) δ²S/δφ(x)δφ(y) |φ=B + … ). The path integral which now remains is (neglecting the corrections in the dots) of Gaussian form and can be integrated exactly: Z[J] = C [ det( δ²S/δφδφ |φ=B ) ]^(−1/2) exp( i S[B] + i ∫ d^d x J B ), where "det" signifies a functional determinant and C is a constant. The power of minus one half will naturally be plus one for Grassmann fields. The above derivation gives the Gaussian approximation to the functional integral. Corrections to this can be computed, producing a diagrammatic expansion. See also BF theory Effective action Source field References Quantum field theory
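In the conventions sketched above, keeping only the Gaussian term reproduces the familiar one-loop expression for the effective action evaluated on the background, Γ[B] ≈ S[B] + (i/2) Tr ln( δ²S/δφδφ |φ=B ) + …, with the sign of the one-loop term reversed for Grassmann fields; the neglected cubic and higher fluctuation terms generate the diagrammatic corrections mentioned above. The overall factors depend on the sign and normalization conventions chosen for the action and the path integral.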
Background field method
[ "Physics" ]
357
[ "Quantum field theory", "Quantum mechanics" ]
3,559,586
https://en.wikipedia.org/wiki/Chiral%20symmetry%20breaking
In particle physics, chiral symmetry breaking generally refers to the dynamical spontaneous breaking of a chiral symmetry associated with massless fermions. This is usually associated with a gauge theory such as quantum chromodynamics, the quantum field theory of the strong interaction, and it also occurs through the Brout-Englert-Higgs mechanism in the electroweak interactions of the standard model. This phenomenon is analogous to magnetization and superconductivity in condensed matter physics. The basic idea was introduced to particle physics by Yoichiro Nambu, in particular, in the Nambu–Jona-Lasinio model, which is a solvable theory of composite bosons that exhibits dynamical spontaneous chiral symmetry breaking when a 4-fermion coupling constant becomes sufficiently large. Nambu was awarded the 2008 Nobel prize in physics "for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics". Overview Quantum chromodynamics Massless fermions in 4 dimensions are described by either left- or right-handed spinors that each have 2 complex components. These have spin either aligned (right-handed chirality), or counter-aligned (left-handed chirality), with their momenta. In this case the chirality is a conserved quantum number of the given fermion, and the left and right handed spinors can be independently phase transformed. More generally they can form multiplets under some symmetry group. A Dirac mass term explicitly breaks the chiral symmetry. In quantum electrodynamics (QED) the electron mass unites left and right handed spinors forming a 4 component Dirac spinor. In the absence of mass and quantum loops, QED would have a chiral symmetry, but the Dirac mass of the electron breaks this to a single symmetry that allows a common phase rotation of left and right together, which is the gauge symmetry of electrodynamics. (At the quantum loop level, the chiral symmetry is broken, even for massless electrons, by the chiral anomaly, but the gauge symmetry is preserved, which is essential for consistency of QED.) In QCD, the gauge theory of strong interactions, the lowest mass quarks are nearly massless and an approximate chiral symmetry is present. In this case the left- and right-handed quarks are interchangeable in bound states of mesons and baryons, so an exact chiral symmetry of the quarks would imply "parity doubling", and every state should appear in a pair of equal mass particles, called "parity partners". In spin–parity notation, a meson of given spin would therefore have the same mass as a partner meson of opposite parity. Experimentally, however, it is observed that the masses of the pseudoscalar mesons (such as the pion) are much smaller than those of any of the other particles in the spectrum. The low masses of the pseudoscalar mesons, as compared to the heavier states, are also quite striking. The next heavier states are the vector mesons, such as the rho meson, and the scalar and axial-vector mesons are heavier still, appearing as short-lived resonances far (in mass) from their parity partners. This is a primary consequence of the phenomenon of spontaneous breaking of chiral symmetry in the strong interactions. In QCD, the fundamental fermion sector consists of three "flavors" of light quarks, in increasing mass order: up, down, and strange (as well as three flavors of heavy quarks: charm, bottom, and top). If we assume the light quarks are ideally massless (and ignore electromagnetic and weak interactions), then the theory has an exact global chiral flavor symmetry.
Under spontaneous symmetry breaking, the chiral symmetry is spontaneously broken to the "diagonal flavor SU(3) subgroup", generating low mass Nambu–Goldstone bosons. These are identified with the pseudoscalar mesons seen in the spectrum, and form an octet representation of the diagonal SU(3) flavor group. Beyond the idealization of massless quarks, the actual small quark masses (and electroweak forces) explicitly break the chiral symmetry as well. This can be described by a chiral Lagrangian where the masses of the pseudoscalar mesons are determined by the quark masses, and various quantum effects can be computed in chiral perturbation theory. This can be confirmed more rigorously by lattice QCD computations, which show that the pseudoscalar masses vary with the quark masses as dictated by chiral perturbation theory, (effectively as the square-root of the quark masses). The three heavy quarks: the charm quark, bottom quark, and top quark, have masses much larger than the scale of the strong interactions, thus they do not display the features of spontaneous chiral symmetry breaking. However bound states consisting of a heavy quark and a light quark (or two heavies and one light) still display a universal behavior, where the ground states are split from the parity partners by a universal mass gap of about (confirmed experimenally by the ) due to the light quark chiral symmetry breaking (see below). Light Quarks and Mass Generation If the three light quark masses of QCD are set to zero, we then have a Lagrangian with a symmetry group : Note that these symmetries, called "flavor-chiral" symmetries, should not be confused with the quark "color" symmetry, that defines QCD as a Yang-Mills gauge theory and leads to the gluonic force that binds quarks into baryons and meson. In this article we will not focus on the binding dynamics of QCD where quarks are confined within the baryon and meson particles that are observed in the laboratory (see Quantum chromodynamics). A static vacuum condensate can form, composed of bilinear operators involving the quantum fields of the quarks in the QCD vacuum, known as a fermion condensate. This takes the form : driven by quantum loop effects of quarks and gluons, with The condensate is not invariant under independent or rotations, but is invariant under common rotations. The pion decay constant, may be viewed as the measure of the strength of the chiral symmetry breaking. The quark condensate is induced by non-perturbative strong interactions and spontaneously breaks the down to the diagonal vector subgroup ; (this contains as a subgroup the original symmetry of nuclear physics called isospin, which acts upon the up and down quarks). The unbroken subgroup of constitutes the original pre-quark idea of Gell-Mann and Ne'eman known as the "Eightfold Way" which was the original successful classification scheme of the elementary particles including strangeness. The symmetry is anomalous, broken by gluon effects known as instantons and the corresponding meson is much heavier than the other light mesons. Chiral symmetry breaking is apparent in the mass generation of nucleons, since no degenerate parity partners of the nucleon appear. Chiral symmetry breaking and the quantum conformal anomaly account for approximately 99% of the mass of a proton or neutron, and these effects thus account for most of the mass of all visible matter (the proton and neutron, which form the nuclei of atoms, are baryons, called nucleons). 
For example, the proton, of mass contains two up quarks, each with explicit mass and one down quark with explicit mass . Naively, the light quark explicit masses only contribute a total of about 9.4 MeV to the proton's mass. For the light quarks the chiral symmetry breaking condensate can be viewed as inducing the so-called constituent quark masses. Hence, the light up quark, with explicit mass and down quark with explicit mass now acquire constituent quark masses of about . QCD then leads to the baryon bound states, which each contain combinations of three quarks (such as the proton (uud) and neutron (udd)). The baryons then acquire masses given, approximately, by the sums of their constituent quark masses. Nambu-Goldstone bosons One of the most spectacular aspects of spontaneous symmetry breaking, in general, is the phenomenon of the Nambu–Goldstone bosons. In QCD these appear as approximately massless particles. corresponding to the eight broken generators of the original They include eight mesons, the pions, kaons and the eta meson. These states have small masses due to the explicit masses of the underlying quarks and as such are referred to as "pseudo-Nambu-Goldstone bosons" or "pNGB's". pNGB's are a general phenomenon and arise in any quantum field theory with both spontaneous and explicit symmetry breaking, simultaneously. These two types of symmetry breaking typically occur separately, and at different energy scales, and are not predicated on each other. The properties of these pNGB's can be calculated from chiral Lagrangians, using chiral perturbation theory, which expands around the exactly symmetric zero-quark mass theory. In particular, the computed mass must be small. Technically, the spontaneously broken chiral symmetry generators comprise the coset space This space is not a group, and consists of the eight axial generators, corresponding to the eight light pseudoscalar mesons, the nondiagonal part of Heavy-light mesons Mesons containing a heavy quark, such as charm (D meson) or beauty, and a light anti-quark (either up, down or strange), can be viewed as systems in which the light quark is "tethered" by the gluonic force to the fixed heavy quark, like a ball tethered to a pole. These systems give us a view of the chiral symmetry breaking in its simplest form, that of a single light-quark state. In 1994 William A. Bardeen and Christopher T. Hill studied the properties of these systems implementing both the heavy quark symmetry and the chiral symmetries of light quarks in a Nambu–Jona-Lasinio model approximation. They showed that chiral symmetry breaking causes the s-wave ground states (spin) to be split from p-wave parity partner excited states by a universal "mass gap", . The Nambu–Jona-Lasinio model gave an approximate estimate of the mass gap of which would be zero if the chiral symmetry breaking was turned off. The excited states of non-strange, heavy-light mesons are usually short-lived resonances due to the principal strong decay mode and are therefore hard to observe. Though the results were approximate, they implied the charm-strange excited mesons could be abnormally narrow (long-lived) since the principal decay mode, would be blocked, owing to the mass of the kaon (). In 2003 the was discovered by the BaBar collaboration, and was seen to be surprisingly narrow, with a mass gap above the of within a few percent of the model prediction (also the more recently confirmed heavy quark spin-symmetry partner, ). 
Bardeen, Eichten and Hill predicted, using the chiral Lagrangian, numerous observable decay modes which have been confirmed by experiments. Similar phenomena should be seen in the mesons and heavy-heavy-strange baryons. See also Conformal anomaly Little Higgs Top Quark Condensate Footnotes References Quantum field theory Quantum chromodynamics Mathematical physics Asymmetry
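For reference, the quark-mass dependence of the pseudoscalar masses described above is conventionally summarized by the Gell-Mann–Oakes–Renner relation, schematically m_π² f_π² ≈ (m_u + m_d) |⟨q̄q⟩| (the precise numerical factors depend on the normalization conventions used for f_π and the condensate), which makes explicit that the pion mass grows as the square root of the light quark masses at fixed condensate.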
Chiral symmetry breaking
[ "Physics", "Mathematics" ]
2,438
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Asymmetry", "Mathematical physics", "Symmetry" ]
3,559,710
https://en.wikipedia.org/wiki/Isovector
In particle physics, isovector refers to the vector transformation of a particle under the SU(2) group of isospin. An isovector state is a triplet state with total isospin 1, with the third component of isospin either 1, 0, or −1, much like a triplet state in the two-particle addition of spin. See also Isoscalar References Bosons
Isovector
[ "Physics" ]
86
[ "Matter", "Bosons", "Particle physics", "Particle physics stubs", "Subatomic particles" ]
3,559,814
https://en.wikipedia.org/wiki/Bootstrap%20model
The term "bootstrap model" is used for a class of theories that use very general consistency criteria to determine the form of a quantum theory from some assumptions on the spectrum of particles. It is a form of S-matrix theory. Overview In the 1960s and '70s, the ever-growing list of strongly interacting particles — mesons and baryons — made it clear to physicists that none of these particles is elementary. Geoffrey Chew and others went so far as to question the distinction between composite and elementary particles, advocating a "nuclear democracy" in which the idea that some particles were more elementary than others was discarded. Instead, they sought to derive as much information as possible about the strong interaction from plausible assumptions about the S-matrix, which describes what happens when particles of any sort collide, an approach advocated by Werner Heisenberg two decades earlier. The reason the program had any hope of success was because of crossing, the principle that the forces between particles are determined by particle exchange. Once the spectrum of particles is known, the force law is known, and this means that the spectrum is constrained to bound states which form through the action of these forces. The simplest way to solve the consistency condition is to postulate a few elementary particles of spin less than or equal to one, and construct the scattering perturbatively through field theory, but this method does not allow for composite particles of spin greater than 1 and without the then undiscovered phenomenon of confinement, it is naively inconsistent with the observed Regge behavior of hadrons. Chew and followers believed that it would be possible to use crossing symmetry and Regge behavior to formulate a consistent S-matrix for infinitely many particle types. The Regge hypothesis would determine the spectrum, crossing and analyticity would determine the scattering amplitude (the forces), while unitarity would determine the self-consistent quantum corrections in a way analogous to including loops. The only fully successful implementation of the program required another assumption to organize the mathematics of unitarity (the narrow resonance approximation). This meant that all the hadrons were stable particles in the first approximation, so that scattering and decays could be thought of as a perturbation. This allowed a bootstrap model with infinitely many particle types to be constructed like a field theory — the lowest order scattering amplitude should show Regge behavior and unitarity would determine the loop corrections order by order. This is how Gabriele Veneziano and many others constructed string theory, which remains the only theory constructed from general consistency conditions and mild assumptions on the spectrum. Many in the bootstrap community believed that field theory, which was plagued by problems of definition, was fundamentally inconsistent at high energies. Some believed that there is only one consistent theory which requires infinitely many particle species and whose form can be found by consistency alone. This is nowadays known not to be true, since there are many theories which are nonperturbatively consistent, each with their own S-matrix. Without the narrow-resonance approximation, the bootstrap program did not have a clear expansion parameter, and the consistency equations were often complicated and unwieldy, so that the method had limited success. 
It fell out of favor with the rise of quantum chromodynamics, which described mesons and baryons in terms of elementary particles called quarks and gluons. Bootstrapping here refers to 'pulling oneself up by one's bootstraps,' as particles were surmised to be held together by forces consisting of exchanges of the particles themselves. In 2017, Quanta Magazine published an article reporting that bootstrap methods were enabling new discoveries in the study of quantum theories. Decades after the bootstrap appeared to have been forgotten, physicists discovered novel "bootstrap techniques" that appear to solve many problems. The bootstrap approach is said to be "a powerful tool for understanding more symmetric, perfect theories that, according to experts, serve as 'signposts' or 'building blocks' in the space of all possible quantum field theories". See also Tullio Regge Stanley Mandelstam Conformal bootstrap Notes References G. Chew (1962). S-Matrix Theory of Strong Interactions. New York: W.A. Benjamin. D. Kaiser (2002). "Nuclear democracy: Political engagement, pedagogical reform, and particle physics in postwar America." Isis, 93, 229–268. Further reading Scattering Quantum field theory
Bootstrap model
[ "Physics", "Chemistry", "Materials_science" ]
911
[ "Quantum field theory", "Quantum mechanics", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
3,560,226
https://en.wikipedia.org/wiki/Plasma%20electrolytic%20oxidation
Plasma electrolytic oxidation (PEO), also known as electrolytic plasma oxidation (EPO) or microarc oxidation (MAO), is an electrochemical surface treatment process for generating oxide coatings on metals. It is similar to anodizing, but it employs higher potentials, so that discharges occur and the resulting plasma modifies the structure of the oxide layer. This process can be used to grow thick (tens or hundreds of micrometers), largely crystalline, oxide coatings on metals such as aluminium, magnesium and titanium. Because they can present high hardness and a continuous barrier, these coatings can offer protection against wear, corrosion or heat, as well as electrical insulation. The coating is a chemical conversion of the substrate metal into its oxide, and grows both inwards and outwards from the original metal surface. Because it grows inward into the substrate, it has excellent adhesion to the substrate metal. A wide range of substrate alloys can be coated, including all wrought aluminium alloys and most cast alloys, although high levels of silicon can reduce coating quality. Process Metals such as aluminium naturally form a passivating oxide layer which provides moderate protection against corrosion. The layer is strongly adherent to the metal surface, and it will regrow quickly if scratched off. In conventional anodizing, this layer of oxide is grown on the surface of the metal by the application of electrical potential, while the part is immersed in an acidic electrolyte. In plasma electrolytic oxidation, higher potentials are applied. For example, in the plasma electrolytic oxidation of aluminium, at least 200 V must be applied. This locally exceeds the dielectric breakdown potential of the growing oxide film, and discharges occur. These discharges result in localized plasma reactions, with conditions of high temperature and pressure which modify the growing oxide. Processes include melting, melt-flow, re-solidification, sintering and densification of the growing oxide. One of the most significant effects is that the oxide is partially converted from amorphous alumina into crystalline forms such as corundum (α-Al2O3), which is much harder. As a result, mechanical properties such as wear resistance and toughness are enhanced. Equipment used The part to be coated is immersed in a bath of electrolyte which usually consists of a dilute alkaline solution such as KOH. It is electrically connected, so as to become one of the electrodes in the electrochemical cell, with the other "counter-electrode" typically being made from an inert material such as stainless steel, and often consisting of the wall of the bath itself. Potentials of over 200 V are applied between these two electrodes. These may be continuous or pulsed direct current (DC) (in which case the part is simply an anode in DC operation), or alternating pulses (alternating current or "pulsed bi-polar" operation) where the stainless steel counter electrode might just be earthed. Coating properties One of the remarkable features of plasma electrolytic coatings is the presence of micropores and cracks on the coating surface. Plasma electrolytic oxide coatings are generally recognized for high hardness, wear resistance, and corrosion resistance. However, the coating properties are highly dependent on the substrate used, as well as on the composition of the electrolyte and the electrical regime used (see 'Equipment used' section, above). Even on aluminium, the coating properties can vary strongly according to the exact alloy composition.
For instance, the hardest coatings can be achieved on 2XXX series aluminium alloys, where the highest proportion of the crystalline phase corundum (α-Al2O3) is formed, resulting in hardnesses of ~2000 HV, whereas coatings on the 5XXX series have less of this important constituent and are hence softer. Extensive work has been pursued by Prof. T. W. Clyne's group at the University of Cambridge to investigate the fundamental electrical and plasma-physical processes involved, having previously elucidated some of the micromechanical (and pore-architectural), mechanical and thermal characteristics of PEO coatings. References External links Plasma Electrolytic Oxidation: WikiBooks Chemical processes Corrosion prevention Metallurgical processes
Plasma electrolytic oxidation
[ "Chemistry", "Materials_science" ]
868
[ "Corrosion prevention", "Metallurgical processes", "Metallurgy", "Corrosion", "Chemical processes", "nan", "Chemical process engineering" ]
3,560,258
https://en.wikipedia.org/wiki/ISO%2031-8
ISO 31-8 is the part of international standard ISO 31 that defines names and symbols for quantities and units related to physical chemistry and molecular physics. Quantities and units Notes In the tables of quantities and their units, the ISO 31-8 standard shows symbols for substances as subscripts (e.g., cB, wB, pB). It also notes that it is generally advisable to put symbols for substances and their states in parentheses on the same line, as in c(H2SO4). Normative annexes Annex A: Names and symbols of the chemical elements This annex contains a list of elements by atomic number, giving the names and standard symbols of the chemical elements from atomic number 1 (hydrogen, H) to 109 (unnilennium, Une). The list given in ISO 31-8:1992 was quoted from the 1988 IUPAC "Green Book" Quantities, Units and Symbols in Physical Chemistry and, where the standard symbol has no relation to the English name of the element, adds the Latin name in parentheses for information. Since the 1992 edition of the standard was published, some elements with atomic number above 103 have been discovered and renamed. Annex B: Symbols for chemical elements and nuclides Symbols for chemical elements shall be written in roman (upright) type. The symbol is not followed by a full stop. Examples: H He C Ca Attached subscripts or superscripts specifying a nuclide or molecule have the following meanings and positions: The nucleon number (mass number) is shown in the left superscript position (e.g., 14N) The number of atoms of a nuclide is shown in the right subscript position (e.g., 14N2) The proton number (atomic number) may be indicated in the left subscript position (e.g., 64Gd) If necessary, a state of ionization or an excited state may be indicated in the right superscript position (e.g., state of ionization Na+) Annex C: pH pH is defined operationally as follows. For a solution X, first measure the electromotive force EX of the galvanic cell reference electrode | concentrated solution of KCl | solution X | H2 | Pt and then also measure the electromotive force ES of a galvanic cell that differs from the above one only by the replacement of the solution X of unknown pH, pH(X), by a solution S of a known standard pH, pH(S). Then obtain the pH of X as pH(X) = pH(S) + (ES − EX) F / (RT ln 10) where F is the Faraday constant; R is the molar gas constant; T is the thermodynamic temperature. Defined this way, pH is a quantity of dimension 1; that is, it has no unit. Values pH(S) for a range of standard solutions S are listed in Definitions of pH scales, standard reference values, measurement of pH, and related terminology. Pure Appl. Chem. (1985), 57, pp 531–542, where further details can be found. pH has no fundamental meaning; its official definition is a practical one. However, in the restricted range of dilute aqueous solutions having amount-of-substance concentrations less than 0.1 mol/L, and being neither strongly alkaline nor strongly acidic (2 < pH < 12), the definition is such that pH = −log10[c(H+) y1 / (1 mol/L)] ± 0.02 where c(H+) denotes the amount-of-substance concentration of hydrogen ion H+ and y1 denotes the activity coefficient of a typical uni-univalent electrolyte in the solution. Physical chemistry Molecular physics 00031-08
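The operational pH definition above translates directly into a small calculation. The following is a minimal sketch of that formula; the standard pH and the two cell EMF values are made-up illustrative inputs, not values taken from the standard.

```python
import math

F = 96485.33212   # Faraday constant, C/mol
R = 8.314462618   # molar gas constant, J/(mol K)

def ph_from_emf(ph_standard, emf_standard, emf_sample, temperature_kelvin=298.15):
    """Operational pH: pH(X) = pH(S) + (E_S - E_X) * F / (R * T * ln 10).
    EMF values are in volts; the result is dimensionless, as the standard notes."""
    return ph_standard + (emf_standard - emf_sample) * F / (
        R * temperature_kelvin * math.log(10))

# Illustrative numbers only: a standard buffer of pH 6.86 and two measured EMFs.
print(ph_from_emf(ph_standard=6.86, emf_standard=0.4094, emf_sample=0.3500))
```

At 298.15 K the factor F / (RT ln 10) is roughly 16.9 per volt, so a difference of about 59 mV between the two cells corresponds to one pH unit, consistent with the Nernstian slope implicit in the definition.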
ISO 31-8
[ "Physics", "Chemistry" ]
798
[ "Applied and interdisciplinary physics", "Molecular physics", " molecular", "nan", "Atomic", "Physical chemistry", " and optical physics" ]
8,252,994
https://en.wikipedia.org/wiki/Vehicle%20infrastructure%20integration
The Vehicle Infrastructure Integration (VII) is a United States Department of Transportation initiative that aims to improve road safety by developing technology that connects road vehicles with their environment. This development draws on several disciplines, including transport engineering, electrical engineering, automotive engineering, and computer science. Although VII specifically covers road transport, similar technologies are under development for other modes of transport. For example, airplanes may use ground-based beacons for automated guidance, allowing the autopilot to fly the plane without human intervention. Goals The goal of VII is to establish a communication link between vehicles (via On-Board Equipment, or OBE) and roadside infrastructure (via Roadside Equipment, or RSE) to enhance the safety, efficiency, and convenience of transportation systems. One approach currently pursued is the widespread deployment of a dedicated short-range communications (DSRC) link following the IEEE 802.11p standard. The initiative has three priorities: Stakeholder evaluation and acceptance of the business model and its deployment schedule, Validation of the technology, with a focus on communications systems, in relation to deployment costs, and Creation of legal structures and policies, especially concerning digital privacy, to improve the system's long-term potential for success. Safety Current automotive safety technology relies primarily on vehicle-based radar, lidar, and sonar systems. This technology allows, for instance, a potential reduction in rear-end collisions by monitoring obstacles in front of or behind the vehicle and automatically applying the brakes when necessary. This technology, however, is limited by the sensing range of vehicle-based radar, particularly in angled and left-turn collisions, or in situations such as a motorist losing control of the vehicle during an impending head-on collision. The rear-end collisions addressed by current technology are generally less severe than angled, left-turn, or head-on collisions. VII promotes the development of a direct communication link between road vehicles and all other vehicles nearby, allowing for the exchange of information on vehicle speed and orientation or driver awareness and intent. This real-time exchange of information may enable more effective automated emergency maneuvers, such as steering, decelerating, or braking. In addition to nearby vehicle awareness, VII promotes a communication link between vehicles and roadway infrastructure. Such a link may allow for improved real-time traffic information, better queue management, and feedback to vehicles. Existing implementations of VII use vehicle-based sensors that can recognize and respond to roadway markings or signs, automatically adjusting vehicle parameters to follow the recognized instructions. However, this information may also be acquired via roadside beacons or stored in a centralized database accessible to all vehicles. Efficiency Because vehicles will be linked together, the headway between vehicles could be reduced, leaving less empty space on the road; the available traffic capacity would therefore be increased. More capacity per lane will in turn imply fewer lanes in general, possibly satisfying the community's concerns about the impact of roadway widening. VII will enable precise traffic-signal coordination by tracking vehicle platoons and will benefit from accurate timing by drawing on real-time traffic data covering volume, density, and turning movements.
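The headway-capacity point above can be made concrete with a back-of-the-envelope sketch. The headway values below are illustrative assumptions, and the simple capacity = 3600 / headway relation ignores real-world effects such as incidents, lane changes, and signal delay.

```python
def lane_capacity_vph(headway_seconds):
    """Approximate lane capacity (vehicles per hour) for a constant average headway."""
    return 3600.0 / headway_seconds

# Illustrative comparison: a typical human-driver headway versus a tighter headway
# that cooperative vehicle-to-vehicle coordination might permit.
for headway in (2.0, 0.9):
    print(f"headway {headway:.1f} s -> ~{lane_capacity_vph(headway):.0f} veh/h per lane")
```

The sketch shows why even modest headway reductions translate into large capacity gains per lane, which is the basis of the fewer-lanes argument in the text.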
Real-time traffic data can also be used in the design of new roadways or modification of existing systems, as the data could provide accurate origin-destination studies and turning-movement counts for use in transportation forecasting and traffic operations. Such technology would also make it easier for transport engineers to address problems while reducing the cost of obtaining and compiling data. Tolling is another prospect for VII technology as it could enable roadways to be automatically tolled. Data could be collectively transmitted to road users for in-vehicle display, outlining the lowest cost, shortest distance, and/or fastest route to a destination on the basis of real-time conditions. Existing applications To some extent, results along these lines have been achieved in trials performed around the globe, making use of GPS, mobile phone signals, and vehicle registration plates. GPS is becoming standard in many new high-end vehicles and is an option on most new low- and mid-range vehicles. In addition, many users also have mobile phones that transmit trackable signals (and may also be GPS-enabled). Mobile phones can already be traced for purposes of emergency response. GPS and mobile phone tracking, however, do not provide fully reliable data. Furthermore, integrating mobile phones into vehicles may be prohibitively difficult. Data from mobile phones, though useful, might even increase risks to motorists as they tend to look at their phones rather than concentrate on their driving. Automatic registration plate recognition can provide high levels of data, but continuously tracking a vehicle through a corridor is a difficult task with existing technology. Today's equipment is designed for data acquisition and functions such as enforcement and tolling, not for returning data to vehicles or motorists for response. GPS will nevertheless be one of the key components in VII systems. Limitations Privacy The most common myth about VII is that it includes tracking technology; however, this is not the case. The architecture is designed to prevent identification of individual vehicles, with all data exchange between the vehicle and the system occurring anonymously. Exchanges between the vehicles and third parties such as OEMs and toll collectors will occur, but the network traffic will be sent via encrypted tunnels and will therefore not be decipherable by the VII system. Technical issues Coordination A major issue facing the deployment of VII is the problem of how to set up the system initially. The costs associated with installing the technology in vehicles and providing communications and power at every intersection are significant. Maintenance Another factor for consideration in regard to the technology's distribution is how to update and maintain the units. Traffic systems are highly dynamic, with new traffic controls implemented every day and roadways constructed or repaired every year. The vehicle-based option could be updated via the internet (preferably wirelessly) but may subsequently require all users to have access to internet technology. Alternatively, if receivers were placed in all vehicles and the VII system was primarily located along the roadside, information could be stored in a centralized database. This would allow the agency responsible to issue updates at any time. These would then be disseminated to the roadside units for passing motorists.
Operationally, this method is currently considered to provide the greatest effectiveness, but at a high cost to the authorities. Security Security of the units is another concern, especially in light of the public acceptance issue. Criminals could tamper with, remove, or destroy VII units, whether they are installed inside vehicles or along the roadside. Magnets, electric shocks, and malicious software (viruses, hacking, or jamming) could all be used to damage VII systems. Recent developments Much of the current research and experimentation is conducted in the United States, where coordination is ensured through the Vehicle Infrastructure Integration Consortium, consisting of automobile manufacturers (Ford, General Motors, DaimlerChrysler, Toyota, Nissan, Honda, Volkswagen, BMW), IT suppliers, U.S. Federal and state transportation departments, and professional associations. Trialing is taking place in Michigan and California. The specific applications now being developed under the U.S. initiative are: Warning drivers of unsafe conditions or imminent collisions. Warning drivers if they are about to run off the road or speed around a curve too fast. Informing system operators of real-time congestion, weather conditions and incidents. Providing operators with information on corridor capacity for real-time management, planning and provision of corridor-wide advisories to drivers. In mid-2007, a VII test environment near Detroit was used to test 20 prototype VII applications. Several automobile manufacturers are also conducting their own VII research and trialing. See also Intelligent transportation system Tracking Vehicle tracking system GPS tracking Automatic number-plate recognition Automated highway system Self-driving car References External links VII Coalition Website ITS Website of the USDOT FHWA PowerPoint Presentation Michigan DOT VII Development Site GPS World article on GPS-based VII eSafety Road transport Intelligent transportation systems Automotive technologies Automatic number plate recognition Applications of computer vision Applications of artificial intelligence
Vehicle infrastructure integration
[ "Technology", "Engineering" ]
1,616
[ "Self-driving cars", "Transport systems", "Automotive engineering", "Information systems", "Warning systems", "Intelligent transportation systems" ]
8,253,417
https://en.wikipedia.org/wiki/Plasma%20modeling
Plasma modeling refers to solving equations of motion that describe the state of a plasma. It is generally coupled with Maxwell's equations for electromagnetic fields or Poisson's equation for electrostatic fields. There are several main types of plasma models: single particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic, and as a system of many particles. Single particle description The single-particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz Force Law. In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. Kinetic description The kinetic model is the most fundamental way to describe a plasma, producing a distribution function f(x, v, t) in which the independent variables x and v are position and velocity, respectively. A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, by the Vlasov equation, which contains the self-consistent collective electromagnetic field, or by the Fokker–Planck equation, in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations. Fluid description To reduce the complexities in the kinetic description, the fluid model describes the plasma based on macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation. The fluid equations are not closed without the determination of transport coefficients such as mobility, diffusion coefficient, averaged collision frequencies, and so on. To determine the transport coefficients, the velocity distribution function must be assumed/chosen. But this assumption can lead to a failure to capture some of the physics. Hybrid kinetic/fluid description Although the kinetic model describes the physics accurately, it is more complex (and in the case of numerical simulations, more computationally intensive) than the fluid model. The hybrid model is a combination of fluid and kinetic models, treating some components of the system as a fluid, and others kinetically. The hybrid model is sometimes applied in space physics, when the simulation domain exceeds thousands of ion gyroradius scales, making it impractical to solve kinetic equations for electrons. In this approach, magnetohydrodynamic fluid equations describe electrons, while the kinetic Vlasov equation describes ions. Gyrokinetic description In the gyrokinetic model, which is appropriate to systems with a strong background magnetic field, the kinetic equations are averaged over the fast circular motion of the gyroradius. This model has been used extensively for simulation of tokamak plasma instabilities (for example, the GYRO and Gyrokinetic ElectroMagnetic codes), and more recently in astrophysical applications. Quantum mechanical methods Quantum methods are not yet very common in plasma modeling. They can be used to solve unique modeling problems, such as situations where other methods do not apply.
They involve the application of quantum field theory to plasma. In these cases, the electric and magnetic fields made by particles are modeled as a field, a web of forces. Particles that move, or that are removed from the population, push and pull on this field. The mathematical treatment involves Lagrangian methods. Collisional-radiative modeling is used to calculate quantum state densities and the emission/absorption properties of a plasma. This plasma radiation physics is critical for the diagnosis and simulation of astrophysical and nuclear fusion plasmas. It is one of the most general approaches and lies between the extrema of a local thermal equilibrium and a coronal picture. In a local thermal equilibrium the population of excited states is distributed according to a Boltzmann distribution. However, this holds only if densities are high enough for an excited hydrogen atom to undergo many collisions such that the energy is distributed before the radiative process sets in. In a coronal picture the timescale of the radiative process is small compared to the collisions since densities are very small. The use of the term coronal equilibrium is ambiguous and may also refer to the non-transport ionization balance of recombination and ionization. The only thing they have in common is that a coronal equilibrium is not sufficient for tokamak plasmas. Commercial plasma physics modeling codes Quantemol-VT VizGlow VizSpark CFD-ACE+ COMSOL LSP Magic Starfish USim VSim STAR-CCM+ See also Particle-in-cell References Computational physics
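As an illustration of the single-particle description discussed above, the following is a minimal sketch of a charged-particle pusher. The Boris scheme used here is a standard choice in particle codes but is not prescribed by the article, and the field values, charge, mass, and time step are illustrative assumptions only.

```python
import numpy as np

def boris_push(x, v, q, m, E, B, dt):
    """Advance position x and velocity v by one step dt under the Lorentz force
    F = q (E + v x B), using the Boris scheme (half electric kick, magnetic
    rotation, second half electric kick)."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                      # first half electric kick
    t = qmdt2 * B                                # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)      # full magnetic rotation
    v_new = v_plus + qmdt2 * E                   # second half electric kick
    return x + v_new * dt, v_new

# Illustrative setup: a proton gyrating in a uniform B field with a small E field,
# showing the fast gyration plus the slow E x B drift of the guiding centre.
q, m = 1.602e-19, 1.673e-27                      # proton charge (C) and mass (kg)
E = np.array([0.0, 1.0e3, 0.0])                  # V/m (assumed)
B = np.array([0.0, 0.0, 0.1])                    # T   (assumed)
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])  # m, m/s (assumed)
for _ in range(1000):
    x, v = boris_push(x, v, q, m, E, B, dt=1.0e-9)
print(x)
```

Over many steps the trajectory shows the behaviour described in the single-particle section: a fast circular motion about the guiding centre superposed on a slow drift of that centre.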
Plasma modeling
[ "Physics" ]
975
[ "Plasma theory and modeling", "Computational physics", "Plasma physics" ]
8,255,258
https://en.wikipedia.org/wiki/DNA%20separation%20by%20silica%20adsorption
DNA separation by silica adsorption is a method of DNA separation that is based on DNA molecules binding to silica surfaces in the presence of certain salts and under certain pH conditions. Operations In order to separate DNA through silica adsorption, a sample is first lysed, releasing proteins, DNA, phospholipids, etc. from the cells. The remaining tissue is discarded. The supernatant containing the DNA is then exposed to silica in a solution with high ionic strength. The highest DNA adsorption efficiencies occur in the presence of a buffer solution with a pH at or below the pKa of the surface silanol groups. The mechanism behind DNA adsorption onto silica is not fully understood; one possible explanation involves reduction of the silica surface's negative charge due to the high ionic strength of the buffer. This decrease in surface charge leads to a decrease in the electrostatic repulsion between the negatively charged DNA and the negatively charged silica. Meanwhile, the buffer also reduces the activity of water by forming hydrated ions. This leads to the silica surface and DNA becoming dehydrated. These conditions lead to an energetically favorable situation for DNA to adsorb to the silica surface. A further explanation of how DNA binds to silica is based on the action of guanidinium chloride (GuHCl), which acts as a chaotrope. A chaotrope denatures biomolecules by disrupting the shell of hydration around them. This allows positively charged ions to form a salt bridge between the negatively charged silica and the negatively charged DNA backbone at high salt concentrations. The DNA can then be washed with high salt and ethanol, and ultimately eluted with low salt. After the DNA is bound to the silica, it is washed to remove contaminants and finally eluted using an elution buffer or distilled water. See also Spin column-based nucleic acid purification References Biotechnology Molecular biology Molecular genetics
DNA separation by silica adsorption
[ "Chemistry", "Biology" ]
413
[ "Biotechnology", "Molecular genetics", "nan", "Molecular biology", "Biochemistry" ]