Dataset schema:
id: int64, 39 to 79M
url: string, lengths 32 to 168
text: string, lengths 7 to 145k
source: string, lengths 2 to 105
categories: list, lengths 1 to 6
token_count: int64, 3 to 32.2k
subcategories: list, lengths 0 to 27
3,865,332
https://en.wikipedia.org/wiki/Genomic%20convergence
Genomic convergence is a multifactor approach used in genetic research that combines different kinds of genetic data analysis to identify and prioritize susceptibility genes for a complex disease. Early applications In January 2003, Michael Hauser, along with fellow researchers at the Duke Center for Human Genetics (CHG), coined the term “genomic convergence” to describe their endeavor to identify genes affecting the expression of Parkinson disease (PD). Their work successfully combined serial analysis of gene expression (SAGE) with genetic linkage analysis. The authors explain, “While both linkage and expression analyses are powerful on their own, the number of possible genes they present as candidates for PD or any complex disorder remains extremely large”. The convergence of the two methods allowed researchers to decrease the number of possible PD genes to consider for further study. Their success prompted further use of the genomic convergence method at the CHG, and in July 2003 Yi-Ju Li et al. published a paper revealing that glutathione S-transferase omega-1 (GSTO1) modifies the age-at-onset (AAO) of Alzheimer disease (AD) and PD. In May 2004, Dr. Margaret Pericak-Vance, currently the director of the John P. Hussman Institute for Human Genomics at the University of Miami Miller School of Medicine and then the director of the CHG, articulated the value of the genomic convergence method in a New York Academy of Sciences (NYAS) keynote address entitled "Novel Methods in Genetic Exploration of Neurodegenerative Disease." She stated, "No single method is going to get us where we need to be with these complex traits. It is going to take a combination of methods to dissect the underlying etiology of these disorders". Recent and future applications Genomic convergence has countless creative applications that combine the strengths of different analyses and studies. Maher Noureddine et al. note in their 2005 paper, “One of the growing problems in the study of complex diseases is how to prioritize research and make sense of the immense amount of data now readily available at the click of a computer mouse...The best approach may be to take advantage of the strengths of both…SAGE …and microarrays”. The results of combining methods of analysis have continued to be promising. Sofia Oliveira et al. (2005) combined gene expression, linkage data, and “iterative association mapping” to identify several genes associated with PD AAO. Future studies will continue to apply genomic convergence to elucidate the etiology of complex diseases. Dr. Jeff Vance, Director of the Morris K. Udall PD Research Center of Excellence, notes, “Genomic convergence is really no different from mathematical convergence – the more angles from which you can come at a problem, the better chance you have of solving it”. References Genetics
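The prioritization step lends itself to a small illustration: each analysis yields its own candidate list, and only genes supported by every line of evidence are carried forward. A minimal sketch, with hypothetical gene names standing in for real linkage and SAGE results:

```python
# Minimal sketch of the "genomic convergence" idea: intersect candidate gene
# lists produced by independent analyses to prioritize a short list.
# All gene names here are hypothetical placeholders, not study results.

linkage_candidates = {"GENE_A", "GENE_B", "GENE_C", "GENE_D"}   # from linkage analysis
expression_candidates = {"GENE_B", "GENE_D", "GENE_E"}          # from SAGE expression analysis

# Convergence: only genes supported by both lines of evidence survive.
prioritized = linkage_candidates & expression_candidates
print(sorted(prioritized))  # ['GENE_B', 'GENE_D']
```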
Genomic convergence
[ "Biology" ]
593
[ "Genetics" ]
3,865,400
https://en.wikipedia.org/wiki/Biscayne%20Aquifer
The Biscayne Aquifer, named after Biscayne Bay, is a surficial aquifer. It is a shallow layer of highly permeable limestone under a portion of South Florida. The area it underlies includes Broward County, Miami-Dade County, Monroe County, and Palm Beach County, a total of about . Description The water-absorbing layers of rock underlying south Florida divide into three layers. The Biscayne Aquifer is closest to the surface and, because of this, it directly interacts with natural and man-made bodies of surface water, such as streams, lakes, canals and reservoirs. The ground water and the aquifer currently are managed as an integrated water system. Because the top part of the Biscayne aquifer is the water table, this aquifer is known as an unconfined aquifer. Since it merges with the floor of Biscayne Bay and with the Atlantic Ocean, it is also a coastal aquifer. Both of these factors contribute to its potential contamination. Lowered water tables, primarily from over-pumping, could allow salt water intrusion without man-made interventions such as dam-like structures that control fresh and salt water levels in canals. Because the aquifer is so close to the surface, it is extremely vulnerable to surface contaminants. A massive saltwater plume is radiating from the Turkey Point Nuclear Generating Station toward wellfields in the aquifer. Management The South Florida Water Management District controls an extensive system of canals, other control structures, and pumping stations along with the Biscayne Aquifer, Lake Okeechobee and three other large water conservation areas as it monitors and controls the storage and release of the water in the district. It must take into account the danger of salt water intrusion and monitor water demand while it manages surplus flood water and maintains water table levels and adequate water supplies. The Biscayne Aquifer supplies the South Florida metropolitan area with its primary source of fresh water. This area includes most of south Florida (Miami-Dade, Monroe, and parts of Broward Counties) as well as other urban areas stretching from Homestead, Florida to Delray Beach, Florida. Further, water from the Biscayne Aquifer is piped to the Florida Keys. Footnotes External links Description of Biscayne Aquifer Aquifers in the United States Hydrology Geologic formations of the United States Geology of Florida
Biscayne Aquifer
[ "Chemistry", "Engineering", "Environmental_science" ]
495
[ "Hydrology", "Environmental engineering" ]
3,865,434
https://en.wikipedia.org/wiki/Fast%20protein%20liquid%20chromatography
Fast protein liquid chromatography (FPLC) is a form of liquid chromatography that is often used to analyze or purify mixtures of proteins. As in other forms of chromatography, separation is possible because the different components of a mixture have different affinities for two materials, a moving fluid (the mobile phase) and a porous solid (the stationary phase). In FPLC the mobile phase is an aqueous buffer solution. The buffer flow rate is controlled by a positive-displacement pump and is normally kept constant, while the composition of the buffer can be varied by drawing fluids in different proportions from two or more external reservoirs. The stationary phase is a resin composed of beads, usually of cross-linked agarose, packed into a cylindrical glass or plastic column. FPLC resins are available in a wide range of bead sizes and surface ligands depending on the application. FPLC was developed and marketed in Sweden by Pharmacia in 1982, and was originally called fast performance liquid chromatography to contrast it with high-performance liquid chromatography (HPLC). FPLC is generally applied only to proteins; however, because of the wide choice of resins and buffers it has broad applications. In contrast to HPLC, the buffer pressure used is relatively low, typically less than 5 bar, but the flow rate is relatively high, typically 1–5 ml/min. FPLC can be readily scaled from analysis of milligrams of mixtures in columns with a total volume of 5 ml or less to industrial production of kilograms of purified protein in columns with volumes of many liters. When used for analysis of mixtures, the eluant is usually collected in fractions of 1–5 ml which can be further analyzed. When used for protein purification there may be only two collection containers: one for the purified product and one for waste. General principles In a common FPLC strategy, a resin is chosen that the protein of interest will bind to by a charge interaction while in buffer A (the running buffer) but become dissociated and return to solution in buffer B (the elution buffer). A mixture containing one or more proteins of interest is dissolved in 100% buffer A and pumped into the column. The proteins of interest bind to the resin while other components are carried out in the buffer. The total flow rate of the buffer is kept constant; however, the proportion of buffer B (the "elution" buffer) is gradually increased from 0% to 100% according to a programmed change in concentration (the "gradient"). At some point during this process each of the bound proteins dissociates and appears in the eluant. The eluant passes through two detectors which measure salt concentration (by conductivity) and protein concentration (by absorption of ultraviolet light at a wavelength of 280 nm). As each protein is eluted, it appears in the eluant as a "peak" in protein concentration, and can be collected for further use. System components A typical laboratory FPLC consists of one or two high-precision pumps, a control unit, a column, a detection system and a fraction collector. Although it is possible to operate the system manually, the components are normally linked to a personal computer or, in older units, a microcontroller. Pumps The majority of systems utilize two two-cylinder piston pumps, one for each buffer, combining the output of both in a mixing chamber. Some simpler systems use a single peristaltic pump which draws both buffers from separate reservoirs through a proportioning valve and mixing chamber.
In either case the system allows the fraction of each buffer entering the column to be continuously varied. The flow rate can go from a few milliliters per minute in bench-top systems to liters per minute for industrial scale purifications. The wide flow range makes it suitable for both analytical and preparative chromatography. Injection loop The injection loop is a segment of tubing of known volume which is filled with the sample solution before it is injected into the column. Loop volume can range from a few microliters to 50 ml or more. Injection valve The injection valve is a motorized valve which links the mixer and sample loop to the column. Typically the valve has three positions for loading the sample loop, for injecting the sample from the loop into the column, and for connecting the pumps directly to the waste line to wash them or change buffer solutions. The injection valve has a sample loading port through which the sample can be loaded into the injection loop, usually from a hypodermic syringe using a Luer-lock connection. Column The column is a glass or plastic cylinder packed with beads of resin and filled with buffer solution. It is normally mounted vertically with the buffer flowing downward from top to bottom. A glass frit at the bottom of the column retains the resin beads in the column while allowing the buffer and dissolved proteins to exit. Flow cell The eluant from the column passes through one or more flow cells to measure the concentration of protein in the eluant (by UV light absorption at 280 nm). The conductivity cell measures the buffer conductivity, usually in millisiemens/cm, which indicates the concentration of salt in the buffer. A flow cell which measures pH of the buffer is also commonly included. Usually each flow cell is connected to a separate electronics module which provides power and amplifies the signal. Monitor/recorder The flow cells are connected to a display and/or recorder. On older systems this was a simple chart recorder; on modern systems a computer with hardware interface and display is used. This permits the experimenter to identify when peaks in protein concentration occur, indicating that specific components of the mixture are being eluted. Fraction collector The fraction collector is typically a rotating rack that can be filled with test tubes or similar containers. Distribution of the eluate into separate containers is determined by fixed volumes or by specific fractions detected at peaks of protein concentration. Many systems include various optional components. A filter may be added between the mixer and column to minimize clogging. In large FPLC columns the sample may be loaded into the column directly using a small peristaltic pump rather than an injection loop. When the buffer contains dissolved gas, bubbles may form as pressure drops where the buffer exits the column; these bubbles create artifacts if they pass through the flow cells. This may be prevented by degassing the buffers, e.g. with a degasser, or by adding a flow restrictor downstream of the flow cells to maintain a pressure of 1–5 bar in the eluant line. Columns The columns used in FPLC are large (inner diameters on the order of millimeters) tubes that contain small (micrometer-scale) particles or gel beads as the stationary phase. The chromatographic bed is composed of gel beads inside the column and the sample is introduced into the injector and carried into the column by the flowing solvent.
As a result of different components adhering to or diffusing through the gel, the sample mixture gets separated. Columns used with an FPLC can separate macromolecules based on size (size-exclusion chromatography), charge distribution (ion exchange), hydrophobicity, reverse-phase or biorecognition (as with affinity chromatography). For easy use, a wide range of pre-packed columns for techniques such as ion exchange, gel filtration (size exclusion), hydrophobic interaction, and affinity chromatography are available. FPLC differs from HPLC in that the columns used for FPLC can only be used up to a maximum pressure of 3–4 MPa (435–580 psi). Thus, if the pressure of HPLC can be limited, each FPLC column may also be used in an HPLC machine. Optimizing protein purification Combinations of chromatographic methods can be used to purify a target molecule. The purpose of purifying proteins with FPLC is to deliver quantities of the target at sufficient purity in a biologically active state to suit its further use. The quality of the end product varies depending on the type and amount of starting material, the efficiency of separation, and the selectivity of the purification resin. The ultimate goal of a given purification protocol is to deliver the required yield and purity of the target molecule in the quickest, cheapest, and safest way for acceptable results. The range of purity required can be from that required for basic analysis (SDS-PAGE or ELISA, for example), with only bulk impurities removed, to pure enough for structural analysis (NMR or X-ray crystallography), approaching >99% target molecule. Purity required can also mean pure enough that the biological activity of the target is retained. These demands can be used to determine the amount of starting material required to reach the experimental goal. If the starting material is limited and full optimization of the purification protocol cannot be performed, then a safe standard protocol that requires minimal adjustment and optimization steps is expected. This may not be optimal with respect to experimental time, yield, and economy but it will achieve the experimental goal. On the other hand, if the starting material is sufficient to develop a more complete protocol, the amount of work to reach the separation goal depends on the available sample information and target molecule properties. Limits to the development of purification protocols often depend on the source of the substance to be purified, whether from natural sources (harvested tissues or organisms, for example), recombinant sources (such as using prokaryotic or eukaryotic vectors in their respective expression systems), or totally synthetic sources. No chromatographic techniques provide 100% yield of active material and overall yields depend on the number of steps in the purification protocol. By optimizing each step for the intended purpose and arranging the steps so as to minimize inter-step treatments, the number of steps can be minimized. A typical multistep purification protocol starts with a preliminary capture step which often utilizes ion exchange chromatography (IEC). The media (stationary phase) resin consists of beads, which range in size from being large (good for fast flow rates and little to no sample clarification at the expense of resolution) to small (for best possible resolution with all other factors being equal). Short and wide column geometries are amenable to high flow rates, also at the expense of resolution, typically because of lateral diffusion of the sample on the column.
For techniques such as size exclusion chromatography to be useful, very long, thin columns and minimal sample volumes (maximum 5% of column volume) are required. Hydrophobic interaction chromatography (HIC) can also be used for first and/or intermediate steps. Selectivity in HIC is independent of running pH and descending salt gradients are used. For HIC, conditioning involves adding ammonium sulfate to the sample to match the buffer A concentration. If HIC is used before IEC, the ionic strength would have to be lowered to match that of buffer A for the IEC step by dilution, dialysis, or buffer exchange by gel filtration. This is why IEC is usually performed prior to HIC, as the high salt elution conditions for IEC are ideal for binding to HIC resins in the next purification step. Polishing is used to achieve the final level of purification required and is commonly performed on a gel filtration column. An extra intermediate purification step can be added, or optimization of the different steps can be performed to improve purity. This extra step usually involves another round of IEC under completely different conditions. Although this is an example of a common purification protocol for proteins, the buffer conditions, flow rates, and resins used to achieve final goals can be chosen to cover a broad range of target proteins. This flexibility is imperative for a functional purification system as all proteins behave differently and often deviate from predictions. References External links Example FPLC risk assessment (Leeper Group, University of Cambridge) Chromatography
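The programmed gradient described under General principles above is easy to sketch: total flow stays constant while the proportion drawn from the buffer B reservoir rises linearly from 0% to 100%. A minimal sketch; the flow rate and gradient length are illustrative values, not recommendations:

```python
# Sketch of a linear gradient program for a two-pump FPLC system.
# Total flow is held constant while the proportion of buffer B rises
# from 0% to 100% over the gradient; numbers are illustrative only.

def gradient_flows(t_min, total_flow_ml_min=1.0, gradient_length_min=30.0):
    """Return (flow_A, flow_B) in ml/min at time t_min into the gradient."""
    frac_b = min(max(t_min / gradient_length_min, 0.0), 1.0)  # rises 0 -> 1 linearly
    return total_flow_ml_min * (1 - frac_b), total_flow_ml_min * frac_b

for t in (0, 10, 20, 30):
    a, b = gradient_flows(t)
    print(f"t={t:2d} min  pump A: {a:.2f} ml/min  pump B: {b:.2f} ml/min")
```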
Fast protein liquid chromatography
[ "Chemistry" ]
2,481
[ "Chromatography", "Separation processes" ]
3,868,817
https://en.wikipedia.org/wiki/Lead%E2%80%93lag%20compensator
A lead–lag compensator is a component in a control system that improves an undesirable frequency response in a feedback and control system. It is a fundamental building block in classical control theory. Applications Lead–lag compensators influence disciplines as varied as robotics, satellite control, automobile diagnostics, LCDs and laser frequency stabilisation. They are an important building block in analog control systems, and can also be used in digital control. Given the control plant, desired specifications can be achieved using compensators. I, P, PI, PD, and PID are optimizing controllers which are used to improve system parameters (such as reducing steady state error, reducing resonant peak, improving system response by reducing rise time). All these operations can be done by compensators as well, used in the cascade compensation technique. Theory Both lead compensators and lag compensators introduce a pole–zero pair into the open loop transfer function. The transfer function can be written in the Laplace domain as $\frac{Y(s)}{X(s)} = \frac{s - z}{s - p}$, where X is the input to the compensator, Y is the output, s is the complex Laplace transform variable, z is the zero frequency and p is the pole frequency. The pole and zero are both typically negative, or left of the origin in the complex plane. In a lead compensator, $|z| < |p|$, while in a lag compensator $|z| > |p|$. A lead-lag compensator consists of a lead compensator cascaded with a lag compensator. The overall transfer function can be written as $\frac{Y(s)}{X(s)} = \frac{(s - z_1)(s - z_2)}{(s - p_1)(s - p_2)}$. Typically $|p_1| > |z_1| > |z_2| > |p_2|$, where z1 and p1 are the zero and pole of the lead compensator and z2 and p2 are the zero and pole of the lag compensator. The lead compensator provides phase lead at high frequencies. This shifts the root locus to the left, which enhances the responsiveness and stability of the system. The lag compensator provides phase lag at low frequencies which reduces the steady state error. The precise locations of the poles and zeros depend on both the desired characteristics of the closed loop response and the characteristics of the system being controlled. However, the pole and zero of the lag compensator should be close together so as not to cause the poles to shift right, which could cause instability or slow convergence. Since their purpose is to affect the low frequency behaviour, they should be near the origin. Implementation Both analog and digital control systems use lead-lag compensators. The technology used for the implementation is different in each case, but the underlying principles are the same. The transfer function is rearranged so that the output is expressed in terms of sums of terms involving the input, and integrals of the input and output. For example, $Y = X - (z_1 + z_2)\frac{X}{s} + z_1 z_2 \frac{X}{s^2} + (p_1 + p_2)\frac{Y}{s} - p_1 p_2 \frac{Y}{s^2}$. In analog control systems, where integrators are expensive, it is common to group terms together to minimize the number of integrators required: $Y = X + \frac{1}{s}\left( (p_1 + p_2) Y - (z_1 + z_2) X + \frac{1}{s}\left( z_1 z_2 X - p_1 p_2 Y \right) \right)$. In analog control, the control signal is typically an electrical voltage or current (although other signals such as hydraulic pressure can be used). In this case a lead-lag compensator will consist of a network of operational amplifiers ("op-amps") connected as integrators and weighted adders. A possible physical realization of a lead-lag compensator is shown below (note that the op-amp is used to isolate the networks): In digital control, the operations are performed numerically by discretization of the derivatives and integrals.
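A minimal sketch of that discretization step, using the bilinear (Tustin) transform via SciPy; the pole and zero locations are illustrative values chosen to satisfy the typical magnitude ordering given above, not a tuned design:

```python
# Sketch: discretizing a lead-lag compensator for digital control with the
# bilinear (Tustin) transform. Pole/zero values are illustrative; signs follow
# the convention above, where z and p are the (negative) zero and pole locations.
import numpy as np
from scipy import signal

z1, p1 = -1.0, -10.0    # lead section: |z1| < |p1|
z2, p2 = -0.1, -0.01    # lag section:  |z2| > |p2|

# H(s) = (s - z1)(s - z2) / ((s - p1)(s - p2))
num = np.polymul([1, -z1], [1, -z2])
den = np.polymul([1, -p1], [1, -p2])

dt = 0.01  # sample period in seconds
numd, dend, _ = signal.cont2discrete((num, den), dt, method='bilinear')
print("discrete numerator:  ", numd.ravel())
print("discrete denominator:", dend.ravel())
```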
The reason for expressing the transfer function as an integral equation is that differentiating signals amplifies the noise on the signal, since even very small amplitude noise has a high derivative if its frequency is high, while integrating a signal averages out the noise. This makes implementations in terms of integrators the most numerically stable. Use cases To begin designing a lead-lag compensator, an engineer must consider whether the system needing correction can be classified as a lead-network, a lag-network, or a combination of the two: a lead-lag network (hence the name "lead-lag compensator"). The electrical response of this network to an input signal is expressed by the network's Laplace-domain transfer function, a complex mathematical function which itself can be expressed in one of two ways: as the current-gain ratio transfer function or as the voltage-gain ratio transfer function. Remember that a complex function can in general be written as $F(s) = A(s) + iB(s)$, where $A(s)$ is the real part and $B(s)$ is the imaginary part of the single-variable function $F(s)$. The phase angle of the network is the argument of $F(s)$; in the left half plane this is $\arctan(B(s)/A(s))$. If the phase angle is negative for all signal frequencies in the network then the network is classified as a lag network. If the phase angle is positive for all signal frequencies in the network then the network is classified as a lead network. If the total network phase angle has a combination of positive and negative phase as a function of frequency then it is a lead-lag network. Depending upon the nominal operation design parameters of a system under an active feedback control, a lag or lead network can cause instability and poor speed and response times. See also Compensator (control theory) Control engineering Control theory Damping ratio Fall time PID controller Proportional control Response time compensation Rise time Settling time Steady state Step response Systems theory Time constant Transient modelling Transient response Transient state Transition time References Nise, Norman S. (2004); Control Systems Engineering (4 ed.); Wiley & Sons; Horowitz, P. & Hill, W. (2001); The Art of Electronics (2 ed.); Cambridge University Press; Cathey, J.J. (1988); Electronic Devices and Circuits (Schaum's Outlines Series); McGraw-Hill External links Matlab Control Tutorials: lead and lag compensators lead controller using Matlab Lead-Lag Frequency Response at MathPages Lead-Lag Algorithms at MathPages Classical control theory Control engineering Computational mathematics
Lead–lag compensator
[ "Mathematics", "Engineering" ]
1,255
[ "Applied mathematics", "Control engineering", "Computational mathematics" ]
3,869,624
https://en.wikipedia.org/wiki/Microshock
Microshock refers to the risk that patients undergoing medical procedures involving externally protruding intracardiac electrical conductors, such as external pacemaker electrodes, or saline filled catheters, could suffer an electric shock causing ventricular fibrillation (VF) due to currents entering the body via these parts. Some definitions related to micro-shock It is important to note that microshock (or micro-shock) is not an IEV-defined term and is not used in any international standard. "Micro-shock" is an otherwise imperceptible electric current applied directly, or in very close proximity, to the heart muscle of sufficient strength, frequency, and duration to cause disruption of normal cardiac function. Note: It can be safely assumed (and it usually is) that micro-shock is only possible during certain medical procedures as the electric current needs to be focused directly into the heart by some conductor inserted by invasive means for some desired medical outcome (for example Cardiac Catheterisation). Micro-shock, if it occurs, is not always lethal. “Micro-electrocution” is the term that should be used whenever a micro-shock causes death. “Macro-shock” is when a much larger current is passed through the body, usually via a skin to skin pathway, but more generally the current is not applied directly through the heart muscle. The current in macro-shock events can vary widely from being imperceptible to being extremely destructive of tissue. (see Macroshock) “Electric Shock” is usually referring to macro-shock. (see Electric Shock) “Electrocution” is usually referring to a macro-shock that has caused prolonged or severe disruption of normal cardiac function - ultimately leading to death. (see Electrocution) Theory Microshock requires direct electrical connection to the heart muscle and is normally illustrated using a diagram such as Figure 1 (from TGE). In this scenario the patient has inadvertently contacted both a source of current (it does not have to be AC, as shown) as well as a common return pathway during an invasive cardiac medical procedure. If the current flowing is below the threshold of perception, or the patient is sedated, or anaesthetized, there may be no pain or reflex response of either arm. If the current flow continues for sufficient time, at sufficient strength, the patient may die. Because of the low current and lack of patient response, this death may be unexpected, and without any obvious cause. In practice, however, this has never been proven to have happened. To a novice, however, this scenario looks incredibly dangerous, and it is therefore worth examining in some detail. Firstly, let's follow the path of the electric current. There is a generic source of current. This source can be either large or small, as only a small voltage is required to drive the low current for micro-shock. Such sources might be a wall socket, a faulty item of equipment, an inappropriate item of equipment, a poorly designed item of equipment, or an item of equipment designed to deliver current into the body. Our patient has unfortunately come into contact with one such electrical source and current is dispersing through their right arm and upper torso, to eventually converge on a catheter (as labelled – but it could be a lead or wire) that is placed into their heart. This concentration of current flow at the heart muscle is the danger from micro-shock.
If the catheter is conductive (from end to end) and insulated in its passage through the body, the current may follow the catheter, emerging through the skin into some other energized item of equipment. For the circuit as shown to be complete, the loop needs to also be connected to the same ground/low-potential as the equipment. Finally, the hazardous circuit is complete – current can flow and if it continues the patient is in mortal danger. Again, while this is theoretically possible this has never been proven to have actually happened. Proving this type of event after the fact is difficult in autopsy, as the cause of the fatal ventricular fibrillation would be unknown and the death appears to be idiopathic. So, why has this situation not arisen? Generally, electrically conductive connections made in or around a patient's heart will be those of medical-grade electrical equipment. In countries that have a regulatory environment (such as much of Europe and North America) micro-shock is contemplated and the surgical equipment is regulated to prevent micro-shock. These medical-grade products are generally constructed to strict standards that limit the allowable currents flowing via such connections (applied parts). This decreases the risk to the patient and increases the margin of safety. History There has never been a documented case of microshock. A U.S. Senate inquiry in the early 1970s, sparked by exaggerated reports of thousands of U.S. hospital patients dying of microshock, heard expert testimony about the effect. A review of the evidence in the early 2000s found that not a single case had been reported in the 30 years since the Senate inquiry. Regular checks of the FDA's MAUDE database also show no evidence of this risk being manifest, before or since the review. Based on studies with dogs by Prof Leslie Geddes in the middle of the last century, it is theorised that a current as low as 10 μA (microamperes) directly through the heart may send a human patient into ventricular fibrillation. Of course, the exact outcome is dependent on the duration of the current, the exact position of contact, the frequency of current oscillation, and the timing of the shock with the heart's rhythm e.g. R on T phenomenon. It is feared that such a small current may be introduced unwittingly, and unobserved, creating a very perilous situation for the patient. To guard against this slim theoretical possibility then, modern medical devices include a range of protective measures to limit current in cardiac-connected circuits to the assumed safe levels of below 10 μA (microamperes). These measures include isolated patient connections, high impedance connections and current limiting circuits. Despite the in-built protections, and lack of observed incidents, microshock continues to be a concern to many practitioners in the fields of Biomedical and Clinical Engineering. Despite the evidence of decades of absence of reports, in any condition where electrical conductors are run into the body in proximity to the heart (e.g. cardiac catheterizations) precautions are still taken to ensure hazardous current is not introduced through these conductors and it is still regarded as a high-risk activity.
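The claim above that "only a small voltage is required to drive the low current for micro-shock" can be illustrated with Ohm's law; the pathway impedance below is an assumed round number for illustration, not a physiological constant:

```python
# Sketch: why only a small voltage can, in theory, drive a hazardous
# micro-shock current. The pathway impedance is an assumed illustrative
# value; real catheter/tissue impedances vary widely.

THRESHOLD_A = 10e-6          # 10 microamperes, the commonly cited figure
pathway_ohms = 50_000        # assumed catheter/tissue pathway impedance

voltage_needed = THRESHOLD_A * pathway_ohms  # Ohm's law, V = I * R
print(f"{voltage_needed:.2f} V")  # 0.50 V under these assumptions
```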
See also Macroshock Electric Shock O'Meley P L Who's Afraid of Microshock - presentation to SMBE NSW Conference, Albury NSW, 2011 Hsu J The Hypertextbook http://hypertextbook.com/facts/2000/JackHsu.shtml accessed 23 July 2013 FDA MAUDE Database http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfmaude/TextSearch.cfm accessed 25 July 2013 Road Safety Report http://roadsafety.transport.nsw.gov.au/downloads/fatality_rate_1908_to_2009.pdf accessed 25 July 2013 IEC/TS 60479-1 - Effects of current on human beings and livestock https://webstore.iec.ch/publication/25402 References Biomedical engineering Cardiac electrophysiology
Microshock
[ "Engineering", "Biology" ]
1,556
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
3,870,862
https://en.wikipedia.org/wiki/Nodal%20admittance%20matrix
In power engineering, nodal admittance matrix (or just admittance matrix) is an N × N matrix describing a linear power system with N buses. It represents the nodal admittance of the buses in a power system. In realistic systems which contain thousands of buses, the admittance matrix is quite sparse. Each bus in a real power system is usually connected to only a few other buses through the transmission lines. The nodal admittance matrix is used in the formulation of the power flow problem. Construction from a single line diagram The nodal admittance matrix of a power system is a form of Laplacian matrix of the nodal admittance diagram of the power system, which is derived by the application of Kirchhoff's laws to the admittance diagram of the power system. Starting from the single line diagram of a power system, the nodal admittance diagram is derived by: replacing each line in the diagram with its equivalent admittance, and converting all voltage sources to their equivalent current sources. Consider an admittance graph with $N$ buses. The vector of bus voltages, $V$, is an $N \times 1$ vector where $V_k$ is the voltage of bus $k$, and the vector of bus current injections, $I$, is an $N \times 1$ vector where $I_k$ is the cumulative current injected at bus $k$ by all loads and sources connected to the bus. The admittance between buses $k$ and $i$ is a complex number $y_{ki}$, and is the sum of the admittance of all lines connecting buses $k$ and $i$. The admittance between bus $k$ and ground is $y_k$, and is the sum of the admittance of all the loads connected to bus $k$. Consider the current injection, $I_k$, into bus $k$. Applying Kirchhoff's current law, $I_k = I_{kk} + \sum_{i \neq k} I_{ki}$, where $I_{ki}$ is the current from bus $k$ to bus $i$ for $i \neq k$ and $I_{kk}$ is the current from bus $k$ to ground through the bus load. Applying Ohm's law to the admittance diagram, the bus voltages and the line and load currents are linked by the relation $I_{kk} = y_k V_k$ and $I_{ki} = y_{ki} (V_k - V_i)$. Therefore, $I_k = y_k V_k + \sum_{i \neq k} y_{ki} (V_k - V_i) = \left( y_k + \sum_{i \neq k} y_{ki} \right) V_k - \sum_{i \neq k} y_{ki} V_i$. This relation can be written succinctly in matrix form using the admittance matrix. The nodal admittance matrix is a matrix $Y$ such that bus voltage and current injection satisfy Ohm's law in vector format, $I = YV$. The entries of $Y$ are then determined by the equations for the current injections into buses, resulting in $Y_{kk} = y_k + \sum_{i \neq k} y_{ki}$ and $Y_{ki} = -y_{ki}$ for $i \neq k$. As an example, consider the admittance diagram of a fully connected three bus network of figure 1. The admittance matrix derived from the three bus network in the figure is: $Y = \begin{pmatrix} y_1 + y_{12} + y_{13} & -y_{12} & -y_{13} \\ -y_{12} & y_2 + y_{12} + y_{23} & -y_{23} \\ -y_{13} & -y_{23} & y_3 + y_{13} + y_{23} \end{pmatrix}$. The diagonal entries are called the self-admittances of the network nodes. The non-diagonal entries are the mutual admittances of the nodes corresponding to the subscripts of the entry. The admittance matrix is typically a symmetric matrix as $Y_{ki} = Y_{ik}$. However, extensions of the line model may make $Y$ asymmetrical. For instance, modeling phase-shifting transformers results in a Hermitian admittance matrix. Applications The admittance matrix is most often used in the formulation of the power flow problem. See also Admittance parameters Nodal analysis Zbus References External links A C/C++ Program and Source Code for Computing Ybus and Zbus Matrices Electrical engineering Electric power
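A minimal sketch of the construction described above, assembling Y from line and shunt admittances and applying Ohm's law in vector form; all numerical values are made up for the example:

```python
# Sketch: building a nodal admittance matrix following Y_kk = y_k + sum of
# line admittances at bus k, and Y_ki = -y_ki for the off-diagonal entries.
# The three-bus network values below are illustrative.
import numpy as np

N = 3
# line admittances y_ki between buses (0-indexed), in complex siemens
lines = {(0, 1): 1 - 4j, (0, 2): 0.5 - 2j, (1, 2): 1 - 5j}
shunts = np.array([0.05j, 0.05j, 0.02j])  # bus-to-ground admittances y_k

Y = np.zeros((N, N), dtype=complex)
for (k, i), y in lines.items():
    Y[k, k] += y          # each line adds to both self-admittances...
    Y[i, i] += y
    Y[k, i] -= y          # ...and appears negated off the diagonal
    Y[i, k] -= y
Y += np.diag(shunts)

# With Y assembled, bus current injections follow Ohm's law in vector form.
V = np.array([1.0, 0.98 - 0.02j, 0.99 - 0.01j])  # assumed bus voltages (p.u.)
I = Y @ V
print(np.round(I, 4))
```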
Nodal admittance matrix
[ "Physics", "Engineering" ]
617
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
2,072,393
https://en.wikipedia.org/wiki/Win%E2%80%93win%20game
In game theory, a win–win game or win–win scenario is a situation that produces a mutually beneficial outcome for two or more parties. It is also called a positive-sum game, as it is the opposite of a zero-sum game. If a win–win scenario is not achieved, the scenario becomes a lose–lose situation by default, since it has caused failure for at least one of the parties. While she did not coin the term, Mary Parker Follett's process of integration, described in her book Creative Experience (Longmans, Green & Co., 1924), forms the basis of what we now refer to as the idea of "win-win" conflict resolution. See also Abundance mentality Game Cooperative game Group-dynamic game Zero-sum game No-win situation References Game theory game classes Personal development Negotiation Dispute resolution Metaphors referring to war and violence
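A small sketch of the positive-sum idea; the payoff numbers are illustrative, not taken from any particular game:

```python
# Sketch: distinguishing win-win from other outcomes in a 2x2 game.
# Payoffs are illustrative. Here an outcome counts as "win-win" when every
# player's payoff is positive; in a zero-sum game payoffs would sum to zero.

outcomes = {  # (row action, col action) -> (row payoff, col payoff)
    ("cooperate", "cooperate"): (3, 3),   # both gain: win-win, positive-sum
    ("cooperate", "defect"):    (-1, 2),
    ("defect",    "cooperate"): (2, -1),
    ("defect",    "defect"):    (0, 0),
}

for actions, payoff in outcomes.items():
    win_win = all(p > 0 for p in payoff)
    print(actions, payoff, "sum =", sum(payoff), "win-win" if win_win else "")
```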
Win–win game
[ "Mathematics", "Biology" ]
179
[ "Personal development", "Behavior", "Game theory", "Game theory game classes", "Human behavior" ]
2,072,472
https://en.wikipedia.org/wiki/Principal%20curvature
In differential geometry, the two principal curvatures at a given point of a surface are the maximum and minimum values of the curvature as expressed by the eigenvalues of the shape operator at that point. They measure how the surface bends by different amounts in different directions at that point. Discussion At each point p of a differentiable surface in 3-dimensional Euclidean space one may choose a unit normal vector. A normal plane at p is one that contains the normal vector, and will therefore also contain a unique direction tangent to the surface and cut the surface in a plane curve, called a normal section. This curve will in general have different curvatures for different normal planes at p. The principal curvatures at p, denoted k1 and k2, are the maximum and minimum values of this curvature. Here the curvature of a curve is by definition the reciprocal of the radius of the osculating circle. The curvature is taken to be positive if the curve turns in the same direction as the surface's chosen normal, and otherwise negative. The directions in the normal plane where the curvature takes its maximum and minimum values are always perpendicular, if k1 does not equal k2, a result of Euler (1760), and are called principal directions. From a modern perspective, this theorem follows from the spectral theorem because these directions are the principal axes of a symmetric tensor—the second fundamental form. A systematic analysis of the principal curvatures and principal directions was undertaken by Gaston Darboux, using Darboux frames. The product k1k2 of the two principal curvatures is the Gaussian curvature, K, and the average (k1 + k2)/2 is the mean curvature, H. If at least one of the principal curvatures is zero at every point, then the Gaussian curvature will be 0 and the surface is a developable surface. For a minimal surface, the mean curvature is zero at every point. Formal definition Let M be a surface in Euclidean space with second fundamental form $\mathrm{II}(X, Y)$. Fix a point p ∈ M, and an orthonormal basis X1, X2 of tangent vectors at p. Then the principal curvatures are the eigenvalues of the symmetric matrix $\begin{pmatrix} \mathrm{II}(X_1, X_1) & \mathrm{II}(X_1, X_2) \\ \mathrm{II}(X_2, X_1) & \mathrm{II}(X_2, X_2) \end{pmatrix}$. If X1 and X2 are selected so that the matrix is a diagonal matrix, then they are called the principal directions. If the surface is oriented, then one often requires that the pair (X1, X2) be positively oriented with respect to the given orientation. Without reference to a particular orthonormal basis, the principal curvatures are the eigenvalues of the shape operator, and the principal directions are its eigenvectors. Generalizations For hypersurfaces in higher-dimensional Euclidean spaces, the principal curvatures may be defined in a directly analogous fashion. The principal curvatures are the eigenvalues of the matrix of the second fundamental form in an orthonormal basis of the tangent space. The principal directions are the corresponding eigenvectors. Similarly, if M is a hypersurface in a Riemannian manifold N, then the principal curvatures are the eigenvalues of its second-fundamental form. If k1, ..., kn are the n principal curvatures at a point p ∈ M and X1, ..., Xn are corresponding orthonormal eigenvectors (principal directions), then the sectional curvature of M at p is given by $K(X_i, X_j) = k_i k_j$ for all $i \neq j$. Classification of points on a surface At elliptical points, both principal curvatures have the same sign, and the surface is locally convex. At umbilic points, both principal curvatures are equal and every tangent vector can be considered a principal direction.
These typically occur at isolated points. At hyperbolic points, the principal curvatures have opposite signs, and the surface will be locally saddle shaped. At parabolic points, one of the principal curvatures is zero. Parabolic points generally lie in a curve separating elliptical and hyperbolic regions. At flat umbilic points both principal curvatures are zero. A generic surface will not contain flat umbilic points. The monkey saddle is one surface with an isolated flat umbilic. Line of curvature The lines of curvature or curvature lines are curves which are always tangent to a principal direction (they are integral curves for the principal direction fields). There will be two lines of curvature through each non-umbilic point and the lines will cross at right angles. In the vicinity of an umbilic the lines of curvature typically form one of three configurations: star, lemon, and monstar (derived from lemon-star). These points are also called Darbouxian Umbilics (D1, D2, D3) in honor of Gaston Darboux, the first to make a systematic study in Vol. 4, p 455, of his Leçons (1896). In these figures, the red curves are the lines of curvature for one family of principal directions, and the blue curves for the other. When a line of curvature has a local extremum of the same principal curvature then the curve has a ridge point. These ridge points form curves on the surface called ridges. The ridge curves pass through the umbilics. For the star pattern either three or one ridge lines pass through the umbilic; for the monstar and lemon, only one ridge passes through. Applications Principal curvature directions, along with the surface normal, define a 3D orientation frame at a surface point. For example, in the case of a cylindrical surface, by physically touching or visually observing, we know that along one specific direction the surface is flat (parallel to the axis of the cylinder) and hence take note of the orientation of the surface. The implication of such an orientation frame at each surface point means any rotation of the surfaces over time can be determined simply by considering the change in the corresponding orientation frames. This has resulted in single surface point motion estimation and segmentation algorithms in computer vision. See also Earth radius#Principal sections Euler's theorem (differential geometry) References Further reading External links Historical Comments on Monge's Ellipsoid and the Configuration of Lines of Curvature on Surfaces Immersed in R3 Curvature (mathematics) Differential geometry of surfaces Surfaces
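For a surface given as a Monge patch z = f(x, y), the principal curvatures can be computed numerically as the eigenvalues of the shape operator built from the two fundamental forms, matching the formal definition above. A minimal sketch, checked against a parabolic cylinder whose curvatures at the origin are 1 and 0:

```python
# Sketch: principal curvatures of a Monge patch z = f(x, y) at a point,
# as eigenvalues of the shape operator I^{-1} II built from the first and
# second fundamental forms.
import numpy as np

def principal_curvatures(fx, fy, fxx, fxy, fyy):
    E, F, G = 1 + fx**2, fx * fy, 1 + fy**2          # first fundamental form
    w = np.sqrt(1 + fx**2 + fy**2)
    L, M, Nn = fxx / w, fxy / w, fyy / w              # second fundamental form
    I = np.array([[E, F], [F, G]])
    II = np.array([[L, M], [M, Nn]])
    shape_op = np.linalg.solve(I, II)                 # shape operator
    return np.sort(np.linalg.eigvals(shape_op).real)

# f(x, y) = x^2 / 2 at the origin: fx = fy = 0, fxx = 1, fxy = fyy = 0
print(principal_curvatures(0.0, 0.0, 1.0, 0.0, 0.0))  # [0. 1.]
```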
Principal curvature
[ "Physics" ]
1,271
[ "Geometric measurement", "Physical quantities", "Curvature (mathematics)" ]
2,073,059
https://en.wikipedia.org/wiki/Mitotoxin
A mitotoxin is a cytotoxic molecule targeted to specific cells by a mitogen. Mitotoxins are generally found in snake venom. They are responsible for mediating cell death by interfering with protein or DNA synthesis. Some mechanisms by which mitotoxins can interfere with DNA or protein synthesis include the inactivation of ribosomes or the inhibition of complexes in the mitochondrial electron transport chain. These toxins have a very high affinity and level of specificity for the receptors that they bind to. Mitotoxins bind to receptors on cell surfaces and are then internalized into cells via receptor-mediated endocytosis. Once in the endosome, the receptor releases its ligand and a mitotoxin can mediate cell death. There are different classes of mitotoxins, each acting on a different type of cell or system. The mitotoxin classes that have been identified thus far include: interleukin-based, transferrin based, epidermal growth factor-based, nerve growth factor-based, insulin-like growth factor-I-based, and fibroblast growth factor-based mitotoxins. Because of the high affinity and specificity of mitotoxin binding, mitotoxins present the possibility of creating precise therapeutic agents. One major possibility is the potential use of growth factor-based mitotoxins as anti-neoplastic agents that can modulate the growth of melanomas. References Molecular biology
Mitotoxin
[ "Chemistry", "Biology" ]
301
[ "Biochemistry", "Molecular biology" ]
2,073,140
https://en.wikipedia.org/wiki/Semileptonic%20decay
In particle physics the semileptonic decay of a hadron is a decay caused by the weak force in which one lepton (and the corresponding neutrino) is produced in addition to one or more hadrons. An example of this is $K^+ \to \pi^0 + e^+ + \nu_e$. This is to be contrasted with purely hadronic decays, such as $K^+ \to \pi^+ + \pi^0$, which are also mediated by the weak force. Semileptonic decays of neutral kaons have been used to study kaon oscillations. See also Kaon Pion CP violation CPT symmetry References Electroweak theory
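Assuming the standard kaon example above (the specific symbols were reconstructed from context, and this decay is the usual textbook illustration), a quick bookkeeping check shows what makes the decay semileptonic: charge and lepton number balance, with exactly one charged lepton and its neutrino beside the hadron:

```latex
% Conservation check for the assumed example decay:
\begin{aligned}
K^+ &\to \pi^0 + e^+ + \nu_e \\
\text{charge:} \quad +1 &= 0 + (+1) + 0 \\
\text{lepton number:} \quad 0 &= 0 + (-1) + (+1) \quad (e^+ \text{ carries } L = -1)
\end{aligned}
```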
Semileptonic decay
[ "Physics" ]
121
[ "Physical phenomena", "Electroweak theory", "Fundamental interactions", "Particle physics", "Particle physics stubs" ]
2,073,235
https://en.wikipedia.org/wiki/Spallation%20Neutron%20Source
The Spallation Neutron Source (SNS) is an accelerator-based neutron source facility in the U.S. that provides the most intense pulsed neutron beams in the world for scientific research and industrial development. Each year, the facility hosts hundreds of researchers from universities, national laboratories, and industry, who conduct basic and applied research and technology development using neutrons. SNS is part of Oak Ridge National Laboratory, which is managed by UT-Battelle for the United States Department of Energy (DOE). SNS is a DOE Office of Science user facility, and it is open to scientists and researchers from all over the world. Neutron scattering research Neutron scattering allows scientists to count scattered neutrons, measure their energies and the angles at which they scatter, and map their final positions. This information can reveal the molecular and magnetic structure and behavior of materials, such as high-temperature superconductors, polymers, metals, and biological samples. In addition to studies focused on fundamental physics, neutron scattering research has applications in structural biology and biotechnology, magnetism and superconductivity, chemical and engineering materials, nanotechnology, complex fluids, and others. Spallation process The spallation process at SNS begins with negatively charged hydrogen ions that are produced by an ion source. Each ion consists of a proton orbited by two electrons. The ions are injected into a linear particle accelerator which accelerates them to an energy of about one GeV (or to about 90% the speed of light). The ions pass through a foil which strips off each ion's two electrons, converting it to a proton. The protons pass into a ring-shaped structure, a proton accumulator ring, where they spin around at very high speeds and accumulate in "bunches." Each bunch of protons is released from the ring as a pulse, at a rate of 60 times per second (60 hertz). The high-energy proton pulses strike a target of liquid mercury, where spallation occurs. The spalled neutrons are then slowed in a moderator and guided through beam lines to areas containing special instruments where they are used in a wide variety of experiments. History Most of the world's neutron sources were built decades ago, and although the uses and demand for neutrons have increased throughout the years, few new sources have been built. To fill that need for a new, improved neutron source, the DOE Office of Basic Energy Sciences funded the construction of SNS, which would provide the most intense pulsed neutron beams in the world for scientific research and industrial development. The construction of SNS was a partnership of six DOE national laboratories: Argonne, Brookhaven, Lawrence Berkeley, Los Alamos, Oak Ridge, and Jefferson. This collaboration was one of the largest of its kind in U.S. scientific history and was used to bring together the best minds and experience from many different fields. After more than five years of construction and a cost of $1.4 billion, SNS was completed in April 2006. The first three instruments began commissioning and were available to the scientific community in August 2007. As of 2017, 20 instruments have been completed, and SNS is hosting about 1,400 researchers per year. See also Materials science Neutron detection Neutron electric dipole moment Neutron facilities Neutron scattering NPDGamma experiment References External links T. E. 
Mason et al., "The Spallation Neutron Source: A Powerful Tool for Materials Research," arXiv:physics/0007068v1. "SNS: Neutrons for 'molecular movies,'" Symmetry, vol. 03(05), Jun/Jul, 2006. Types of magnets Materials science organizations Nuclear physics Particle physics facilities United States Department of Energy Oak Ridge National Laboratory Scattering Superconductivity Neutron facilities Buildings and structures in Roane County, Tennessee Neutron sources Fixed-target experiments
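The "about 90% the speed of light" figure quoted in the spallation process section above can be checked with two lines of special relativity, taking the roughly 1 GeV kinetic energy stated in the text:

```python
# Sketch: checking the "about 90% of the speed of light" figure for a
# proton with ~1 GeV of kinetic energy, using special relativity.

M_P_MEV = 938.272          # proton rest energy, MeV
kinetic_mev = 1000.0       # ~1 GeV from the linac

gamma = 1 + kinetic_mev / M_P_MEV          # total energy / rest energy
beta = (1 - 1 / gamma**2) ** 0.5           # v / c
print(f"v = {beta:.3f} c")                 # ~0.875 c, i.e. about 90% of c
```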
Spallation Neutron Source
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
784
[ "Physical quantities", "Superconductivity", "Materials science", "Scattering", "Materials science organizations", "Condensed matter physics", "Particle physics", "Nuclear physics", "Electrical resistance and conductance" ]
2,075,246
https://en.wikipedia.org/wiki/Correlation%20dimension
In chaos theory, the correlation dimension (denoted by ν) is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. For example, if we have a set of random points on the real number line between 0 and 1, the correlation dimension will be ν = 1, while if they are distributed on, say, a triangle embedded in three-dimensional space (or m-dimensional space), the correlation dimension will be ν = 2. This is what we would intuitively expect from a measure of dimension. The real utility of the correlation dimension is in determining the (possibly fractional) dimensions of fractal objects. There are other methods of measuring dimension (e.g. the Hausdorff dimension, the box-counting dimension, and the information dimension) but the correlation dimension has the advantage of being straightforwardly and quickly calculated, of being less noisy when only a small number of points is available, and is often in agreement with other calculations of dimension. For any set of N points in an m-dimensional space, the correlation integral C(ε) is calculated by $C(\varepsilon) = \lim_{N \to \infty} \frac{g}{N^2}$, where g is the total number of pairs of points which have a distance between them that is less than distance ε (a graphical representation of such close pairs is the recurrence plot). As the number of points tends to infinity, and the distance between them tends to zero, the correlation integral, for small values of ε, will take the form $C(\varepsilon) \sim \varepsilon^{\nu}$. If the number of points is sufficiently large, and evenly distributed, a log-log graph of the correlation integral versus ε will yield an estimate of ν. This idea can be qualitatively understood by realizing that for higher-dimensional objects, there will be more ways for points to be close to each other, and so the number of pairs close to each other will rise more rapidly for higher dimensions. Grassberger and Procaccia introduced the technique in 1983; the article gives the results of such estimates for a number of fractal objects, as well as comparing the values to other measures of fractal dimension. The technique can be used to distinguish between (deterministic) chaotic and truly random behavior, although it may not be good at detecting deterministic behavior if the deterministic generating mechanism is very complex. As an example, in the "Sun in Time" article, the method was used to show that the number of sunspots on the sun, after accounting for the known cycles such as the daily and 11-year cycles, is very likely not random noise, but rather chaotic noise, with a low-dimensional fractal attractor. See also Takens' theorem Correlation integral Recurrence quantification analysis Approximate entropy Notes Chaos theory Dynamical systems Dimension theory Fractals
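A minimal Grassberger–Procaccia sketch of the estimate described above: compute the correlation integral at several scales and read ν off the log-log slope. For uniform random points on a line the estimate should come out near 1:

```python
# Sketch of a Grassberger-Procaccia estimate of the correlation dimension:
# count close pairs at several scales epsilon and fit the log-log slope.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
points = rng.random((2000, 1))             # random points on the line [0, 1]

dists = pdist(points)                      # all pairwise distances
eps = np.logspace(-3, -1, 10)              # scales to probe
C = np.array([(dists < e).mean() for e in eps])  # correlation integral C(eps)

nu, _ = np.polyfit(np.log(eps), np.log(C), 1)    # slope of log C vs log eps
print(f"estimated correlation dimension: {nu:.2f}")  # ~1.0
```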
Correlation dimension
[ "Physics", "Mathematics" ]
573
[ "Functions and mappings", "Mathematical analysis", "Mathematical objects", "Fractals", "Mathematical relations", "Mechanics", "Dynamical systems" ]
2,075,960
https://en.wikipedia.org/wiki/Ferroelectric%20RAM
Ferroelectric RAM (FeRAM, F-RAM or FRAM) is a random-access memory similar in construction to DRAM but using a ferroelectric layer instead of a dielectric layer to achieve non-volatility. FeRAM is one of a growing number of alternative non-volatile random-access memory technologies that offer the same functionality as flash memory. An FeRAM chip contains a thin film of ferroelectric material, often lead zirconate titanate, commonly referred to as PZT. The atoms in the PZT layer change polarity in an electric field, thereby producing a power-efficient binary switch. However, the most important aspect of the PZT is that it is not affected by power disruption or magnetic interference, making FeRAM a reliable nonvolatile memory. FeRAM's advantages over Flash include: lower power usage, faster write speeds and a much greater maximum read/write endurance (about 10^10 to 10^15 cycles). FeRAMs have data retention times of more than 10 years at +85 °C (up to many decades at lower temperatures). Marked disadvantages of FeRAM are much lower storage densities than flash devices, storage capacity limitations and higher cost. Like DRAM, FeRAM's read process is destructive, necessitating a write-after-read architecture. History Ferroelectric RAM was proposed by MIT graduate student Dudley Allen Buck in his master's thesis, Ferroelectrics for Digital Information Storage and Switching, published in 1952. In 1955, Bell Telephone Laboratories was experimenting with ferroelectric-crystal memories. Following the introduction of metal–oxide–semiconductor (MOS) dynamic random-access memory (DRAM) chips in the early 1970s, development of FeRAM began in the late 1980s. Work was done in 1991 at NASA's Jet Propulsion Laboratory (JPL) on improving methods of read out, including a novel method of non-destructive readout using pulses of UV radiation. FeRAM was commercialized in the mid-1990s. In 1994, video game company Sega used FeRAM chips to store saved games in Sonic the Hedgehog 3, which shipped several million game cartridges that year. In 1996, Samsung Electronics introduced a 4Mb FeRAM chip fabricated using NMOS logic. In 1998, Hyundai Electronics (now SK Hynix) also commercialized FeRAM technology. The earliest known commercial product to use FeRAM is Sony's PlayStation 2 Memory Card (8MB), released in 2000. The Memory Card's microcontroller (MCU) manufactured by Toshiba contained 32kb (4 kB) embedded FeRAM fabricated using a 500 nm complementary MOS (CMOS) process. A major modern FeRAM manufacturer is Ramtron, a fabless semiconductor company. One major licensee is Fujitsu, who operates one of the largest semiconductor foundry production lines with FeRAM capability. Since 1999 they have been using this line to produce standalone FeRAMs, as well as specialized chips (e.g. chips for smart cards) with embedded FeRAMs. Fujitsu produced devices for Ramtron until 2010. Since 2010 Ramtron's fabricators have been TI (Texas Instruments) and IBM. Since at least 2001 Texas Instruments has collaborated with Ramtron to develop FeRAM test chips in a modified 130 nm process. In the fall of 2005, Ramtron reported that they were evaluating prototype samples of an 8-megabit FeRAM manufactured using Texas Instruments' FeRAM process. Fujitsu and Seiko-Epson were in 2005 collaborating in the development of a 180 nm FeRAM process. In 2012 Ramtron was acquired by Cypress Semiconductor.
FeRAM research projects have also been reported at Samsung, Matsushita, Oki, Toshiba, Infineon, Hynix, Symetrix, Cambridge University, University of Toronto, and the Interuniversity Microelectronics Centre (IMEC, Belgium). Description Conventional DRAM consists of a grid of small capacitors and their associated wiring and signaling transistors. Each storage element, a cell, consists of one capacitor and one transistor, a so-called "1T-1C" device. The 1T-1C storage cell design in a FeRAM is similar in construction to the storage cell in DRAM, in that both cell types include one capacitor and one access transistor. In a DRAM cell capacitor, a linear dielectric is used, whereas in a FeRAM cell capacitor the dielectric structure includes ferroelectric material, typically lead zirconate titanate (PZT). A ferroelectric material has a nonlinear relationship between the applied electric field and the apparently stored charge. Specifically, the ferroelectric characteristic has the form of a hysteresis loop, which is very similar in shape to the hysteresis loop of ferromagnetic materials. The dielectric constant of a ferroelectric is typically much higher than that of a linear dielectric because of the effects of semi-permanent electric dipoles formed in the crystal structure of the ferroelectric material. When an external electric field is applied across a dielectric, the dipoles tend to align themselves with the field direction, produced by small shifts in the positions of atoms and shifts in the distributions of electronic charge in the crystal structure. After the charge is removed, the dipoles retain their polarization state. Binary "0"s and "1"s are stored as one of two possible electric polarizations in each data storage cell. For example, in the figure a "1" is encoded using the negative remnant polarization "-Pr", and a "0" is encoded using the positive remnant polarization "+Pr". In terms of operation, FeRAM is similar to DRAM. Writing is accomplished by applying a field across the ferroelectric layer by charging the plates on either side of it, forcing the atoms inside into the "up" or "down" orientation (depending on the polarity of the charge), thereby storing a "1" or "0". Reading, however, is somewhat different than in DRAM. The transistor forces the cell into a particular state, say "0". If the cell already held a "0", nothing will happen in the output lines. If the cell held a "1", the re-orientation of the atoms in the film will cause a brief pulse of current in the output as they push electrons out of the metal on the "down" side. The presence of this pulse means the cell held a "1". Since this process overwrites the cell, reading FeRAM is a destructive process, and requires the cell to be re-written. In general, the operation of FeRAM is similar to ferrite core memory, one of the primary forms of computer memory in the 1960s. However, compared to core memory, FeRAM requires far less power to flip the state of the polarity and does so much faster. Comparison with other memory types Density The main determinant of a memory system's cost is the density of the components used to make it up. Smaller components, and fewer of them, means that more cells can be packed onto a single chip, which in turn means more can be produced at once from a single silicon wafer. This improves yield, which is directly related to cost. The lower limit to this scaling process is an important point of comparison. 
In general, the technology that scales to the smallest cell size will end up being the least expensive per bit. In terms of construction, FeRAM and DRAM are similar, and can in general be built on similar lines at similar sizes. In both cases, the lower limit seems to be defined by the amount of charge needed to trigger the sense amplifiers. For DRAM, this appears to be a problem at around 55 nm, at which point the charge stored in the capacitor is too small to be detected. It is not clear whether FeRAM can scale to the same size, as the charge density of the PZT layer may not be the same as the metal plates in a normal capacitor. An additional limitation on size is that materials tend to stop being ferroelectric when they are too small. (This effect is related to the ferroelectric's "depolarization field".) There is ongoing research into addressing the problem of stabilizing ferroelectric materials; one approach, for example, uses molecular adsorbates. To date, commercial FeRAM devices have been produced at 350 nm and 130 nm. Early models required two FeRAM cells per bit, leading to very low densities, but this limitation has since been removed. Power consumption The key advantage of FeRAM over DRAM is what happens between the read and write cycles. In DRAM, the charge deposited on the metal plates leaks across the insulating layer and the control transistor, and disappears. In order for a DRAM to store data for anything other than a very short time, every cell must be periodically read and then re-written, a process known as refresh. Each cell must be refreshed many times every second (typically 16 times per second), and this requires a continuous supply of power. In contrast, FeRAM only requires power when actually reading or writing a cell. The vast majority of power used in DRAM is used for refresh, so it seems reasonable to suggest that the benchmark quoted by STT-MRAM researchers is useful here too, indicating power usage about 99% lower than DRAM. The destructive read aspect of FeRAM may put it at a disadvantage compared to MRAM, however. Another non-volatile memory type is flash, and like FeRAM it does not require a refresh process. Flash works by pushing electrons across a high-quality insulating barrier where they get "stuck" on one terminal of a transistor. This process requires high voltages, which are built up in a charge pump over time. This means that FeRAM can be expected to use less power than flash, at least for writing, as the write power in FeRAM is only marginally higher than the read power. For a "mostly-read" device the difference might be slight, but for devices with more balanced read and write the difference could be expected to be much higher. Reliability Unlike MRAM, F-RAM retains data reliably even in high-magnetic-field environments. Cypress Semiconductor's F-RAM devices are immune to strong magnetic fields and do not show any failures under the maximum available magnetic field strengths (3,700 gauss for horizontal insertion and 2,000 gauss for vertical insertion). In addition, the F-RAM devices allow rewriting with a different data pattern after exposure to the magnetic fields. Speed DRAM speed is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing). In general, this ends up being defined by the capability of the control transistors, the capacitance of the lines carrying power to the cells, and the heat that power generates. 
FeRAM is based on the physical movement of atoms in response to an external field, which is extremely fast, averaging about 1 ns. In theory, this means that FeRAM could be much faster than DRAM. However, since power has to flow into the cell for reading and writing, the electrical and switching delays would likely be similar to those of DRAM overall. It does seem reasonable to suggest that FeRAM would require less charge than DRAM, because a DRAM cell needs to hold its charge against leakage, whereas an FeRAM cell is read before its charge has had time to drain. However, there is a delay in writing because the charge has to flow through the control transistor, which limits current somewhat. In comparison to flash, the advantages are much more obvious. Whereas the read operation is likely to be similar in speed, the charge pump used for writing requires a considerable time to "build up" current, a process that FeRAM does not need. Flash memories commonly need a millisecond or more to complete a write, whereas current FeRAMs may complete a write in less than 150 ns. On the other hand, FeRAM has its own reliability issues, including imprint and fatigue. Imprint is a preference for the polarization state set by previous writes, and fatigue is an increase in the minimum writing voltage due to loss of polarization after extensive cycling. The theoretical speed of FeRAM is not entirely clear. Existing 350 nm devices have read times on the order of 50–60 ns. Although slow compared to modern DRAMs, which can be found with times on the order of 20 ns, common 350 nm DRAMs operated with a read time of about 35 ns, so FeRAM speed appears to be comparable given the same fabrication technology. Applications Datalogger in portable/implantable medical devices, as FRAM consumes less energy than other non-volatile memories such as EEPROM. Event data recorder in automotive systems, to capture critical system data even in the case of a crash or failure. FRAM is used in smart meters for its fast writes and high endurance. In industrial PLCs, FRAM is an ideal replacement for battery-backed SRAM (BBSRAM) and EEPROM to log machine data such as CNC machine tool position. Market FeRAM remains a relatively small part of the overall semiconductor market. In 2005, worldwide semiconductor sales were US$235 billion (according to the Gartner Group), with the flash memory market accounting for US$18.6 billion (according to IC Insights). The 2005 annual sales of Ramtron, perhaps the largest FeRAM vendor, were reported to be US$32.7 million. The much larger sales of flash memory compared to the alternative NVRAMs support a much larger research and development effort. Flash memory was produced using semiconductor linewidths of 30 nm at Samsung (2007), while FeRAMs were produced at linewidths of 350 nm at Fujitsu and 130 nm at Texas Instruments (2007). Flash memory cells can store multiple bits per cell (currently 4 in the highest-density NAND flash devices), and the number of bits per flash cell is projected to increase to 8 as a result of innovations in flash cell design. As a consequence, the areal bit densities of flash memory are much higher than those of FeRAM, and thus the cost per bit of flash memory is orders of magnitude lower than that of FeRAM. The density of FeRAM arrays might be increased by improvements in FeRAM foundry process technology and cell structures, such as the development of vertical capacitor structures (in the same way as DRAM) to reduce the area of the cell footprint. 
However, reducing the cell size may cause the data signal to become too weak to be detectable. In 2005, Ramtron reported significant sales of its FeRAM products in a variety of sectors including (but not limited to) electricity meters, automotive (e.g. black boxes, smart air bags), business machines (e.g. printers, RAID disk controllers), instrumentation, medical equipment, industrial microcontrollers, and radio frequency identification tags. The other emerging NVRAMs, such as MRAM, may seek to enter similar niche markets in competition with FeRAM. Texas Instruments proved it possible to embed FeRAM cells using two additional masking steps during conventional CMOS semiconductor manufacture. Flash typically requires nine masks. This makes possible, for example, the integration of FeRAM onto microcontrollers, where a simplified process would reduce costs. However, the materials used to make FeRAMs are not commonly used in CMOS integrated circuit manufacturing. Both the PZT ferroelectric layer and the noble metals used for electrodes raise CMOS process compatibility and contamination issues. Texas Instruments has incorporated FRAM into the MSP430 microcontrollers in its FRAM series. Capacity timeline As of 2021, the largest chips sold by the various vendors stored no more than 16 Mb. See also Magnetic-core memory MRAM nvSRAM Phase-change memory Programmable metallization cell Memristor Racetrack memory Bubble memory References External links FRAM (FeRAM) Cypress FRAM (FeRAM) application community sponsored by Ramtron (in Chinese) FRAM overview by Fujitsu FeRAM tutorial by the Department of Electrical and Computer Engineering at the University of Toronto FRAM operation and technology tutorial IC Chips Types of RAM Non-volatile memory Ferroelectric materials
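The destructive read and mandatory write-back described in the Description section can be illustrated with a toy model. The following Python sketch is purely illustrative — the class and its methods are invented for this example and do not correspond to any vendor API:

```python
class FeRAMCell:
    """Toy model of a 1T-1C ferroelectric cell with destructive read."""

    def __init__(self):
        self.polarization = 0  # remnant polarization encodes the stored bit

    def write(self, bit):
        # An applied field forces the dipoles "up" or "down".
        self.polarization = bit

    def read(self):
        # Force the cell to 0; a current pulse appears only if it held a 1.
        pulse_detected = (self.polarization == 1)
        self.polarization = 0          # the read destroyed the contents...
        bit = 1 if pulse_detected else 0
        self.write(bit)                # ...so a write-back must follow
        return bit


cell = FeRAMCell()
cell.write(1)
assert cell.read() == 1   # value survives only because of the write-back
assert cell.read() == 1
```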
Ferroelectric RAM
[ "Physics", "Materials_science" ]
3,426
[ "Physical phenomena", "Ferroelectric materials", "Materials", "Electrical phenomena", "Hysteresis", "Matter" ]
30,549,271
https://en.wikipedia.org/wiki/Crab%20%28unit%29
A Crab is a standard astrophotometrical unit for measurement of the intensity of astrophysical X-ray sources. One Crab is defined as the intensity of the Crab Nebula at the corresponding X-ray photon energy. The Crab Nebula, and the Crab Pulsar within it, is an intense space X-ray source. It is used as a standard candle in the calibration procedure of X-ray instruments in space. However, because of the Crab Nebula's variable intensity at different X-ray energies, conversion of the Crab to other units depends on the X-ray energy range of interest. In the photon energy range from 2 to 10 keV, 1 Crab equals 2.4 × 10⁻⁸ erg cm⁻² s⁻¹ = 15 keV cm⁻² s⁻¹ = 2.4 × 10⁻¹¹ W m⁻². For energies greater than ~30 keV, the Crab Nebula becomes unsuitable for calibration purposes, as its flux can no longer be characterized by a single coherent model. The unit mCrab, or milliCrab, is sometimes used instead of the Crab. References Crab Nebula Nebulae Units of measurement in astronomy
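Using the 2–10 keV equivalences quoted above, a small conversion helper can be written. This is a sketch only: the function name is invented, and the factors are valid solely for the 2–10 keV band, since the nebula's spectrum makes the conversion energy-dependent.

```python
# Conversion factors for the 2-10 keV band, from the equivalences above.
ERG_CM2_S_PER_CRAB = 2.4e-8    # erg cm^-2 s^-1
W_M2_PER_CRAB = 2.4e-11        # W m^-2
KEV_CM2_S_PER_CRAB = 15.0      # keV cm^-2 s^-1

def crab_to_flux(intensity_crab, unit="erg"):
    """Convert a 2-10 keV intensity given in Crab to a physical flux.

    Intensities quoted in mCrab should be divided by 1000 first.
    """
    factors = {
        "erg": ERG_CM2_S_PER_CRAB,
        "W": W_M2_PER_CRAB,
        "keV": KEV_CM2_S_PER_CRAB,
    }
    return intensity_crab * factors[unit]

# A 5 mCrab source in the 2-10 keV band:
print(crab_to_flux(0.005))          # 1.2e-10 erg cm^-2 s^-1
print(crab_to_flux(0.005, "W"))     # 1.2e-13 W m^-2
```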
Crab (unit)
[ "Astronomy", "Mathematics" ]
240
[ "Nebulae", "Quantity", "Units of measurement in astronomy", "Astronomical objects", "Units of measurement" ]
30,552,217
https://en.wikipedia.org/wiki/Stellar%20mass
Stellar mass is a phrase that is used by astronomers to describe the mass of a star. It is usually enumerated in terms of the Sun's mass as a proportion of a solar mass (M☉). Hence, the bright star Sirius has around 2 M☉. A star's mass will vary over its lifetime as mass is lost with the stellar wind or ejected via pulsational behavior, or if additional mass is accreted, such as from a companion star. Properties Stars are sometimes grouped by mass based upon their evolutionary behavior as they approach the end of their nuclear fusion lifetimes. Very-low-mass stars with masses below 0.5 M☉ do not enter the asymptotic giant branch (AGB) but evolve directly into white dwarfs. (At least in theory; the lifetimes of such stars are long enough—longer than the age of the universe to date—that none has yet had time to evolve to this point and be observed.) Low-mass stars with a mass below about 1.8–2.2 M☉ (depending on composition) do enter the AGB, where they develop a degenerate helium core. Intermediate-mass stars undergo helium fusion and develop a degenerate carbon–oxygen core. Massive stars have a minimum mass of 5–10 M☉. These stars undergo carbon fusion, with their lives ending in a core-collapse supernova explosion. Black holes created as a result of a stellar collapse are termed stellar-mass black holes. The combination of the radius and the mass of a star determines the surface gravity. Giant stars have a much lower surface gravity than main sequence stars, while the opposite is the case for degenerate, compact stars such as white dwarfs. The surface gravity can influence the appearance of a star's spectrum, with higher gravity causing a broadening of the absorption lines. Range One of the most massive stars known is Eta Carinae, with an estimated 100–150 M☉; its lifespan is very short—only several million years at most. A study of the Arches Cluster suggests that 150 M☉ is the upper limit for stars in the current era of the universe. The reason for this limit is not precisely known, but it is partially due to the Eddington luminosity, which defines the maximum amount of luminosity that can pass through the atmosphere of a star without ejecting the gases into space. However, a star named R136a1 in the RMC 136a star cluster has been measured at 215 M☉, putting this limit into question. A study has determined that stars larger than 150 M☉ in R136 were created through the collision and merger of massive stars in close binary systems, providing a way to sidestep the 150 M☉ limit. The first stars to form after the Big Bang may have been larger, up to 300 M☉ or more, due to the complete absence of elements heavier than lithium in their composition. This generation of supermassive, population III stars is long extinct, however, and currently only theoretical. With a mass only 93 times that of Jupiter (MJ), or about 0.09 M☉, AB Doradus C, a companion to AB Doradus A, is the smallest known star undergoing nuclear fusion in its core. For stars with similar metallicity to the Sun, the theoretical minimum mass the star can have, and still undergo fusion at the core, is estimated to be about 75 MJ. When the metallicity is very low, however, a recent study of the faintest stars found that the minimum star size seems to be about 8.3% of the solar mass, or about 87 MJ. Smaller bodies are called brown dwarfs, which occupy a poorly defined grey area between stars and gas giants. Change The Sun is losing mass from the emission of electromagnetic energy and by the ejection of matter with the solar wind. It is expelling about (2–3) × 10⁻¹⁴ M☉ per year. 
The mass loss rate will increase when the Sun enters the red giant stage, and will increase further as it reaches the tip of the red-giant branch and climbs the asymptotic giant branch, before peaking at a rate of 10⁻⁵ to 10⁻⁴ M☉ y⁻¹ as the Sun generates a planetary nebula. By the time the Sun becomes a degenerate white dwarf, it will have lost 46% of its starting mass. References Mass Concepts in stellar astronomy Mass
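The mass groupings described in the Properties section can be turned into a rough classifier. The sketch below is illustrative only: the real boundaries are composition-dependent ranges (1.8–2.2 M☉ and 5–10 M☉), so the single cutoff values here are assumptions chosen for simplicity.

```python
def classify_star(mass_msun, low_cut=2.0, massive_cut=8.0):
    """Rough evolutionary class from a star's mass in solar masses.

    low_cut stands in for the 1.8-2.2 M_sun boundary and massive_cut
    for the 5-10 M_sun boundary; both depend on composition.
    """
    if mass_msun < 0.5:
        return "very low mass (skips the AGB, evolves directly to white dwarf)"
    if mass_msun < low_cut:
        return "low mass (enters the AGB with a degenerate helium core)"
    if mass_msun < massive_cut:
        return "intermediate mass (degenerate carbon-oxygen core)"
    return "massive (carbon fusion, ends in core-collapse supernova)"


print(classify_star(1.0))    # the Sun: low mass
print(classify_star(0.3))    # very low mass
print(classify_star(20.0))   # ends in a core-collapse supernova
```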
Stellar mass
[ "Physics", "Astronomy", "Mathematics" ]
863
[ "Scalar physical quantities", "Astronomical sub-disciplines", "Physical quantities", "Concepts in astrophysics", "Quantity", "Mass", "Size", "Concepts in stellar astronomy", "Wikipedia categories named after physical quantities", "Matter", "Stellar astronomy" ]
30,552,634
https://en.wikipedia.org/wiki/MIMOS%20II
MIMOS II is a miniaturised Mössbauer spectrometer, developed by Dr. Göstar Klingelhöfer at the Johannes Gutenberg University in Mainz, Germany, that is used on the Mars Exploration Rovers Spirit and Opportunity for close-up investigations on the Martian surface of the mineralogy of iron-bearing rocks and soils. MIMOS II uses a cobalt-57 gamma-ray source of about 300 mCi at launch, which gave a 6–12 hr acquisition time for a standard Mössbauer spectrum during the primary mission on Mars, depending on the total Fe content and on which Fe-bearing phases are present. Cobalt-57 has a half-life of only 271.8 days (hence the extended measuring times now on Mars after over a decade). The MIMOS II sensor heads used on Mars are approximately 9 cm × 5 cm × 4 cm and weigh about 400 g. The MIMOS II system also includes a circuit board of about 100 g. References Mars Exploration Rover mission Spectrometers Spacecraft instruments Space science experiments
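The growth of the acquisition time follows directly from the exponential decay of the ⁵⁷Co source. A short illustrative calculation using the 271.8-day half-life and 300 mCi launch activity quoted above (the function name is invented):

```python
import math

HALF_LIFE_DAYS = 271.8   # half-life of cobalt-57
A0_MCI = 300.0           # source activity at launch, in mCi

def activity(days_since_launch):
    """Remaining source activity after exponential decay."""
    return A0_MCI * 0.5 ** (days_since_launch / HALF_LIFE_DAYS)

# After roughly one Mars year (~687 Earth days) about 17% remains...
print(activity(687))            # ~52 mCi
# ...and after a decade almost nothing is left, so spectra that took
# 6-12 hours early in the mission take far longer.
print(activity(10 * 365.25))    # ~0.03 mCi
```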
MIMOS II
[ "Physics", "Chemistry" ]
208
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
30,555,998
https://en.wikipedia.org/wiki/Kondo%20insulator
In solid-state physics, Kondo insulators (also referred to as Kondo semiconductors and heavy fermion semiconductors) are understood as materials with strongly correlated electrons that open a narrow band gap (on the order of 10 meV) at low temperatures, with the chemical potential lying in the gap, whereas in heavy fermion materials the chemical potential is located in the conduction band. The band gap opens up at low temperatures due to hybridization of localized electrons (mostly f-electrons) with conduction electrons, a correlation effect known as the Kondo effect. As a consequence, a transition from metallic behavior to insulating behavior is seen in resistivity measurements. The band gap can be either direct or indirect. The most studied Kondo insulators are FeSi, Ce3Bi4Pt3, SmB6, YbB12, and CeNiSn, although there are over a dozen known Kondo insulators. Historical overview In 1969, Menth et al. found no magnetic ordering in SmB6 down to 0.35 K and a change from metallic to insulating behavior in the resistivity measurement with decreasing temperature. They interpreted this phenomenon as a change of the electronic configuration of Sm. In 1992, Gabriel Aeppli and Zachary Fisk found a descriptive way to explain the physical properties of Ce3Bi4Pt3 and CeNiSn. They called the materials Kondo insulators, showing Kondo lattice behavior near room temperature but becoming semiconducting with very small energy gaps (a few kelvin to a few tens of kelvin) as the temperature decreases. Transport properties At high temperatures the localized f-electrons form independent local magnetic moments. According to the Kondo effect, the dc resistivity of Kondo insulators shows a logarithmic temperature dependence. At low temperatures, the local magnetic moments are screened by the sea of conduction electrons, forming a so-called Kondo resonance. The interaction of the conduction band with the f-orbitals results in a hybridization and an energy gap. If the chemical potential lies in the hybridization gap, insulating behavior is seen in the dc resistivity at low temperatures. In recent times, angle-resolved photoemission spectroscopy experiments have provided direct imaging of the band structure, hybridization and flat-band topology in Kondo insulators and related compounds. References Correlated electrons Condensed matter physics
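The opening of the hybridization gap can be illustrated with the simplest two-band caricature, in which a dispersive conduction band hybridizes with a flat f-level. This model and the parameter values below are a textbook sketch, not taken from the article, and the function names are invented:

```python
import numpy as np

def hybridized_bands(k, ef=0.0, V=0.01, t=1.0):
    """Eigenvalues of the 2x2 Hamiltonian [[e_k, V], [V, e_f]].

    e_k = -2t*cos(k) is a conduction band, e_f a flat f-level, and V
    the hybridization strength; units are arbitrary (illustrative eV).
    """
    ek = -2.0 * t * np.cos(k)
    avg, diff = (ek + ef) / 2.0, (ek - ef) / 2.0
    split = np.sqrt(diff**2 + V**2)
    return avg - split, avg + split


k = np.linspace(-np.pi, np.pi, 2001)
lower, upper = hybridized_bands(k)
# The direct gap at fixed k is 2V; the indirect gap between the band
# extrema is much smaller, of order V^2 for this bandwidth:
print(upper.min() - lower.max())   # ~1e-4
```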
Kondo insulator
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
492
[ "Phases of matter", "Materials science", "Condensed matter physics", "Correlated electrons", "Matter" ]
30,556,018
https://en.wikipedia.org/wiki/Parabolic%20geometry%20%28differential%20geometry%29
In differential geometry and the study of Lie groups, a parabolic geometry is a homogeneous space G/P which is the quotient of a semisimple Lie group G by a parabolic subgroup P. More generally, the curved analogs of a parabolic geometry in this sense are also called parabolic geometries: any geometry that is modeled on such a space by means of a Cartan connection. Examples The projective space Pn is an example. It is the homogeneous space PGL(n+1)/H where H is the isotropy group of a line. In this geometrical space, the notion of a straight line is meaningful, but there is no preferred ("affine") parameter along the lines. The curved analog of projective space is a manifold in which the notion of a geodesic makes sense, but for which there are no preferred parametrizations on those geodesics. A projective connection is the relevant Cartan connection that gives a means for describing a projective geometry by gluing copies of the projective space to the tangent spaces of the base manifold. Broadly speaking, projective geometry refers to the study of manifolds with this kind of connection. Another example is the conformal sphere. Topologically, it is the n-sphere, but there is no notion of length defined on it, just of angle between curves. Equivalently, this geometry is described as an equivalence class of Riemannian metrics on the sphere (called a conformal class). The group of transformations that preserve angles on the sphere is the Lorentz group O(n+1,1), and so Sn = O(n+1,1)/P. Conformal geometry is, more broadly, the study of manifolds with a conformal equivalence class of Riemannian metrics, i.e., manifolds modeled on the conformal sphere. Here the associated Cartan connection is the conformal connection. Other examples include: CR geometry, the study of manifolds modeled on a real hyperquadric, where the relevant parabolic subgroup is the stabilizer of an isotropic line (see CR manifold); contact projective geometry, the study of manifolds modeled on a quotient of the symplectic group by the subgroup stabilizing the line generated by the first standard basis vector. References Slovak, J. Parabolic Geometries, Research Lecture Notes, Part of DrSc-dissertation, Masaryk University, 1997, 70pp, IGA Preprint 97/11 (University of Adelaide) Differential geometry Homogeneous spaces
Parabolic geometry (differential geometry)
[ "Physics", "Mathematics" ]
518
[ "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Geometry", "Symmetry" ]
25,886,202
https://en.wikipedia.org/wiki/Conversion%20between%20Julian%20and%20Gregorian%20calendars
The tables below list equivalent dates in the Julian and Gregorian calendars. Years are given in astronomical year numbering. Conventions Within these tables, January 1 is always the first day of the year. The Gregorian calendar did not exist before October 15, 1582. Gregorian dates before that are proleptic, that is, using the Gregorian rules to reckon backward from October 15, 1582. Years are given in astronomical year numbering. Augustus corrected errors in the observance of leap years by omitting leap days until AD 8. Julian calendar dates before March AD 4 are proleptic, and do not necessarily match the dates actually observed in the Roman Empire. Conversion table This table is taken from a book by the Nautical Almanac Offices of the United Kingdom and the United States, originally published in 1961. Using the tables Dates near leap days that are observed in the Julian calendar but not in the Gregorian are listed in the table. Dates near the adoption date in some countries are also listed. For dates not listed, see below. The usual rules of algebraic addition and subtraction apply; adding a negative number is the same as subtracting the absolute value, and subtracting a negative number is the same as adding the absolute value. If conversion takes you past a February 29 that exists only in the Julian calendar, then February 29 is counted in the difference. Years affected are those which divide by 100 without remainder but do not divide by 400 without remainder (e.g., 1900 and 2100 but not 2000). No guidance is provided about conversion of dates before March 5, −500, or after February 29, 2100 (both being Julian dates). For unlisted dates, find the date in the table closest to, but earlier than, the date to be converted. Be sure to use the correct column. If converting from Julian to Gregorian, add the number from the "Difference" column. If converting from Gregorian to Julian, subtract. See also Revised Julian calendar References External links Calendars
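Away from the Julian leap days at which it steps, the "Difference" column follows a simple closed form: the calendars drift apart by one day for every century year that is not divisible by 400. A hedged sketch in Python (the function name is invented; this reproduces the table's secular values but does not handle the exact step dates around each Julian February 29):

```python
def julian_gregorian_difference(year):
    """Days to add to a Julian calendar date to obtain the Gregorian date.

    Uses astronomical year numbering (1 BC is year 0); Python's floor
    division makes the formula work for negative years as well.
    """
    return year // 100 - year // 400 - 2

print(julian_gregorian_difference(1582))   # 10 (Oct 5 Julian -> Oct 15 Gregorian)
print(julian_gregorian_difference(2024))   # 13
print(julian_gregorian_difference(-500))   # -5
```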
Conversion between Julian and Gregorian calendars
[ "Physics" ]
407
[ "Spacetime", "Calendars", "Physical quantities", "Time" ]
25,886,609
https://en.wikipedia.org/wiki/Stratified%20space
In mathematics, especially in topology, a stratified space is a topological space that admits or is equipped with a stratification, a decomposition into subspaces which are nice in some sense (e.g., smooth or flat). A basic example is a subset of a smooth manifold that admits a Whitney stratification. But there are also abstract stratified spaces, such as Thom–Mather stratified spaces. On a stratified space, a constructible sheaf can be defined as a sheaf that is locally constant on each stratum. Among several proposed ideas, Grothendieck's Esquisse d'un programme considers (or proposes) a stratified space with what he calls the tame topology. A stratified space in the sense of Mather Mather gives the following definition of a stratified space. A prestratification on a topological space X is a partition of X into subsets (called strata) such that (a) each stratum is locally closed, (b) it is locally finite and (c) (axiom of frontier) if two strata A, B are such that the closure of A intersects B, then B lies in the closure of A. A stratification on X is a rule that assigns to a point x in X a set germ at x of a closed subset of X that satisfies the following axiom: for each point x in X, there exists a neighborhood U of x and a prestratification of U such that for each y in U, the assigned germ is the set germ at y of the stratum of the prestratification on U containing y. A stratified space is then a topological space equipped with a stratification. Pseudomanifold In MacPherson's stratified pseudomanifolds, the strata are the differences Xi+1 − Xi between sets in the filtration. There is also a local conical condition: there must be an almost smooth atlas in which, locally, each small open set looks like the product of two factors Rn × c(L), a Euclidean factor and the topological cone of a space L. Classically, this is the point where the definition appears to become circular, since L is required to be a stratified pseudomanifold. The logical problem is avoided by an inductive trick which keeps the objects L and X distinct. The changes of charts, or cocycles, are subject to no conditions in MacPherson's original context. Pflaum asks them to be smooth, while in the Thom–Mather context they must preserve the above decomposition: they have to be smooth in the Euclidean factor and preserve the conical radius. See also Equisingularity Perverse sheaf Stratified Morse theory Harder–Narasimhan stratification Footnotes References Appendix 1 of R. MacPherson, Intersection homology and perverse sheaves, 1990 notes J. Mather, Stratifications and Mappings, Dynamical Systems, Proceedings of a Symposium Held at the University of Bahia, Salvador, Brasil, July 26–August 14, 1971, 1973, pages 195–232. Markus J. Pflaum, Analytic and Geometric Study of Stratified Spaces: Contributions to Analytic and Geometric Aspects (Lecture Notes in Mathematics, 1768); Publisher, Springer; Further reading https://ncatlab.org/nlab/show/stratified+space https://mathoverflow.net/questions/258562/correct-definition-of-stratified-spaces-and-reference-for-constructible-sheave Chapter 2 of Greg Friedman, Singular intersection homology https://ncatlab.org/nlab/show/poset-stratified+space Stratifications Topology
Stratified space
[ "Physics", "Mathematics" ]
793
[ "Stratifications", "Topology", "Space", "Geometry", "Spacetime" ]
25,887,069
https://en.wikipedia.org/wiki/Karlsruhe%20metric
In metric geometry, the Karlsruhe metric is a measure of distance that assumes travel is only possible along rays through the origin and circular arcs centered at the origin. The name alludes to the layout of the city of Karlsruhe, which has radial streets and circular avenues around a central point. This metric is also called the Moscow metric. In this metric, there are two types of shortest paths. One possibility, when the two points are on nearby rays, combines a circular arc through the nearer to the origin of the two points and a segment of a ray through the farther of the two points. Alternatively, for points on rays that are nearly opposite, it is shorter to follow one ray all the way to the origin and then follow the other ray back out. Therefore, the Karlsruhe distance between two points is the minimum of the two lengths that would be obtained for these two types of path. That is, writing pᵢ = (rᵢ, φᵢ) for the polar coordinates of the points and δ(p₁, p₂) = min(|φ₁ − φ₂|, 2π − |φ₁ − φ₂|) for the angular distance between them, it equals d(p₁, p₂) = min(r₁, r₂) · δ(p₁, p₂) + |r₁ − r₂| when 0 ≤ δ(p₁, p₂) ≤ 2, and d(p₁, p₂) = r₁ + r₂ otherwise. See also Manhattan distance Hamming distance Notes External links Karlsruhe-metric Voronoi diagram, by Takashi Ohyama Karlsruhe-Metric Voronoi Diagram, by Rashid Bin Muhammad Metric spaces
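The two candidate path types translate directly into code. The sketch below (function name invented) compares the arc-plus-ray route against the route through the origin and returns the shorter one, which is exactly the case distinction above, since the arc route wins precisely when the angular distance is below 2 radians:

```python
import math

def karlsruhe_distance(p1, p2):
    """Karlsruhe (Moscow) metric between points given in polar
    coordinates (r, phi), with phi in radians."""
    r1, phi1 = p1
    r2, phi2 = p2
    delta = abs(phi1 - phi2) % (2 * math.pi)
    delta = min(delta, 2 * math.pi - delta)   # angular distance
    # Path 1: circular arc at the smaller radius, then radially outward.
    via_arc = min(r1, r2) * delta + abs(r1 - r2)
    # Path 2: in along one ray, through the origin, out along the other.
    via_origin = r1 + r2
    return min(via_arc, via_origin)

# Nearby rays: the arc route wins; near-opposite rays: go via the origin.
print(karlsruhe_distance((1.0, 0.0), (2.0, 0.5)))      # 1.5 (arc + ray)
print(karlsruhe_distance((1.0, 0.0), (2.0, math.pi)))  # 3.0 (through origin)
```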
Karlsruhe metric
[ "Mathematics" ]
235
[ "Mathematical structures", "Space (mathematics)", "Metric spaces" ]
22,965,204
https://en.wikipedia.org/wiki/Fano%20fibration
In algebraic geometry, a Fano fibration or Fano fiber space, named after Gino Fano, is a morphism of varieties whose general fiber is a Fano variety (in other words has ample anticanonical bundle) of positive dimension. The ones arising from extremal contractions in the minimal model program are called Mori fibrations or Mori fiber spaces (for Shigefumi Mori). They appear as standard forms for varieties without a minimal model. See also Ample line bundle Fiber bundle Fibration Quasi-fibration References Algebraic geometry
Fano fibration
[ "Mathematics" ]
118
[ "Fields of abstract algebra", "Algebraic geometry" ]
22,965,231
https://en.wikipedia.org/wiki/Supersingular%20prime%20%28algebraic%20number%20theory%29
In algebraic number theory, a supersingular prime for a given elliptic curve is a prime number with a certain relationship to that curve. If the curve E is defined over the rational numbers, then a prime p is supersingular for E if the reduction of E modulo p is a supersingular elliptic curve over the residue field Fp. Noam Elkies showed that every elliptic curve over the rational numbers has infinitely many supersingular primes. However, the set of supersingular primes has asymptotic density zero (if E does not have complex multiplication). Lang and Trotter conjectured that the number of supersingular primes less than a bound X is within a constant multiple of √X / ln X, using heuristics involving the distribution of eigenvalues of the Frobenius endomorphism. As of 2019, this conjecture is open. More generally, if K is any global field—i.e., a finite extension either of Q or of Fp(t)—and A is an abelian variety defined over K, then a supersingular prime for A is a finite place of K such that the reduction of A modulo that place is a supersingular abelian variety. See also Supersingular prime (moonshine theory) References Classes of prime numbers Algebraic number theory Unsolved problems in number theory
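For primes p ≥ 5, the reduction of E mod p is supersingular exactly when its trace of Frobenius a_p vanishes, i.e. when #E(Fp) = p + 1 (the Hasse bound |a_p| ≤ 2√p forces a_p = 0 once a_p ≡ 0 mod p). A hedged sketch (function names invented, naive O(p) point counting, so practical only for small p; the example curve y² = x³ + x + 1 is just one convenient curve without complex multiplication):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_supersingular(a, b, p):
    """Is y^2 = x^3 + a*x + b supersingular mod the prime p >= 5?"""
    if (4 * a**3 + 27 * b**2) % p == 0:
        return False  # bad reduction: neither ordinary nor supersingular
    count = 1  # start with the point at infinity
    for x in range(p):
        u = (x**3 + a * x + b) % p
        # Number of y with y^2 = u is 1 + chi(u), chi the Legendre symbol.
        chi = pow(u, (p - 1) // 2, p)
        count += 1 + (1 if chi == 1 else (-1 if chi == p - 1 else 0))
    return count == p + 1  # equivalent to a_p = 0 for p >= 5

supersingular = [p for p in range(5, 200)
                 if is_prime(p) and is_supersingular(1, 1, p)]
print(supersingular)
```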
Supersingular prime (algebraic number theory)
[ "Mathematics" ]
273
[ "Unsolved problems in mathematics", "Unsolved problems in number theory", "Algebraic number theory", "Mathematical problems", "Number theory" ]
22,965,361
https://en.wikipedia.org/wiki/Computational%20steering
Computational steering is the practice of manually intervening in an otherwise autonomous computational process, to change its outcome. The term is commonly used within the numerical simulation community, where it more specifically refers to the practice of interactively guiding a computational experiment into some region of interest. Examples A simple, but contrived, example of computational steering is: In a simulated chess match with two automated players: manually forcing a certain move at a particular time for one player, to change the evolution of the game. Some real examples of computational steering are: In a population dynamics simulation: changing selection pressures exerted between hosts and parasites, to examine the effect on their coevolution. In a fluid dynamics simulation: resetting the phase state of an immiscible fluid, to speed the search for its critical separation temperature. System design Computational steering systems are a kind of feedback control system, where some or all of the feedback is provided interactively by the operator. All computational steering mechanisms have three fundamental components, as sketched below: A target system that is being studied; A representation of the target system, typically a graphical visualization, that can be perceived by the investigator; A set of controls that the investigator can use to provide feedback that modifies the state, behavior, or product of the system being studied. Disambiguation There appears to be a distinction that the term computational steering is used only when referring to interaction with simulated systems, not operational ones. Further clarification on this point is needed. For example, Vetter (who is apparently well acquainted with the computational steering field) refers to the following practice as interactive steering: In a grid computing framework: adjusting the cache size of a computational process, to examine the effect on its performance. Computational steering software SCIRun Cumulvs CSE RealityGrid EStA References Control engineering Simulation software Computational science
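The three components listed under System design map naturally onto a simulation loop that polls for operator input between steps. A schematic sketch (all names invented; a real steering system would use a GUI or a network channel rather than an in-process queue):

```python
import queue

def run_steered_simulation(steps, control_queue):
    """Minimal steering loop: a decaying population whose growth rate
    can be adjusted by the investigator while the run is in progress."""
    state, rate = 100.0, -0.05
    for t in range(steps):
        # 1. Target system: advance the computation by one step.
        state += rate * state
        # 2. Representation: expose the state to the investigator.
        print(f"t={t:3d}  population={state:8.2f}  rate={rate:+.3f}")
        # 3. Controls: apply any feedback supplied since the last step.
        try:
            rate = control_queue.get_nowait()
        except queue.Empty:
            pass
    return state


controls = queue.Queue()
controls.put(+0.02)   # the operator intervenes: switch decay to growth
run_steered_simulation(5, controls)
```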
Computational steering
[ "Mathematics", "Engineering" ]
367
[ "Computational science", "Applied mathematics", "Control engineering" ]
22,968,686
https://en.wikipedia.org/wiki/Keystone%20wall%20plate
Keystone wall plates are used in commercial and industrial buildings to cleanly attach telecommunication cables etc. to a junction box, surface mount box, or a mud ring mounted in the drywall of a building. Keystone wall plates are made to work with many different types of cabling solutions, including coaxial, twisted pair, HDMI, optical fiber, etc. Keystone wall plates are made of plastic and have one to twelve ports. A keystone port is a hole in the wall plate which allows the insertion of a keystone module or other male or female cabling connectors. The most common colors of keystone wall plates are beige and white. Keystone wall plates are commonly made to be compatible with NEMA standard openings and boxes. Further reading How To Install Drywall Rings for Keystone Wall Plates Video shows 3 most common drywall rings References Electrical wiring
Keystone wall plate
[ "Physics", "Engineering" ]
169
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
22,974,080
https://en.wikipedia.org/wiki/P%26H%20Mining
P&H Mining Equipment sells drilling and material handling machinery under the "P&H" trademark. The firm is an operating subsidiary of Joy Global Inc. In 2017 Joy Global Inc. was acquired by Komatsu Limited of Tokyo, Japan, and is now known as Komatsu Mining Corporation and operates as a subsidiary of Komatsu. Parent Company Overview Joy Global Inc. (JGI) is a mining machinery and service support company based in Milwaukee, Wisconsin, USA. It consists of two operating units – P&H Mining Equipment and Joy Mining Machinery. P&H Mining Equipment specializes in the design, manufacture and support of equipment applied to surface mines. Joy Mining Machinery specializes in equipment and support applied to underground mine operations. P&H Mining Equipment History Alonzo Pawling and Henry Harnischfeger started the manufacturing business that would evolve into P&H Mining Equipment in 1884 in Milwaukee, Wisconsin, USA. Pawling was a castings pattern maker. Harnischfeger was a locksmith machinist with some engineering training. Both individuals served within the Whitehill Sewing Machine Company factory in Milwaukee starting in 1881. Concerned that Whitehill business operations were drifting toward failure, Pawling exited the firm to start a small gear machining and pattern making shop in 1883. Needing more gear machining expertise and capital, Pawling persuaded Harnischfeger to join his firm as an equal partner. Their Pawling & Harnischfeger Machine and Pattern Shop officially began on December 1, 1884. Components and Assemblies Suppliers Pawling and Harnischfeger initially supplied industrial machinery components and assembly service support to large manufacturing operations in Milwaukee. Their customers included industrial knitting machine manufacturers, brick makers, grain drying equipment manufacturers and beer brewers. When an overloaded overhead bridge-type crane collapsed within the foundry operations of a nearby heavy equipment manufacturer known as the Edward P. Allis Manufacturing Company, Pawling and Harnischfeger rebuilt the crane with an improved and simplified design. Pawling & Harnischfeger soon transformed their business into an industrial cranes manufacturing and service operation. A bank panic in 1893 caused demand for cranes to plummet, however, prompting P&H to look for another product line that might help them reduce business risk amid economic downturns. They turned their attention to earth moving machinery, as America was in the midst of an infrastructure and construction boom that required large volumes of such equipment. Earth Moving Machinery Era Begins By 1920 the P&H digging machinery product line included P&H Model 206 and Model 300 machines that the firm produced in batches of five or more. By 1926, P&H digging machinery was effectively in distribution around the world, including Mumbai, India. Products Evolve Over the ensuing decades, P&H earth moving machines evolved into larger, more powerful and more productive prime movers of material. By 1930, welding technology made possible the fabrication of lighter, stronger machinery versus traditional riveted-design machinery. P&H was not only an early adopter of welded design, but the firm also designed and manufactured its own line of electric arc welding machinery and welding rod products. Another technology advance applied to P&H digging machines during the 1930s was the Ward-Leonard DC electric motor drive system. 
Pawling & Harnischfeger began designing and making their own electric motors and controls starting in 1893, when they acquired assets of the Gibb Electric Company that were not needed when Gibb was purchased by Westinghouse Electric Manufacturing Company. P&H shovels and draglines were originally available with prime-mover options including a Waukesha gasoline engine, a Buda diesel engine, or a P&H electric motor. By the 1930s, diesel engines and P&H electric motors became the dominant prime mover options on P&H digging machines. By the end of the 1960s, virtually all P&H excavating machines would be equipped with P&H electric motors. P&H excavators that started out in the 1920s with dipper and bucket payloads of about 500 pounds / 226 kilograms and dipper and bucket capacities of 0.5 cubic yard / 0.382 cubic meter would evolve into massive and powerful electric mining shovels with maximum dipper payloads of 120 tons / 109 tonnes and maximum capacities of 82 cubic yards / 62.7 cubic meters. Machine working weights would see similar dramatic changes. P&H Model 206 excavators originally had working weights of about 25 tons / 22.7 tonnes. P&H 4100XPC electric shovels today have working weights of about 1,645 tons / 1,492 tonnes. P&H Product Line P&H Mining Equipment sells four lines of equipment for surface mining operations. They include electric mining shovels, blast hole production drills, walking draglines, and in-pit crushing-conveying systems. Electric Mining Shovels P&H Electric Mining Shovels are applied to loading haul trucks and in-pit crusher-conveyor systems in surface mine operations. They range in payload from 21 tons / 19.1 tonnes for the smallest model, the P&H 1900AL, to 120 tons / 108.9 tonnes for the model P&H 4100XPC. In March 2018 a new flagship model, the P&H Komatsu 4800XPC, was introduced. It is claimed to increase production by 20% and to lower cost per ton by up to 10% compared to other rope shovels, with a nominal payload of 122.5 tonnes, an optimum truck size of 360 tonnes (400 short tons), and an operator eye level of 33 ft 1 in (10.1 m). Drilling Rigs P&H Drilling Rigs are applied to boring grids of tubular explosives containers in hard-rock formations within mine operations. They apply three kinds of force to the task of advancing a tri-cone drill bit into rock: high bit-loading force combines with high torque and large volumes of compressed air to create the tubes or "blast holes." The holes are loaded with explosives and then detonated. The blast produces a powerful shock wave that fragments the rock, making it easier to load, haul, crush and distill for its mineral contents. Walking Draglines P&H Walking Draglines move large volumes of earthen overburden capping coal seams, and also overburden and phosphates. They wield a large-volume bucket that is cast out toward the material that needs to be relocated using a football field-length boom and powerful swing motors and transmissions. The bucket lands atop the material and its teeth quickly bite into the material. Powerful drag force is applied to fill the bucket, followed by powerful hoist and swing forces applied to dump the material away from the excavation site. P&H dragline buckets are available in a wide range of capacities. In-pit Crushing-Conveying Systems In-pit Crushing-Conveying Systems (IPCCs), introduced in 2008, take earthen material excavated and deposited into a large holding hopper by an electric shovel, and then crush the material to an easy-to-convey size for transfer elsewhere in the mine. 
Soaring and volatile mine material handling costs associated with older mine operations utilizing longer and deeper truck haulage routes, experienced during an inflationary period between 2004 and 2008, led to efforts by several mining equipment suppliers to develop alternative IPCC systems. Product distribution and support P&H Mining Equipment operates a global network of P&H MinePro services support teams in key mining regions, formally established in 1996. MinePro operations are located in Africa, Asia, Australia, Europe, North America and South America, close to major concentrations of mining operations that produce energy and minerals for the global economy. P&H equipment and MinePro support are primarily directed to copper, coal, iron ore, oil sand, gold, diamond and phosphate mining operations, in order of product and service demand. MinePro service support includes new machine assembly, maintenance and repairs, systems upgrades, machine relocations, motors and transmissions rebuilds and repairs, structures weldments and repairs, and training for machine operators and maintenance personnel. MinePro teams consist of mechanics, electricians, welders, machinists, assemblers and logistics warehouse managers, most of them native to the city or region in which they serve. MinePro is a primary sales channel for surface mining equipment. However, it is capable of servicing underground mining equipment, and offers service support to the construction industry as well. About MinePro works with surface mining operations to ensure maximum productivity of equipment designed and built by P&H Mining Equipment, including electric mining shovels, blast hole production drills and walking draglines, and supplies parts and services for that equipment. It provides support for non-P&H equipment in some regions as well, including Hitachi trucks and excavators, Liebherr trucks, LeTourneau wheel loaders, CQMS dragline buckets and "GETs" or ground-engaging tools (e.g. crawler track shoes, bucket and dipper armor), Stamler feeder breakers and Continental Conveyor Products idler rolls and conveyor systems, among others. MinePro continues the after-market service support function that began with the founding of P&H Mining Equipment by industrial artisans Alonzo Pawling and Henry Harnischfeger in 1884. As designers and builders of industrial equipment, Pawling and Harnischfeger were well positioned to provide installation, preventive maintenance, repairs, rebuilds and upgrades for the industrial operations, including construction and mining firms, that invested in their products. The rugged, reliable and productive quality of their products, combined with the expertise available from Pawling and Harnischfeger, were key factors in the ability of their manufacturing enterprise to endure economic recessions and continue growing into a modern, global business referred to by their customers simply as "P&H" by the start of the 1900s. P&H Mining Equipment formalized the name of its service support business in 1996 by renaming it "P&H MinePro Services". Takeover of Beloit Corporation P&H took over the Beloit Corporation in 1986. Beloit Corporation was a large, worldwide paper machine manufacturing corporation based in Beloit, Wisconsin. The Beloit Corporation had started nearly 150 years earlier as the Beloit Iron Works (BIW) and sat on the original grounds of Beloit Iron Works. 
By the year 2000 (and mostly in the years before), Harnischfeger had extended Beloit Corporation's operations in the greater Pacific region (Indonesia, Thailand, etc.) to the point of bankruptcy. Beloit Iron Works' heritage and signage remain as an homage to the city and the company. Range of services Replacement parts and assemblies for P&H and some non-P&H mining machinery including shovels, drills and draglines Mining equipment mechanical and electrical system upgrades and modernizations Mining equipment installations, rebuilds and relocations Preventive maintenance, emergency repairs and mechanical and electrical systems audit services Used equipment trade, purchases, relocations, upgrades External links P&H MinePro Services: P&H MinePro Services web site References P&H MinePro Website About Page Joy Global Inc. 2009 annual report, Form 10-K addendum required by US Securities and Exchange Commission for all publicly traded firms based in the US Manufacturing companies based in Milwaukee Mining equipment companies
P&H Mining
[ "Engineering" ]
2,292
[ "Mining equipment", "Mining equipment companies" ]
22,976,317
https://en.wikipedia.org/wiki/F%C3%B6rster%20coupling
Förster coupling is the resonant energy transfer between excitons within adjacent QDs (quantum dots). The first studies of this process by Förster were performed in the context of the sensitized luminescence of solids. Here, an excited sensitizer atom can transfer its excitation to a neighbouring acceptor atom, via an intermediate virtual photon. This same mechanism has also been shown to be responsible for exciton transfer between QDs and within molecular systems and biosystems (though incoherently, as a mechanism for photosynthesis), all of which may be treated in a similar formulation. (See also Förster resonance energy transfer (FRET).) Introduction In his introductory lecture, T. Förster considered the transfer of electronic excitation energy between otherwise well-separated atomic or molecular electronic systems, excluding the trivial case of an excitation transfer that consists in the emission of one quantum of light by the first atom or molecule followed by re-absorption by the second one. It is only the non-radiative transfer of excitation occurring during the short lifetimes of excited electronic systems which he considered there. The first observation of energy transfer was made by Cario and Franck (1922) in their classical experiments on sensitized fluorescence of atoms in the vapour phase. A mixture of mercury and thallium vapour, when irradiated with the light of the mercury resonance line, shows the emission spectra of both atoms. Since thallium atoms do not absorb the exciting light, they can get excited only indirectly, by an excitation transfer from mercury atoms. A transfer by reabsorption is impossible here. Therefore, this transfer must be a non-radiative one, with a mercury atom as the donor or sensitizer and the thallium atom as the acceptor. Unfortunately, in this case it cannot be decided whether the transfer occurs between distant atoms or during a normal collision or even in a labile molecule formed as an intermediate. This decision, however, was possible in similar cases, as in the mercury-sensitized fluorescence of sodium and in the mutual sensitization of the fluorescence of different mercury isotopes. In these cases, the transfer occurs over distances very much larger than normal collisional separations. Similar observations of sensitized fluorescence were made with molecular vapours and in solution. Further experiments have shown that in this case the transfer occurs not over collisional distances but over the mean intermolecular distances of sensitizer and acceptor, corresponding to a concentration of 10⁻³ to 10⁻² M. This is demonstrated by the fact that sensitization occurs with similar half-value concentrations in solutions of very different viscosities and even in organic glasses at low temperature. The possibility of the formation of a complex between sensitizer and acceptor molecules was excluded by the additivity of the absorption spectra and the different dependence on concentration to be expected in this case. It must be concluded, therefore, that excitation transfer of a non-trivial nature occurs over the mean distances between statistically distributed molecules, which are about 40 Å in this case. It differs from short-distance collisional transfer by its independence of solvent viscosity and from transfer within a molecular complex by the constancy of absorption spectra and the decrease in sensitizer fluorescence lifetime. 
Qualitative features Several qualitative features distinguish this kind of long-range transfer from more or less trivial mechanisms. The non-trivial transfer differs from re-absorption transfer by its independence of the volume of the solution, by the decrease in sensitizer fluorescence lifetime, and by the invariability of the sensitizer fluorescence spectrum. As noted above, it differs from short-distance collisional transfer by its independence of solvent viscosity, and from transfer within a molecular complex by the constancy of the absorption spectra and the decrease in sensitizer fluorescence lifetime. In most cases, some of these different properties allow a decision between trivial and non-trivial transfer mechanisms. Further discriminations may be made by quantitative studies of these properties. Coulomb interaction The electrons interact via the Coulomb interaction, whose Hamiltonian is built from Coulomb matrix elements that depend on the dielectric constant of the medium. To calculate the dynamics of two coupled QDs (each modeled as an interband two-level system with one conduction and one valence level) which have no electronic overlap, an expansion of the potential is performed: (i) a long-range expansion about a reference point of each QD, varying on a mesoscopic scale and neglecting the variation on the scale of the elementary cell — this yields level-diagonal contributions in the Hamiltonian; and (ii) a short-range expansion about an arbitrary lattice vector, taking into account the microscopic variation of the QD — this yields nondiagonal contributions. On the dipole-dipole level, the level-diagonal elements correspond to an electrostatic energetic shift of the system (the biexcitonic shift), while the nondiagonal elements, the so-called Förster coupling elements, correspond to an excitation transfer between the different QDs. Hamiltonian Here, we shall consider excitons in two coupled QDs and the Coulomb interactions between them. More specifically, we shall derive an analytical expression for the strength of the inter-dot Förster coupling. It can also be shown that this coupling is, under certain conditions, of dipole-dipole type and that it is responsible for resonant exciton exchange between adjacent QDs. This is a transfer of energy only, not a tunnelling effect. We write the Hamiltonian of two interacting QDs in the computational basis; the off-diagonal entries contain the Förster interaction, while the diagonal contains the direct Coulomb binding energy between the two excitons (one on each dot), the ground state energy, and the difference between the excitation energy for dot I and that for dot II. These excitation energies and inter-dot interactions are all functions of the applied field F. It is also straightforward to see that an off-diagonal Förster coupling does indeed correspond to a resonant transfer of energy; if we begin in the state with an exciton on dot I and no exciton on dot II, this will naturally evolve to the state with the exciton on dot II. See also Semiconductors References Further reading T. Förster, Delocalized excitation and excitation transfer, in Modern Quantum Chemistry, ed. by O. Sinanoglu (Academic, New York, 1965), p. 93 Condensed matter physics Quantum mechanics Quantum chemistry Energy transfer
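In the single-exciton subspace spanned by the states "exciton on dot I" and "exciton on dot II", the Hamiltonian above reduces to a 2×2 matrix, and the resonant transfer is ordinary two-level Rabi oscillation. A minimal numerical sketch (the symbols Delta and V_F and all parameter values are invented for illustration; ħ is set to 1):

```python
import numpy as np

# Single-exciton block: detuning Delta between the dots' excitation
# energies on the diagonal, Foerster coupling V_F off the diagonal.
Delta, V_F = 0.0, 0.1   # resonant dots (Delta = 0), arbitrary energy units
H = np.array([[ Delta / 2, V_F],
              [ V_F, -Delta / 2]])

def transfer_probability(t):
    """Probability that an exciton prepared on dot I is found on dot II."""
    # Time evolution U = exp(-i H t) via eigendecomposition of H.
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi = U @ np.array([1.0, 0.0])   # start with the exciton on dot I
    return abs(psi[1]) ** 2

# On resonance the excitation oscillates completely between the dots,
# with probability sin^2(V_F * t):
print(transfer_probability(np.pi / (2 * V_F)))   # ~1.0: full transfer
```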
Förster coupling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,395
[ "Quantum chemistry", "Theoretical physics", "Phases of matter", "Quantum mechanics", "Materials science", "Theoretical chemistry", "Condensed matter physics", " molecular", "Atomic", "Matter", " and optical physics" ]
29,025,999
https://en.wikipedia.org/wiki/Terbequinil
Terbequinil (SR-25776) is a stimulant and nootropic drug which acts as a partial inverse agonist at benzodiazepine sites on the GABAA receptor. In human trials it was found to partially reverse the sedative and amnestic effects of the hypnotic drug triazolam with only slight effects when administered by itself. See also GABAA receptor negative allosteric modulator GABAA receptor § Ligands References 2-Quinolones Ethers Carboxamides GABAA receptor negative allosteric modulators
Terbequinil
[ "Chemistry" ]
125
[ "Organic compounds", "Functional groups", "Ethers" ]
29,028,436
https://en.wikipedia.org/wiki/Cameron%20Leigh%20Stewart
Cameron Leigh Stewart FRSC is a Canadian mathematician. He is a professor of pure mathematics at the University of Waterloo. Contributions He has made numerous contributions to number theory, in particular to work on the abc conjecture. In 1976 he obtained, with Alan Baker, an effective improvement to Liouville's Theorem. In 1991 he proved that the number of solutions to a Thue equation is at most , where is a pre-determined positive real number and is the number of distinct primes dividing a large divisor of . This improves on an earlier result of Enrico Bombieri and Wolfgang M. Schmidt and is close to the best possible result. In 1995 he obtained, along with Jaap Top, the existence of infinitely many quadratic, cubic, and sextic twists of elliptic curves of large rank. In 1991 and 2001 respectively, he obtained, along with Kunrui Yu, the best unconditional estimates for the abc conjecture. In 2013, he solved an old problem of Erdős (so his Erdős number is 1) involving Lucas and Lehmer numbers. In particular, he proved that the largest prime divisor of satisfies . Education Stewart completed a B.Sc. at the University of British Columbia in 1971 and a M.Sc in 1972 from McGill University. He earned his doctorate from the University of Cambridge in 1976, under the supervision of Alan Baker. Recognition In 1974, while at Cambridge, he was awarded the J.T. Knight Prize. He was elected Fellow of the Royal Society of Canada in 1989. He was appointed Fellow of the Fields Institute in 2008. Since 2003 he has held a Canada Research Chair (tier 1). Since 2005 he has been appointed University Professor at the University of Waterloo. He was selected to give the annual Isidore and Hilda Dressler Lecture at Kansas State University in 2015. He was elected as a fellow of the Canadian Mathematical Society in 2019. Selected works References External links Website at University of Waterloo Year of birth missing (living people) Living people Fellows of the Canadian Mathematical Society Fellows of the Royal Society of Canada 20th-century Canadian mathematicians 21st-century Canadian mathematicians McGill University alumni Alumni of the University of Cambridge Academic staff of the University of Waterloo University of British Columbia alumni Canada Research Chairs Abc conjecture
Cameron Leigh Stewart
[ "Mathematics" ]
457
[ "Abc conjecture", "Number theory" ]
2,882,536
https://en.wikipedia.org/wiki/K%C3%B6nig%27s%20theorem%20%28kinetics%29
In kinetics, König's theorem or König's decomposition is a mathematical relation derived by Johann Samuel König that assists with the calculations of angular momentum and kinetic energy of bodies and systems of particles. For a system of particles The theorem is divided in two parts. First part of König's theorem The first part expresses the angular momentum of a system as the sum of the angular momentum of the centre of mass and the angular momentum of the particles relative to the center of mass. Proof Considering an inertial reference frame with origin O, the angular momentum of the system can be defined as: L = Σᵢ mᵢ rᵢ × vᵢ. The position of a single particle can be expressed as: rᵢ = R + rᵢ′, where R is the position of the centre of mass and rᵢ′ the position of the particle relative to it. And so we can define the velocity of a single particle: vᵢ = V + vᵢ′. The first equation becomes: L = Σᵢ mᵢ (R + rᵢ′) × (V + vᵢ′). Expanding the product, the cross terms R × (Σᵢ mᵢ vᵢ′) and (Σᵢ mᵢ rᵢ′) × V are equal to zero, since Σᵢ mᵢ rᵢ′ = 0 and Σᵢ mᵢ vᵢ′ = 0 by the definition of the centre of mass. So we prove that: L = R × MV + Σᵢ mᵢ rᵢ′ × vᵢ′, where M is the total mass of the system. Second part of König's theorem The second part expresses the kinetic energy of a system of particles in terms of the velocities of the individual particles and the centre of mass. Specifically, it states that the kinetic energy of a system of particles is the sum of the kinetic energy associated to the movement of the center of mass and the kinetic energy associated to the movement of the particles relative to the center of mass. Proof The total kinetic energy of the system is: K = ½ Σᵢ mᵢ vᵢ². Like we did in the first part, we substitute the velocity: K = ½ Σᵢ mᵢ (V + vᵢ′)² = ½ M V² + V · Σᵢ mᵢ vᵢ′ + ½ Σᵢ mᵢ vᵢ′². We know that Σᵢ mᵢ vᵢ′ = 0, so if we define the kinetic energy relative to the centre of mass K′ = ½ Σᵢ mᵢ vᵢ′², we're left with: K = ½ M V² + K′. For a rigid body The theorem can also be applied to rigid bodies, stating that the kinetic energy K of a rigid body, as viewed by an observer fixed in some inertial reference frame N, can be written as: K = ½ m v̄ · v̄ + ½ ω · H̄, where m is the mass of the rigid body; v̄ is the velocity of the center of mass of the rigid body, as viewed by an observer fixed in an inertial frame N; H̄ is the angular momentum of the rigid body about the center of mass, also taken in the inertial frame N; and ω is the angular velocity of the rigid body R relative to the inertial frame N. References Hanno Essén: Average Angular Velocity (1992), Department of Mechanics, Royal Institute of Technology, S-100 44 Stockholm, Sweden. Samuel König (Sam. Koenigio): De universali principio æquilibrii & motus, in vi viva reperto, deque nexu inter vim vivam & actionem, utriusque minimo, dissertatio, Nova acta eruditorum (1751) 125-135, 162-176 (Archived). Paul A. Tipler and Gene Mosca (2003), Physics for Scientists and Engineers (Paper): Volume 1A: Mechanics (Physics for Scientists and Engineers), W. H. Freeman Ed., Works cited Eponymous theorems of physics Mechanics
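Both parts of the theorem are easy to verify numerically for a random system of particles. An illustrative sketch with NumPy (variable names are this example's own):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
m = rng.uniform(1.0, 3.0, n)        # particle masses
r = rng.normal(size=(n, 3))         # positions
v = rng.normal(size=(n, 3))         # velocities

M = m.sum()
R = (m[:, None] * r).sum(axis=0) / M    # centre of mass position
V = (m[:, None] * v).sum(axis=0) / M    # centre of mass velocity
r_rel, v_rel = r - R, v - V             # quantities relative to the CM

# First part: L = R x MV + sum_i m_i r'_i x v'_i
L_total = (m[:, None] * np.cross(r, v)).sum(axis=0)
L_koenig = np.cross(R, M * V) + (m[:, None] * np.cross(r_rel, v_rel)).sum(axis=0)
print(np.allclose(L_total, L_koenig))   # True

# Second part: K = (1/2) M V^2 + (1/2) sum_i m_i v'_i^2
K_total = 0.5 * (m * (v**2).sum(axis=1)).sum()
K_koenig = 0.5 * M * (V**2).sum() + 0.5 * (m * (v_rel**2).sum(axis=1)).sum()
print(np.isclose(K_total, K_koenig))    # True
```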
König's theorem (kinetics)
[ "Physics", "Engineering" ]
583
[ "Equations of physics", "Theoretical physics", "Eponymous theorems of physics", "Mechanics", "Mechanical engineering", "Theoretical physics stubs", "Physics theorems" ]
2,883,833
https://en.wikipedia.org/wiki/HERA-B
The HERA-B detector was a particle physics experiment at the HERA accelerator at the German national laboratory DESY that collected data from 1993 to 2003. It measured 8 m × 20 m × 9 m and weighed 1000 tons. The HERA-B collaboration consisted of some 250 scientists from 32 institutes in 13 countries. Its primary aim was to measure CP violation in the decays of heavy B mesons in the late 1990s, several years ahead of the Large Hadron Collider and B Factory programs. Unlike in most particle physics experiments, the particles were produced not by colliding two circulating beams head-on, nor by slamming the beam into a stationary target, but by moving a thin wire target directly into the waste 'halo' of the circulating proton beam of the HERA accelerator. The beam was unaffected by this 'scraping', but the collision rate produced could be made extremely high, around 5 to 10 million interactions per second (5–10 MHz). A novel scheme for moving the wires and the vertex detectors very close to the beam (less than one centimetre), using a vacuum chamber and motorised 'arms', had to be developed by the collaboration. External links HERA-B webpage HERA-B experiment record on INSPIRE-HEP References Particle experiments B physics Experimental particle physics
HERA-B
[ "Physics" ]
267
[ "Particle physics stubs", "Experimental physics", "Particle physics", "Experimental particle physics" ]
2,884,010
https://en.wikipedia.org/wiki/Turbine%20map
Each turbine in a gas turbine engine has an operating map. Complete maps are either based on turbine rig test results or are predicted by a special computer program. Alternatively, the map of a similar turbine can be suitably scaled. Description A turbine map shows lines of percent corrected speed (relative to a reference value) plotted against pressure ratio on the x-axis; deltaH/T (the enthalpy drop divided by entry temperature, roughly proportional to the temperature drop across the unit divided by the entry temperature) is also often used. The y-axis is some measure of flow, usually non-dimensional flow or corrected flow, but not actual flow. Sometimes the axes of a turbine map are transposed, to be consistent with those of a compressor map. As in this case, a companion plot, showing the variation of isentropic (i.e. adiabatic) or polytropic efficiency, is often also included. The turbine may be a transonic unit, where the throat Mach number reaches sonic conditions and the turbine becomes truly choked. Consequently, there is virtually no variation in flow between the corrected speed lines at high pressure ratios. Most turbines, however, are subsonic devices, the highest Mach number at the NGV (nozzle guide vane) throat being about 0.85. Under these conditions, there is a slight scatter in flow between the percent corrected speed lines in the 'choked' region of the map, where the flow for a given speed reaches a plateau. Unlike a compressor or fan, surge or stall does not occur in a turbine. This is because the gas flows through the turbine in its natural direction, from high to low pressure. As a result, there is no surge line marked on a turbine map. Working lines are difficult to see on a conventional turbine map because the speed lines bunch up. The map may be replotted, with the y-axis being the product of flow and corrected speed. This separates the speed lines, enabling working lines (and efficiency contours) to be cross-plotted and clearly seen. Progressive unchoking of the expansion system The following discussion relates to the expansion system of a two-spool, high-bypass-ratio, unmixed turbofan. A typical primary (i.e. hot) nozzle map (or characteristic) is similar in appearance to a turbine map, but lacks any (rotational) speed lines. Note that at high flight speeds (ignoring the change in altitude), the hot nozzle is usually in, or close to, a choking condition. This is because the ram rise in the air intake factors-up the nozzle pressure ratio. At static (e.g. sea-level static) conditions there is no ram rise, so the nozzle tends to operate unchoked. The low pressure turbine 'sees' the variation in flow capacity of the primary nozzle. A falling nozzle flow capacity tends to reduce the LP turbine pressure ratio (and deltaH/T). Initially, the reduction in LP turbine deltaH/T has little effect upon the entry flow of the unit. Eventually, however, the LP turbine unchokes, causing the flow capacity of the LP turbine to start to decrease. As long as the LP turbine remains choked, there is no significant change in HP turbine pressure ratio (or deltaH/T) and flow. Once, however, the LP turbine unchokes, the HP turbine deltaH/T starts to decrease. Eventually the HP turbine unchokes, causing its flow capacity to start to fall. Ground idle is often reached shortly after HP turbine unchoke. References Diagrams Engines Turbines
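To make the "corrected" quantities concrete, the sketch below shows how corrected flow and corrected speed are typically formed from actual flow, speed, and inlet conditions before being plotted on a map. The ISA sea-level reference conditions and the sample turbine-entry values are assumptions chosen for illustration, not data for any particular engine.
<syntaxhighlight lang="python">
# Corrected (referred) turbomachinery parameters,
# using assumed ISA sea-level reference conditions.
T_REF = 288.15     # K
P_REF = 101_325.0  # Pa

def corrected_flow(w, t_in, p_in):
    """Corrected mass flow W*sqrt(theta)/delta [kg/s]."""
    theta = t_in / T_REF
    delta = p_in / P_REF
    return w * theta**0.5 / delta

def corrected_speed(n, t_in):
    """Corrected shaft speed N/sqrt(theta) [rpm]."""
    return n / (t_in / T_REF)**0.5

# Hypothetical turbine-entry conditions, for illustration only.
w_corr = corrected_flow(w=30.0, t_in=1400.0, p_in=1.2e6)
n_corr = corrected_speed(n=12_000.0, t_in=1400.0)
print(f"corrected flow = {w_corr:.2f} kg/s, corrected speed = {n_corr:.0f} rpm")
</syntaxhighlight>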
Turbine map
[ "Physics", "Chemistry", "Technology" ]
740
[ "Machines", "Turbomachinery", "Engines", "Turbines", "Physical systems" ]
2,884,506
https://en.wikipedia.org/wiki/Detached%20eddy%20simulation
Detached eddy simulation (DES) is a modification of a Reynolds-averaged Navier–Stokes (RANS) model in which the model switches to a subgrid scale formulation in regions fine enough for large eddy simulation (LES) calculations. Details Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart–Allmaras model, it can be implemented with other RANS models (Strelets, 2001), by appropriately modifying the length scale which is explicitly or implicitly involved in the RANS model. So while Spalart–Allmaras-based DES acts as an LES with a wall model, DES based on other models (like two-equation models) behaves as a hybrid RANS–LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS–LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solution. References External links CFD wiki article on DES technique Article comparing RANS and DES for Automotive Applications. Computational fluid dynamics Turbulence models
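The core of the RANS–LES switch can be illustrated with the length-scale substitution used in Spalart–Allmaras-based DES. The sketch below is a minimal illustration, assuming the commonly quoted constant C_DES = 0.65 and a made-up two-cell grid; it is not a full DES implementation.
<syntaxhighlight lang="python">
import numpy as np

C_DES = 0.65  # commonly quoted DES constant for the Spalart-Allmaras model

def des_length_scale(d_wall, dx, dy, dz):
    """DES length scale: min(wall distance, C_DES * max cell dimension).

    Near walls (small d_wall) the RANS length scale is retained; far
    from walls, where C_DES * delta is smaller, the model acts as LES.
    """
    delta = np.maximum.reduce([dx, dy, dz])  # largest cell dimension
    return np.minimum(d_wall, C_DES * delta)

# Illustrative cells: one near a wall (RANS mode), one far away (LES mode)
d = np.array([0.001, 0.5])            # wall distances [m]
dx = dy = dz = np.array([0.01, 0.01])  # cell dimensions [m]
print(des_length_scale(d, dx, dy, dz))  # -> [0.001, 0.0065]
</syntaxhighlight>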
Detached eddy simulation
[ "Physics", "Chemistry" ]
289
[ "Computational physics stubs", "Computational fluid dynamics", "Computational physics", "Fluid dynamics stubs", "Fluid dynamics" ]
2,884,698
https://en.wikipedia.org/wiki/Diffusion%20barrier
A diffusion barrier is a thin layer (usually micrometres thick) of metal usually placed between two other metals. It acts as a barrier to prevent either metal from corrupting the other. Adhesion of a plated metal layer to its substrate requires physical interlocking, inter-diffusion of the deposit, or chemical bonding between plate and substrate. The role of a diffusion barrier is to prevent or to retard the inter-diffusion of the two superposed metals. Therefore, to be effective, a good diffusion barrier requires inertness with respect to adjacent materials. To obtain good adhesion and a diffusion barrier simultaneously, the bonding between layers needs to come from a chemical reaction of limited range at both boundaries. Materials providing good adhesion are not necessarily good diffusion barriers and vice versa. Consequently, there are cases where two or more separate layers must be used to provide a proper interface between substrates. Selection The choice of diffusion barrier depends on the final function; anticipated operating temperature and service life are critical parameters in selecting diffusion barrier materials. Many thin film metal combinations have been evaluated for their adhesion and diffusion barrier properties. Aluminum provides good electrical and thermal conductivity, adhesion and reliability because of its oxygen reactivity and the self-passivation properties of its oxide. Copper also easily reacts with oxygen but its oxides have poor adhesion properties. As for gold, its virtue lies in its inertness and ease of application; its drawback is its cost. Chromium has excellent adhesion to many materials because of its reactivity. Its affinity for oxygen forms a thin stable oxide coat on the outer surface, creating a passivation layer which prevents further oxidation of the chromium, and of the underlying metal (if any), even in corrosive environments. Chromium plating on steel for automotive use involves three layers (copper, nickel, then chromium) to provide long term durability where there will be many large temperature changes. If chromium is plated directly onto the steel, then their different thermal expansion coefficients will cause the chrome plating to peel off the steel. Nickel, Nichrome, tantalum, hafnium, niobium, zirconium, vanadium, and tungsten are a few of the metal combinations used to form diffusion barriers for specific applications. Conductive ceramics can also be used, such as tantalum nitride, indium oxide, copper silicide, tungsten nitride, and titanium nitride. Integrated circuits A barrier metal is a material used in integrated circuits to chemically isolate semiconductors from soft metal interconnects, while maintaining an electrical connection between them. For instance, a layer of barrier metal must surround every copper interconnect in modern integrated circuits, to prevent diffusion of copper into surrounding materials. As the name implies, a barrier metal must have high electrical conductivity in order to maintain a good electronic contact, while maintaining a low enough copper diffusivity to sufficiently chemically isolate these copper conductor films from underlying device silicon. 
The thickness of the barrier films is also quite important; with too thin a barrier layer, the inner copper may contact and poison the very devices that they supply with energy and information; with barrier layers too thick, these wrapped stacks of two barrier metal films and an inner copper conductor can have a greater total resistance than the traditional aluminum interconnections would have, eliminating any benefit derived from the new metallization technology. Some materials that have been used as barrier metals include cobalt, ruthenium, tantalum, tantalum nitride, indium oxide, tungsten nitride, and titanium nitride (the last four being conductive ceramics, but "metals" in this context). References Semiconductor device fabrication Metal plating
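For a rough feel for why barrier thickness matters, the sketch below estimates a Fickian diffusion length l ≈ 2√(Dt) with an Arrhenius diffusivity D = D0·exp(−Ea/kT). The prefactor and activation energy are placeholder values chosen purely for illustration, not measured data for any real barrier material.
<syntaxhighlight lang="python">
import math

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def diffusion_length(d0_cm2_s, ea_ev, temp_k, time_s):
    """Fickian diffusion length l ~ 2*sqrt(D*t), with D = D0*exp(-Ea/kT)."""
    d = d0_cm2_s * math.exp(-ea_ev / (K_B * temp_k))  # diffusivity [cm^2/s]
    return 2.0 * math.sqrt(d * time_s)                # length [cm]

# Placeholder Arrhenius parameters, purely illustrative: if the estimated
# length over the service life stays well below the barrier thickness,
# the barrier is (in this toy picture) adequate.
l_cm = diffusion_length(d0_cm2_s=1e-3, ea_ev=1.8, temp_k=400.0,
                        time_s=10 * 365 * 24 * 3600)
print(f"estimated diffusion length over 10 years: {l_cm * 1e7:.3g} nm")
</syntaxhighlight>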
Diffusion barrier
[ "Chemistry", "Materials_science" ]
789
[ "Microtechnology", "Metallurgical processes", "Coatings", "Semiconductor device fabrication", "Metal plating" ]
2,884,728
https://en.wikipedia.org/wiki/Automated%20reasoning
In computer science, in particular in knowledge representation and reasoning and metalogic, the area of automated reasoning is dedicated to understanding different aspects of reasoning. The study of automated reasoning helps produce computer programs that allow computers to reason completely, or nearly completely, automatically. Although automated reasoning is considered a sub-field of artificial intelligence, it also has connections with theoretical computer science and philosophy. The most developed subareas of automated reasoning are automated theorem proving (and the less automated but more pragmatic subfield of interactive theorem proving) and automated proof checking (viewed as guaranteed correct reasoning under fixed assumptions). Extensive work has also been done in reasoning by analogy using induction and abduction. Other important topics include reasoning under uncertainty and non-monotonic reasoning. An important part of the uncertainty field is that of argumentation, where further constraints of minimality and consistency are applied on top of the more standard automated deduction. John Pollock's OSCAR system is an example of an automated argumentation system that is more specialized than a general-purpose automated theorem prover. Tools and techniques of automated reasoning include the classical logics and calculi, fuzzy logic, Bayesian inference, reasoning with maximal entropy and many less formal ad hoc techniques. Early years The development of formal logic played a big role in the field of automated reasoning, which itself led to the development of artificial intelligence. A formal proof is a proof in which every logical inference has been checked back to the fundamental axioms of mathematics. All the intermediate logical steps are supplied, without exception. No appeal is made to intuition, even if the translation from intuition to logic is routine. Thus, a formal proof is less intuitive and less susceptible to logical errors. Some consider the Cornell Summer meeting of 1957, which brought together many logicians and computer scientists, as the origin of automated reasoning, or automated deduction. Others say that it began before that with the 1955 Logic Theorist program of Newell, Shaw and Simon, or with Martin Davis’ 1954 implementation of Presburger's decision procedure (which proved that the sum of two even numbers is even). Automated reasoning, although a significant and popular area of research, went through an "AI winter" in the eighties and early nineties. The field subsequently revived, however. For example, in 2005, Microsoft started using verification technology in many of their internal projects and planned to include a logical specification and checking language in the 2012 version of Visual C. Significant contributions Principia Mathematica was a milestone work in formal logic written by Alfred North Whitehead and Bertrand Russell. Principia Mathematica (meaning "Principles of Mathematics") was written with the purpose of deriving all or some mathematical expressions in terms of symbolic logic. Principia Mathematica was initially published in three volumes in 1910, 1912 and 1913. Logic Theorist (LT) was the first program, developed in 1956 by Allen Newell, Cliff Shaw and Herbert A. Simon, to "mimic human reasoning" in proving theorems; it was demonstrated on fifty-two theorems from chapter two of Principia Mathematica, proving thirty-eight of them. 
In addition to proving the theorems, the program found a proof for one of the theorems that was more elegant than the one provided by Whitehead and Russell. After an unsuccessful attempt at publishing their results, Newell, Shaw, and Simon reported in their 1958 publication, The Next Advance in Operations Research: "There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until (in a visible future) the range of problems they can handle will be co-extensive with the range to which the human mind has been applied." Examples of Formal Proofs {| class="wikitable" |- ! Year !! Theorem !! Proof System !! Formalizer !! Traditional Proof |- | 1986 || First Incompleteness || Boyer-Moore || Shankar || Gödel |- | 1990 || Quadratic Reciprocity || Boyer-Moore || Russinoff || Eisenstein |- | 1996 || Fundamental Theorem of Calculus || HOL Light || Harrison || Henstock |- | 2000 || Fundamental Theorem of Algebra || Mizar || Milewski || Brynski |- | 2000 || Fundamental Theorem of Algebra || Coq || Geuvers et al. || Kneser |- | 2004 || Four Color || Coq || Gonthier || Robertson et al. |- | 2004 || Prime Number || Isabelle || Avigad et al. || Selberg-Erdős |- | 2005 || Jordan Curve || HOL Light || Hales || Thomassen |- | 2005 || Brouwer Fixed Point || HOL Light || Harrison || Kuhn |- | 2006 || Flyspeck 1 || Isabelle || Bauer-Nipkow || Hales |- | 2007 || Cauchy Residue || HOL Light || Harrison || Classical |- | 2008 || Prime Number || HOL Light || Harrison || Analytic proof |- | 2012 || Feit-Thompson || Coq || Gonthier et al. || Bender, Glauberman and Peterfalvi |- | 2016 || Boolean Pythagorean triples problem || Formalized as SAT || Heule et al. || None |} Proof systems Boyer-Moore Theorem Prover (NQTHM) The design of NQTHM was influenced by John McCarthy and Woody Bledsoe. Started in 1971 at Edinburgh, Scotland, this was a fully automatic theorem prover built using Pure Lisp. The main aspects of NQTHM were: the use of Lisp as a working logic; the reliance on a principle of definition for total recursive functions; the extensive use of rewriting and "symbolic evaluation"; and an induction heuristic based on the failure of symbolic evaluation. HOL Light Written in OCaml, HOL Light is designed to have a simple and clean logical foundation and an uncluttered implementation. It is essentially another proof assistant for classical higher order logic. Coq Developed in France, Coq is another automated proof assistant, which can automatically extract executable programs from specifications, as either Objective CAML or Haskell source code. Properties, programs and proofs are formalized in the same language, called the Calculus of Inductive Constructions (CIC). Applications Automated reasoning has been most commonly used to build automated theorem provers. Oftentimes, however, theorem provers require some human guidance to be effective and so more generally qualify as proof assistants. In some cases such provers have come up with new approaches to proving a theorem. Logic Theorist is a good example of this. The program came up with a proof for one of the theorems in Principia Mathematica that was more efficient (requiring fewer steps) than the proof provided by Whitehead and Russell. Automated reasoning programs are being applied to solve a growing number of problems in formal logic, mathematics and computer science, logic programming, software and hardware verification, circuit design, and many others. 
The TPTP (Sutcliffe and Suttner 1998) is a library of such problems that is updated on a regular basis. There is also a competition among automated theorem provers held regularly at the CADE conference (Pelletier, Sutcliffe and Suttner 2002); the problems for the competition are selected from the TPTP library. See also Automated machine learning (AutoML) Automated theorem proving Reasoning system Semantic reasoner Program analysis (computer science) Applications of artificial intelligence Outline of artificial intelligence Casuistry • Case-based reasoning Abductive reasoning Inference engine Commonsense reasoning Conferences and workshops International Joint Conference on Automated Reasoning (IJCAR) Conference on Automated Deduction (CADE) International Conference on Automated Reasoning with Analytic Tableaux and Related Methods Journals Journal of Automated Reasoning Communities Association for Automated Reasoning (AAR) References External links International Workshop on the Implementation of Logics Workshop Series on Empirically Successful Topics in Automated Reasoning Theoretical computer science Automated theorem proving Logic in computer science
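For flavor, here is a tiny propositional forward-chaining prover in Python. It is a toy illustration of automated deduction over Horn clauses, written for this article; it is not a sketch of any of the systems named above, and the rule names are invented.
<syntaxhighlight lang="python">
def forward_chain(facts, rules, goal):
    """Forward chaining over propositional Horn clauses.

    facts: set of atoms known to be true.
    rules: list of (premises, conclusion) pairs, premises a set of atoms.
    Returns True if `goal` becomes derivable from the facts and rules.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire any rule whose premises are all known and whose
            # conclusion is new; repeat until a fixed point is reached.
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return goal in known

# Invented example: rain makes the ground wet; wet + freezing makes ice.
rules = [({"rain"}, "wet_ground"),
         ({"wet_ground", "freezing"}, "ice")]
print(forward_chain({"rain", "freezing"}, rules, "ice"))  # True
</syntaxhighlight>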
Automated reasoning
[ "Mathematics" ]
1,694
[ "Logic in computer science", "Automated theorem proving", "Theoretical computer science", "Applied mathematics", "Mathematical logic", "Computational mathematics" ]
2,884,904
https://en.wikipedia.org/wiki/Imaginary%20time
Imaginary time is a mathematical representation of time that appears in some approaches to special relativity and quantum mechanics. It finds uses in certain cosmological theories. Mathematically, imaginary time is real time which has undergone a Wick rotation so that its coordinates are multiplied by the imaginary unit i. Imaginary time is not imaginary in the sense that it is unreal or made-up; it is simply expressed in terms of imaginary numbers. Origins In mathematics, the imaginary unit is $i$, such that $i^2$ is defined to be $-1$. A number which is a direct multiple of $i$ is known as an imaginary number. A number that is the sum of an imaginary number and a real number is known as a complex number. In certain physical theories, periods of time are multiplied by $i$ in this way. Mathematically, an imaginary time period $\tau$ may be obtained from real time $t$ via a Wick rotation by $\pi/2$ in the complex plane: $\tau = it$. Stephen Hawking popularized the concept of imaginary time in his book The Universe in a Nutshell. In fact, the terms "real" and "imaginary" for numbers are just a historical accident, much like the terms "rational" and "irrational". In cosmology Derivation In the Minkowski spacetime model adopted by the theory of relativity, spacetime is represented as a four-dimensional surface or manifold. Its four-dimensional equivalent of a distance in three-dimensional space is called an interval. Assuming that a specific time period is represented as a real number in the same way as a distance in space, an interval in relativistic spacetime is given by the usual formula but with time negated: $s^2 = x^2 + y^2 + z^2 - t^2$, where $x$, $y$ and $z$ are distances along each spatial axis and $t$ is a period of time or "distance" along the time axis (strictly, the time coordinate is $ct$ where $c$ is the speed of light; however, we conventionally choose units such that $c = 1$). Mathematically this is equivalent to writing $s^2 = x^2 + y^2 + z^2 + (it)^2$. In this context, $i$ may be either accepted as a feature of the relationship between space and real time, as above, or it may alternatively be incorporated into time itself, such that the value of time is itself an imaginary number, denoted by $\tau$. The equation may then be rewritten in normalised form: $s^2 = x^2 + y^2 + z^2 + \tau^2$. Similarly its four vector may then be written as $(x_0, x_1, x_2, x_3)$, where distances are represented as $x_1, x_2, x_3$ and $x_0 = ict$, where $c$ is the speed of light and time is imaginary. Application to cosmology Hawking noted the utility of rotating time intervals into an imaginary metric in certain situations, in 1971. In physical cosmology, imaginary time may be incorporated into certain models of the universe which are solutions to the equations of general relativity. In particular, imaginary time can help to smooth out gravitational singularities, where known physical laws break down, to remove the singularity and avoid such breakdowns (see Hartle–Hawking state). The Big Bang, for example, appears as a singularity in ordinary time but, when modelled with imaginary time, the singularity can be removed and the Big Bang functions like any other point in four-dimensional spacetime. Any boundary to spacetime is a form of singularity, where the smooth nature of spacetime breaks down. With all such singularities removed from the Universe, it thus can have no boundary and Stephen Hawking speculated that "the boundary condition to the Universe is that it has no boundary". However, the unproven nature of the relationship between actual physical time and imaginary time incorporated into such models has raised criticisms. 
Roger Penrose has noted that there needs to be a transition from the Riemannian metric (often referred to as "Euclidean" in this context) with imaginary time at the Big Bang to a Lorentzian metric with real time for the evolving Universe. Also, modern observations suggest that the Universe is open and will never shrink back to a Big Crunch. If this proves true, then the end-of-time boundary still remains. See also Euclidean quantum gravity Multiple time dimensions References Further reading Gerald D. Mahan. Many-Particle Physics, Chapter 3 A. Zee Quantum field theory in a nutshell, Chapter V.2 External links The Beginning of Time — Lecture by Stephen Hawking which discusses imaginary time. Stephen Hawking's Universe: Strange Stuff Explained — PBS site on imaginary time. Quantum mechanics Philosophy of time
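A minimal numerical illustration of the Wick rotation described above: substituting τ = it turns the Minkowski interval into a Euclidean sum of squares. Units with c = 1 are assumed, and the sample event coordinates are arbitrary.
<syntaxhighlight lang="python">
# Wick rotation demo in units where c = 1.
x, y, z, t = 1.0, 2.0, 2.0, 4.0  # arbitrary event coordinates

s2_minkowski = x**2 + y**2 + z**2 - t**2      # interval with real time
tau = 1j * t                                  # imaginary time, tau = i*t
s2_euclidean = x**2 + y**2 + z**2 + tau**2    # same interval, Euclidean form

print(s2_minkowski)        # -7.0
print(s2_euclidean.real)   # -7.0  (tau^2 = -t^2, so the two forms agree)
</syntaxhighlight>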
Imaginary time
[ "Physics" ]
851
[ "Physical quantities", "Time", "Theoretical physics", "Quantum mechanics", "Philosophy of time", "Spacetime" ]
2,885,328
https://en.wikipedia.org/wiki/Oligotroph
An oligotroph is an organism that can live in an environment that offers very low levels of nutrients. They may be contrasted with copiotrophs, which prefer nutritionally rich environments. Oligotrophs are characterized by slow growth, low rates of metabolism, and generally low population density. Oligotrophic environments are those that offer little to sustain life. These environments include deep oceanic sediments, caves, glacial and polar ice, deep subsurface soil, aquifers, ocean waters, and leached soils. Examples of oligotrophic organisms are the cave-dwelling olm; the bacterium "Candidatus Pelagibacter ubique", which is the most abundant organism in the ocean (with an estimated 2 × 10^28 individuals in total); and lichens, with their extremely low metabolic rate. Etymology Etymologically, the word "oligotroph" is a combination of the Greek adjective oligos (ὀλίγος) meaning "few" and the adjective trophikos (τροφικός) meaning "feeding". Plant adaptations Plant adaptations to oligotrophic soils provide for greater and more efficient nutrient uptake, reduced nutrient consumption, and efficient nutrient storage. Improvements in nutrient uptake are facilitated by root adaptations such as nitrogen-fixing root nodules, mycorrhizae and cluster roots. Consumption is reduced by very slow growth rates, and by efficient use of low-availability nutrients; for example, the use of highly available ions to maintain turgor pressure, with low-availability nutrients reserved for the building of tissues. Despite these adaptations, nutrient requirements typically exceed uptake during the growing season, so many oligotrophic plants have the ability to store nutrients, for example, in trunk tissues, when demand is low, and remobilise them when demand increases. Oligotrophic environments Oligotrophs occupy environments where the available nutrients offer little to sustain life. The term "oligotrophic" is commonly used to describe terrestrial and aquatic environments with very low concentrations of nitrates, iron, phosphates, and carbon sources. Oligotrophs have acquired survival mechanisms that involve the expression of genes during periods of low nutrient conditions, which has allowed them to find success in various environments. Despite the capability to live in low nutrient concentrations, oligotrophs may find difficulty surviving in nutrient-rich environments. The presence of excess nutrients overwhelms oligotrophs' metabolic systems, causing them to struggle to regulate nutrient uptake. For example, oligotrophs' enzymes function well in low nutrient environments, but struggle in high nutrient environments. Antarctica Antarctic environments offer very little to sustain life as most organisms are not well adapted to live under nutrient-limiting conditions and cold temperatures (lower than 5 °C). As such, these environments display a large abundance of psychrophiles that are well adapted to living in an Antarctic biome. Most oligotrophs live in lakes where water helps support biochemical processes for growth and survival. Below are some documented examples of oligotrophic environments in Antarctica: Lake Vostok, a freshwater lake which has been isolated from the world beneath 4 km (2.5 mi) of Antarctic ice is frequently held to be a primary example of an oligotrophic environment. Analysis of ice samples showed ecologically separated microenvironments. Isolation of microorganisms from each microenvironment led to the discovery of a wide range of different microorganisms present within the ice sheet. 
Traces of fungi have also been observed, which suggests potential for unique symbiotic interactions. The lake’s extensive oligotrophy has led some to believe parts of the lake are completely sterile. This lake is a helpful tool for simulating studies regarding extraterrestrial life on frozen planets and other celestial bodies. Crooked Lake is an ultra-oligotrophic glacial lake with a thin distribution of heterotrophic and autotrophic microorganisms. The microbial loop plays a big role in cycling nutrients and energy within this lake, despite particularly low bacterial abundance and productivity in these environments. The limited ecological diversity can be attributed to the lake's low annual temperatures. Species discovered in this lake include Ochromonas, Chlamydomonas, Scourfeldia, Cryptomonas, Ankistrodesmus falcatus, and Daphniopsis studeri (a microcrustacean). It is proposed that low competitive selection against Daphniopsis studeri has allowed the species to survive long enough to reproduce in nutrient-limiting environments. Australia The sandplains and lateritic soils of southern Western Australia, where an extremely thick craton has precluded any geological activity since the Cambrian and there has been no glaciation to renew soils since the Carboniferous. Thus, soils are extremely nutrient-poor and most vegetation must use strategies such as cluster roots to gain even the smallest quantities of such nutrients as phosphorus and sulfur. The vegetation in these regions, however, is remarkable for its biodiversity, which in places is as great as that of a tropical rainforest and produces some of the most spectacular wildflowers in the world. It is, however, severely threatened by climate change, which has moved the winter rain belt south, and by clearing for agriculture and the use of fertilizers; clearing is primarily driven by low land costs, which make farming economic even with yields a fraction of those in Europe or North America. South America An example of oligotrophic soils are those on white sands, with soil pH lower than 5.0, in the Rio Negro basin in northern Amazonia that house very low-diversity, extremely fragile forests and savannahs drained by blackwater rivers; the dark water colour is due to high concentrations of tannins, humic acids and other organic compounds derived from the very slow decomposition of plant matter. Similar forests are found in the oligotrophic waters of the Patía River delta on the Pacific side of the Andes. Ocean In the ocean, the subtropical gyres north and south of the equator are regions in which the nutrients required for phytoplankton growth (for instance, nitrate, phosphate and silicic acid) are strongly depleted all year round. These areas are described as oligotrophic and exhibit low surface chlorophyll. They are occasionally described as "ocean deserts". Oligotrophic soil environments The oligotrophic soil environments include agricultural soil, frozen soil, et cetera. Various factors, such as decomposition, soil structure, fertilization and temperature, can affect nutrient availability in soil environments. Generally, nutrients become less available with depth in the soil, because at the surface the organic compounds decomposed from plant and animal debris are consumed quickly by other microbes, leaving a lack of nutrients at deeper levels. In addition, the metabolic waste produced by the microorganisms at the surface causes the accumulation of toxic chemicals in the deeper area. 
Furthermore, oxygen and water are important for some metabolic pathways, but it is difficult for water and oxygen to diffuse as depth increases. Some factors, such as soil aggregates, pores and extracellular enzymes, may help water, oxygen and other nutrients diffuse into the soil. Moreover, the presence of minerals under the soil provides alternative sources for the species living in oligotrophic soil. In agricultural lands, the application of fertilizer has a complicated impact on soil carbon sources, either increasing or decreasing the organic carbon in the soil. Collimonas is one of the genera capable of living in oligotrophic soil. One common feature of the environments where Collimonas lives is the presence of fungi, because Collimonas has the ability not only to hydrolyze the chitin produced by fungi for nutrients, but also to produce materials (e.g., P. fluorescens 2-79) to protect itself from fungal infection. Such mutualistic relationships are common in oligotrophic environments. Additionally, Collimonas can obtain electron sources from rocks and minerals by weathering. In polar areas, such as the Antarctic and Arctic regions, the soil environment is considered oligotrophic because the soil is frozen, with low biological activity. The most abundant species in frozen soil are Actinomycetota, Pseudomonadota, Acidobacteriota and Cyanobacteria, together with small numbers of archaea and fungi. Actinomycetota can maintain the activity of their metabolic enzymes and continue their biochemical reactions over a wide range of low temperatures. In addition, the DNA-repair machinery in Actinomycetota protects them from lethal DNA mutation at low temperature. See also Oligotrophic lake Eutrophic lake Pelagibacter ubique, most abundant species on Earth and a streamlined oligotroph References External links Special issue about Lake oligotrophication published in Freshwater Biology Edaphology Aquatic ecology Limnology
Oligotroph
[ "Biology" ]
1,864
[ "Aquatic ecology", "Ecosystems" ]
2,885,779
https://en.wikipedia.org/wiki/Phase-transfer%20catalyst
In chemistry, a phase-transfer catalyst or PTC is a catalyst that facilitates the transition of a reactant from one phase into another phase where reaction occurs. Phase-transfer catalysis is a special form of catalysis and can act through homogeneous catalysis or heterogeneous catalysis methods depending on the catalyst used. Ionic reactants are often soluble in an aqueous phase but insoluble in an organic phase in the absence of the phase-transfer catalyst. The catalyst functions like a detergent for solubilizing the salts into the organic phase. Phase-transfer catalysis refers to the acceleration of the reaction upon the addition of the phase-transfer catalyst. By using a PTC process, one can achieve faster reactions, obtain higher conversions or yields, make fewer byproducts, eliminate the need for expensive or dangerous solvents that will dissolve all the reactants in one phase, eliminate the need for expensive raw materials and/or minimize waste problems. Phase-transfer catalysts are especially useful in green chemistry: by allowing the use of water, the need for organic solvents is reduced. Contrary to common perception, PTC is not limited to systems with hydrophilic and hydrophobic reactants. PTC is sometimes employed in liquid/solid and liquid/gas reactions. As the name implies, one or more of the reactants are transported into a second phase which contains both reactants. Phase-boundary catalysis (PBC) is a type of heterogeneous catalytic system which facilitates the chemical reaction of a particular chemical component in an immiscible phase to react on a catalytic active site located at a phase boundary. The chemical component is soluble in one phase but insoluble in the other. The catalyst for PBC has been designed so that the external part of the zeolite is hydrophobic while the interior is usually hydrophilic, notwithstanding the polar nature of some reactants. In this sense, the medium environment in this system is close to that of an enzyme. The major difference between this system and an enzyme is lattice flexibility. The lattice of zeolite is rigid, whereas the enzyme is flexible. Types Phase-transfer catalysts for anionic reactants are often quaternary ammonium salts. Commercially important catalysts include benzyltriethylammonium chloride, methyltricaprylammonium chloride and methyltributylammonium chloride. Organic phosphonium salts are also used, e.g., hexadecyltributylphosphonium bromide. The phosphonium salts tolerate higher temperatures, but are unstable toward base, degrading to phosphine oxide. For example, the nucleophilic substitution reaction of an aqueous sodium cyanide solution with an ethereal solution of 1-bromooctane does not readily occur. The 1-bromooctane is poorly soluble in the aqueous cyanide solution, and the sodium cyanide does not dissolve well in the ether. Upon the addition of small amounts of hexadecyltributylphosphonium bromide, a rapid reaction ensues to give nonyl nitrile: C8H17Br(org) + NaCN(aq) → C8H17CN(org) + NaBr(aq), catalysed by the quaternary phosphonium salt R4P+Br−. The quaternary phosphonium cation "ferries" cyanide ions from the aqueous phase into the organic phase. Subsequent work demonstrated that many such reactions can be performed rapidly at around room temperature using catalysts such as tetra-n-butylammonium bromide and methyltrioctylammonium chloride in benzene/water systems. An alternative to the use of "quat salts" is to convert alkali metal cations into hydrophobic cations. In the research lab, crown ethers are used for this purpose. 
Polyethylene glycols are more commonly used in practical applications. These ligands encapsulate alkali metal cations (typically Na+ and K+), affording large lipophilic cations. These polyethers have a hydrophilic interior containing the ion and a hydrophobic exterior. Chiral phase-transfer catalysts have also been demonstrated. Applications PTC is widely exploited industrially. Polyesters, for example, are prepared from acyl chlorides and bisphenol-A. Phosphothioate-based pesticides are generated by PTC-catalyzed alkylation of phosphothioates. One of the more complex applications of PTC involves asymmetric alkylations, which are catalyzed by chiral quaternary ammonium salts derived from cinchona alkaloids. Design of phase-boundary catalyst Phase-boundary catalytic (PBC) systems can be contrasted with conventional catalytic systems. PBC is primarily applicable to reactions at the interface of an aqueous phase and an organic phase. In these cases, an approach such as PBC is needed due to the immiscibility of aqueous phases with most organic substrates. In PBC, the catalyst acts at the interface between the aqueous and organic phases. The reaction medium of a phase-boundary catalysis system for the catalytic reaction of immiscible aqueous and organic phases consists of three phases: an organic liquid phase containing most of the organic substrate, an aqueous liquid phase containing most of the aqueous-phase substrate, and the solid catalyst. In the case of a conventional catalytic system, when the reaction mixture is vigorously stirred, an apparently homogeneous emulsion is obtained, which segregates very rapidly into two liquid phases when the agitation ceases. Segregation occurs by the formation of droplets of the aqueous phase in the emulsion, which move downwards to re-form the aqueous phase, indicating that the emulsion consists of dispersed particles of the aqueous phase in the organic phase. Due to the triphasic reaction conditions, the overall reaction between aqueous-phase and organic-phase substrates on a solid catalyst requires different transfer processes. The following steps are involved: transfer of the aqueous phase from the organic phase to the external surface of the solid catalyst; transfer of the aqueous phase inside the pore volume of the solid catalyst; transfer of the substrate from the aqueous phase to the interphase between the aqueous and organic phases; transfer of the substrate from the interphase to the aqueous phase; mixing and diffusion of the substrate in the aqueous phase; transfer of the substrate from the aqueous phase to the external surface of the solid catalyst; transfer of the substrate inside the pore volume of the solid catalyst; catalytic reaction (adsorption, chemical reaction and desorption). In some systems, without vigorous stirring, no reactivity of the catalyst is observed in the conventional catalytic system. Stirring and mass transfer from the organic to the aqueous phase and vice versa are required for a conventional catalytic system. Conversely, in PBC, stirring is not required because mass transfer is not the rate-determining step in this catalytic system. It has already been demonstrated that this system works for alkene epoxidation without stirring or the addition of a co-solvent to drive liquid–liquid phase transfer. The active sites located on the external surface of the zeolite particles were dominantly effective for the observed phase-boundary catalytic system. Process of synthesis A modified zeolite whose external surface was partly covered with alkylsilane, called a phase-boundary catalyst, was prepared in two steps. 
First, titanium dioxide made from titanium isopropoxide was impregnated into NaY zeolite powder to give sample W-Ti-NaY. In the second step, alkylsilane from n-octadecyltrichlorosilane (OTS) was impregnated into the W-Ti-NaY powder containing water. Due to the hydrophilicity of the W-Ti-NaY surface, the addition of a small amount of water led to aggregation owing to the capillary force of water between particles. Under these conditions, it is expected that only the outer surface of the aggregates, in contact with the organic phase, can be modified with OTS, and indeed almost all of the particles were located at the phase boundary when added to an immiscible water–organic solvent (W/O) mixture. The partly modified sample is denoted w/o-Ti-NaY. Fully modified Ti-NaY (o-Ti-NaY), prepared without the addition of water in the above second step, is readily suspended in an organic solvent as expected. Janus interphase catalyst Janus interphase catalysts are a new generation of heterogeneous catalysts, capable of carrying out organic reactions at the interface of two phases via the formation of a Pickering emulsion. See also Ionic transfer References Catalysts
Phase-transfer catalyst
[ "Chemistry" ]
1,847
[ "Catalysis", "Catalysts", "Chemical kinetics" ]
2,886,641
https://en.wikipedia.org/wiki/Nepheloid%20layer
A nepheloid layer or nepheloid zone is a layer of water in the deep ocean basin, above the ocean floor, that contains significant amounts of suspended sediment. It is from 200 to 1000 m thick. The name comes from Greek: nephos, "cloud". The particles in the layer may come from the upper ocean layers and from sediment stripped from the ocean floor by currents. The layer's thickness depends on the bottom current velocity and results from a balance between the gravitational settling of particles and the turbulence of the current. The formation mechanisms of nepheloid layers may vary, but primarily depend on deep ocean convection. Nepheloid layers can impact the accuracy of instruments when measuring bathymetry as well as affect the types of marine life in an area. There are several significant examples of nepheloid layers across the globe, including within the Gulf of Mexico and the Porcupine Bank. Formation mechanisms A surface nepheloid layer (SNL) may be created due to particle flotation, while intermediate nepheloid layers (INL) may form at the slopes of the ocean bed due to the dynamics of internal waves. These intermediate nepheloid layers are derived from bottom nepheloid layers (BNL) after the layers become detached and spread along isopycnal surfaces. Open ocean convection has a prominent effect on the distribution of nepheloid layers and their ability to form in certain areas of the ocean, such as the northern Atlantic Ocean and the northwestern Mediterranean Sea. Nepheloid layers are more likely to form based on patterns of deep ocean circulation that directly affect the abyssal plain, largely through the disruption of accumulated sediments in areas where deep ocean currents interact with the sea floor. Convection currents that disturb areas of the ocean floor, such as those that circulate via ocean gyres, also affect the concentration and relative sizes of the suspended sediments, and by extension the area's corresponding biotic activity. Impacts Bathymetry The existence of the nepheloid layer complicates bathymetric measurements: one has to take into account the reflections of lidar or ultrasonic pulses from the upper interface of this layer, as well as their absorption within the layer. Interference from the thick layers of suspended sediments can ultimately produce inaccurate results concerning submarine topography. Marine life Depending on the characteristics of a particular nepheloid layer, it can have a significant impact on marine life in the area. The layers of sediment can block natural light, making it difficult for photosynthetic organisms to survive. In addition, the suspended particulates can harm filter-feeding organisms and plankton by blocking their gills or weighing them down. Examples Gulf of Mexico A prominent nepheloid layer exists in the Gulf of Mexico extending from the delta of the Brazos River to South Padre Island. The layer of turbid water can begin as shallow as 20 meters and is caused mostly by clay run-off from multiple rivers. The silty bottom of the gulf also contributes to the high turbidity. Due to the blockage of light by this nepheloid layer, algae and coral are sparse, resulting in an animal-dominated community. This community is largely composed of infauna and consists of a detrital-based food chain. Many species of polychaete worms, amphipods, and brittle stars inhabit the benthic surface and can also be accompanied by some secondary consumers such as flounders, shrimp, crabs, and starfishes. 
Porcupine Bank A prominent nepheloid layer exists in the Porcupine Bank. Geographically, the nepheloid layers are more detectable and prominent along the Porcupine Bank's western slope. Both the bottom and intermediate nepheloid layers form due to a myriad of factors such as internal tides, waves, and subsequent bottom erosion. The intermediate nepheloid layer can also manifest by breaking off from the bottom layer, and the water column above the area in which the bottom nepheloid layer forms is marked by significant differences in temperature, density, and salinity. References Oceanography Geology
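The settling–turbulence balance described at the start of this article can be given a rough scale with Stokes' law. The sketch below computes a terminal settling velocity for a small sediment grain; the particle size, densities, and viscosity are illustrative assumptions, and Stokes' law itself only holds for very small particles at low Reynolds number.
<syntaxhighlight lang="python">
G = 9.81        # gravitational acceleration [m/s^2]
MU = 1.4e-3     # dynamic viscosity of cold seawater [Pa*s], assumed
RHO_W = 1027.0  # seawater density [kg/m^3], assumed

def stokes_settling_velocity(d_m, rho_p):
    """Stokes terminal velocity v = g*d^2*(rho_p - rho_f) / (18*mu)."""
    return G * d_m**2 * (rho_p - RHO_W) / (18.0 * MU)

# A 10-micron mineral grain (illustrative values only).
v = stokes_settling_velocity(d_m=10e-6, rho_p=2650.0)
print(f"settling velocity ~ {v*1e6:.1f} micrometres per second")
</syntaxhighlight>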
Nepheloid layer
[ "Physics", "Environmental_science" ]
842
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
2,887,653
https://en.wikipedia.org/wiki/Treadle
A treadle (from the verb "to tread") is a foot-powered lever mechanism; it is operated by treading on it repeatedly. A treadle, unlike some other types of pedals, is not directly mounted on the crank (see treadle bicycle for a clear example). Most treadle machines convert reciprocating motion into rotating motion, using a mechanical linkage to indirectly connect one or two treadles to a crank. The treadle then turns the crank, which powers the machine. Other machines use treadles directly, to generate reciprocating motion. For instance, in a treadle loom, the reciprocating motion is used directly to lift and lower the harnesses or heddles; a common treadle pump uses the reciprocating motion to raise and lower pistons. Before the widespread availability of electric power, treadles were the most common way to power a range of machines. They are still widely used as a matter of preference and necessity. A human-powered machine gives the human operator close, instinctive control over the rate at which energy is fed into the machine; this lets them easily vary the rate at which they work. Treadle-operated machines are also used in environments where electric power is not available to power electric machinery. Other, similar mechanisms for allowing human and animal muscle to power machines are cranks, treadmills, treadwheels, and kick wheels like a potter's kick wheel. Operation and uses A treadle is operated by pressing down on it repeatedly with one or both feet, causing a rocking motion. This movement can then be converted into rotational motion via a crankshaft and stored in a flywheel. Alternatively, energy can be stored in a spring, as in the pole lathe. Treadles were once used extensively to power most machines, including lathes, rotating or reciprocating saws, spinning wheels, looms, and sewing machines. Today the use of treadle-powered machines is common in areas of the developing world where other forms of power are unavailable. It is also common among artisans, hobbyists and historical re-enactors. Some treadle looms in Africa and South Asia use toggles on a string as treadles. The toggles are held between the weaver's toes. See also Bicycle pedal Treadle bicycle Treadle pump Sewing machine References Mechanical engineering Human power Foot Mechanical hand tools
Treadle
[ "Physics", "Engineering" ]
487
[ "Applied and interdisciplinary physics", "Physical quantities", "Power (physics)", "Mechanics", "Human power", "Mechanical hand tools", "Mechanical engineering" ]
35,784,363
https://en.wikipedia.org/wiki/Jeans%27s%20theorem
In astrophysics and statistical mechanics, Jeans's theorem, named after James Jeans, states that any steady-state solution of the collisionless Boltzmann equation depends on the phase space coordinates only through integrals of motion in the given potential, and conversely any function of the integrals is a steady-state solution. Jeans's theorem is most often discussed in the context of potentials characterized by three, global integrals. In such potentials, all of the orbits are regular, i.e. non-chaotic; the Kepler potential is one example. In generic potentials, some orbits respect only one or two integrals and the corresponding motion is chaotic. Jeans's theorem can be generalized to such potentials as follows: The phase-space density of a stationary stellar system is constant within every well-connected region. A well-connected region is one that cannot be decomposed into two finite regions such that all trajectories lie, for all time, in either one or the other. Invariant tori of regular orbits are such regions, but so are the more complex parts of phase space associated with chaotic trajectories. Integrability of the motion is therefore not required for a steady state. Mathematical description Consider the collisionless Boltzmann equation for the distribution function $f(\vec{x}, \vec{v}, t)$: $\frac{\partial f}{\partial t} + \vec{v} \cdot \frac{\partial f}{\partial \vec{x}} - \nabla\Phi \cdot \frac{\partial f}{\partial \vec{v}} = 0$. Consider the Lagrangian approach to the particle's motion, in which case the required equations are $\frac{d\vec{x}}{dt} = \vec{v}$ and $\frac{d\vec{v}}{dt} = -\nabla\Phi$. Let the solutions of these equations be $\vec{x} = \vec{x}(c_1, \ldots, c_6, t)$ and $\vec{v} = \vec{v}(c_1, \ldots, c_6, t)$, where the $c_i$ are the integration constants. Let us assume that from the above set we are able to solve for the $c_i$, that is to say, we are able to find $c_i = c_i(\vec{x}, \vec{v}, t)$ for $i = 1, \ldots, 6$. Now consider an arbitrary function of the $c_i$'s, $f = f(c_1, \ldots, c_6)$. Then this function is a solution of the collisionless Boltzmann equation, as can be verified by substituting this function into the collisionless Boltzmann equation to find $\frac{df}{dt} = \sum_{i=1}^{6} \frac{\partial f}{\partial c_i} \frac{dc_i}{dt} = 0$, since each $c_i$ is constant along a trajectory. This proves the theorem. A trivial set of integration constants are the initial location $\vec{x}_0$ and the initial velocity $\vec{v}_0$ of the particle. In this case, any function $f(\vec{x}_0, \vec{v}_0)$ is a solution of the collisionless Boltzmann equation. See also Jeans equations References Astrophysics
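To see the theorem at work numerically, the sketch below integrates an orbit in a Kepler potential and checks that the classical integrals E and Lz are constant along it, so any distribution function f(E, Lz) is automatically constant on that trajectory. This is a toy check with arbitrary initial conditions, assuming SciPy is available; it is not a proof.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    """Planar Kepler motion, Phi = -1/r in units with G*M = 1."""
    x, y, vx, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -x / r3, -y / r3]

# Arbitrary bound initial conditions: x0 = (1, 0), v0 = (0, 0.8).
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 0.8],
                rtol=1e-10, atol=1e-12, max_step=0.05)
x, y, vx, vy = sol.y

E = 0.5 * (vx**2 + vy**2) - 1.0 / np.hypot(x, y)  # energy integral
Lz = x * vy - y * vx                              # angular-momentum integral

# Both integrals are (numerically) constant along the orbit, so any
# distribution function f(E, Lz) is likewise constant on this trajectory.
print(E.max() - E.min(), Lz.max() - Lz.min())  # both very small
</syntaxhighlight>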
Jeans's theorem
[ "Physics", "Astronomy" ]
419
[ "Astrophysics stubs", "Astronomical sub-disciplines", "Astronomy stubs", "Astrophysics" ]
35,788,567
https://en.wikipedia.org/wiki/NEXT%20%28ion%20thruster%29
The NASA Evolutionary Xenon Thruster (NEXT) project at Glenn Research Center is a gridded electrostatic ion thruster about three times as powerful as the NSTAR used on the Dawn and Deep Space 1 spacecraft. It was used in DART, launched in 2021. Glenn Research Center manufactured the test engine's core ionization chamber, and Aerojet Rocketdyne designed and built the ion acceleration assembly. Purpose and objectives NEXT affords larger delivered payloads, smaller launch vehicle size, and other mission enhancements compared to chemical and other electric propulsion technologies for Discovery, New Frontiers, Mars Exploration, and Flagship outer-planet exploration missions. Design and performance The NEXT engine is a type of solar electric propulsion in which the thruster system uses the electricity generated by the spacecraft's solar panels to accelerate the xenon propellant to speeds of up to 90,000 mph (145,000 km/h or 40 km/s). NEXT can consume 6.9 kW of power to produce 237 mN of thrust, with a specific impulse of 4,170 seconds (compared to 3,120 for NSTAR), and has been run for over five years. It can be throttled down to 0.5 kW of power, at which it has a specific impulse of 1,320 seconds. Longevity and total impulse The NEXT thruster has demonstrated, in ground tests, a total impulse of 17 MN·s, which as of 2010 was the highest total impulse ever demonstrated by an ion thruster. A beam extraction area 1.6 times that of NSTAR allows higher thruster input power while maintaining low voltages and ion current densities, thus maintaining thruster longevity. In November 2010, it was revealed that the prototype had completed a 48,000-hour (5.5-year) test in December 2009. Thruster performance characteristics, measured over the entire throttle range of the thruster, were within predictions, and the engine showed little sign of degradation and was ready for mission opportunities. Development and status NEXT completed its System Requirement Review in July 2015 and Preliminary Design Review in February 2016. The first two flight units were to be available in early 2019; after 2019, it was to become a commercial product for purchase by NASA and non-NASA customers. Aerojet Rocketdyne and their major sub-contractor ZIN Technologies retain the rights to produce the system, known as NEXT-C, for future commercialization. In 2018, the CAESAR mission concept to comet 67P/Churyumov–Gerasimenko was a finalist for the New Frontiers program mission #4, and if selected, it would have been propelled by the NEXT ion engine. However, on 27 June 2019, the other finalist, the Dragonfly mission, was chosen instead. NEXT-C was selected for the DART mission. Use in space Launched in November 2021, for the first time in space, the Double Asteroid Redirection Test (DART) spacecraft used the NEXT-C ion thruster powered by 22 m2 of solar arrays generating ~3.5 kW. See also Electrically powered spacecraft propulsion Hall effect thruster, a different type of ion thruster Nuclear electric rocket References Ion engines
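The quoted performance figures are mutually consistent, as this back-of-the-envelope check shows. The relation used is the idealized jet-power formula; the ~70% total efficiency that falls out is an inference from the quoted numbers, not an official specification.
<syntaxhighlight lang="python">
G0 = 9.80665  # standard gravity [m/s^2]

# Figures quoted in the article
power_w = 6.9e3    # input power [W]
thrust_n = 0.237   # thrust [N]
isp_s = 4170.0     # specific impulse [s]

v_exhaust = isp_s * G0                  # effective exhaust velocity [m/s]
jet_power = 0.5 * thrust_n * v_exhaust  # kinetic power in the beam [W]
efficiency = jet_power / power_w        # implied total efficiency

print(f"exhaust velocity ~ {v_exhaust/1000:.1f} km/s "
      f"({v_exhaust*2.23694:.0f} mph)")      # ~40.9 km/s, ~91,000 mph
print(f"implied efficiency ~ {efficiency:.0%}")  # ~70%
</syntaxhighlight>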
NEXT (ion thruster)
[ "Physics", "Chemistry" ]
627
[ "Ions", "Ion engines", "Matter" ]
24,470,730
https://en.wikipedia.org/wiki/RV%20144
RV 144, or the Thai trial, was an HIV vaccine clinical trial that was conducted in Thailand between 2003 and 2006. It used a combination of two HIV vaccines that had each failed in earlier trials. Participants were vaccinated over the course of 24 weeks beginning in October 2003 and were then tested for HIV until July 2006. The results of the study were publicized in September 2009. The initial report showed that the rate of HIV infection among volunteers who received the experimental vaccine was 31% lower than the rate of HIV infection in volunteers who received the placebo. This reduction was not large enough for the Ministry of Public Health in Thailand to support approving the vaccine; it would have licensed it if the reduction had been 50% or more. The trial collaborators have stated that results of this trial give the first supporting evidence of any vaccine being effective in lowering the risk of contracting HIV. On October 20, 2009, the organizers released full results of the study through publishing in the New England Journal of Medicine and presented them at the AIDS Vaccine Conference in Paris. Protocol A total of 16,402 Thai volunteers aged 18–30 were recruited to participate in Chon Buri and Rayong Provinces in Thailand. These volunteers were randomized into double-blind study groups, with those in the experimental group receiving a phase III prime-boost HIV vaccine. Eligibility criteria for participation in the study required that all volunteers be HIV negative prior to enrollment in the study and be willing to participate in educational counseling intended to teach ways to reduce risk behavior associated with contracting HIV. After being vaccinated, volunteers were asked to receive HIV testing every six months for three years, as well as receive additional risk-behavior counseling at every testing visit. Before this vaccine trial was initiated, an opinion letter from 22 established HIV researchers had been published in the journal Science calling into question the rationale for this study of combining two vaccines that each failed in prior human trials to generate immune responses that they were designed to elicit. This letter stated that spending $119 million when "the overall approval process lacked input from independent immunologists and virologists who could have judged whether the trial was scientifically meritorious" was an ill-advised use of precious resources. Vaccine composition Over six months, volunteers received a prime-boost vaccination including six injections, four injections of a vaccine called ALVAC HIV (vCP1521) with the last two being at the same time as two injections of another vaccine called AIDSVAX B/E (gp120). ALVAC‐HIV consists of a viral vector containing genetically engineered versions of three HIV genes (env, gag and pol). The ALVAC vector is an inert form of canarypox, a bird virus which cannot cause disease or replicate in humans. AIDSVAX B/E is composed of genetically engineered gp120, a protein on the surface of HIV, together with the adjuvant alum. Results During the study, 125 of the 16,402 participants contracted HIV through behavior unrelated to their study participation. Of those 125, 74 infected persons had received placebo and 51 had received the vaccine, or 31.2% reduction. 
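The headline numbers can be reproduced with a short calculation. The sketch below assumes roughly equal arm sizes of about 8,197 volunteers each and applies a simple two-proportion z-test; the published analyses used slightly different denominators and test procedures, so this is only an approximation of the reported results.
<syntaxhighlight lang="python">
from math import sqrt, erf

# Approximate arm sizes and infection counts (equal arms assumed).
n_vax, n_plc = 8197, 8197
inf_vax, inf_plc = 51, 74

efficacy = 1 - (inf_vax / n_vax) / (inf_plc / n_plc)
print(f"estimated vaccine efficacy ~ {efficacy:.1%}")   # ~31.1%

# Two-proportion z-test with pooled variance, a rough analogue
# of the trial's significance test.
p1, p2 = inf_vax / n_vax, inf_plc / n_plc
p = (inf_vax + inf_plc) / (n_vax + n_plc)
z = (p2 - p1) / sqrt(p * (1 - p) * (1 / n_vax + 1 / n_plc))
p_two_sided = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"z ~ {z:.2f}, two-sided p ~ {p_two_sided:.3f}")  # close to the reported p = 0.04
</syntaxhighlight>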
By one of the three pre-decided statistical tests for analysis of the trial there was a statistically significant lower rate of infection in the vaccine group compared to the placebo group, with p=0.04 for the "modified intent to treat" analysis that excluded persons who were found to have HIV infection after enrollment but before the first vaccination. However, by the other two methods of analysis, there was no statistical significance in infection rates between the vaccine and placebo groups, with p=0.08 for the "intent to treat analysis" including all persons originally enrolled in the trial, and p=0.16 for the "per protocol analysis" including only persons from the modified intent to treat group who completed all three vaccinations and subsequent screening. Additionally, the vaccine regimen had no effect on the amount of virus in the blood of volunteers who became HIV-infected during the study. Immediately after release of the results, there was controversy and dispute over the significance of these results raised by several researchers, who also questioned the unusual strategy of pre-releasing the conclusion of vaccine efficacy to the press before publication of the actual data in a peer-reviewed scientific journal and lack of explanation regarding the three different statistical evaluations of which two did not yield significant results; Dr. Anthony Fauci defended this by stating that explaining these nuances in the press release "would have confused everybody". In May 2011, a new analysis initiated at Duke University showed that there is a 29% chance that the vaccine is not effective (although this posterior probability is very different conceptually from a p-value, and cannot be directly compared to the p=.04 from the original analysis). Cautious optimism In a study in September 2011, researchers involved with the trial at Mahidol University in Bangkok and the United States Military HIV Research Program in Washington DC tested the blood of trial subjects for different immune indicators between those who received the vaccine and contracted HIV (41 subjects) and those who did not become infected (205 subjects). Their work is not complete, but those in the study who produced IgG antibodies that recognise the V2 loop in the HIV envelope protein gp120 were 43% less likely to become infected. Those who produced envelope specific IgA were 54% more likely to become infected, but no more susceptible than trial subjects receiving the placebo. However, these studies all emphasize that such post-hoc analyses are subject to inherent bias and must be interpreted with caution. The immune responses of uninfected patients could point the way to more fruitful research. Nelson Michael, director of the U.S. Military HIV Research Program who ran the trial, says that results lend "biological credence to the initial clinical study results". Conclusion The vaccine was found to be safe, well tolerated, and suitable for large-scale further research. Sponsors The RV 144 trial was sponsored by the Surgeon General of the United States Army and conducted by the Thailand Ministry of Public Health with support from the United States Army Medical Research and Materiel Command and the National Institute of Allergy and Infectious Diseases, which is part of the National Institutes of Health. The cost of the trial was $119 million. Supported in part by an Interagency Agreement (Y1-AI-2642-12) between the U.S. 
Army Medical Research and Materiel Command and the National Institute of Allergy and Infectious Diseases and by a cooperative agreement (W81XWH-07-2-0067) between the Henry M. Jackson Foundation for the Advancement of Military Medicine and the U.S. Department of Defense. Sanofi Pasteur provided the ALVAC-HIV vaccine, and Global Solutions for Infectious Diseases (VaxGen) provided the reagents for the immunogenicity assays. ALVAC‐HIV (vCP1521) was manufactured by Sanofi Pasteur. AIDSVAX B/E was manufactured by Genentech under a license and supply agreement with VaxGen, which itself is a spin-off company of Genentech founded for the purpose of developing AIDSVAX. Global Solutions for Infectious Diseases, a nonprofit organization co‐founded by former VaxGen executives, has ownership of certain intellectual and manufacturing rights of AIDSVAX. Subsequent trials In 2016 the HIV vaccine trial HVTN 702 was started in South Africa. It tested a combination of two HIV vaccines which were slight modifications of those used in the RV 144 trial. The trial was stopped early in 2020 because no evidence of efficacy was seen. A similar combination of two HIV vaccines, a vector-based vaccine and a recombinant protein vaccine, was tested in the Imbokodo study (HVTN 705/HPX2008) in Africa between 2017 and 2021. The primary analysis found the vaccine safe but with low efficacy (25.2%, not statistically significant) in preventing HIV infection compared to placebo. The study, sponsored by Janssen Vaccines & Prevention B.V. and funded by the NIAID and the Bill & Melinda Gates Foundation, is ongoing. A modified version of the vaccine regimen tested in Imbokodo was evaluated in the Mosaico trial (HVTN 706/HPX3002), which began in 2019. This trial enrolled nearly 3,900 men who have sex with men and transgender people in the Americas and Europe. In January 2023, the trial's data and safety monitoring board (DSMB) determined the regimen ineffective in preventing HIV infection, leading to the trial's discontinuation. No safety issues were identified with the Mosaico vaccine regimen. References External links ClinicalTrials.gov record of the study HIV vaccine research Clinical trials related to HIV
RV 144
[ "Chemistry" ]
1,802
[ "HIV vaccine research", "Drug discovery" ]
24,471,529
https://en.wikipedia.org/wiki/IC%20289
IC 289 is a planetary nebula in the constellation Cassiopeia. It was discovered by Lewis Swift in early September 1888. It lies close to the 10th magnitude star BD +60° 0631. N.J. Martin described IC 289 as "A nice, faint round planet like planetary nebula. The uniform oval disc shows some irregularity in brightness but is not obviously brighter at the edge." The central star of the planetary nebula is an O-type star with a spectral type of O(H). References External links http://www.noao.edu/outreach/aop/observers/ic289.html Planetary nebulae Cassiopeia (constellation)
IC 289
[ "Astronomy" ]
151
[ "Cassiopeia (constellation)", "Astronomy stubs", "Constellations", "Nebula stubs" ]
24,474,414
https://en.wikipedia.org/wiki/Minimum%20Fisher%20information
In information theory, the principle of minimum Fisher information (MFI) is a variational principle which, when applied with the proper constraints needed to reproduce empirically known expectation values, determines the best probability distribution that characterizes the system. (See also Fisher information.) Measures of information Information measures (IM) are the most important tools of information theory. They measure either the amount of positive information or of "missing" information an observer possesses with regards to any system of interest. The most famous IM is the Shannon entropy (1948), which determines how much additional information the observer still requires in order to have all the available knowledge regarding a given system S, when all he or she has is a probability density function (PDF) defined on appropriate elements of such a system. It is thus a measure of "missing" information. The IM is a function of the PDF only. If the observer does not have such a PDF, but only a finite set of empirically determined mean values of the system, then a fundamental scientific principle called the maximum entropy principle (MaxEnt) asserts that the "best" PDF is the one that reproduces the known expectation values while otherwise maximizing Shannon's IM. Fisher's information measure Fisher's information measure (FIM), named after Ronald Fisher (1925), differs in two respects: 1) it reflects the amount of (positive) information of the observer, and 2) it depends not only on the PDF but also on its first derivatives, a property that makes it a local quantity (Shannon's is instead a global one). The corresponding counterpart of MaxEnt is now FIM minimization, since Fisher's measure grows when Shannon's diminishes, and vice versa. The minimization referred to here (MFI) is an important theoretical tool in many disciplines, beginning with physics. In a sense it is clearly superior to MaxEnt, because the latter procedure always yields an exponential PDF as its solution, while the MFI solution is a differential equation for the PDF, which allows for greater flexibility and versatility. Applications of the MFI Thermodynamics Much effort has been devoted to Fisher's information measure, shedding much light upon its manifold physical applications. As a small sample, it can be shown that the whole field of thermodynamics (both equilibrium and non-equilibrium) can be derived from the MFI approach. Here FIM is specialized to the particular but important case of translation families, i.e., distribution functions whose form does not change under translational transformations. In this case, the Fisher measure becomes shift-invariant. Minimizing Fisher's measure then leads to a Schrödinger-like equation for the probability amplitude, where the ground state describes equilibrium physics and the excited states account for non-equilibrium situations. Scale-invariant phenomena More recently, Zipf's law has been shown to arise as the variational solution of the MFI when scale invariance is introduced in the measure, leading for the first time to an explanation of this regularity from first principles. It has also been shown that MFI can be used to formulate a thermodynamics based on scale invariance instead of translational invariance, allowing the definition of the scale-free ideal gas, the scale-invariant equivalent of the ideal gas. References Information theory
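For the translation families described above, the Fisher information measure reduces to the familiar location-parameter form I[p] = ∫ (p′(x))² / p(x) dx, which for a Gaussian of standard deviation σ equals 1/σ². A minimal numerical sketch of that special case (the grid and the value of σ are arbitrary choices):

```python
# Shift-invariant Fisher information I[p] = integral of p'(x)^2 / p(x) dx,
# evaluated numerically for a Gaussian, where it should equal 1/sigma^2.
import numpy as np

sigma = 1.5
x = np.linspace(-12, 12, 20001)
p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

dp = np.gradient(p, x)              # numerical derivative p'(x)
fisher = np.trapz(dp**2 / p, x)     # shift-invariant Fisher information

print(fisher, 1 / sigma**2)         # both ~0.444
```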
Minimum Fisher information
[ "Mathematics", "Technology", "Engineering" ]
700
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
24,474,524
https://en.wikipedia.org/wiki/Scale-free%20ideal%20gas
The scale-free ideal gas (SFIG) is a physical model assuming a collection of non-interacting elements with stochastic proportional growth. It is the scale-invariant version of an ideal gas. Some cases of city populations, electoral results, and citations to scientific journals can be approximately considered scale-free ideal gases. In a one-dimensional discrete model with size parameter k, where k1 and kM are the minimum and maximum allowed sizes respectively, and v = dk/dt is the growth, the bulk probability density function F(k, v) of a scale-free ideal gas is determined by N, the total number of elements; Ω = ln k1/kM, the logarithmic "volume" of the system; the mean relative growth; and the standard deviation σw of the relative growth. The entropy equation of state has the same form as that of the one-dimensional ideal gas, with the thermodynamical variables (N, V, T) replaced by (N, Ω, σw); it involves a constant that accounts for dimensionality and an elementary volume in phase space defined in terms of an elementary time and the total number M of allowed discrete sizes. Zipf's law may emerge in the external limits of the density, since it is a special regime of scale-free ideal gases. References Ideal gas Scale-invariant systems
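The defining assumption, stochastic proportional growth, can be illustrated with a small simulation. A sketch under assumed parameters (the growth-rate mean and spread below are arbitrary, not values from the model's sources): each element's size is multiplied by a random factor per time step, so log k performs a random walk, which is the mechanism behind the scale-free statistics described above.

```python
# Stochastic proportional growth (a sketch with arbitrary parameters):
# dk/dt = w*k with random relative growth w, so log(k) diffuses.
import numpy as np

rng = np.random.default_rng(0)
n_elements, n_steps, dt = 10_000, 500, 0.01
w_mean, w_std = 0.1, 1.0                 # assumed relative-growth statistics

k = np.ones(n_elements)                  # all elements start at size 1
for _ in range(n_steps):
    w = rng.normal(w_mean, w_std, n_elements)
    k *= np.exp(w * dt)                  # proportional (multiplicative) growth

log_k = np.log(k)
# mean ~ w_mean * n_steps * dt; spread ~ w_std * dt * sqrt(n_steps)
print(log_k.mean(), log_k.std())         # ~0.5 and ~0.22 for these parameters
```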
Scale-free ideal gas
[ "Physics" ]
282
[ "Thermodynamic systems", "Symmetry", "Physical phenomena", "Critical phenomena", "Physical systems", "Scale-invariant systems", "Ideal gas", "Scaling symmetries" ]
24,476,128
https://en.wikipedia.org/wiki/Acoustic%20metamaterial
An acoustic metamaterial, sonic crystal, or phononic crystal is a material designed to control, direct, and manipulate sound waves or phonons in gases, liquids, and solids (crystal lattices). Sound wave control is accomplished through manipulating parameters such as the bulk modulus β, density ρ, and chirality. They can be engineered to either transmit, or trap and amplify sound waves at certain frequencies. In the latter case, the material is an acoustic resonator. Acoustic metamaterials are used to model and research extremely large-scale acoustic phenomena like seismic waves and earthquakes, but also extremely small-scale phenomena like atoms. The latter is possible due to band gap engineering: acoustic metamaterials can be designed such that they exhibit band gaps for phonons, similar to the existence of band gaps for electrons in solids or electron orbitals in atoms. That has also made the phononic crystal an increasingly widely researched component in quantum technologies and experiments that probe quantum mechanics. Important branches of physics and technology that rely heavily on acoustic metamaterials are negative refractive index material research, and (quantum) optomechanics. History Acoustic metamaterials have developed from the research and findings in metamaterials. A novel material was originally proposed by Victor Veselago in 1967, but not realized until some 33 years later. John Pendry produced the basic elements of metamaterials in the late 1990s. His materials were combined, with negative index materials first realized in 2000, broadening the possible optical and material responses. Research in acoustic metamaterials has the same goal of broader material responses with sound waves. Research employing acoustic metamaterials began in 2000 with the fabrication and demonstration of sonic crystals in a liquid. This was followed by transposing the behavior of the split-ring resonator to research in acoustic metamaterials. After this, double negative parameters (negative bulk modulus βeff and negative density ρeff) were produced by this type of medium. Then a group of researchers presented the design and test results of an ultrasonic metamaterial lens for focusing 60 kHz. Acoustical engineering is typically concerned with noise control, medical ultrasound, sonar, sound reproduction, and how to measure some other physical properties using sound. With acoustic metamaterials the direction of sound through the medium can be controlled by manipulating the acoustic refractive index. Therefore, the capabilities of traditional acoustic technologies are extended, for example, eventually being able to cloak certain objects from acoustic detection. The first successful industrial applications of acoustic metamaterials were tested for aircraft insulation. Basic principles Properties of acoustic metamaterials usually arise from structure rather than composition, with techniques such as the controlled fabrication of small inhomogeneities to enact effective macroscopic behavior. Bulk modulus and mass density The bulk modulus β is a measure of a substance's resistance to uniform compression. It is defined as the ratio of pressure increase needed to cause a given relative decrease in volume. The mass density (or just "density") of a material is defined as mass per unit volume and is expressed in grams per cubic centimeter (g/cm3). 
In all three classic states of matter—gas, liquid, or solid—the density varies with a change in temperature or pressure, with gases being the most susceptible to those changes. The spectrum of densities is wide-ranging: from 10^15 g/cm3 for neutron stars and 1.00 g/cm3 for water down to 1.2×10^−3 g/cm3 for air. Other relevant parameters are area density, which is mass over a (two-dimensional) area; linear density, mass over a one-dimensional line; and relative density, which is a density divided by the density of a reference material, such as water. For acoustic materials and acoustic metamaterials, both bulk modulus and density are component parameters, which define their refractive index. The acoustic refractive index is similar to the concept used in optics, but it concerns pressure or shear waves instead of electromagnetic waves. Theoretical model Acoustic metamaterials or phononic crystals can be understood as the acoustic analog of photonic crystals: instead of electromagnetic waves (photons) propagating through a material with a periodically modified optical refractive index (resulting in a modified speed of light), the phononic crystal comprises pressure waves (phonons) propagating through a material with a periodically modified acoustic refractive index, resulting in a modified speed of sound. In addition to the parallel concepts of refractive index and crystal structure, electromagnetic waves and acoustic waves are both mathematically described by the wave equation. The simplest realization of an acoustic metamaterial would constitute the propagation of a pressure wave through a slab with a periodically modified refractive index in one dimension. In that case, the behavior of the wave through the slab or 'stack' can be predicted and analyzed using transfer matrices. This method is ubiquitous in optics, where it is used for the description of light waves propagating through a distributed Bragg reflector. Negative refractive index acoustic metamaterials In certain frequency bands, the effective mass density and bulk modulus may become negative. This results in a negative refractive index. Flat slab focusing, which can result in super resolution, is similar to electromagnetic metamaterials. The double negative parameters are a result of low-frequency resonances. In combination with a well-defined polarization during wave propagation, k = |n|ω is an equation for the refractive index as sound waves interact with acoustic metamaterials (below): The inherent parameters of the medium are the mass density ρ, bulk modulus β, and chirality k. Chirality, or handedness, determines the polarity of wave propagation (wave vector). Hence within the last equation, Veselago-type solutions (n2 = με) are possible for wave propagation as the negative or positive state of ρ and β determines forward or backward wave propagation. In electromagnetic metamaterials negative permittivity can be found in natural materials. However, negative permeability has to be intentionally created in the artificial transmission medium. For acoustic materials neither negative ρ nor negative β is found in naturally occurring materials; they are derived from the resonant frequencies of an artificially fabricated transmission medium, and such negative values are an anomalous response. Negative ρ or β means that at certain frequencies the medium expands when experiencing compression (negative modulus), and accelerates to the left when being pushed to the right (negative density).
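Since the sound speed follows from these two parameters as c = √(β/ρ), the acoustic refractive index of one ordinary medium relative to another can be computed directly from bulk modulus and density. A minimal sketch with textbook values for air and water (the figures are illustrative, not taken from a metamaterial measurement):

```python
# Sound speed c = sqrt(bulk modulus / density) and the relative acoustic
# refractive index for two ordinary media (a sketch with textbook values).
import math

media = {
    "air":   {"bulk_modulus": 1.42e5, "density": 1.2},     # Pa, kg/m^3
    "water": {"bulk_modulus": 2.2e9,  "density": 1000.0},
}

speeds = {name: math.sqrt(m["bulk_modulus"] / m["density"]) for name, m in media.items()}
print(speeds)                                   # ~344 m/s (air), ~1483 m/s (water)

# Acoustic refractive index of water relative to air: n = c_air / c_water
print(speeds["air"] / speeds["water"])          # ~0.23
```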
Electromagnetic field vs acoustic field The electromagnetic spectrum extends from low frequencies used for modern radio to gamma radiation at the short-wavelength end, covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. In comparison, infrasonic frequencies range from 20 Hz down to 0.001 Hz, audible frequencies are 20 Hz to 20 kHz and the ultrasonic range is above 20 kHz. While electromagnetic waves can travel in vacuum, acoustic wave propagation requires a medium. Mechanics of lattice waves In a rigid lattice structure, atoms exert force on each other, maintaining equilibrium. Most of these atomic forces, such as covalent or ionic bonds, are of electric nature. The magnetic force, and the force of gravity are negligible. Because of the bonding between them, the displacement of one or more atoms from their equilibrium positions gives rise to a set of vibration waves propagating through the lattice. One such wave is shown in the figure to the right. The amplitude of the wave is given by the displacements of the atoms from their equilibrium positions. The wavelength λ is marked. There is a minimum possible wavelength, given by the equilibrium separation a between atoms. Any wavelength shorter than this can be mapped onto a long wavelength, due to effects similar to aliasing. Research and applications Applications of acoustic metamaterial research include seismic wave reflection and vibration control technologies related to earthquakes, as well as precision sensing. Phononic crystals can be engineered to exhibit band gaps for phonons, similar to the existence of band gaps for electrons in solids and to the existence of electron orbitals in atoms. However, unlike atoms and natural materials, the properties of metamaterials can be fine-tuned (for example through microfabrication). For that reason, they constitute a potential testbed for fundamental physics and quantum technologies. They also have a variety of engineering applications, for example they are widely used as a mechanical component in optomechanical systems. Sonic crystals In 2000, the research of Liu et al. paved the way to acoustic metamaterials through sonic crystals, which exhibit spectral gaps two orders of magnitude smaller than the wavelength of sound. The spectral gaps prevent the transmission of waves at prescribed frequencies. The frequency can be tuned to desired parameters by varying the size and geometry. The fabricated material consisted of high-density solid lead balls as the core, one centimeter in size and coated with a 2.5-mm layer of rubber silicone. These were arranged in an 8 × 8 × 8 cube crystal lattice structure. The balls were cemented into the cubic structure with an epoxy. Transmission was measured as a function of frequency from 250 to 1600 Hz for a four-layer sonic crystal. A two-centimeter slab absorbed sound that normally would require a much thicker material, at 400 Hz. A drop in amplitude was observed at 400 and 1100 Hz. The amplitudes of the sound waves entering the surface were compared with the sound waves at the center of the structure. The oscillations of the coated spheres absorbed sonic energy, which created the frequency gap; the sound energy was absorbed exponentially as the thickness of the material increased. The key result was the negative elastic constant created from resonant frequencies of the material. Projected applications of sonic crystals are seismic wave reflection and ultrasonics. 
Split-ring resonators for acoustic metamaterials In 2004 split-ring resonators (SRR) became the object of acoustic metamaterial research. An analysis of the frequency band gap characteristics, derived from the inherent limiting properties of artificially created SRRs, paralleled an analysis of sonic crystals. The band gap properties of SRRs were related to sonic crystal band gap properties. Inherent in this inquiry is a description of mechanical properties and problems of continuum mechanics for sonic crystals, as a macroscopically homogeneous substance. The correlation in band gap capabilities includes locally resonant elements and elastic moduli which operate in a certain frequency range. Elements which interact and resonate in their respective localized area are embedded throughout the material. In acoustic metamaterials, locally resonant elements would be the interaction of a single 1-cm rubber sphere with the surrounding liquid. The values of the stopband and band-gap frequencies can be controlled by choosing the size, types of materials, and the integration of microscopic structures which control the modulation of the frequencies. These materials are then able to shield acoustic signals and attenuate the effects of anti-plane shear waves. By extrapolating these properties to larger scales it could be possible to create seismic wave filters (see Seismic metamaterials). Arrayed metamaterials can create filters or polarizers of either electromagnetic or elastic waves. Methods which can be applied to two-dimensional stopband and band gap control with either photonic or sonic structures have been developed. Similar to photonic and electromagnetic metamaterial fabrication, a sonic metamaterial is embedded with localized sources of mass density ρ and the bulk modulus β parameters, which are analogous to permittivity and permeability, respectively. The sonic (or phononic) metamaterials are sonic crystals. These crystals have a solid lead core and a softer, more elastic silicone coating. The sonic crystals had built-in localized resonances due to the coated spheres which result in almost flat dispersion curves. Movchan and Guenneau analyzed and presented low-frequency band gaps and localized wave interactions of the coated spheres. This method can be used to tune band gaps inherent in the material, and to create new low-frequency band gaps. It is also applicable for designing low-frequency phononic crystal waveguides. Phononic crystals Phononic crystals are synthetic materials formed by periodic variation of the acoustic properties of the material (i.e., elasticity and mass). One of their main properties is the possibility of having a phononic band gap. A phononic crystal with phononic band gap prevents phonons of selected ranges of frequencies from being transmitted through the material. To obtain the frequency band structure of a phononic crystal, Bloch's theorem is applied on a single unit cell in the reciprocal lattice space (Brillouin zone). Several numerical methods are available for this problem, such as the planewave expansion method, the finite element method, and the finite difference method. In order to speed up the calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used. The RBME applies "on top" of any of the primary expansion numerical methods mentioned above. For large unit cell models, the RBME method can reduce the time for computing the band structure by up to two orders of magnitude. 
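For the simplest case of a one-dimensional crystal of two alternating layers, the Bloch analysis mentioned above reduces to a closed-form dispersion relation (the classical Rytov result): cos(qa) = cos(k1d1)cos(k2d2) − ½(Z1/Z2 + Z2/Z1)sin(k1d1)sin(k2d2), and a band gap opens wherever the right-hand side exceeds 1 in magnitude, since no real Bloch wavevector q then exists. A sketch with arbitrary layer materials and thicknesses (none of these numbers come from the studies cited above):

```python
# 1D bilayer phononic crystal: frequencies where |cos(q*a)| > 1 admit no
# real Bloch wavevector, i.e. they lie inside a band gap (a sketch with
# arbitrary layer parameters).
import numpy as np

c1, rho1, d1 = 1500.0, 1000.0, 0.01    # layer 1: sound speed (m/s), density (kg/m^3), thickness (m)
c2, rho2, d2 = 3000.0, 2700.0, 0.01    # layer 2
Z1, Z2 = rho1 * c1, rho2 * c2          # acoustic impedances

f = np.linspace(1.0, 300e3, 5000)      # frequency sweep, Hz
w = 2 * np.pi * f
rhs = (np.cos(w * d1 / c1) * np.cos(w * d2 / c2)
       - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(w * d1 / c1) * np.sin(w * d2 / c2))

in_gap = np.abs(rhs) > 1               # no propagating Bloch mode at these frequencies
print(f"fraction of sweep inside band gaps: {in_gap.mean():.2f}")
```

The impedance-mismatch term ½(Z1/Z2 + Z2/Z1) is what widens the gaps, in line with the qualitative discussion of impedance mismatch above.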
The basis of phononic crystals dates back to Isaac Newton, who imagined that sound waves propagated through air in the same way that an elastic wave would propagate along a lattice of point masses connected by springs with an elastic force constant E. This force constant is identical to the modulus of the material. With phononic crystals of materials with differing modulus the calculations are more complicated than this simple model. A key factor for acoustic band gap engineering is the impedance mismatch between the periodic elements comprising the crystal and the surrounding medium. When an advancing wave-front meets a material with very high impedance it will tend to increase its phase velocity through that medium. Likewise, when the advancing wave-front meets a low impedance medium it will slow down. This concept can be exploited with periodic arrangements of impedance-mismatched elements to affect acoustic waves in the crystal. The position of the band gap in frequency space for a phononic crystal is controlled by the size and arrangement of the elements comprising the crystal. The width of the band gap is generally related to the difference in the speed of sound (due to impedance differences) through the materials that form the composite. Phononic crystals effectively reduce low-frequency noise, since their locally resonant systems act as spatial frequency filters. However, they have narrow band gaps, impose additional weight on the primary system, and work only in the adjusted frequency range. For widening band gaps, the unit cells must be large in size or contain dense materials. As a solution to these disadvantages of phononic crystals, one proposed design is a novel three-dimensional lightweight re-entrant meta-structure composed of a cross-shaped beam scatterer embedded in a host plate with holes, based on the square lattice metamaterial. By combining the re-entrant network mechanism and Floquet–Bloch theory, on the basis of cross-shaped beam theory and the perforation mechanism, it was demonstrated that such a lightweight phononic structure can filter elastic waves across a broad frequency range (not just a specific narrow region) while simultaneously reducing structure weight to a significant degree. Double-negative acoustic metamaterial Electromagnetic (isotropic) metamaterials have built-in resonant structures that exhibit effective negative permittivity and negative permeability for some frequency ranges. In contrast, it is difficult to build composite acoustic materials with built-in resonances such that the two effective response functions are negative within the capability or range of the transmission medium. The mass density ρ and bulk modulus β are position dependent. Using the plane-wave formulation, the wave vector has magnitude k = ω/c, with angular frequency ω and c the propagation speed of the acoustic signal through the homogeneous medium. With constant density and bulk modulus as constituents of the medium, the refractive index is expressed as n2 = ρ / β. In order to develop a propagating plane wave through the material, it is necessary for ρ and β to be either both positive or both negative. When the negative parameters are achieved, the mathematical result is that the Poynting vector points in the opposite direction to the wave vector. This requires negativity in both bulk modulus and density. Natural materials do not have a negative density or a negative bulk modulus, but negative values are mathematically possible, and can be demonstrated when dispersing soft rubber in a liquid.
Even for composite materials, the effective bulk modulus and density should normally be bounded by the values of the constituents, i.e., by the lower and upper bounds for the elastic moduli of the medium. The expectation of positive bulk modulus and positive density is intrinsic. For example, dispersing spherical solid particles in a fluid results in a ratio governed by the specific gravity when interacting with long acoustic wavelengths (sound). Mathematically, it can be proven that βeff and ρeff are definitely positive for natural materials. The exception occurs at low resonant frequencies. As an example, acoustic double negativity is theoretically demonstrated with a composite of soft silicone rubber spheres suspended in water. In soft rubber, sound travels much more slowly than through the water. The high velocity contrast of sound speeds between the rubber spheres and the water allows for the transmission of very low monopolar and dipolar frequencies. This is analogous to the analytical solution for the scattering of electromagnetic radiation, or electromagnetic plane wave scattering, by spherical particles (dielectric spheres). Hence, there is a narrow range of normalized frequencies 0.035 < ωa/(2πc) < 0.04 where the bulk modulus and density are both negative. Here a is the lattice constant if the spheres are arranged in a face-centered cubic (fcc) lattice; ω is angular frequency and c is the speed of the acoustic signal. The effective bulk modulus and density near the static limit are positive as predicted. The monopolar resonance creates a negative bulk modulus above the normalized frequency of about 0.035 while the dipolar resonance creates a negative density above the normalized frequency of about 0.04. This behavior is analogous to low-frequency resonances produced in SRRs (electromagnetic metamaterials). The wires and split rings create intrinsic electric dipolar and magnetic dipolar responses. With this artificially constructed acoustic metamaterial of rubber spheres and water, only one structure (instead of two) creates the low-frequency resonances to achieve double negativity. With monopolar resonance, the spheres expand, which produces a phase shift between the waves passing through rubber and water. This creates a negative response. The dipolar resonance creates a negative response such that the frequency of the center of mass of the spheres is out of phase with the wave vector of the sound wave (acoustic signal). If these negative responses are large enough to compensate the background fluid, one can have both negative effective bulk modulus and negative effective density. Both the mass density and the reciprocal of the bulk modulus decrease in magnitude fast enough for the group velocity to become negative (double negativity). This gives rise to the desired result of negative refraction. The double negativity is a consequence of resonance and the resulting negative refraction properties. Metamaterial with simultaneously negative bulk modulus and mass density In 2007 a metamaterial was reported which simultaneously possesses a negative bulk modulus and negative mass density. This metamaterial is a zinc blende structure consisting of one fcc array of bubble-contained-water spheres (BWSs) and another relatively shifted fcc array of rubber-coated-gold spheres (RGSs) in special epoxy. Negative bulk modulus is achieved through monopolar resonances of the BWS series. Negative mass density is achieved with dipolar resonances of the gold sphere series.
Rather than rubber spheres in a liquid, this is a solid-based material: a realization of simultaneously negative bulk modulus and mass density in a solid, which is an important distinction. Double C resonators Double C resonators (DCRs) are rings cut in half, which can be arranged in multiple cell configurations, similarly to the SRRs. Each cell consists of a large rigid disk and two thin ligaments, and acts as a tiny oscillator connected by springs. One spring anchors the oscillator, and the other connects to the mass. It is analogous to an LC resonator of capacitance, C, and inductance, L, with resonant frequency 1/√(LC). The speed of shear waves in the matrix is expressed as c = √(μ/ρ) with density ρ and shear modulus μ. Although linear elasticity is considered, the problem is mainly defined by shear waves directed at angles to the plane of the cylinders. A phononic band gap occurs in association with the resonance of the split cylinder ring. There is a phononic band gap within a range of normalized frequencies. This is when the inclusion moves as a rigid body. The DCR design produced a suitable band with a negative slope in a range of frequencies. This band was obtained by hybridizing the modes of a DCR with the modes of thin stiff bars. Calculations have shown that at these frequencies: a beam of sound negatively refracts across a slab of such a medium, the phase vector in the medium possesses real and imaginary parts with opposite signs, the medium is well impedance-matched with the surrounding medium, a flat slab of the metamaterial can image a source across the slab like a Veselago lens, the image formed by the flat slab has considerable sub-wavelength image resolution, and a double corner of the metamaterial can act as an open resonator for sound. Acoustic metamaterial superlens In 2009 Shu Zhang et al. presented the design and test results of an ultrasonic metamaterial lens for focusing 60 kHz (~2 cm wavelength) sound waves under water. The lens was made of sub-wavelength elements, potentially more compact than phononic lenses operating in the same frequency range. The lens consists of a network of fluid-filled cavities called Helmholtz resonators that oscillate at certain frequencies. Similar to a network of inductors and capacitors in an electromagnetic metamaterial, the arrangement of Helmholtz cavities designed by Zhang et al. has a negative dynamic modulus for ultrasound waves. A point source of 60.5 kHz sound was focused to a spot roughly the width of half a wavelength, and there is potential for improving the spatial resolution even further. Results were in agreement with the transmission line model, from which the effective mass density and compressibility were derived. This metamaterial lens also displays variable focal length at different frequencies. This lens could improve acoustic imaging techniques, since the spatial resolution of conventional methods is restricted by the incident ultrasound wavelength. This is due to the quickly fading evanescent fields which carry the sub-wavelength features of objects. Acoustic diode An acoustic diode was introduced in 2009, which converts sound to a different frequency and blocks backward flow of the original frequency. This device could provide more flexibility for designing ultrasonic sources like those used in medical imaging. The proposed structure combines two components: The first is a sheet of nonlinear acoustic material—one whose sound speed varies with air pressure.
An example of such a material is a collection of grains or beads, which becomes stiffer as it is squeezed. The second component is a filter that allows the doubled frequency to pass through but reflects the original. Acoustic cloaking An acoustic cloak is a hypothetical device that would make objects impervious to sound waves. This could be used to build soundproof homes, advanced concert halls, or stealth warships. The idea of acoustic cloaking is simply to divert the sound waves around the object that has to be cloaked, but realizing this has been difficult since mechanical metamaterials are needed. Making such a metamaterial for sound means modifying the acoustic analogues of permittivity and permeability in light waves, which are the material's mass density and its elastic constant. Researchers from Wuhan University, China, in a 2007 paper reported a metamaterial which simultaneously possessed a negative bulk modulus and mass density. A laboratory metamaterial device applicable to ultrasound waves was demonstrated in 2011 for frequencies from 40 to 80 kHz. The metamaterial acoustic cloak was designed to hide objects submerged in water, bending and twisting sound waves. The cloaking mechanism consists of 16 concentric rings in a cylindrical configuration, each ring having acoustic circuits and a different index of refraction. This causes sound waves to vary their speed from ring to ring. The sound waves propagate around the outer ring, guided by the channels in the circuits, which bend the waves to wrap them around the outer layers. This device has been described as an array of cavities which actually slow the speed of the propagating sound waves. An experimental cylinder was submerged in a tank, and made to disappear from sonar detection. Other objects of various shapes and densities were also hidden from sonar. Phononic metamaterials for thermal management As phonons are responsible for thermal conduction in solids, acoustic metamaterials may be designed to control heat transfer. Quantum-like computing with acoustic metamaterials Researchers have demonstrated a quantum-like computing method using acoustic metamaterials. Recently, operations similar to the Controlled-NOT (CNOT) gate, a key component in quantum computing, have been demonstrated. By employing a nonlinear acoustic metamaterial consisting of three elastically coupled waveguides, the team created classical qubit analogues called logical phi-bits. This approach allows for scalable, systematic, and predictable CNOT gate operations using a simple physical manipulation. This innovation brings promise to the field of quantum-like computing using acoustic metamaterials. See also Acoustic dispersion Metamaterial cloaking Metamaterial Metamaterial absorber Metamaterial antennas Negative index metamaterials Photonic metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Superlens Tunable metamaterials Transformation optics Books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations Metamaterials scientists Richard W. Ziolkowski Pierre Deymier John Pendry David R. Smith Nader Engheta Vladimir Shalaev References Further reading Richard V. Craster, et al.: Acoustic metamaterials: negative refraction, imaging, lensing and cloaking. Springer, Dordrecht 2013. External links Phononic crystals Negative refractive index materials Acoustic cloaking Acoustics Metamaterials
Acoustic metamaterial
[ "Physics", "Materials_science", "Engineering" ]
5,523
[ "Metamaterials", "Classical mechanics", "Acoustics", "Materials science" ]
24,479,046
https://en.wikipedia.org/wiki/Planetary%20mass
In astronomy, planetary mass is a measure of the mass of a planet-like astronomical object. Within the Solar System, planets are usually measured in the astronomical system of units, where the unit of mass is the solar mass (M☉), the mass of the Sun. In the study of extrasolar planets, the unit of measure is typically the mass of Jupiter (MJ) for large gas giant planets, and the mass of Earth (ME) for smaller rocky terrestrial planets. The mass of a planet within the Solar System is an adjusted parameter in the preparation of ephemerides. There are three main ways in which planetary mass can be calculated: If the planet has natural satellites, its mass can be calculated using Newton's law of universal gravitation to derive a generalization of Kepler's third law that includes the mass of the planet and its moon. This permitted an early determination of Jupiter's mass in units of the solar mass. The mass of a planet can be inferred from its effect on the orbits of other planets. From 1931 to 1948, flawed applications of this method led to incorrect calculations of the mass of Pluto. Data on a planet's gravitational influence on the trajectories of space probes can be used. Examples include the Voyager probes to the outer planets and the MESSENGER spacecraft to Mercury. Also, numerous other methods can give reasonable approximations. For instance, Varuna, a potential dwarf planet, rotates very quickly upon its axis, as does the dwarf planet Haumea. Haumea has to have a very high density in order not to be ripped apart by centrifugal forces. Through some calculations, one can place a limit on the object's density. Thus, if the object's size is known, a limit on the mass can be determined. See the links in the aforementioned articles for more details on this. Choice of units The choice of solar mass, M☉, as the basic unit for planetary mass comes directly from the calculations used to determine planetary mass. In the most precise case, that of the Earth itself, the mass is known in terms of solar masses to twelve significant figures: the same mass, in terms of kilograms or other Earth-based units, is only known to five significant figures, which is less than a millionth as precise. The difference comes from the way in which planetary masses are calculated. It is impossible to "weigh" a planet, much less the Sun, against the sort of mass standards used in the laboratory. On the other hand, the orbits of the planets give a great range of observational data as to the relative positions of each body, and these positions can be compared to their relative masses using Newton's law of universal gravitation (with small corrections for general relativity where necessary). To convert these relative masses to Earth-based units such as the kilogram, it is necessary to know the value of the Newtonian constant of gravitation, G. This constant is remarkably difficult to measure in practice, and its value is known only to limited relative precision. The solar mass is quite a large unit on the scale of the Solar System. The largest planet, Jupiter, is 0.09% the mass of the Sun, while the Earth is about three millionths (0.0003%) of the mass of the Sun. When comparing the planets among themselves, it is often convenient to use the mass of the Earth (ME or M⊕) as a standard, particularly for the terrestrial planets. For the mass of gas giants, and also for most extrasolar planets and brown dwarfs, the mass of Jupiter (MJ) is a convenient comparison.
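The first method above, inferring a planet's mass from the orbit of one of its moons, amounts to solving the generalized Kepler's third law, M ≈ 4π²a³/(GT²), valid when the moon's mass is negligible. A sketch using Io's orbit around Jupiter (the orbital figures are textbook values, not taken from an ephemeris):

```python
# Jupiter's mass from Io's orbit via Kepler's third law, M = 4*pi^2*a^3 / (G*T^2)
# (a sketch; Io's own mass is neglected, orbital values are textbook figures).
import math

G = 6.674e-11            # Newtonian constant of gravitation, m^3 kg^-1 s^-2
a = 421_700e3            # Io's semi-major axis, m
T = 1.769 * 86400        # Io's orbital period, s

M_jupiter = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"{M_jupiter:.3e} kg")   # ~1.90e27 kg, close to the accepted value
```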
Planetary mass and planet formation The mass of a planet has consequences for its structure, particularly while the planet is still in the process of formation. A body with enough mass can overcome its compressive strength and achieve a rounded shape (roughly hydrostatic equilibrium). Since 2006, such objects have been classified as dwarf planets if they orbit the Sun directly (that is, if they are not satellites of another planet). The threshold depends on a number of factors, such as composition, temperature, and the presence of tidal heating. The smallest body that is known to be rounded is Saturn's moon Mimas, at a small fraction of the mass of Earth; on the other hand, bodies as large as the Kuiper belt object Salacia, likewise only a small fraction of the mass of Earth, may not have overcome their compressive strengths. Smaller bodies like asteroids are classified as "small Solar System bodies". A dwarf planet, by definition, is not massive enough to have gravitationally cleared its neighbouring region of planetesimals. The mass needed to do so depends on location: Mars clears its orbit in its current location, but would not do so if it orbited in the Oort cloud. The smaller planets retain only silicates and metals, and are terrestrial planets like Earth or Mars. The interior structure of rocky planets is mass-dependent: for example, plate tectonics may require a minimum mass to generate sufficient temperatures and pressures for it to occur. Geophysical definitions would also include the dwarf planets and moons in the outer Solar System, which are like terrestrial planets except that they are composed of ice and rock rather than rock and metal: the largest such bodies are Ganymede, Titan, Callisto, Triton, and Pluto. If the protoplanet grows by accretion to more than about twice the mass of Earth, its gravity becomes large enough to retain hydrogen in its atmosphere. In this case, it will grow into an ice giant or gas giant. As such, Earth and Venus are close to the maximum size a planet can usually grow to while still remaining rocky. If the planet then begins migration, it may move well within its system's frost line, and become a hot Jupiter orbiting very close to its star, then gradually losing small amounts of mass as the star's radiation strips its atmosphere. The theoretical minimum mass a star can have and still undergo hydrogen fusion at the core is estimated to be about 80 Jupiter masses, though fusion of deuterium can occur at masses as low as 13 Jupiters. Values from the DE405 ephemeris The DE405/LE405 ephemeris from the Jet Propulsion Laboratory is a widely used ephemeris dating from 1998 and covering the whole Solar System. As such, the planetary masses form a self-consistent set, which is not always the case for more recent data (see below). Earth mass and lunar mass Where a planet has natural satellites, its mass is usually quoted for the whole system (planet + satellites), as it is the mass of the whole system which acts as a perturbation on the orbits of other planets. The distinction is very slight, as natural satellites are much smaller than their parent planets (as can be seen in the table above, where only the largest satellites are even listed). The Earth and the Moon form a case in point, partly because the Moon is unusually large (just over 1% of the mass of the Earth) in relation to its parent planet compared with other natural satellites. There are also very precise data available for the Earth–Moon system, particularly from the Lunar Laser Ranging experiment (LLR).
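The point about precision made earlier can be made concrete: the geocentric gravitational constant GM is known to many digits, but converting it to kilograms divides by G and inherits G's much larger uncertainty. A sketch using the standard GM value (the G value shown is the CODATA figure; treat the digit counts as illustrative):

```python
# Earth's mass in kg is limited by the precision of G, not of GM (a sketch).
GM_earth = 3.986004418e14    # geocentric gravitational constant, m^3/s^2 (known very precisely)
G = 6.67430e-11              # Newtonian constant of gravitation (known to far fewer digits)

M_earth = GM_earth / G
print(f"{M_earth:.4e} kg")   # ~5.972e24 kg; no more digits are meaningful than G provides
```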
The geocentric gravitational constant – the product of the mass of the Earth and the Newtonian constant of gravitation – can be measured to high precision from the orbits of the Moon and of artificial satellites. The ratio of the two masses can be determined from the slight wobble in the Earth's orbit caused by the gravitational attraction of the Moon. More recent values The construction of a full, high-precision Solar System ephemeris is an onerous task. It is possible (and somewhat simpler) to construct partial ephemerides which only concern the planets (or dwarf planets, satellites, asteroids) of interest by "fixing" the motion of the other planets in the model. The two methods are not strictly equivalent, especially when it comes to assigning uncertainties to the results; however, the "best" estimates – at least in terms of quoted uncertainties in the result – for the masses of minor planets and asteroids usually come from partial ephemerides. Nevertheless, new complete ephemerides continue to be prepared, most notably the EPM2004 ephemeris from the Institute of Applied Astronomy of the Russian Academy of Sciences. EPM2004 is based on observations made between 1913 and 2003, more than seven times as many as were used for DE405, and gave more precise masses for Ceres and five asteroids. IAU best estimates (2009) A new set of "current best estimates" for various astronomical constants was approved by the 27th General Assembly of the International Astronomical Union (IAU) in August 2009. IAU current best estimates (2012) The 2009 set of "current best estimates" was updated in 2012 by resolution B2 of the IAU XXVIII General Assembly. Improved values were given for Mercury and Uranus (and also for the Pluto system and Vesta). See also Astronomical system of units Standard gravitational parameter Planetary-mass object Footnotes References Mass Planetary science Units of measurement in astronomy
Planetary mass
[ "Physics", "Astronomy", "Mathematics" ]
1,864
[ "Scalar physical quantities", "Astronomical sub-disciplines", "Units of measurement", "Physical quantities", "Quantity", "Mass", "Units of measurement in astronomy", "Size", "Planetary science", "Wikipedia categories named after physical quantities", "Matter" ]
33,099,208
https://en.wikipedia.org/wiki/Plane-wave%20expansion
In physics, the plane-wave expansion expresses a plane wave as a linear combination of spherical waves:

$$e^{i\mathbf{k}\cdot\mathbf{r}} = \sum_{\ell=0}^{\infty} (2\ell+1)\, i^{\ell}\, j_{\ell}(kr)\, P_{\ell}(\hat{\mathbf{k}}\cdot\hat{\mathbf{r}}),$$

where $i$ is the imaginary unit, $\mathbf{k}$ is a wave vector of length $k$, $\mathbf{r}$ is a position vector of length $r$, $j_\ell$ are spherical Bessel functions, $P_\ell$ are Legendre polynomials, and the hat denotes the unit vector. In the special case where $\mathbf{k}$ is aligned with the z axis,

$$e^{ikr\cos\theta} = \sum_{\ell=0}^{\infty} (2\ell+1)\, i^{\ell}\, j_{\ell}(kr)\, P_{\ell}(\cos\theta),$$

where $\theta$ is the spherical polar angle of $\mathbf{r}$. Expansion in spherical harmonics With the spherical-harmonic addition theorem the equation can be rewritten as

$$e^{i\mathbf{k}\cdot\mathbf{r}} = 4\pi \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} i^{\ell}\, j_{\ell}(kr)\, Y_{\ell}^{m}(\hat{\mathbf{k}})\, Y_{\ell}^{m*}(\hat{\mathbf{r}}),$$

where $Y_\ell^m$ are the spherical harmonics and the superscript $*$ denotes complex conjugation. Note that the complex conjugation can be interchanged between the two spherical harmonics due to symmetry. Applications The plane wave expansion is applied in Acoustics Optics S-matrix Quantum mechanics See also Helmholtz equation Plane wave expansion method in computational electromagnetism Weyl expansion References Scattering Mathematical physics
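The expansion can be verified numerically by truncating the sum. A minimal sketch (the truncation order and the test values of k, r, and θ are arbitrary choices):

```python
# Numerical check of the plane-wave expansion: partial sums of
# sum_l (2l+1) i^l j_l(kr) P_l(cos theta) should converge to e^{ikz}
# with z = r*cos(theta) (a sketch; l_max and test values are arbitrary).
import numpy as np
from scipy.special import spherical_jn, eval_legendre

def plane_wave_series(kr, cos_theta, l_max=60):
    l = np.arange(l_max + 1)
    terms = (2 * l + 1) * (1j ** l) * spherical_jn(l, kr) * eval_legendre(l, cos_theta)
    return terms.sum()

k, r, theta = 2.0, 3.0, 0.7                   # arbitrary test point
exact = np.exp(1j * k * r * np.cos(theta))    # the plane wave itself
approx = plane_wave_series(k * r, np.cos(theta))
print(abs(exact - approx))                    # ~1e-15 for sufficiently large l_max
```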
Plane-wave expansion
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
176
[ "Applied mathematics", "Theoretical physics", "Scattering stubs", "Scattering", "Particle physics", "Condensed matter physics", "Nuclear physics", "Mathematical physics" ]
33,104,716
https://en.wikipedia.org/wiki/Twisted%20geometries
Twisted geometries are discrete geometries that play a role in loop quantum gravity and spin foam models, where they appear in the semiclassical limit of spin networks. A twisted geometry can be visualized as a collection of polyhedra dual to the nodes of the spin network's graph. Intrinsic and extrinsic curvatures are defined in a manner similar to Regge calculus, but generalized to include a certain type of metric discontinuity: the face shared by two adjacent polyhedra has a unique area, but its shape can differ. This is a consequence of the quantum geometry of spin networks: ordinary Regge calculus is "too rigid" to account for all the geometric degrees of freedom described by the semiclassical limit of a spin network. The name twisted geometry captures the relation between these additional degrees of freedom and the off-shell presence of torsion in the theory, but also the fact that this classical description can be derived from twistor theory, by assigning a pair of twistors to each link of the graph and suitably constraining their helicities and incidence relations. References Loop quantum gravity Physics beyond the Standard Model
Twisted geometries
[ "Physics" ]
241
[ "Unsolved problems in physics", "Quantum mechanics", "Particle physics", "Physics beyond the Standard Model", "Quantum physics stubs" ]
33,105,284
https://en.wikipedia.org/wiki/Chernobyl%20necklace
A Chernobyl necklace is a horizontal scar at the base of the throat which results from surgery to remove a thyroid cancer caused by fallout from a nuclear accident. The scar has come to be seen as one of the most graphic demonstrations of the impact of the Chernobyl disaster. The term takes its name from the increased rate of thyroid cancer after the Chernobyl disaster. The scar has also been referred to as the Belarus necklace or the Belarusian Necklace, in reference to the large number of thyroid cancer cases in that nation caused by the nuclear fallout from neighboring Ukraine. The use of the word necklace indicates its visual resemblance to the horizontal scar around the neck, but also contrasts the negative connotations of the scar with the beauty of an actual necklace. Cause The radioactive iodine isotope iodine-131 (131I) has a relatively high fission product yield; in the case of a nuclear accident, 131I is released into the environment in the nuclear fallout. Iodine is a vital micronutrient in vertebrate biology, and tends to bioaccumulate in the thyroid gland—the primary iodine-reliant organ of the body—which requires it in order to synthesise thyroid hormones. Environmental 131I is taken up in the diet and, like the stable isotope 127I, is accumulated in the thyroid; once there, the high-energy beta radiation emitted by 131I significantly increases the risk of cancer. Treatment of thyroid cancer may require surgery, potentially leaving the patient with one or two horizontal scars at the base of the neck. It is these scars that have been dubbed the "Chernobyl necklace". Occurrences After the Chernobyl disaster, the incidence of thyroid cancer among civilians in Belarus, Ukraine, Russia, and Poland rose sharply. It is estimated that many of those affected have the necklace; however, no statistical information on the affected population exists at this time. See the article on Chernobyl disaster effects for details. After the Fukushima Daiichi nuclear disaster, there has been some speculation that Japan faces a similar situation: its affected population may receive similar surgery and scarring ("wear the Chernobyl necklace") in the future. In literature The phenomenon inspired the title of the 1999 book Bagrjane namisto ("The Crimson Necklace"), by poet and Chernobyl survivor Valentin Mikhailjuk. References Aftermath of the Chernobyl disaster Radiation health effects Thyroid cancer Scarring
Chernobyl necklace
[ "Chemistry", "Materials_science", "Technology" ]
491
[ "Radiation health effects", "Aftermath of the Chernobyl disaster", "Environmental impact of nuclear power", "Radiation effects", "Radioactivity" ]
38,580,904
https://en.wikipedia.org/wiki/Halin%27s%20grid%20theorem
In graph theory, a branch of mathematics, Halin's grid theorem states that the infinite graphs with thick ends are exactly the graphs containing subdivisions of the hexagonal tiling of the plane. It was published by Halin in 1965, and is a precursor to the work of Robertson and Seymour linking treewidth to large grid minors, which became an important component of the algorithmic theory of bidimensionality. Definitions and statement A ray, in an infinite graph, is a semi-infinite path: a connected infinite subgraph in which one vertex has degree one and the rest have degree two. Halin defined two rays r0 and r1 to be equivalent if there exists a ray r2 that includes infinitely many vertices from each of them. This is an equivalence relation, and its equivalence classes (sets of mutually equivalent rays) are called the ends of the graph. Halin defined a thick end of a graph to be an end that contains infinitely many rays that are pairwise disjoint from each other. An example of a graph with a thick end is provided by the hexagonal tiling of the Euclidean plane. Its vertices and edges form an infinite cubic planar graph, which contains many rays. For example, some of its rays form Hamiltonian paths that spiral out from a central starting vertex and cover all the vertices of the graph. One of these spiraling rays can be used as the ray r2 in the definition of equivalence of rays (no matter what rays r0 and r1 are given), showing that every two rays are equivalent and that this graph has a single end. There also exist infinite sets of rays that are all disjoint from each other, for instance the sets of rays that use only two of the six directions that a path can follow within the tiling. Because it has infinitely many pairwise disjoint rays, all equivalent to each other, this graph has a thick end. Halin's theorem states that this example is universal: every graph with a thick end contains as a subgraph either this graph itself, or a graph formed from it by modifying it in simple ways, by subdividing some of its edges into finite paths. The subgraph of this form can be chosen so that its rays belong to the given thick end. Conversely, whenever an infinite graph contains a subdivision of the hexagonal tiling, it must have a thick end, namely the end that contains all of the rays that are subgraphs of this subdivision. Analogues for finite graphs As part of their work on graph minors leading to the Robertson–Seymour theorem and the graph structure theorem, Neil Robertson and Paul Seymour proved that a family F of finite graphs has unbounded treewidth if and only if the minors of graphs in F include arbitrarily large square grid graphs, or equivalently subgraphs of the hexagonal tiling formed by intersecting it with arbitrarily large disks. Although the precise relation between treewidth and grid minor size remains elusive, this result became a cornerstone in the theory of bidimensionality, a characterization of certain graph parameters that have particularly efficient fixed-parameter tractable algorithms and polynomial-time approximation schemes. For finite graphs, the treewidth is always one less than the maximum order of a haven, where a haven describes a certain type of strategy for a robber to escape the police in a pursuit–evasion game played on the graph, and the order of the haven gives the number of police needed to catch a robber using this strategy.
Thus, the relation between treewidth and grid minors can be restated: in a family of finite graphs, the order of the havens is unbounded if and only if the size of the grid minors is unbounded. For infinite graphs, the equivalence between treewidth and haven order is no longer true, but instead havens are intimately connected to ends: the ends of a graph are in one-to-one correspondence with the havens of order ℵ0. It is not always true that an infinite graph has a haven of infinite order if and only if it has a grid minor of infinite size, but Halin's theorem provides an extra condition (the thickness of the end corresponding to the haven) under which it becomes true. Notes References Graph minor theory Infinite graphs
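For reference, a compact LaTeX restatement of the definitions and the theorem described above. This is a paraphrase of the prose, not Halin's original wording, and the theorem environments assume amsthm is loaded.

```latex
\documentclass{article}
\usepackage{amsthm}
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}

\begin{document}

\begin{definition}
Rays $r_0$ and $r_1$ in an infinite graph $G$ are \emph{equivalent} if some
ray $r_2$ contains infinitely many vertices of each of them; the equivalence
classes of this relation are the \emph{ends} of $G$. An end is \emph{thick}
if it contains infinitely many pairwise disjoint rays.
\end{definition}

\begin{theorem}[Halin's grid theorem]
An infinite graph $G$ has a thick end if and only if $G$ contains a
subdivision of the hexagonal tiling of the plane. Moreover, the subdivision
can be chosen so that all of its rays belong to the given thick end.
\end{theorem}

\end{document}
```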
Halin's grid theorem
[ "Mathematics" ]
886
[ "Mathematical objects", "Graph theory", "Infinity", "Theorems in discrete mathematics", "Infinite graphs", "Mathematical relations", "Theorems in graph theory", "Graph minor theory" ]
38,586,988
https://en.wikipedia.org/wiki/Boggio%27s%20formula
In the mathematical field of potential theory, Boggio's formula is an explicit formula for the Green's function for the polyharmonic Dirichlet problem on the ball of radius 1. It was discovered by the Italian mathematician Tommaso Boggio. The polyharmonic problem is to find a function u satisfying (-\Delta)^m u = f, where m is a positive integer, and \Delta represents the Laplace operator. The Green's function is a function G_{m,n}(x, y) satisfying (-\Delta)^m G_{m,n}(x, y) = \delta(x - y), where \delta represents the Dirac delta distribution, and in addition G_{m,n} is equal to 0 up to order m-1 at the boundary. Boggio found that the Green's function on the ball in n spatial dimensions is G_{m,n}(x, y) = C_{m,n} \, |x - y|^{2m-n} \int_1^{\left| |x| y - \frac{x}{|x|} \right| / |x - y|} \frac{(v^2 - 1)^{m-1}}{v^{n-1}} \, dv. The constant is given by C_{m,n} = \frac{1}{n e_n 4^{m-1} ((m-1)!)^2}, where e_n = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} is the volume of the unit ball in \mathbb{R}^n. Sources Elliptic partial differential equations Potential theory
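A minimal numerical sketch of Boggio's formula as reconstructed above, assuming NumPy and SciPy are available; the function name and the sanity check are illustrative, not part of the original source.

```python
import math
import numpy as np
from scipy.integrate import quad

def boggio_green(x, y, m, n):
    """Evaluate G_{m,n}(x, y) for (-Delta)^m on the unit ball in R^n.

    A direct transcription of the formula above; x and y must be distinct
    points of the open unit ball.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = np.linalg.norm(x - y)
    # | |x| y - x/|x| | equals sqrt(|x|^2 |y|^2 - 2 x.y + 1), a form that
    # also avoids dividing by |x| when x is the origin.
    q = math.sqrt(np.dot(x, x) * np.dot(y, y) - 2.0 * np.dot(x, y) + 1.0)
    e_n = math.pi ** (n / 2) / math.gamma(1 + n / 2)  # volume of the unit n-ball
    c = 1.0 / (n * e_n * 4 ** (m - 1) * math.factorial(m - 1) ** 2)
    integral, _ = quad(lambda v: (v * v - 1) ** (m - 1) / v ** (n - 1), 1.0, q / d)
    return c * d ** (2 * m - n) * integral

# Sanity check: for m = 1, n = 3 the formula reduces to the classical
# Green's function of the Laplacian on the unit ball.
print(boggio_green([0.1, 0.0, 0.0], [0.0, 0.2, 0.0], m=1, n=3))
```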
Boggio's formula
[ "Mathematics" ]
146
[ "Functions and mappings", "Mathematical relations", "Mathematical objects", "Potential theory" ]
38,587,811
https://en.wikipedia.org/wiki/F.%20M.%20Devienne
Fernand Marcel Devienne (20 February 1913 – 19 April 2003) was a French physicist who developed research on molecular beams and spectrum analysis in rarefied gas environments. Life Devienne was born in Marseille on 20 February 1913. A Doctor of Physics, F. Marcel Devienne was director of a research laboratory (Laboratoire de physique moléculaire des hautes énergies de Peymeinade, now closed) in Peymeinade, Alpes-Maritimes. He also presided over yearly symposia on molecular beams. He was one of the first to study the energy properties of triatomic hydrogen and triatomic deuterium molecules. His research also sought to recreate interstellar-like conditions in order to experiment with the synthesis of biological compounds in such environments. Devienne also conducted extensive fast atom bombardment experiments in mass spectrometry. Devienne died on 19 April 2003 in Cannes. Honours F. M. Devienne was a chevalier of the Legion of Honour, a member of the New York Academy of Sciences, a Fellow of the International Symposium on Molecular Beams, and a laureate of the 1997 Lazare-Carnot Prize and of the 1972 Gustave Ribaud Prize of the French Academy of Sciences. Works F. M. Devienne (ed.) Rarefied Gas Dynamics, Pergamon Press, 1960 F. M. Devienne Jets Moléculaires de Hautes Énergies, 1961 Resources F. M. Devienne facts on WorldCat F-Marcel Devienne facts on SciTech References French physicists Scientists from Marseille 1913 births 2003 deaths Molecular physics
F. M. Devienne
[ "Physics", "Chemistry" ]
316
[ "Molecular physics", " molecular", "nan", "Atomic", " and optical physics" ]
21,496,085
https://en.wikipedia.org/wiki/Fock%E2%80%93Lorentz%20symmetry
Lorentz invariance follows from two independent postulates: the principle of relativity and the principle of constancy of the speed of light. Dropping the latter while keeping the former leads to a new invariance, known as Fock–Lorentz symmetry or the projective Lorentz transformation. The general study of such theories began with Fock, who was motivated by the search for the general symmetry group preserving relativity without assuming the constancy of c. This invariance does not distinguish between inertial frames (and therefore satisfies the principle of relativity) but it allows for a varying speed of light in space, c; indeed it allows for a non-invariant c. According to Maxwell's equations, the speed of light satisfies c = \frac{1}{\sqrt{\varepsilon_0 \mu_0}}, where ε0 and μ0 are the electric constant and the magnetic constant. If the speed of light depends upon the spacetime coordinates of the medium, say x, then c(x) = \frac{1}{\sqrt{\varepsilon(x) \mu(x)}}, where ε(x) and μ(x) represent the vacuum as a variable medium. See also Doubly special relativity Orders of magnitude (length) Planck scale Planck units Quantum gravity Planck epoch References Further reading 40th Winter School on Theoretical Physics Special relativity Symmetry
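A one-line numerical check of the Maxwell relation quoted above, using the CODATA values shipped with SciPy (the variable name is illustrative):

```python
from scipy.constants import epsilon_0, mu_0

# c = 1/sqrt(epsilon_0 * mu_0); evaluates to ~2.99792458e8 m/s
c_vacuum = (epsilon_0 * mu_0) ** -0.5
print(c_vacuum)
```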
Fock–Lorentz symmetry
[ "Physics", "Mathematics" ]
231
[ "Special relativity", "Geometry", "Symmetry", "Theory of relativity" ]
21,496,872
https://en.wikipedia.org/wiki/Infrared%20gas%20analyzer
An infrared gas analyzer measures trace gases by determining the absorption of an emitted infrared light source through a certain air sample. Trace gases found in the Earth's atmosphere become excited under specific wavelengths found in the infrared range. The concept behind the technology can be understood as testing how much of the light is absorbed by the air. Different molecules in the air absorb different frequencies of light. Air with much of a certain gas will absorb more of a certain frequency, allowing the sensor to report a high concentration of the corresponding molecule. Infrared gas analyzers usually have two chambers: one is a reference chamber while the other is a measurement chamber. Infrared light is emitted from some type of source at one end of the chamber and passes through a series of chambers that contain given quantities of the various gases in question. Principles of Operation The design from 1975 is a nondispersive infrared sensor. It is the first improved analyzer that is able to detect more than one component of a sample gas at one time. Earlier analyzers were held back by the fact that a particular gas also has lower absorption bands in the infrared. The 1975 design has as many detectors as the number of gases to be measured. Each detector has two chambers which both have an optically aligned infrared source and detector, and are both filled with one of the gases in the sample of air to be analyzed. Lying in the optical path are two cells with transparent ends. One contains a reference gas and one will contain the gas to be analyzed. Between the infrared source and the cells is a modulator which interrupts the beams of energy. The output from each detector is combined with the output from any other detector which is measuring a signal opposite to the principal signal of each detector. The amount of signal taken from the other detectors is the amount needed to offset the proportion of the total signal that corresponds to the interference. This interference comes from gases with a principal lower absorption band that is the same as the principal band of the gas being measured. For instance, if the analyzer is to measure carbon monoxide and carbon dioxide, the chambers must contain a certain amount of these gases. The infrared light is emitted and passes through the sample gas, a reference gas with a known mixture of the gases in question, and then through the "detector" chambers containing the pure forms of the gases in question. When a "detector" chamber absorbs some of the infrared radiation, it heats up and expands. This causes a rise in pressure within the sealed vessel that can be detected either with a pressure transducer or with a similar device. The combination of output voltages from the detector chambers for the sample gas can then be compared to the output voltages from the reference chamber. The latest Infrared Gas Analyzers Like earlier infrared gas analyzers, modern analyzers also use nondispersive infrared technology, detecting a given gas through the absorption of the infrared wavelengths that are characteristic of that gas. Infrared energy is emitted from a heated filament. By optically filtering the energy, the radiation spectrum is limited to the absorption band of the gas being measured. A detector measures the energy after the infrared energy has passed through the gas to be measured. This is compared to the energy at a reference condition of no absorption. 
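A minimal sketch of the arithmetic behind such a reading, assuming a simple Beer-Lambert absorption model with a known absorption coefficient and path length; all names and values are illustrative, and a real NDIR instrument is calibrated empirically rather than computed from first principles.

```python
import math

def ndir_concentration(i_sample, i_reference, epsilon, path_length_cm):
    """Estimate a gas concentration from an NDIR intensity ratio.

    Beer-Lambert law: I = I0 * exp(-epsilon * c * L), solved for c.
    i_sample       : detector signal with the sample gas in the beam
    i_reference    : detector signal at the no-absorption reference condition
    epsilon        : absorption coefficient at the filtered wavelength [L/(mol*cm)]
    path_length_cm : optical path length through the measurement chamber [cm]
    """
    absorbance = -math.log(i_sample / i_reference)       # natural-log form
    return absorbance / (epsilon * path_length_cm)       # mol/L

# Example: 5% of the filtered infrared energy is absorbed over a 10 cm cell.
print(ndir_concentration(0.95, 1.00, epsilon=20.0, path_length_cm=10.0))
```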
Many analyzers are wall-mounted devices intended for long-term, unattended gas monitoring. There are now analyzers that measure a range of gases and are portable enough to suit a wider range of geoscience applications. Fast-response high-precision analyzers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method when used together with a fast-response sonic anemometer. In some analyzers, the reliability of measurements is enhanced by calibrating the analyzer at the reference condition and at a known span concentration. If ambient air would interfere with measurements, the chamber that houses the energy source is filled with a gas that has no detectable concentration of the gas being measured. Depending on the gas being measured, fresh air, chemically stripped air or nitrogen may be used. See also Nondispersive infrared sensor Eddy covariance References Auble, D.L.; Meyers, T.P. (1992). "An open path, fast response infrared absorption gas analyzer for H2O and CO2". Boundary-Layer Meteorology 59(3):243–256. Measuring instruments
Infrared gas analyzer
[ "Technology", "Engineering" ]
889
[ "Measuring instruments" ]
21,504,334
https://en.wikipedia.org/wiki/Home%20energy%20monitor
A home energy monitor is a device that provides information about personal electrical energy usage to a consumer of electricity. Devices may display the amount of electricity used, plus the cost of the energy used and estimates of greenhouse gas emissions. The purpose of such devices is to assist in the management of power consumption. Several initiatives have been launched to increase the usage of home energy monitors. Studies have shown a reduction in home energy use when the devices are used. Description A home energy monitor provides information about electrical energy usage to a consumer of electricity (i.e., a homeowner). In addition to the amount of electrical usage, devices may display other information, including the cost of energy used and estimates of greenhouse gas emissions. The purpose of such devices is to assist in the management of power consumption. Monitors consist of a measuring component and a display component. Electricity use is measured with an inductive clamp placed around the electric main, via the electric meter (either through an optical port, or by sensing the meter's actions), by communicating with a smart meter, or by direct connection to the electrical system. Some, but not all, plug-in units store their readings when not connected. The display portion may be remote from the measurement, communicating with the sensor using a cable, network, power line communications, or radio. Online displays are also available which allow the user to use an internet-connected display to show near real-time consumption. Initiatives Australia In January 2009 the government of the state of Queensland, Australia began offering wireless energy monitors as part of its ClimateSmart Home Service program. By August 2009, almost 100,000 homes had signed up for the service; by August 2010 that number had risen to 200,000 homes. By the end of the program more than 335,000 households across Queensland had received the service with the Elite energy monitoring device supplied by Efergy Technologies. In mid-2013 the government of the state of Victoria, Australia enabled Zigbee-based in-home displays to be connected to Victorian smart meters. From September 2019, Victorian households have been eligible for rebates for home energy monitor installation under the Victorian Energy Upgrades Program. Google PowerMeter Google PowerMeter was a software project of Google's philanthropic arm, Google.org, to help consumers track their home electricity usage that ran from October 5, 2009 to September 16, 2011. Studies Various studies have shown a reduction in home energy use of 4–15% through use of a home energy display. A study using the PowerCost Monitor deployed in 500 Ontario homes by Hydro One showed an average 6.5% drop in total electricity use when compared with a similarly sized control group. Hydro One subsequently offered power monitors for $8.99 shipping and handling to 30,000 customers based on the success of the pilot. A study in the city of Sabadell, Spain in 2009 using the Efergy e2 in 29 households during a six-month period found a drop of 11.8% in weekly consumption between the first and last weeks of the campaign. On a monthly basis, the savings were 14.3%. Expected annual emissions for all households were estimated to fall by 4.1 tonnes; projected emissions savings for 2020 were 180.6 tonnes. 
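A small sketch of the bookkeeping such a display performs, integrating power samples into energy, cost, and emissions; the tariff and emission factor below are illustrative placeholders, not values from any of the studies cited.

```python
def summarize(power_samples_w, interval_s, price_per_kwh=0.30, kg_co2_per_kwh=0.8):
    """Convert instantaneous power readings (watts) taken at a fixed
    interval into energy used, cost, and an emissions estimate."""
    energy_kwh = sum(power_samples_w) * interval_s / 3_600_000  # W*s -> kWh
    return {
        "energy_kwh": energy_kwh,
        "cost": energy_kwh * price_per_kwh,
        "kg_co2": energy_kwh * kg_co2_per_kwh,
    }

# One hour of 10-second samples from a steady 2 kW load -> 2.0 kWh:
print(summarize([2000] * 360, interval_s=10))
```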
See also AlertMe Energy management software Google PowerMeter Energy conservation Hohm Home automation Kill A Watt Nonintrusive load monitoring Smart meter Wattmeter References External links gov.uk Saving electricity with a home energy monitor Electric power Electricity meters
Home energy monitor
[ "Physics", "Engineering" ]
721
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
21,505,767
https://en.wikipedia.org/wiki/Linear%20dichroism
Linear dichroism (LD) or diattenuation is the difference between absorption of light polarized parallel and polarized perpendicular to an orientation axis. It is the property of a material whose transmittance depends on the orientation of linearly polarized light incident upon it. As a technique, it is primarily used to study the functionality and structure of molecules. LD measurements are based on the interaction between matter and light and thus are a form of electromagnetic spectroscopy. This effect has been applied across the EM spectrum, where different wavelengths of light can probe a host of chemical systems. The predominant use of LD currently is in the study of bio-macromolecules (e.g. DNA) as well as synthetic polymers. Basic information Linear polarization LD uses linearly polarized light, which is light that has been polarized in one direction only. This produces a wave, the electric field vector, which oscillates in only one plane, giving rise to a classic sinusoidal wave shape as the light travels through space. By using light parallel and perpendicular to the orientation direction it is possible to measure how much more energy is absorbed in one dimension of the molecule relative to the other, providing information to the experimentalist. As light interacts with the molecule being investigated, should the molecule start absorbing the light, electron density inside the molecule will be shifted as the electron becomes photoexcited. This movement of charge is known as an electronic transition, the direction of which is called the electric transition polarisation. It is this property that LD measures. The LD of an oriented molecule can be calculated using the following equation: LD = A║ − A┴, where A║ is the absorbance parallel to the orientation axis and A┴ is the absorbance perpendicular to the orientation axis. Note that light of any wavelength can be used to generate an LD signal. The LD signal therefore has two limiting cases. For a chemical system whose electric transition is parallel to the orientation axis, the following equation can be written: LD = A║ − A┴ = A║ > 0. For most chemical systems this represents an electric transition polarised along the length of the molecule (i.e. parallel to the orientation axis). Alternatively, the electric transition polarisation can be found to be perfectly perpendicular to the orientation of the molecule, giving rise to the following equation: LD = A║ − A┴ = −A┴ < 0. This equation represents the LD signal recorded if the electric transition is polarised across the width of the molecule (i.e. perpendicular to the orientation axis), which in the case of LD is the smaller of the two investigable axes. LD can therefore be used in two ways. If the orientation of the molecules in flow is known, then the experimentalist can look at the direction of polarisation in the molecule (which gives an insight into the chemical structure of the molecule), or if the polarisation direction is unknown it can be used as a means of working out how oriented in flow a molecule is. UV linear dichroism Ultraviolet (UV) LD is typically employed in the analysis of biological molecules, especially large, flexible, long molecules that prove difficult to structurally determine by such methods as NMR and X-ray diffraction. DNA DNA is almost ideally suited for UV LD detection. The molecule is very long and very thin, making it very easy to orient in flow. This gives rise to a strong LD signal. 
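A trivial sketch of the sign convention defined earlier, before turning to specific DNA systems; the numbers are illustrative only.

```python
def linear_dichroism(a_parallel, a_perpendicular):
    """LD = A(parallel) - A(perpendicular) for an oriented sample."""
    return a_parallel - a_perpendicular

# Transition polarised along the orientation axis -> positive LD:
print(linear_dichroism(0.42, 0.00))   # 0.42
# Transition polarised across the molecule -> negative LD:
print(linear_dichroism(0.00, 0.17))   # -0.17
```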
DNA systems that have been studied using UV LD include DNA–enzyme complexes and DNA–ligand complexes, the formation of the latter being easily observable through kinetic experiments. Fibrous proteins Fibrous proteins, such as the proteins involved in Alzheimer's disease and prion proteins, fulfil the requirements for UV LD in that they are a class of long, thin molecules. In addition, cytoskeletal proteins can also be measured using LD. Membrane proteins The insertion of membrane proteins into a lipid membrane has been monitored using LD, supplying the experimentalist with information about the orientation of the protein relative to the lipid membrane at different time points. In addition, other types of molecule have been analysed by UV LD, including carbon nanotubes and their associated ligand complexes. Alignment methods Couette flow The Couette flow orientation system is the most widely used method of sample orientation for UV LD. It has a number of characteristics which make it highly suitable as a method of sample alignment. Couette flow is currently the only established means of orientating molecules in the solution phase. This method also requires only very small amounts of analysis sample (20–40 μL) in order to generate an LD spectrum. The constant recirculation of sample is another useful property of the system, allowing many repeat measurements to be taken of each sample, decreasing the effect of noise on the final recorded spectrum. Its mode of operation is very simple, with the sample sandwiched between a spinning tube and a stationary rod. As the sample is spun inside the cell, the light beam is shone through the sample, the parallel absorbance calculated from horizontally polarised light and the perpendicular absorbance from vertically polarised light. Couette flow UV LD is currently the only commercially available means of LD orientation. Stretched film Stretched film linear dichroism is a method of orientation based on incorporating the sample molecules into a polyethylene film. The polyethylene film is then stretched, causing the randomly oriented molecules on the film to ‘follow’ the movement of the film. The stretching of the film results in the sample molecules being oriented in the direction of the stretch. Associated techniques Circular Dichroism LD is very similar to Circular Dichroism (CD), but with two important differences. (i) CD spectroscopy uses circularly polarized light whereas LD uses linearly polarized light. (ii) In CD experiments molecules are usually free in solution so they are randomly oriented. The observed spectrum is then a function only of the chiral or asymmetric nature of the molecules in the solution. With biomacromolecules CD is particularly useful for determining the secondary structure. By way of contrast, in LD experiments the molecules need to have a preferential orientation otherwise LD = 0. With biomacromolecules flow orientation is often used; other methods include stretched films, magnetic fields, and squeezed gels. Thus LD gives information such as alignment on a surface or the binding of a small molecule to a flow-oriented macromolecule, endowing it with different functionality from other spectroscopic techniques. The differences between LD and CD are complementary and can be a potent means for elucidating the structure of biological molecules when used in conjunction with one another, the combination of techniques revealing far more information than a single technique in isolation. 
For example, CD tells us when a membrane peptide or protein folds whereas LD tells us when it inserts into a membrane. Fluorescence-detected linear dichroism Fluorescence-detected linear dichroism (FDLD) is a very useful technique to the experimentalist as it combines the advantages of UV LD whilst also offering the confocal detection of the fluorescence emission. FDLD has applications in microscopy, where it can be used as a means of two-dimensional surface mapping through differential polarisation spectroscopy (DPS), where the anisotropy of the scanned object allows an image to be recorded. FDLD can also be used in conjunction with intercalating fluorescent dyes (which can also be monitored using UV LD). The intensity difference recorded between the two types of polarised light for the fluorescence reading is proportional to the UV LD signal, allowing the use of DPS to image surfaces. References Spectroscopy
Linear dichroism
[ "Physics", "Chemistry" ]
1,571
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
21,506,869
https://en.wikipedia.org/wiki/Neutron%20research%20facility
A neutron research facility is most commonly a big laboratory operating a large-scale neutron source that provides thermal neutrons to a suite of research instruments. The neutron source usually is a research reactor or a spallation source. In some cases, a smaller facility will provide high energy neutrons (e.g. 2.5 MeV or 14 MeV fusion neutrons) using existing neutron generator technologies. List of neutron facilities The following list is intended to be exhaustive and to cover active facilities as well as those that are shut down. Australia ANSTO-HIFAR Reactor, Sydney Open-pool Australian lightwater reactor (OPAL) Bangladesh Atomic Energy Research Establishment (AERE), Bangladesh Atomic Energy Commission (BAEC) Canada NRC Canadian Neutron Beam Centre at Chalk River Laboratories RE-Labs Inc. – Single Event Effects Testing Services China China Spallation Neutron Source – Dongguan, Guangdong. CNPG – Light ion (D,T), China Institute of Atomic Energy HI-13 – Light ion (D,T), China Institute of Atomic Energy Czech Republic Neutron Physics Laboratory (within CANAM infrastructure) Denmark Risø (reactors 1958–2000) Egypt ETRR-1 – Nuclear Research Center, Inshas (1961–) ETRR-2 – Nuclear Research Center, Inshas (1997–) France ILL – Institut Laue–Langevin (1972–) LLB – Laboratoire Léon Brillouin at CEA Saclay NFS – GANIL Germany FRM I – Technical University, Garching (1957–2000) FRM II – Technical University, Garching (2004–) FRMZ –Johannes Gutenberg-Universität, Mainz (1967–) FRJ-2 at Forschungszentrum Jülich (1962–2006) Jülich Centre for Neutron Science (2005–), a virtual facility that operates instruments at other facilities (FRM II, ILL, SNS) FRG-1 – GKSS, Geesthacht near Hamburg (1958–2010) Helmholtz-Zentrum Berlin, formerly HMI – Hahn-Meitner-Institut Hungary KFKI Research Institutes, Budapest India Dhruva, CIRUS and Apsara: Bhabha Atomic Research Centre, Mumbai KAMINI Indonesia Neutron Scattering Laboratory – (BATAN) Japan JAERI – Japan Atomic Energy Research Institute KENS – High Energy Accelerator Organisation, KEK KURRI – Research Reactor Institute (Kyoto) JSNS – (part of the Japan proton accelerator research complex (J-PARC) Netherlands RID – Reactor Institute Delft, Delft University of Technology Norway IFE – Jeep 2 reactor at Kjeller Institute for Energy Technology Poland Maria reactor – POLATOM Institute of Nuclear Energy, Świerk-Otwock Ewa reactor – POLATOM Institute of Nuclear Energy, Świerk-Otwock (1958–1995) Russia IBR Fast Pulsed Reactors (Dubna) JINR – Joint Institute for Nuclear Research, Dubna Gatchina South Africa NECSA SAFARI-1 South Korea High-Flux Advanced Neutron Application Reactor (HANARO) – Korea Atomic Energy Research Institute (KAERI) Sweden NFL – Studsvik Neutron Research Laboratory, Studsvik ESS – European Spallation Source (project) Switzerland SINQ@PSI – Paul Scherrer Institute UCN@PSI – Paul Scherrer Institute – Ultra Cold Neutron Source n_TOF – CERN Ukraine United Kingdom DIDO ISIS Neutron and Muon Source, Rutherford Appleton Laboratory, Oxfordshire United States HFBR – High Flux Beam Reactor, Brookhaven (1965–1996) IPNS – Intense Pulsed Neutron Source, Argonne National Laboratory (1981–2008) LANSCE – Los Alamos Neutron Science Center (Los Alamos) LENS – Low Energy Neutron Source, Indiana University, Bloomington, IN. NIST – Center for Neutron Research, Gaithersburg near Washington D.C. NSL – Neutron Science Laboratory, University of Michigan College of Engineering. 
HFIR – High Flux Isotope Reactor, Oak Ridge National Laboratory SNS – Spallation Neutron Source, Oak Ridge National Laboratory MURR – University of Missouri Research Reactor, Columbia, MO. MNRC – McClellan Nuclear Research Center, Sacramento, CA. RPI LINAC – Rensselaer Gaerttner LINAC Center, Troy, NY. References External links List of major active neutron facilities NMI3 – a European consortium of 18 partner organisations from 12 countries, including all major facilities in the fields of neutron scattering and muon spectroscopy Nuclear physics
Neutron research facility
[ "Physics" ]
924
[ "Nuclear physics" ]
6,855,071
https://en.wikipedia.org/wiki/Ceramic%20membrane
Ceramic membranes are a type of artificial membrane made from inorganic materials (such as alumina, titania, zirconia oxides, silicon carbide or some glassy materials). They are used in membrane operations for liquid filtration. By contrast with polymeric membranes, they can be used in separations where aggressive media (acids, strong solvents) are present. They also have excellent thermal stability which makes them usable in high-temperature membrane operations. Like polymeric membranes, they are either dense or porous. Researchers have studied ceramic membranes for potential applications in wastewater treatment, gas separation, and membrane reactors. Ceramic membranes typically last longer than the polymeric membranes that are more commonly used for these applications. Currently ceramic membranes have not seen widespread usage, mainly due to their high cost of production. Configurations include tubular cross-flow and dead-end membranes as well as flat sheet membranes. Dense membranes Dense ceramic membranes are used for the purpose of gas separation. Examples are the separation of oxygen from air, or the separation of hydrogen gas from a mixture. Dense ceramic membranes have been studied for process intensification applications to reduce the energy consumption of many technologies used in the petroleum industry. One such application is membrane reactors, through the use of dense oxygen-permeable membranes. Porous membranes Porous ceramic membranes are chiefly used for gas separation and micro- or nanofiltration. They can be made from both crystalline as well as amorphous solids. An example of an amorphous membrane is the silica membrane. An example of a highly porous membrane is the type made of silicon carbide. Porous ceramic membranes are typically manufactured through a slip coating–sintering process. In this process a support is initially made by sintering particles of a ceramic material into a mold with a binding agent. The surface of this support is then coated in a solution of finer ceramic particles and a polymeric binder. This coating is then sintered to form a porous layer of the membrane. This process can then be repeated to form new layers, typically using progressively smaller ceramic particles. This repeated process with increasingly small particles creates an anisotropic membrane. History & manufacturers of ceramic membranes The first ceramic membranes were produced in France in the 1980s for the purpose of uranium enrichment in the nuclear industry. After many of the nuclear plants were set up in France, other industrial application areas for the ceramic membranes were sought out. At the same time academic research on ceramic membranes was conducted. The leading group was directed by Professor Louis Cot at the National Graduate School of Chemistry in Montpellier. The group's growth gave rise to the creation of a laboratory fully dedicated to membrane materials and processes from 1994 and to the European Membrane Institute of Montpellier in 2000. French manufacturers of ceramic membranes include Orelis Environnement (Alsys group), Pall Exekia and Tami Industries. 
Other companies outside France include CoorsTek (http://www.coorstek.com), Atech (http://www.atech-innovations.com), Inopor, Jiangsu Jiuwu, Meidensha, METAWATER (https://www.metawater.co.jp/eng/), Liqtech, and Mantec Technical Ceramics Ltd (http://www.mantectechnicalceramics.com/products-services/porous-ceramics/filtration/star-sep-membranes/elements). While most ceramic membrane manufacturers produce carriers and membrane layers from aluminium oxide, titanium oxide and zirconium oxide, only a few manufacturers work with silicon carbide. Silicon carbide requires higher sintering temperatures (>2000 °C) compared to oxide-based membranes (1200–1600 °C). The pioneers in developing and commercializing silicon carbide membranes are the Danish company Liqtech, CeraMem (Alsys group) and the American company Kemco Systems. References Membrane technology
Ceramic membrane
[ "Chemistry" ]
805
[ "Membrane technology", "Separation processes" ]
6,855,413
https://en.wikipedia.org/wiki/Soft%20Matter%20%28journal%29
Soft Matter is a peer-reviewed scientific journal covering the science of soft matter. It is published by the Royal Society of Chemistry and the editor-in-chief is Darrin Pochan (University of Delaware, USA). The journal was established in 2005. Initially it was published monthly, but as submissions increased it switched to 24 issues a year in 2009 and to 48 issues a year in 2012. Abstracting and indexing The journal is abstracted and indexed in: Current Contents/Physical, Chemical & Earth Sciences Index Medicus/MEDLINE/PubMed Science Citation Index Scopus According to the Journal Citation Reports, the journal has a 2021 impact factor of 4.046. See also List of scientific journals in chemistry References External links Biochemistry journals Engineering journals Materials science journals Academic journals established in 2005 Royal Society of Chemistry academic journals Weekly journals English-language journals
Soft Matter (journal)
[ "Chemistry", "Materials_science", "Engineering" ]
174
[ "Biochemistry journals", "Biochemistry literature", "Materials science journals", "Materials science" ]
6,855,414
https://en.wikipedia.org/wiki/List%20of%20pioneering%20solar%20buildings
The following buildings are of significance in pioneering the use of solar powered building design: MIT Solar House #1, Massachusetts, United States (Hoyt C. Hottel & others, 1939) Howard Sloan House, Glenview, Illinois, United States (George Fred Keck, 1940) "Solar Hemicycle", near Madison, Wisconsin, United States (Frank Lloyd Wright, 1944) Löf House, Boulder, Colorado, United States (George Löf, 1945) Rosenberg House, Tucson, Arizona, United States (Arthur T. Brown, 1946) MIT Solar House #2, United States, (Hoyt C. Hottel & others, 1947) Peabody House ("Dover Sun House", MIT Solar House #6), Dover, Massachusetts, United States (Eleanor Raymond & Mária Telkes, 1948) Henry P. Glass House, Northfield, Illinois, United States (Henry P. Glass, 1948) Rose Elementary School, Tucson, Arizona, United States (Arthur T. Brown, 1948) MIT Solar House #3, United States, (Hoyt C. Hottel & others, 1949) New Mexico State College House, New Mexico, United States (Lawrence Gardenhire, 1953) Lefever Solar House, Pennsylvania, United States (HR Lefever, 1954) Bliss House, Amado, Arizona, United States (Raymond W. Bliss & M. K. Donavan, 1954) Solar Building, Albuquerque, New Mexico, United States (Frank Bridgers & Don Paxton, 1956) University of Toronto House, Toronto, Ontario, Canada (EA Allcut, 1956) Solar House, Tokyo, Japan (Masanosuke Yanagimachi, 1956) Solar House, Bristol, United Kingdom (L Gardner, 1956) Curtis House, Rickmansworth, United Kingdom (Edward JW Curtis, 1956) Löf House, Denver, Colorado, United States (James M. Hunter & George Löf, 1957) AFASE "Living With the Sun" House, Phoenix, Arizona, United States (Peter Lee, Robert L. Bliss & John Yellott, 1958) MIT Solar House #4, United States (Hoyt C. Hottel & others, 1958) Solar House, Casablanca, Morocco (CM Shaw & Associates, 1958) Solar House, Nagoya, Japan (Masanosuke Yanagimachi, 1958) Curtiss-Wright "Sun Court," Princeton, New Jersey, United States (Maria Telkes & Aladar Olgyay, 1958) "Sun-Tempered House" Van Dresser Residence (Peter van Dresser, 1958) Thomason Solar House "Solaris" #1, Washington D.C., United States (Harry Thomason, 1959) Passive Solar House, Odeillo, France (Félix Trombe & Jacques Michel, 1967) Steve Baer House, Corrales, New Mexico, United States (Steve Baer, 1971) Skytherm House, Atascadero, California, United States (Harold R. Hay, 1973) Solar One, Newark, Delaware, United States (K.W. Böer & Maria Telkes, 1973) MIT Solar Building V, Cambridge, Massachusetts, United States (T.E. Johnson, C.C. Benton, S. Hale, 1978) "Unit One" Balcomb Residence, Santa Fe, New Mexico, United States (William Lumpkins, 1979) The first Zero Energy Design home, Oklahoma, United States (Larry Hartweg, 1979) Saunders Shrewsbury House, Shrewsbury, Massachusetts, United States (Norman B. Saunders, 1981) Multiple IEA SHC "Task 13" houses, Worldwide (IEA SHC, 1989) Multiple passive houses in Darmstadt, Germany (Bott, Ridder & Westermeyer, 1990) Heliotrope, Freiburg im Breisgau, Germany (Rolf Disch, 1994) The Druk White Lotus School, Ladakh, India (Arup, 2002) 31 Tannery Project, Branchburg, New Jersey, United States (2006) Sun Ship, Freiburg im Breisgau, Germany (Rolf Disch, 2006) See also Passive solar building design History of passive solar building design Low-energy house Energy-plus-house Sustainable development References Solar design Low-energy building Building engineering Energy conservation Lists of buildings and structures Lists related to renewable energy
List of pioneering solar buildings
[ "Engineering" ]
874
[ "Building engineering", "Solar design", "Energy engineering", "Civil engineering", "Architecture" ]
6,856,520
https://en.wikipedia.org/wiki/Industrial%20radiography
Industrial radiography is a modality of non-destructive testing that uses ionizing radiation to inspect materials and components with the objective of locating and quantifying defects and degradation in material properties that would lead to the failure of engineering structures. It plays an important role in the science and technology needed to ensure product quality and reliability. In Australia, industrial radiographic non-destructive testing is colloquially referred to as "bombing" a component with a "bomb". Industrial radiography uses either X-rays, produced with X-ray generators, or gamma rays generated by the natural radioactivity of sealed radionuclide sources. Neutrons can also be used. After crossing the specimen, photons are captured by a detector, such as a silver halide film, a phosphor plate, flat panel detector or CdTe detector. The examination can be performed in static 2D (named radiography), in real time 2D (fluoroscopy), or in 3D after image reconstruction (computed tomography or CT). It is also possible to perform tomography nearly in real time (4-dimensional computed tomography or 4DCT). Particular techniques such as X-ray fluorescence (XRF), X-ray diffractometry (XRD), and several others complete the range of tools that can be used in industrial radiography. Inspection techniques can be portable or stationary. Industrial radiography is used in the inspection of welds, castings and composite pieces, in food inspection and luggage control, in sorting and recycling, in EOD and IED analysis, in aircraft maintenance, ballistics and turbine inspection, in surface characterisation and coating thickness measurement, in counterfeit drug control, etc. History Radiography started in 1895 with the discovery of X-rays (later also called Röntgen rays after the man who first described their properties in detail), a type of electromagnetic radiation. Soon after the discovery of X-rays, radioactivity was discovered. By using radioactive sources such as radium, far higher photon energies could be obtained than those from normal X-ray generators. Soon these found various applications, with one of the earliest users being Loughborough College. X-rays and gamma rays were put to use very early, before the dangers of ionizing radiation were discovered. After World War II new isotopes such as caesium-137, iridium-192 and cobalt-60 became available for industrial radiography, and the use of radium and radon decreased. Applications Inspection of products Gamma radiation sources, most commonly iridium-192 and cobalt-60, are used to inspect a variety of materials. The vast majority of radiography concerns the testing and grading of welds on piping, pressure vessels, high-capacity storage containers, pipelines, and some structural welds. Other tested materials include concrete (locating rebar or conduit), welder's test coupons, machined parts, plate metal, or pipewall (locating anomalies due to corrosion or mechanical damage). Non-metal components such as ceramics used in the aerospace industries are also regularly tested. Theoretically, industrial radiographers could radiograph any solid, flat material (walls, ceilings, floors, square or rectangular containers) or any hollow cylindrical or spherical object. Inspection of welding The beam of radiation must be directed to the middle of the section under examination and must be normal to the material surface at that point, except in special techniques where known defects are best revealed by a different alignment of the beam. 
The length of weld under examination for each exposure shall be such that the thickness of the material at the diagnostic extremities, measured in the direction of the incident beam, does not exceed the actual thickness at that point by more than 6%. The specimen to be inspected is placed between the source of radiation and the detecting device, usually the film in a light-tight holder or cassette, and the radiation is allowed to penetrate the part for the required length of time to be adequately recorded. The result is a two-dimensional projection of the part onto the film, producing a latent image of varying densities according to the amount of radiation reaching each area. It is known as a radiograph, as distinct from a photograph produced by light. Because film is cumulative in its response (the exposure increasing as it absorbs more radiation), relatively weak radiation can be detected by prolonging the exposure until the film can record an image that will be visible after development. The radiograph is examined as a negative, without printing as a positive as in photography. This is because, in printing, some of the detail is always lost and no useful purpose is served. Before commencing a radiographic examination, it is always advisable to examine the component with one's own eyes, to eliminate any possible external defects. If the surface of a weld is too irregular, it may be desirable to grind it to obtain a smooth finish, but this is likely to be limited to those cases in which the surface irregularities (which will be visible on the radiograph) may make detecting internal defects difficult. After this visual examination, the operator will have a clear idea of the possibilities of access to the two faces of the weld, which is important both for the setting up of the equipment and for the choice of the most appropriate technique. Defects such as delaminations and planar cracks are difficult to detect using radiography, particularly to the untrained eye. Without overlooking the negatives of radiographic inspection, radiography does hold many significant benefits over ultrasonics, particularly inasmuch as a 'picture' is produced, keeping a semi-permanent record for the life cycle of the film, so that more accurate identification of the defect can be made, and by more interpreters. This is very important, as most construction standards permit some level of defect acceptance, depending on the type and size of the defect. To the trained radiographer, subtle variations in visible film density provide the ability not only to accurately locate a defect, but to identify its type, size and location; an interpretation that can be physically reviewed and confirmed by others, possibly eliminating the need for expensive and unnecessary repairs. For purposes of inspection, including weld inspection, there exist several exposure arrangements. First, there is the panoramic, one of the four single-wall exposure/single-wall view (SWE/SWV) arrangements. This exposure is created when the radiographer places the source of radiation at the center of a sphere, cone, or cylinder (including tanks, vessels, and piping). Depending upon client requirements, the radiographer would then place film cassettes on the outside of the surface to be examined. This exposure arrangement is nearly ideal – when properly arranged and exposed, all portions of all exposed film will be of the same approximate density. 
It also has the advantage of taking less time than other arrangements since the source must only penetrate the total wall thickness (WT) once and must only travel the radius of the inspection item, not its full diameter. The major disadvantage of the panoramic is that it may be impractical to reach the center of the item (enclosed pipe) or the source may be too weak to perform in this arrangement (large vessels or tanks). The second SWE/SWV arrangement is an interior placement of the source in an enclosed inspection item without having the source centered up. The source does not come in direct contact with the item, but is placed a distance away, depending on client requirements. The third is an exterior placement with similar characteristics. The fourth is reserved for flat objects, such as plate metal, and is also radiographed without the source coming in direct contact with the item. In each case, the radiographic film is located on the opposite side of the inspection item from the source. In all four cases, only one wall is exposed, and only one wall is viewed on the radiograph. Of the other exposure arrangements, only the contact shot has the source located on the inspection item. This type of radiograph exposes both walls, but only resolves the image on the wall nearest the film. This exposure arrangement takes more time than a panoramic, as the source must first penetrate the WT twice and travel the entire outside diameter of the pipe or vessel to reach the film on the opposite side. This is a double-wall exposure/single-wall view (DWE/SWV) arrangement. Another is the superimposure (wherein the source is placed on one side of the item, not in direct contact with it, with the film on the opposite side). This arrangement is usually reserved for very small diameter piping or parts. The last DWE/SWV exposure arrangement is the elliptical, in which the source is offset from the plane of the inspection item (usually a weld in pipe) and the elliptical image of the weld furthest from the source is cast onto the film. Airport security Both hold luggage and carry-on hand luggage are normally examined by X-ray machines using X-ray radiography. See airport security for more details. Non-intrusive cargo scanning Gamma radiography and high-energy X-ray radiography are currently used to scan intermodal freight cargo containers in the US and other countries. Research is also being done on adapting other types of radiography, such as dual-energy X-ray radiography or muon radiography, for scanning intermodal cargo containers. Art The American artist Kathleen Gilje has painted copies of Artemisia Gentileschi's Susanna and the Elders and Gustave Courbet's Woman with a Parrot. She first painted, in lead white, similar pictures with differences: Susanna fights the intrusion of the elders; there is a nude Courbet beyond the woman he paints. She then painted over them, reproducing the originals. Gilje's paintings are exhibited with radiographs that show the underpaintings, simulating the study of pentimentos and providing a comment on the old masters' work. Sources Many types of ionizing radiation sources exist for use in industrial radiography. X-ray generators X-ray generators produce X-rays by applying a high voltage between the cathode and the anode of an X-ray tube and by heating the tube filament to start electron emission. The electrons are then accelerated in the resulting electric potential and collide with the anode, which is usually made of tungsten. 
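A brief numerical aside on the tube just described: the highest photon energy in the emitted spectrum is fixed by the tube voltage (the Duane-Hunt limit), a standard relation not spelled out in the text above. A minimal sketch using SciPy's physical constants:

```python
from scipy.constants import h, c, e

def duane_hunt_min_wavelength(tube_voltage_v):
    """Shortest emitted X-ray wavelength (m) for a given tube voltage:
    lambda_min = h*c / (e*V)."""
    return h * c / (e * tube_voltage_v)

# A 200 kV tube cannot emit photons above 200 keV:
print(duane_hunt_min_wavelength(200e3))  # ~6.2e-12 m
```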
The X-rays that are emitted by this generator are directed towards the object to be inspected. They cross it and are absorbed according to the attenuation coefficient of the object's material. The attenuation coefficient combines the cross sections of all the interactions happening in the material. The three most important inelastic interactions with X-rays at those energy levels are the photoelectric effect, Compton scattering and pair production. After having crossed the object, the photons are captured by a detector, such as a silver halide film, a phosphor plate or a flat panel detector. When an object is too thick or too dense, or its effective atomic number is too high, a linear accelerator (linac) can be used. Linacs produce X-rays in a similar way, by electron collisions on a metal anode; the difference is that they use a much more complex method to accelerate the electrons. Sealed radioactive sources Radionuclides are often used in industrial radiography. They have the advantage that they do not need a supply of electricity to function, but it also means that they cannot be turned off. The two most common radionuclides used in industrial radiography are iridium-192 and cobalt-60. But others are used in general industry as well. Am-241: Backscatter gauges, smoke detectors, fill height and ash content detectors. Sr-90: Thickness gauging for thick materials up to 3 mm. Kr-85: Thickness gauging for thin materials like paper, plastics, etc. Cs-137: Density and fill height level switches. Ra-226: Ash content Cf-252: Ash content Ir-192: Industrial radiography Se-75: Industrial radiography Yb-169: Industrial radiography Co-60: Density and fill height level switches, industrial radiography These isotopes emit radiation in a discrete set of energies, depending on the decay mechanism happening in the atomic nucleus. Each energy will have a different intensity depending on the probability of a particular decay interaction. The most prominent energies are 1.33 and 1.17 MeV for cobalt-60, and 0.31, 0.47 and 0.60 MeV for iridium-192. From a radiation safety point of view, this makes them more difficult to handle and manage. They always need to be enclosed in a shielded container, and because they are still radioactive after their normal life cycle, their ownership often requires a license and they are usually tracked by a governmental body. If this is the case, their disposal must be done in accordance with national policies. The radionuclides used in industrial radiography are chosen for their high specific activity. This high activity means that only a small sample is required to obtain a good radiation flux. However, higher activity often means a higher dose in the case of an accidental exposure. Radiographic cameras A series of different designs have been developed for radiographic "cameras". Rather than the "camera" being a device that accepts photons to record a picture, the "camera" in industrial radiography is the radioactive photon source. Most industries are moving from film-based radiography to digital sensor-based radiography, much the same way that traditional photography has made this move. Since the amount of radiation emerging from the opposite side of the material can be detected and measured, variations in this amount (or intensity) of radiation are used to determine thickness or composition of material. Shutter design One design uses a moving shutter to expose the source. 
The radioactive source is placed inside a shielded box; a hinge allows part of the shielding to be opened, exposing the source and allowing photons to exit the radiography camera. Another shutter design places the source in a metal wheel, which can turn inside the camera to move between the expose and storage positions. Shutter-based devices require the entire device, including the heavy shielding, to be located at the exposure site. This can be difficult or impossible, so they have largely been replaced by cable-driven projectors. Projector design Modern projector designs use a cable drive mechanism to move the source along a hollow guide tube to the exposure location. The source is stored in a block of shielding that has an S-shaped tube-like hole through the block. In the safe position the source is in the center of the block. The source is attached to a flexible metal cable called a pigtail. To use the source, a guide tube is attached to one side of the device while a drive cable is attached to the pigtail. Using a hand-operated control, the source is then pushed out of the shield and along the source guide tube to the tip of the tube to expose the film, then cranked back into its fully shielded position. Neutrons In some rare cases, radiography is done with neutrons. This type of radiography is called neutron radiography (NR, Nray, N-ray) or neutron imaging. Neutron radiography provides different images than X-rays, because neutrons can pass with ease through lead and steel but are stopped by plastics, water and oils. Neutron sources include radioactive (241Am/Be and Cf) sources, electrically driven D-T reactions in vacuum tubes and conventional critical nuclear reactors. It might be possible to use a neutron amplifier to increase the neutron flux. Safety Radiation safety is a very important part of industrial radiography. The International Atomic Energy Agency has published a report describing best practices for lowering the radiation dose that workers are exposed to. It also provides a list of national competent authorities responsible for approvals and authorizations regarding the handling of radioactive material. Shielding Shielding can be used to protect the user from the harmful properties of ionizing radiation. The type of material used for shielding depends on the type of radiation being used. National radiation safety authorities usually regulate the design, commissioning, maintenance and inspection of industrial radiography installations. In the industry Industrial radiographers are in many locations required by governing authorities to use certain types of safety equipment and to work in pairs. Depending on location, industrial radiographers may be required to obtain permits, licenses and/or undertake special training. Prior to conducting any testing, the nearby area should always first be cleared of all other persons and measures should be taken to ensure that workers do not accidentally enter an area that may expose them to dangerous levels of radiation. The safety equipment usually includes four basic items: a radiation survey meter (such as a Geiger/Mueller counter), an alarming dosimeter or rate meter, a gas-charged dosimeter, and a film badge or thermoluminescent dosimeter (TLD). The easiest way to remember what each of these items does is to compare them to gauges on an automobile. The survey meter could be compared to the speedometer, as it measures the speed, or rate, at which radiation is being picked up. 
When properly calibrated, used, and maintained, it allows the radiographer to see the current exposure to radiation at the meter. It can usually be set for different intensities, and is used to prevent the radiographer from being overexposed to the radioactive source, as well as for verifying the boundary that radiographers are required to maintain around the exposed source during radiographic operations. The alarming dosimeter could be most closely compared with the tachometer, as it alarms when the radiographer "redlines" or is exposed to too much radiation. When properly calibrated, activated, and worn on the radiographer's person, it will emit an alarm when the meter measures a radiation level in excess of a preset threshold. This device is intended to prevent the radiographer from inadvertently walking up on an exposed source. The gas-charged dosimeter is like a trip meter in that it measures the total radiation received, but can be reset. It is designed to help the radiographer measure his/her total periodic dose of radiation. When properly calibrated, recharged, and worn on the radiographer's person, it can tell the radiographer at a glance how much radiation the device has been exposed to since it was last recharged. Radiographers in many states are required to log their radiation exposures and generate an exposure report. In many countries personal dosimeters are not required to be used by radiographers as the dose rates they show are not always correctly recorded. The film badge or TLD is more like a car's odometer. It is actually a specialized piece of radiographic film in a rugged container. It is meant to measure the radiographer's total exposure over time (usually a month) and is used by regulating authorities to monitor the total exposure of certified radiographers in a certain jurisdiction. At the end of the month, the film badge is turned in and is processed. A report of the radiographer's total dose is generated and is kept on file. When these safety devices are properly calibrated, maintained, and used, it is virtually impossible for a radiographer to be injured by a radioactive overexposure. The elimination of just one of these devices can jeopardize the safety of the radiographer and all those who are nearby. Without the survey meter, the radiation received may be just below the threshold of the rate alarm, and it may be several hours before the radiographer checks the dosimeter, and up to a month or more before the film badge is developed to detect a low-intensity overexposure. Without the rate alarm, one radiographer may inadvertently walk up on the source exposed by the other radiographer. Without the dosimeter, the radiographer may be unaware of an overexposure, or even a radiation burn, which may take weeks to result in noticeable injury. And without the film badge, the radiographer is deprived of an important tool designed to protect him or her from the effects of a long-term overexposure to occupationally obtained radiation, and thus may suffer long-term health problems as a result. There are three ways a radiographer will ensure they are not exposed to higher than required levels of radiation: time, distance, and shielding. The less time that a person is exposed to radiation, the lower their dose will be. The further a person is from a radioactive source, the lower the level of radiation they receive; this is largely due to the inverse square law, illustrated in the sketch below. 
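A minimal sketch of the distance and shielding points, combining inverse-square falloff with simple exponential attenuation; the attenuation coefficient and dose rates are illustrative, and buildup factors are ignored.

```python
import math

def dose_rate(distance_m, dose_rate_at_1m_msv_h, mu_per_cm=0.0, shield_cm=0.0):
    """Point-source dose rate: 1/r^2 geometric falloff times a simple
    exp(-mu * t) shielding factor (buildup ignored)."""
    geometric = dose_rate_at_1m_msv_h / distance_m ** 2
    return geometric * math.exp(-mu_per_cm * shield_cm)

# 50 mSv/h at 1 m, viewed from 10 m behind 5 cm of shielding with mu = 0.5/cm:
print(dose_rate(10.0, 50.0, mu_per_cm=0.5, shield_cm=5.0))  # ~0.04 mSv/h
```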
Lastly, the more a radioactive source is shielded, by either better or greater amounts of shielding, the lower the levels of radiation that will escape from the testing area. The most commonly used shielding materials are sand, lead (sheets or shot), steel, spent (non-radioactive) uranium, tungsten and, in suitable situations, water. Industrial radiography appears to have one of the worst safety profiles of the radiation professions, possibly because there are many operators using strong gamma sources (> 2 Ci) in remote sites with little supervision when compared with workers within the nuclear industry or within hospitals. Due to the levels of radiation present whilst they are working, many radiographers are also required to work late at night when there are few other people present, as most industrial radiography is carried out 'in the open' rather than in purpose-built exposure booths or rooms. Fatigue, carelessness and lack of proper training are the three most common factors attributed to industrial radiography accidents. Many of the "lost source" accidents commented on by the International Atomic Energy Agency involve radiography equipment. Lost source accidents have the potential to cause a considerable loss of human life. One scenario is that a passerby finds the radiography source and, not knowing what it is, takes it home. The person shortly afterwards becomes ill and dies as a result of the radiation dose. The source remains in their home where it continues to irradiate other members of the household. Such an event occurred in March 1984 in Casablanca, Morocco. This is related to the more famous Goiânia accident, where a related chain of events caused members of the public to be exposed to radiation sources. List of standards International Organization for Standardization (ISO) ISO 4993, Steel and iron castings – Radiographic inspection ISO 5579, Non-destructive testing – Radiographic examination of metallic materials by X- and gamma-rays – Basic rules ISO 10675-1, Non-destructive testing of welds – Acceptance levels for radiographic testing – Part 1: Steel, nickel, titanium and their alloys ISO 11699-1, Non-destructive testing – Industrial radiographic films – Part 1: Classification of film systems for industrial radiography ISO 11699-2, Non-destructive testing – Industrial radiographic films – Part 2: Control of film processing by means of reference values ISO 14096-1, Non-destructive testing – Qualification of radiographic film digitisation systems – Part 1: Definitions, quantitative measurements of image quality parameters, standard reference film and qualitative control ISO 14096-2, Non-destructive testing – Qualification of radiographic film digitisation systems – Part 2: Minimum requirements ISO 17636-1: Non-destructive testing of welds. Radiographic testing. X- and gamma-ray techniques with film ISO 17636-2: Non-destructive testing of welds. Radiographic testing. 
X- and gamma-ray techniques with digital detectors ISO 19232, Non-destructive testing – Image quality of radiographs European Committee for Standardization (CEN) EN 444, Non-destructive testing; general principles for the radiographic examination of metallic materials using X-rays and gamma-rays EN 462-1: Non-destructive testing – image quality of radiographs – Part 1: Image quality indicators (wire type) – determination of image quality value EN 462-2, Non-destructive testing – image quality of radiographs – Part 2: image quality indicators (step/hole type) determination of image quality value EN 462-3, Non-destructive testing – Image quality of radiographs – Part 3: Image quality classes for ferrous metals EN 462-4, Non-destructive testing – Image quality of radiographs – Part 4: Experimental evaluation of image quality values and image quality tables EN 462-5, Non-destructive testing – Image quality of radiographs – Part 5: Image quality of indicators (duplex wire type), determination of image unsharpness value EN 584-1, Non-destructive testing – Industrial radiographic film – Part 1: Classification of film systems for industrial radiography EN 584-2, Non-destructive testing – Industrial radiographic film – Part 2: Control of film processing by means of reference values EN 1330-3, Non-destructive testing – Terminology – Part 3: Terms used in industrial radiographic testing EN 2002–21, Aerospace series – Metallic materials; test methods – Part 21: Radiographic testing of castings EN 10246-10, Non-destructive testing of steel tubes – Part 10: Radiographic testing of the weld seam of automatic fusion arc welded steel tubes for the detection of imperfections EN 12517-1, Non-destructive testing of welds – Part 1: Evaluation of welded joints in steel, nickel, titanium and their alloys by radiography – Acceptance levels EN 12517-2, Non-destructive testing of welds – Part 2: Evaluation of welded joints in aluminium and its alloys by radiography – Acceptance levels EN 12679, Non-destructive testing – Determination of the size of industrial radiographic sources – Radiographic method EN 12681, Founding – Radiographic examination EN 13068, Non-destructive testing – Radioscopic testing EN 14096, Non-destructive testing – Qualification of radiographic film digitisation systems EN 14784-1, Non-destructive testing – Industrial computed radiography with storage phosphor imaging plates – Part 1: Classification of systems EN 14784-2, Non-destructive testing – Industrial computed radiography with storage phosphor imaging plates – Part 2: General principles for testing of metallic materials using X-rays and gamma rays ASTM International (ASTM) ASTM E 94, Standard Guide for Radiographic Examination ASTM E 155, Standard Reference Radiographs for Inspection of Aluminum and Magnesium Castings ASTM E 592, Standard Guide to Obtainable ASTM Equivalent Penetrameter Sensitivity for Radiography of Steel Plates 1/4 to 2 in. [6 to 51 mm] Thick with X Rays and 1 to 6 in.
[25 to 152 mm] Thick with Cobalt-60 ASTM E 747, Standard Practice for Design, Manufacture and Material Grouping Classification of Wire Image Quality Indicators (IQI) Used for Radiology ASTM E 801, Standard Practice for Controlling Quality of Radiological Examination of Electronic Devices ASTM E 1030, Standard Test Method for Radiographic Examination of Metallic Castings ASTM E 1032, Standard Test Method for Radiographic Examination of Weldments ASTM E 1161, Standard Practice for Radiologic Examination of Semiconductors and Electronic Components ASTM E 1648, Standard Reference Radiographs for Examination of Aluminum Fusion Welds ASTM E 1735, Standard Test Method for Determining Relative Image Quality of Industrial Radiographic Film Exposed to X-Radiation from 4 to 25 MeV ASTM E 1815, Standard Test Method for Classification of Film Systems for Industrial Radiography ASTM E 1817, Standard Practice for Controlling Quality of Radiological Examination by Using Representative Quality Indicators (RQIs) ASTM E 2104, Standard Practice for Radiographic Examination of Advanced Aero and Turbine Materials and Components American Society of Mechanical Engineers (ASME) BPVC Section V, Nondestructive Examination: Article 2 Radiographic Examination American Petroleum Institute (API) API 1104, Welding of Pipelines and Related Facilities: 11.1 Radiographic Test Methods See also Collimator Industrial computed tomography Medical radiography Notes References External links NIST's XAAMDI: X-Ray Attenuation and Absorption for Materials of Dosimetric Interest Database NIST's XCOM: Photon Cross Sections Database NIST's FAST: Attenuation and Scattering Tables List of incidents UN information on the security of industrial sources Nondestructive testing Radiography Casting (manufacturing) Welding
Industrial radiography
[ "Materials_science", "Engineering" ]
5,825
[ "Nondestructive testing", "Materials testing", "Welding", "Mechanical engineering" ]
6,857,112
https://en.wikipedia.org/wiki/Stanford%20Institute%20for%20Theoretical%20Physics
The Stanford Institute for Theoretical Physics (SITP) is a research institute within the Physics Department at Stanford University. Led by 16 physics faculty members, the institute conducts research in high energy and condensed matter theoretical physics. Research Research within SITP includes a strong focus on fundamental questions about the new physics underlying the Standard Models of particle physics and cosmology, and on the nature and applications of our basic frameworks (quantum field theory and string theory) for attacking these questions. Principal areas of research include: Biophysics Condensed matter theory Cosmology Formal theory Physics beyond the standard model "Precision frontiers" Quantum computing Quantum gravity Central questions include: What governs particle theory beyond the scale of electroweak symmetry breaking? How do string theory and holography resolve the basic puzzles of general relativity, including the deep issues arising in black hole physics and the study of cosmological horizons? Which class of models of inflationary cosmology captures the physics of the early universe, and what preceded inflation? Can physicists develop new techniques in quantum field theory and string theory to shed light on mysterious phases arising in many contexts in condensed matter physics (notably, in the high temperature superconductors)? Faculty Current faculty include: Savas Dimopoulos, theorist focusing on physics beyond the standard model; winner of Sakurai Prize Sebastian Doniach, condensed matter physicist Daniel Fisher, biophysicist Surya Ganguli, theoretical neuroscientist Peter Graham, winner of 2017 New Horizons Prize Sean Hartnoll, AdS/CFT, winner of New Horizons Prize Patrick Hayden, quantum information theorist Shamit Kachru, string theorist; Stanford Physics Department chair Renata Kallosh, noted string theorist Vedika Khemani, condensed matter theorist Steven Kivelson, condensed matter theorist Robert Laughlin, Nobel Laureate known for work on fractional quantum Hall effect Andrei Linde, cosmologist and winner of Breakthrough Prize in Fundamental Physics Xiaoliang Qi, quantum gravity and quantum information Srinivas Raghu, condensed matter theorist Leonardo Senatore, cosmologist and winner of New Horizons prize in physics Stephen Shenker, string theorist Eva Silverstein, cosmologist, string theorist, and recipient of MacArthur "Genius grant" award Douglas Stanford, quantum gravity theorist Leonard Susskind, string theorist known for string landscape; popular science book author References See also Institute for Theoretical Physics (disambiguation) Center for Theoretical Physics (disambiguation) External links Stanford Institute for Theoretical Physics Stanford Physics Department SLAC Theory Group Stanford University Theoretical physics institutes
Stanford Institute for Theoretical Physics
[ "Physics" ]
518
[ "Theoretical physics", "Theoretical physics institutes" ]
6,858,518
https://en.wikipedia.org/wiki/Integrated%20Ocean%20Observing%20System
The United States Integrated Ocean Observing System (U.S. IOOS) is a national-regional partnership of ocean observing systems that routinely and continuously provide quality-controlled data and observations of the oceans within the United States exclusive economic zone (EEZ) and Great Lakes. The U.S. Integrated Ocean Observing System program office is seated within the National Ocean Service of the National Oceanic and Atmospheric Administration. U.S. IOOS is a multidisciplinary system consisting of eleven Regional Associations that provide data in the forms and at the rates required by decision makers to address various societal needs, such as maritime safety, natural hazards, the blue economy, and human impacts on marine life. It is part of the UNESCO Intergovernmental Oceanographic Commission's Global Ocean Observing System efforts. Regional Associations The U.S. Integrated Ocean Observing System consists of eleven independent Regional Associations (RAs) that serve stakeholder needs within their respective regions. From a coastal perspective, the global ocean component is critical for providing data and information on basin-scale forcings (e.g., ENSO events), as well as providing the data and information necessary to run coastal models (such as storm surge models). Alaska Ocean Observing System (AOOS) Central and Northern California Ocean Observing System (CeNCOOS) Great Lakes Observing System (GLOS) Northeastern Regional Association of Coastal Ocean Observing Systems (NERACOOS) Gulf of Mexico Coastal Ocean Observing System (GCOOS) Pacific Islands Ocean Observing System (PacIOOS) Mid-Atlantic Coastal Ocean Observing Regional Association (MACOORA) Northwest Association of Networked Ocean Observing Systems (NANOOS) Southern California Coastal Ocean Observing System (SCCOOS) Southeast Coastal Ocean Observing Regional Association (SECOORA) Caribbean Integrated Ocean Observing System (CarICOOS) See also GOOS Global Earth Observing System of Systems (GEOSS) Ocean acoustic tomography Argo (oceanography) Alliance for Coastal Technologies Omnibus Public Land Management Act of 2009 (authorizing legislation for IOOS) References External links Monterey Accelerated Research System (MARS) IOOS Regional Associations Coastal Ocean Observing System Social & Economic Benefits of IOOS from "NOAA Socioeconomics" website initiative Rutgers University RU27 through the IOOS - Smithsonian Ocean Portal Oceanography Earth observation projects Oceanographic organizations
Integrated Ocean Observing System
[ "Physics", "Environmental_science" ]
461
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
5,220,019
https://en.wikipedia.org/wiki/Charge%20%28physics%29
In physics, a charge is any of many different quantities, such as the electric charge in electromagnetism or the color charge in quantum chromodynamics. Charges correspond to the time-invariant generators of a symmetry group, and specifically, to the generators that commute with the Hamiltonian. Charges are often denoted by Q, and so the invariance of the charge corresponds to the vanishing commutator [Q, H] = 0, where H is the Hamiltonian. Thus, charges are associated with conserved quantum numbers; these are the eigenvalues of the generator Q. A "charge" can also refer to a point-shaped object with an electric charge and a position, such as in the method of image charges. Abstract definition Abstractly, a charge is any generator of a continuous symmetry of the physical system under study. When a physical system has a symmetry of some sort, Noether's theorem implies the existence of a conserved current. The thing that "flows" in the current is the "charge"; the charge is the generator of the (local) symmetry group. This charge is sometimes called the Noether charge. Thus, for example, the electric charge is the generator of the U(1) symmetry of electromagnetism. The conserved current is the electric current. In the case of local, dynamical symmetries, associated with every charge is a gauge field; when quantized, the gauge field becomes a gauge boson. The charges of the theory "radiate" the gauge field. Thus, for example, the gauge field of electromagnetism is the electromagnetic field; and the gauge boson is the photon. The word "charge" is often used as a synonym for both the generator of a symmetry, and the conserved quantum number (eigenvalue) of the generator. Thus, letting the upper-case letter Q refer to the generator, one has that the generator commutes with the Hamiltonian: [Q, H] = 0. Commutation implies that the eigenvalues q (lower-case) are time-invariant: dq/dt = 0. So, for example, when the symmetry group is a Lie group, then the charge operators correspond to the simple roots of the root system of the Lie algebra; the discreteness of the root system accounting for the quantization of the charge. The simple roots are used, as all the other roots can be obtained as linear combinations of these. The general roots are often called raising and lowering operators, or ladder operators. The charge quantum numbers then correspond to the weights of the highest-weight modules of a given representation of the Lie algebra. So, for example, when a particle in a quantum field theory belongs to a symmetry, then it transforms according to a particular representation of that symmetry; the charge quantum number is then the weight of the representation. Examples Various charge quantum numbers have been introduced by theories of particle physics. These include the charges of the Standard Model: The color charge of quarks. The color charge generates the SU(3) color symmetry of quantum chromodynamics. The weak isospin quantum numbers of the electroweak interaction. It generates the SU(2) part of the electroweak SU(2) × U(1) symmetry. Weak isospin is a local symmetry, whose gauge bosons are the W and Z bosons. The electric charge for electromagnetic interactions. In mathematics texts, this is sometimes referred to as the charge of a Lie algebra module. Note that these charge quantum numbers show up in the Lagrangian via the gauge covariant derivative. Charges of approximate symmetries: The strong isospin charges. The symmetry group is the SU(2) flavor symmetry; the gauge bosons are the pions.
The pions are not elementary particles, and the symmetry is only approximate. It is a special case of flavor symmetry. Other quark-flavor charges, such as strangeness or charm. Together with the u–d isospin mentioned above, these generate the global SU(6) flavor symmetry of the fundamental particles; this symmetry is badly broken by the masses of the heavy quarks. Charges include the hypercharge, the X-charge and the weak hypercharge. Hypothetical charges of extensions to the Standard Model: The hypothetical magnetic charge is another charge in the theory of electromagnetism. Magnetic charges are not seen experimentally in laboratory experiments, but would be present for theories including magnetic monopoles. In supersymmetry: The supercharge refers to the generator that rotates the fermions into bosons, and vice versa, in the supersymmetry. In conformal field theory: The central charge of the Virasoro algebra, sometimes referred to as the conformal central charge or the conformal anomaly. Here, the term 'central' is used in the sense of the center in group theory: it is an operator that commutes with all the other operators in the algebra. The central charge is the eigenvalue of the central generator of the algebra; here, it is the energy–momentum tensor of the two-dimensional conformal field theory. In gravitation: Eigenvalues of the energy–momentum tensor correspond to physical mass. Charge conjugation In the formalism of particle theories, charge-like quantum numbers can sometimes be inverted by means of a charge conjugation operator called C. Charge conjugation simply means that a given symmetry group occurs in two inequivalent (but still isomorphic) group representations. It is usually the case that the two charge-conjugate representations are complex conjugate fundamental representations of the Lie group. Their product then forms the adjoint representation of the group. Thus, a common example is that the product of two charge-conjugate fundamental representations of SL(2,C) (the spinors) forms the adjoint rep of the Lorentz group SO(3,1); abstractly, one writes 2 ⊗ 2 = 3 ⊕ 1. That is, the product of two (Lorentz) spinors is a (Lorentz) vector and a (Lorentz) scalar. Note that the complex Lie algebra sl(2,C) has a compact real form su(2) (in fact, all Lie algebras have a unique compact real form). The same decomposition holds for the compact form as well: the product of two spinors in su(2) being a vector in the rotation group O(3) and a singlet. The decomposition is given by the Clebsch–Gordan coefficients. A similar phenomenon occurs in the compact group SU(3), where there are two charge-conjugate but inequivalent fundamental representations, dubbed 3 and 3̄, the number 3 denoting the dimension of the representation, and with the quarks transforming under the 3 and the antiquarks transforming under the 3̄. The Kronecker product of the two gives 3 ⊗ 3̄ = 8 ⊕ 1. That is, an eight-dimensional representation, the octet of the eight-fold way, and a singlet. The decomposition of such products of representations into direct sums of irreducible representations can in general be written as A ⊗ B = ⊕ᵢ nᵢ Cᵢ for representations A, B and Cᵢ. The dimensions of the representations obey the "dimension sum rule": dim(A) · dim(B) = Σᵢ nᵢ · dim(Cᵢ). Here, dim(A) is the dimension of the representation A, and the integers nᵢ are the Littlewood–Richardson coefficients. The decomposition of the representations is again given by the Clebsch–Gordan coefficients, this time in the general Lie-algebra setting. See also Casimir operator References Electromagnetism Quantum chromodynamics Physical quantities
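As a small illustration of the decomposition and the dimension sum rule above, the following Python sketch works out the Clebsch–Gordan series for SU(2), where every Littlewood–Richardson coefficient equals one; the function names are illustrative.

```python
# Sketch: decompose the tensor product of two SU(2) irreps into irreducibles
# and check the dimension sum rule dim(A)*dim(B) = sum_i n_i dim(C_i).
# SU(2) irreps are labeled by spin j with dim = 2j + 1; each spin in the
# Clebsch-Gordan series appears exactly once (n_i = 1).

from fractions import Fraction

def decompose(j1: Fraction, j2: Fraction) -> list[Fraction]:
    """Clebsch-Gordan series: j1 (x) j2 = |j1-j2| (+) ... (+) (j1+j2)."""
    lo, hi = abs(j1 - j2), j1 + j2
    out, j = [], lo
    while j <= hi:
        out.append(j)
        j += 1
    return out

def dim(j: Fraction) -> int:
    return int(2 * j + 1)

# Two spinors (j = 1/2): 2 (x) 2 = 3 (+) 1, a vector plus a scalar.
half = Fraction(1, 2)
parts = decompose(half, half)
assert [dim(j) for j in parts] == [1, 3]
assert sum(dim(j) for j in parts) == dim(half) * dim(half)
print("2 x 2 ->", sorted((dim(j) for j in parts), reverse=True))
```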
Charge (physics)
[ "Physics", "Mathematics" ]
1,528
[ "Electromagnetism", "Physical phenomena", "Physical quantities", "Quantity", "Fundamental interactions", "Physical properties" ]
5,220,510
https://en.wikipedia.org/wiki/List%20of%20centroids
The following is a list of centroids of various two-dimensional and three-dimensional objects. The centroid of an object X in n-dimensional space is the intersection of all hyperplanes that divide X into two parts of equal moment about the hyperplane. Informally, it is the "average" of all points of X. For an object of uniform composition, or in other words one that has the same density at all points, the centroid of a body is also its center of mass. In the case of two-dimensional objects shown below, the hyperplanes are simply lines. 2-D Centroids For each two-dimensional shape below, the area and the centroid coordinates are given: Where the centroid coordinates are marked as zero, the coordinates are at the origin, and the equations to get those points are the lengths of the included axes divided by two, in order to reach the center, which in these cases is the origin and thus zero. 3-D Centroids For each three-dimensional body below, the volume and the centroid coordinates are given: See also List of moments of inertia List of second moments of area References External links http://www.engineering.com/Library/ArticlesPage/tabid/85/articleType/ArticleView/articleId/109/Centroids-of-Common-Shapes.aspx Mechanics Physics-related lists Geometric centers
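As an illustration of the centroid as the "average of all points", the following Python sketch estimates the centroid of a uniform semicircular plate by Monte Carlo sampling and compares it with the standard analytic value y = 4r/(3π); the sample count is an arbitrary choice.

```python
# Sketch: approximate the centroid of a uniform half-disc of radius r by
# averaging random points inside it. The analytic comparison value,
# y_c = 4r/(3*pi), is a standard result, not quoted from this article.

import math
import random

def semicircle_centroid(r: float, samples: int = 200_000) -> tuple[float, float]:
    xs = ys = n = 0.0
    for _ in range(samples):
        x = random.uniform(-r, r)
        y = random.uniform(0.0, r)
        if x * x + y * y <= r * r:   # keep points inside the half-disc
            xs += x
            ys += y
            n += 1
    return xs / n, ys / n

x_c, y_c = semicircle_centroid(1.0)
print(f"numerical: ({x_c:+.3f}, {y_c:.3f});  analytic y_c = {4 / (3 * math.pi):.3f}")
```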
List of centroids
[ "Physics", "Mathematics", "Engineering" ]
281
[ "Point (geometry)", "Geometric centers", "Mechanics", "Mechanical engineering", "Symmetry" ]
5,221,266
https://en.wikipedia.org/wiki/Gauged%20supergravity
Gauged supergravity is a supergravity theory in which some R-symmetry is gauged such that the gravitinos (superpartners of the graviton) are charged with respect to the gauge fields. Consistency of the supersymmetry transformations often requires the presence of a potential for the scalar fields of the theory, or a cosmological constant if the theory contains no scalar degrees of freedom. The gauged supergravity often has anti-de Sitter space as a supersymmetric vacuum. A notable exception is a six-dimensional N=(1,0) gauged supergravity. "Gauged supergravity" in this sense should be contrasted with Yang–Mills–Einstein supergravity, in which some other would-be global symmetries of the theory are gauged and fields other than the gravitinos are charged with respect to the gauge fields. See also Velo–Zwanziger problem Supersymmetry Theory of relativity Quantum gravity
Gauged supergravity
[ "Physics" ]
209
[ "Unsolved problems in physics", "Quantum gravity", "Relativity stubs", "Physics beyond the Standard Model", "Theory of relativity", "Supersymmetry", "Symmetry" ]
5,221,850
https://en.wikipedia.org/wiki/B-cell%20receptor
The B-cell receptor (BCR) is a transmembrane protein on the surface of a B cell. A B-cell receptor is composed of a membrane-bound immunoglobulin molecule and a signal transduction moiety. The former forms a type 1 transmembrane receptor protein, and is typically located on the outer surface of these lymphocyte cells. Through biochemical signaling and by physically acquiring antigens from the immune synapses, the BCR controls the activation of the B cell. B cells are able to gather and grab antigens by engaging biochemical modules for receptor clustering, cell spreading, generation of pulling forces, and receptor transport, which eventually culminates in endocytosis and antigen presentation. B cells' mechanical activity adheres to a pattern of negative and positive feedbacks that regulate the quantity of removed antigen by directly manipulating the dynamics of BCR–antigen bonds. In particular, clustering and spreading increase the engagement of antigen with the BCR, thereby providing sensitivity and amplification. On the other hand, pulling forces detach the antigen from the BCR, thus testing the quality of antigen binding. The receptor's binding moiety is composed of a membrane-bound antibody that, like all antibodies, has two identical paratopes that are unique and randomly determined. The BCR for an antigen is a significant sensor that is required for B cell activation, survival, and development. A B cell is activated by its first encounter with an antigen (its "cognate antigen") that binds to its receptor, resulting in cell proliferation and differentiation to generate a population of antibody-secreting plasma B cells and memory B cells. The B cell receptor (BCR) has two crucial functions upon interaction with the antigen. One function is signal transduction, involving changes in receptor oligomerization. The second function is to mediate internalization for subsequent processing of the antigen and presentation of peptides to helper T cells. Development and structure of the B cell receptor The first checkpoint in the development of a B cell is the production of a functional pre-BCR, which is composed of two surrogate light chains and two immunoglobulin heavy chains, which are normally linked to Ig-α (or CD79A) and Ig-β (or CD79B) signaling molecules. Each B cell, produced in the bone marrow, is highly specific to an antigen. The BCR can be found in a number of identical copies of membrane proteins that are exposed at the cell surface. The B-cell receptor is composed of two parts: A membrane-bound immunoglobulin molecule of one isotype (IgD, IgM, IgA, IgG, or IgE). With the exception of the presence of a transmembrane alpha-helix, these are identical to their secreted forms. Signal transduction moiety: a heterodimer called Ig-α/Ig-β (CD79), bound together by disulfide bridges. Each member of the dimer spans the plasma membrane and has a cytoplasmic tail bearing an immunoreceptor tyrosine-based activation motif (ITAM). More analytically, the BCR complex consists of an antigen-binding subunit known as the membrane immunoglobulin (mIg), which is composed of two immunoglobulin light chains (IgLs) and two immunoglobulin heavy chains (IgHs), as well as two heterodimer subunits of Ig-α and Ig-β. In order for membrane mIgM molecules to transport to the surface of the cell, there must be a combination of Ig-α and Ig-β with the mIgM molecules. Pre-B cells that do not generate any Ig molecule normally carry both Ig-α and Ig-β to the cell surface.
Heterodimers may exist in B cells either in association with other pre-B cell-specific proteins or alone, thereby replacing the mIgM molecule. Within the BCR, the part that recognizes antigens is composed of three distinct genetic regions, referred to as V, D, and J. All these regions are recombined and spliced at the genetic level in a combinatorial process that is exceptional to the immune system. There are a number of genes that encode each of these regions in the genome, and these can be joined in various ways to generate a wide range of receptor molecules. The production of this variety is crucial since the body may encounter many more antigens than the available genes. Through this process, the body finds a way of producing multiple different combinations of antigen-recognizing receptor molecules. Heavy chain rearrangement of the BCR entails the initial steps in the development of B cells. The short JH (joining) and DH (diversity) regions are recombined first in early pro-B cells in a process that is dependent on the enzymes RAG2 and RAG1. After the recombination of the D and J regions, the cell is now referred to as a “late pro-B” cell and the short DJ region can now be recombined with a longer segment of the VH gene. BCRs have distinctive binding sites that rely on the complementarity of the surface of the epitope and the surface of the receptor, which often occurs by non-covalent forces. Mature B cells can only survive in the peripheral circulation for a limited time when there is no specific antigen. This is because when cells do not meet any antigen within this time, they will go through apoptosis. It is notable that in the peripheral circulation, apoptosis is important in maintaining an optimal circulation of B-lymphocytes. In structure, the BCR for antigens is almost identical to a secreted antibody. However, there is a distinctive structural dissimilarity in the C-terminal area of the heavy chains, as it consists of a short hydrophobic stretch that spans the lipid bilayer of the membrane. Signaling pathways of the B cell receptor There are several signaling pathways through which the B-cell receptor can act. The physiology of B cells is intimately connected with the function of their B-cell receptor. The BCR signaling pathway is initiated when the mIg subunits of the BCR bind a specific antigen. The initial triggering of the BCR is similar for all receptors of the non-catalytic tyrosine-phosphorylated receptor family. The binding event allows phosphorylation of immunoreceptor tyrosine-based activation motifs (ITAMs) in the associated Igα/Igβ heterodimer subunits by the tyrosine kinases of the Src family, including Blk, Lyn, and Fyn. Multiple models have been proposed for how BCR-antigen binding induces phosphorylation, including conformational change of the receptor and aggregation of multiple receptors upon antigen binding. Tyrosine kinase Syk binds to and is activated by phosphorylated ITAMs and in turn phosphorylates scaffold protein BLNK on multiple sites. After phosphorylation, downstream signalling molecules are recruited to BLNK, which results in their activation and the transduction of the signal to the interior. IKK/NF-κB Transcription Factor Pathway: CD79 and other proteins of the microsignalosomes activate PLC-γ after antigen recognition by the BCR and before the receptor associates into the c-SMAC. PLC-γ then cleaves PIP2 into IP3 and DAG (diacylglycerol).
IP3 acts as a second messenger to dramatically increase ionic calcium inside the cytosol (via release from the endoplasmic reticulum or influx from the extracellular environment via ion channels). This leads to the eventual activation of PKCβ by the calcium and DAG. PKCβ phosphorylates (either directly or indirectly) the NF-κB signaling complex protein CARMA1 (the complex itself comprising CARMA1, BCL10, and MALT1). These events result in the recruitment of TAK1, the kinase that phosphorylates IKK (IκB kinase), by several ubiquitylation enzymes also associated with the CARMA1/BCL10/MALT1 complex. MALT1 itself is a caspase-like protein that cleaves A20, an inhibitory protein of NF-κB signaling (which acts by deubiquitylating NF-κB's ubiquitylation substrates). TAK1 phosphorylates the IKK trimer after it too has been recruited to the signaling complex by its associated ubiquitylation enzymes. IKK then phosphorylates IκB (an inhibitor bound to NF-κB), marking it for proteolytic degradation and freeing cytosolic NF-κB. NF-κB then migrates to the nucleus to bind to DNA at specific response elements, causing recruitment of transcription molecules and beginning the transcription process. Ligand binding to the BCR also leads to the phosphorylation of the protein BCAP. This leads to the binding and activation of several proteins with phosphotyrosine-binding SH2 domains. One of these proteins is PI3K. Activation of PI3K leads to PIP2 phosphorylation, forming PIP3. Proteins with PH (Pleckstrin homology) domains can bind to the newly created PIP3 and become activated. These include proteins of the FoxO family, which stimulate cell cycle progression, and protein kinase D, which enhances glucose metabolism. Another important protein with a PH domain is Bam32. This recruits and activates small GTPases such as Rac1 and Cdc42. These, in turn, are responsible for the cytoskeletal changes associated with BCR activation by modifying actin polymerisation. The B-cell receptor in malignancy The B-cell receptor has been shown to be involved in the pathogenesis of various B cell-derived lymphoid cancers. Although it may be possible that stimulation by antigen binding contributes to the proliferation of malignant B cells, increasing evidence implicates antigen-independent self-association of BCRs as a key feature in a growing number of B cell neoplasias. B cell receptor signalling is currently a therapeutic target in various lymphoid neoplasms. It has been shown that BCR signaling is synchronised with CD40 pathway activation provided by B-T cell interactions, and this seems to be essential to trigger proliferation of leukemic B cells. See also Co-stimulation T-cell receptor IMGT References External links Lymphocytes Receptors Immune receptors
B-cell receptor
[ "Chemistry" ]
2,274
[ "Receptors", "Signal transduction" ]
5,222,005
https://en.wikipedia.org/wiki/Dipleidoscope
A dipleidoscope is an instrument used to determine true noon; its name comes from the Greek for double image viewer. It consists of a small telescope and a prism that creates a double image of the sun. When the two images overlap, it is local true noon. The instrument is capable of determining true noon to within ten seconds. The dipleidoscope was invented by Giovanni Battista Amici in the first half of the 19th century. Edward John Dent, a chronometer and clockmaker in London, was working in the 1830s on a simple contrivance that would allow the public to set clocks correctly based on the transit of the sun (more complex and expensive transit telescopes had been developed by Ole Rømer in 1690). By 1840, he felt he had come to a suitable design using shadows; however, when he communicated his ideas to J.M. Bloxam (a barrister), he found that Bloxam had also been working on his own design, one using reflections, which Dent felt was superior. The two formed a partnership and worked together on the device, and after a further two years of work they finalised the design and patented it (GB Patent 9793 of 1843), with Dent manufacturing and selling it as Dent's Dipleidoscope. The instrument could use the moon as well as the sun, and when correctly calibrated and aligned its error was said to be less than a second. Dent exhibited the device at the Great Exhibition of 1851. After Edward Dent died in 1853, his son Frederick William Dent took over manufacture. The significance of this device relates in part to the development of the railways, when an absolute knowledge of the time became more important, whereas previously it was often sufficient that an entire rural community would use the parish clock, and this would periodically be set by 'the announcement of the guard of the mail coach' or similar. The instrument came with a detailed instruction booklet, which had a substantial section on correcting local time to Greenwich Mean Time (as used by the railways). References External links A dipleidoscope of the National Observatory of Athens Optical instruments Clocks Italian inventions
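To illustrate the kind of correction such instruction booklets addressed, here is a hedged Python sketch estimating the clock time of true noon from longitude and the equation of time; the approximation formula is a common textbook one, and the example location and date are illustrative, not taken from this article.

```python
# Sketch: estimating the clock time of true (solar) noon, the event a
# dipleidoscope observes, using a standard approximation of the equation
# of time. Day-of-year, longitude, and UTC offset below are illustrative.

import math

def equation_of_time(day_of_year: int) -> float:
    """Approximate equation of time in minutes (positive = sundial fast)."""
    b = math.radians(360.0 * (day_of_year - 81) / 365.0)
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

def solar_noon(day_of_year: int, longitude_deg_east: float, utc_offset_h: float) -> str:
    lstm = 15.0 * utc_offset_h                 # local standard time meridian
    minutes = 720 + 4.0 * (lstm - longitude_deg_east) - equation_of_time(day_of_year)
    return f"{int(minutes // 60):02d}:{int(minutes % 60):02d} local clock time"

# Greenwich (0 deg E, UTC+0) in early November, when the sun runs fast:
print(solar_noon(day_of_year=308, longitude_deg_east=0.0, utc_offset_h=0.0))
```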
Dipleidoscope
[ "Physics", "Technology", "Engineering" ]
431
[ "Physical systems", "Machines", "Clocks", "Measuring instruments" ]
5,222,945
https://en.wikipedia.org/wiki/Polynomial%20lemniscate
In mathematics, a polynomial lemniscate or polynomial level curve is a plane algebraic curve of degree 2n, constructed from a polynomial p with complex coefficients of degree n. For any such polynomial p and positive real number c, we may define a set of complex numbers by {z ∈ C : |p(z)| = c}. This set of numbers may be equated to points in the real Cartesian plane, leading to an algebraic curve ƒ(x, y) = c² of degree 2n, which results from expanding out |p(z)|² = c² in terms of z = x + iy. When p is a polynomial of degree 1 then the resulting curve is simply a circle whose center is the zero of p. When p is a polynomial of degree 2 then the curve is a Cassini oval. Erdős lemniscate A conjecture of Erdős which has attracted considerable interest concerns the maximum length of a polynomial lemniscate ƒ(x, y) = 1 of degree 2n when p is monic, which Erdős conjectured was attained when p(z) = zⁿ − 1. This is still not proved, but Fryntov and Nazarov proved that p gives a local maximum. In the case when n = 2, the Erdős lemniscate is the Lemniscate of Bernoulli, and it has been proven that this is indeed the maximal length in degree four. The Erdős lemniscate has three ordinary n-fold points, one of which is at the origin, and a genus of (n − 1)(n − 2)/2. By inverting the Erdős lemniscate in the unit circle, one obtains a nonsingular curve of degree n. Generic polynomial lemniscate In general, a polynomial lemniscate will not touch at the origin, and will have only two ordinary n-fold singularities, and hence a genus of (n − 1)². As a real curve, it can have a number of disconnected components. Hence, it will not look like a lemniscate, making the name something of a misnomer. An interesting example of such polynomial lemniscates are the Mandelbrot curves. If we set p₀ = z and pₙ = pₙ₋₁² + z, then the corresponding polynomial lemniscates Mₙ defined by |pₙ(z)| = 2 converge to the boundary of the Mandelbrot set. The Mandelbrot curves are of degree 2ⁿ⁺¹. Notes References Alexandre Eremenko and Walter Hayman, On the length of lemniscates, Michigan Math. J., (1999), 46, no. 2, 409–415 O. S. Kusnetzova and V. G. Tkachev, Length functions of lemniscates, Manuscripta Math., (2003), 112, 519–538 Plane curves Algebraic curves
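The defining condition |p(z)| = c is easy to explore numerically. The following Python sketch, with arbitrary grid bounds and resolution, marks the region |p(z)| ≤ c for the Erdős lemniscate with n = 3; its boundary is the lemniscate itself.

```python
# Sketch: sampling the region |p(z)| <= c on a grid, here for the Erdos
# lemniscate p(z) = z^3 - 1 with c = 1. Grid bounds and resolution are
# arbitrary choices; the curve is the boundary of the marked region.

import numpy as np

def lemniscate_mask(p, c: float, lim: float = 1.6, res: int = 400) -> np.ndarray:
    """Boolean grid marking the filled region |p(z)| <= c."""
    xs = np.linspace(-lim, lim, res)
    x, y = np.meshgrid(xs, xs)
    z = x + 1j * y
    return np.abs(p(z)) <= c

mask = lemniscate_mask(lambda z: z**3 - 1, c=1.0)

# The filled region splits into lobes around the three cube roots of unity;
# a quick area estimate from the grid cell size:
cell = (2 * 1.6 / 400) ** 2
print("approx. enclosed area:", mask.sum() * cell)
```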
Polynomial lemniscate
[ "Mathematics" ]
598
[ "Planes (geometry)", "Euclidean plane geometry", "Plane curves" ]
5,224,898
https://en.wikipedia.org/wiki/Grundy%20number
In graph theory, the Grundy number or Grundy chromatic number of an undirected graph is the maximum number of colors that can be used by a greedy coloring strategy that considers the vertices of the graph in sequence and assigns each vertex its first available color, using a vertex ordering chosen to use as many colors as possible. Grundy numbers are named after P. M. Grundy, who studied an analogous concept for directed graphs in 1939. The undirected version was introduced by Christen and Selkow in 1979. Examples The path graph with four vertices provides the simplest example of a graph whose chromatic number differs from its Grundy number. This graph can be colored with two colors, but its Grundy number is three: if the two endpoints of the path are colored first, the greedy coloring algorithm will use three colors for the whole graph. The complete bipartite graphs are the only connected graphs whose Grundy number is two. All other connected graphs contain either a triangle or a four-vertex path, which cause the Grundy number to be at least three. The crown graphs are obtained from complete bipartite graphs Kn,n by removing a perfect matching. As a result, for each vertex on one side of the bipartition, there is exactly one vertex on the opposite side of the bipartition that it is not adjacent to. As bipartite graphs, they can be colored with two colors, but their Grundy number is n: if a greedy coloring algorithm considers each matched pair of vertices in order, each pair will receive a different color. As this example shows, the Grundy number can be larger than the chromatic number by a factor linear in the number of graph vertices. Atoms Zaker (2006) defines a sequence of graphs called k-atoms, with the property that a graph has Grundy number at least k if and only if it contains a k-atom. Each k-atom is formed from an independent set and a (k − 1)-atom, by adding one edge from each vertex of the (k − 1)-atom to a vertex of the independent set, in such a way that each member of the independent set has at least one edge incident to it. A Grundy coloring of a k-atom can be obtained by coloring the independent set first with the smallest-numbered color, and then coloring the remaining (k − 1)-atom with an additional k − 1 colors. For instance, the only 1-atom is a single vertex, and the only 2-atom is a single edge, but there are two possible 3-atoms: a triangle and a four-vertex path. In sparse graphs For a graph with n vertices and degeneracy d, the Grundy number is O(d log n). In particular, for graphs of bounded degeneracy (such as planar graphs) or graphs for which the chromatic number and degeneracy are bounded within constant factors of each other (such as chordal graphs) the Grundy number and chromatic number are within a logarithmic factor of each other. For interval graphs, the chromatic number and Grundy number are within a factor of 8 of each other. Computational complexity Testing whether the Grundy number of a given graph is at least k, for a fixed constant k, can be performed in polynomial time, by searching for all possible k-atoms that might be subgraphs of the given graph. However, this algorithm is not fixed-parameter tractable, because the exponent in its running time depends on k. When k is an input variable rather than a parameter, the problem is NP-complete. The Grundy number is at most one plus the maximum degree of the graph, and it remains NP-complete to test whether it equals one plus the maximum degree. There exists a constant c > 1 such that it is NP-hard under randomized reductions to approximate the Grundy number to within an approximation ratio better than c.
There is an exact algorithm for the Grundy number that runs in exponential time. For trees and graphs of bounded treewidth, the Grundy number may be unboundedly large. Nevertheless, the Grundy number can be computed in polynomial time for trees, and is fixed-parameter tractable when parameterized by both the treewidth and the Grundy number, although (assuming the exponential time hypothesis) the dependence on treewidth must be greater than singly exponential. When parameterized only by the Grundy number, it can be computed in fixed-parameter tractable time for chordal graphs and claw-free graphs, and also (using general results on subgraph isomorphism in sparse graphs to search for atoms) for graphs of bounded expansion. However, on general graphs the problem is W[1]-hard when parameterized by the Grundy number. Well-colored graphs A graph is called well-colored if its Grundy number equals its chromatic number. Testing whether a graph is well-colored is coNP-complete. The hereditarily well-colored graphs (graphs for which every induced subgraph is well-colored) are exactly the cographs, the graphs that do not have a four-vertex path as an induced subgraph. References Graph coloring Graph invariants NP-complete problems
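To make the greedy-coloring definition concrete, here is a short Python sketch reproducing the four-vertex path example from the text: the same graph uses two colors under one vertex order and three under another. The graph encoding and function name are illustrative.

```python
# Sketch: first-fit (greedy) coloring under a chosen vertex order. The
# four-vertex path P4 uses three colors when the endpoints are colored
# first, although its chromatic number is two.

def greedy_coloring(adj: dict, order: list) -> dict:
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:          # first available color
            c += 1
        color[v] = c
    return color

# P4: a - b - c - d
p4 = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

print(greedy_coloring(p4, ["a", "b", "c", "d"]))  # uses 2 colors
print(greedy_coloring(p4, ["a", "d", "b", "c"]))  # uses 3: endpoints first
```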
Grundy number
[ "Mathematics" ]
1,054
[ "Graph coloring", "Graph theory", "Computational problems", "Graph invariants", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
31,591,503
https://en.wikipedia.org/wiki/Surface%20acoustic%20wave%20sensor
Surface acoustic wave sensors are a class of microelectromechanical systems (MEMS) which rely on the modulation of surface acoustic waves to sense a physical phenomenon. The sensor transduces an input electrical signal into a mechanical wave which, unlike an electrical signal, can be easily influenced by physical phenomena. The device then transduces this wave back into an electrical signal. Changes in amplitude, phase, frequency, or time-delay between the input and output electrical signals can be used to measure the presence of the desired phenomenon. Device Layout The basic surface acoustic wave device consists of a piezoelectric substrate with an input interdigitated transducer (IDT) on one side of the surface of the substrate, and an output IDT on the other side of the substrate. The space between the IDTs across which the surface acoustic wave propagates is known as the delay line; the signal produced by the input IDT, a physical wave, moves much slower than its associated electromagnetic form, causing a measurable delay. Device Operation Surface acoustic wave technology takes advantage of the piezoelectric effect in its operation. Most modern surface acoustic wave sensors use an input interdigitated transducer (IDT) to convert an electrical signal into an acoustic wave. The sinusoidal electrical input signal creates alternating polarity between the fingers of the interdigitated transducer. Between two adjacent sets of fingers, the polarity of the fingers will be switched (e.g. + - +). As a result, the direction of the electric field between two fingers will alternate between adjacent sets of fingers. This creates alternating regions of tensile and compressive strain between fingers of the electrode by the piezoelectric effect, producing a mechanical wave at the surface known as a surface acoustic wave. As fingers on the same side of the device will be at the same level of compression or tension, the space between them, known as the pitch, is the wavelength of the mechanical wave. We can express the synchronous frequency f0 of the device with phase velocity vp and pitch p as f0 = vp / p. The synchronous frequency is the natural frequency at which mechanical waves should propagate. Ideally, the input electric signal should be at the synchronous frequency to minimize insertion loss. As the mechanical wave will propagate in both directions from the input IDT, half of the energy of the waveform will propagate across the delay line in the direction of the output IDT. In some devices, a mechanical absorber or reflector is added between the IDTs and the edges of the substrate to prevent interference patterns or reduce insertion losses, respectively. The acoustic wave travels across the surface of the device substrate to the other interdigitated transducer, converting the wave back into an electric signal by the piezoelectric effect. Any changes that were made to the mechanical wave will be reflected in the output electric signal. As the characteristics of the surface acoustic wave can be modified by changes in the surface properties of the device substrate, sensors can be designed to quantify any phenomenon which alters these properties. Typically, this is accomplished by the addition of mass to the surface or changing the length of the substrate and the spacing between the fingers. Inherent Functionality The structure of the basic surface acoustic wave sensor allows for the phenomena of pressure, strain, torque, temperature, and mass to be sensed.
The mechanisms for this are discussed below: Pressure, Strain, Torque, Temperature The phenomena of pressure, strain, torque, temperature, and mass can be sensed by the basic device, consisting of two IDTs separated by some distance on the surface of a piezoelectric substrate. These phenomena can all cause a change in length along the surface of the device. A change in length will affect both the spacing between the interdigitated electrodes, altering the pitch, and the spacing between IDTs, altering the delay. This can be sensed as a phase-shift, frequency-shift, or time-delay in the output electrical signal. The fundamental measurement of a surface acoustic wave sensor is typically strain. When a diaphragm is placed between the environment at a variable pressure and a reference cavity at a fixed pressure, the diaphragm will bend in response to a pressure differential. As the diaphragm bends, the distance along the surface in compression will increase. A surface acoustic wave pressure sensor either replaces the diaphragm with a piezoelectric substrate patterned with interdigitated electrodes or connects a larger diaphragm to the substrate in order to create a measurable strain in the surface acoustic wave device. When measuring torque, the principal surface strain of the shaft in the rotating direction is measured, as its application to the sensor causes a deformation of the piezoelectric substrate. A surface acoustic wave temperature sensor can be fashioned from a piezoelectric substrate with a relatively high coefficient of thermal expansion in the direction of the length of the device. Temperature sensing and strain sensing can be combined into a single device in order to deliver temperature compensation of the sensing system. Due to the ability of surface acoustic wave sensors to operate within electromagnetically noisy environments and in close proximity to magnets, it has been found that they can be embedded into electric motors in order to improve control by providing active torque and temperature measurement of the machine rotor shaft. They have also been applied to robotic control systems in order to provide dynamic torque feedback in robot movement, reducing jitter. Mass The accumulation of mass on the surface of an acoustic wave sensor will affect the surface acoustic wave as it travels across the delay line. The velocity v of a wave traveling through a solid is proportional to the square root of the ratio of the Young's modulus E to the density of the material. Therefore, the wave velocity will decrease with added mass. This change can be measured by a change in time-delay or phase-shift between input and output signals. Signal attenuation could be measured as well, as the coupling with the additional surface mass will reduce the wave energy. In the case of mass-sensing, as the change in the signal will always be due to an increase in mass from a reference signal of zero additional mass, signal attenuation can be effectively used. Extended Functionality The inherent functionality of a surface acoustic wave sensor can be extended by the deposition of a thin film of material across the delay line which is sensitive to the physical phenomena of interest. If a physical phenomenon causes a change in length or mass in the deposited thin film, the surface acoustic wave will be affected by the mechanisms mentioned above.
Some extended functionality examples are listed below: Chemical Vapors Chemical vapor sensors use a thin polymer film applied across the delay line which selectively absorbs the gas or gases of interest. An array of such sensors with different polymeric coatings can be used to sense a large range of gases on a single sensor with resolution down to parts per trillion, allowing for the creation of a sensitive "lab on a chip." Biological Matter A biologically active layer can be placed between the interdigitated electrodes which contains immobilized antibodies. If the corresponding antigen is present in a sample, the antigen will bind to the antibodies, causing a mass-loading on the device. These sensors can be used to detect bacteria and viruses in samples, as well as to quantify the presence of certain mRNA and proteins. Humidity Surface acoustic wave humidity sensors require a thermoelectric cooler in addition to a surface acoustic wave device. The thermoelectric cooler is placed below the surface acoustic wave device. Both are housed in a cavity with an inlet and outlet for gases. By cooling the device, water vapor will tend to condense on the surface of the device, causing a mass-loading. Ultraviolet Radiation Surface acoustic wave devices are made sensitive to optical wavelengths through the phenomenon known as acoustic charge transport (ACT), which involves the interaction between a surface acoustic wave and photogenerated charge carriers from a photoconducting layer. Ultraviolet radiation sensors use a thin layer of zinc oxide across the delay line. When exposed to ultraviolet radiation, zinc oxide generates charge carriers which interact with the electric fields produced in the piezoelectric substrate by the traveling surface acoustic wave. This interaction produces measurable decreases in both the velocity and amplitude of the acoustic wave signal. Magnetic Fields Ferromagnetic materials (such as iron, nickel, and cobalt) change their physical dimensions in the presence of an applied magnetic field, a property called magnetostriction. The Young's modulus of the material is dependent on ambient magnetic field strength. If a film of magnetostrictive material is deposited in the delay line of a surface acoustic wave sensor, the change in length of the deposited film in response to a change in the magnetic field will stress the underlying substrate. The resulting strain (i.e., the deformation of the surface of the substrate) produces measurable changes in the phase velocity, phase-shift, and time-delay of the acoustic wave signal, providing information about the magnetic field. Viscosity Surface acoustic wave devices can be used to measure changes in viscosity of a liquid placed upon it. As the liquid becomes more viscous, the resonant frequency of the device will change correspondingly. A network analyser is needed to view the resonant frequency. External links and references A Fabrication Study of a Surface Acoustic Wave Device for Magnetic Field Detection Chemical SAW sensor SAW sensor research paper from article author Surface Acoustic Wave Torque Measuring Technology SAW flow meter References Microelectronic and microelectromechanical systems
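As a small worked example of the synchronous frequency relation f0 = vp / p given earlier, the following Python sketch uses an assumed Rayleigh-wave phase velocity for a lithium niobate substrate and an assumed electrode pitch; both values are illustrative, not taken from this article.

```python
# Sketch: the synchronous frequency f0 = vp / p from the text. The phase
# velocity below is an assumed value typical of a Rayleigh wave on
# 128-deg YX lithium niobate (~3,980 m/s), not a figure from the article.

def synchronous_frequency(phase_velocity_m_s: float, pitch_m: float) -> float:
    """Frequency at which the IDT most efficiently launches the surface wave."""
    return phase_velocity_m_s / pitch_m

v_p = 3980.0   # m/s, assumed substrate phase velocity
pitch = 4e-6   # m, electrode pitch (equal to the acoustic wavelength here)
print(f"f0 = {synchronous_frequency(v_p, pitch) / 1e6:.0f} MHz")
```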
Surface acoustic wave sensor
[ "Materials_science", "Engineering" ]
1,969
[ "Microelectronic and microelectromechanical systems", "Materials science", "Microtechnology" ]
31,594,536
https://en.wikipedia.org/wiki/Incremental%20dynamic%20analysis
Incremental dynamic analysis (IDA) is a computational analysis method of earthquake engineering for performing a comprehensive assessment of the behavior of structures under seismic loads. It has been developed to build upon the results of probabilistic seismic hazard analysis in order to estimate the seismic risk faced by a given structure. It can be considered to be the dynamic equivalent of the static pushover analysis. Description IDA involves performing multiple nonlinear dynamic analyses of a structural model under a suite of ground motion records, each scaled to several levels of seismic intensity. The scaling levels are appropriately selected to force the structure through the entire range of behavior, from elastic to inelastic and finally to global dynamic instability, where the structure essentially experiences collapse. Appropriate postprocessing can present the results in terms of IDA curves, one for each ground motion record, of the seismic intensity, typically represented by a scalar Intensity Measure (IM), versus the structural response, as measured by an engineering demand parameter (EDP). Possible choices for the IM are scalar (or, rarely, vector) quantities that relate to the severity of the recorded ground motion and scale linearly or nonlinearly with its amplitude. The IM should be chosen so that appropriate hazard maps (hazard curves) can be produced for it by probabilistic seismic hazard analysis. In addition, the IM should be correlated with the structural response of interest to decrease the number of required response history analyses. Possible choices are the peak ground acceleration, peak ground velocity or Arias intensity, but the most widely used is the 5%-damped spectral acceleration at the first-mode period of the structure. The results of recent studies show that spectrum intensity (SI) is also an appropriate IM. The EDP can be any structural response quantity that relates to structural, non-structural or contents' damage. Typical choices are the maximum (over all stories and time) interstory drift, the individual peak story drifts and the peak floor accelerations. Development history IDA grew out of the typical practice of scaling accelerograms by multiplying them by a constant factor to represent more or less severe ground motions than the ones that were recorded at a site. Since the natural recordings available are never enough to cover all possible needs, scaling is a simple, yet potentially problematic (if misused), method to "fill in" gaps in the current catalog of events. Still, in most cases, researchers would scale only a small set of three to seven records and typically only once, just to get an estimate of response in the area of interest. In the wake of the damage wrought by the 1994 Northridge earthquake, the SAC/FEMA project was launched to resolve the issue of poor performance of steel moment-resisting frames due to fracturing beam-column connections. Within the creative environment of research cooperation, the idea of subjecting a structure to a wider range of scaling emerged. Initially, the method was called Dynamic Pushover and it was conceived as a way to estimate a proxy for the global collapse of the structure. It was later recognized that such a method would also enable checking for multiple limit-states, e.g. for life-safety, as is the standard for most seismic design methods, but also for lower and higher levels of intensity that represent different threat levels, such as immediate-occupancy and collapse-prevention.
Thus, the idea for Incremental Dynamic Analysis was born, which was mainly adopted and later popularized by researchers at the John A. Blume Earthquake Research Center of Stanford University. This has now met with wider recognition in the earthquake research community and has spawned several different methods and concepts for estimating structural performance. Substantial debate has arisen regarding the potential bias in IDA results due to the use of scaled ground motion records that do not appropriately characterize the seismic hazard of the considered site over different earthquake intensity levels. See also C. Allin Cornell References External links SAC/FEMA 350 Report SAC/FEMA 351 Report IDA-related publications from D. Vamvatsikos at the National Technical University of Athens Earthquake engineering Earthquake and seismic risk mitigation Structural analysis
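A hedged sketch of the IDA bookkeeping described above is given below in Python. The analysis function is a hypothetical stand-in for a real nonlinear dynamic analysis engine, and the record parameters, intensity levels, and collapse proxy are illustrative only.

```python
# Sketch of the IDA loop: for each ground motion record, run a nonlinear
# dynamic analysis at increasing intensity (IM) levels and collect (IM, EDP)
# points until a crude instability proxy is reached.

def run_nonlinear_dynamic_analysis(record: dict, im: float) -> float:
    """Hypothetical stand-in for a structural analysis engine: returns a toy
    peak interstory drift (EDP) that softens as intensity grows."""
    stiffness = record["stiffness"]        # illustrative record parameter
    return 0.01 * im / max(stiffness - im, 0.05)

def incremental_dynamic_analysis(records: dict, im_levels: list) -> dict:
    """One IDA curve (a list of (IM, EDP) points) per ground motion record."""
    curves = {}
    for name, record in records.items():
        curve = []
        for im in im_levels:
            edp = run_nonlinear_dynamic_analysis(record, im)
            curve.append((im, edp))
            if edp > 0.10:                 # crude proxy for global instability
                break
        curves[name] = curve
    return curves

records = {"rec1": {"stiffness": 2.0}, "rec2": {"stiffness": 3.0}}
im_levels = [0.2 * i for i in range(1, 15)]   # e.g. Sa(T1) in g
print(incremental_dynamic_analysis(records, im_levels))
```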
Incremental dynamic analysis
[ "Engineering" ]
833
[ "Structural engineering", "Structural analysis", "Civil engineering", "Mechanical engineering", "Aerospace engineering", "Earthquake engineering", "Earthquake and seismic risk mitigation" ]
31,595,206
https://en.wikipedia.org/wiki/Hexaferrum
Hexaferrum and epsilon iron (ε-Fe) are synonyms for the hexagonal close-packed (HCP) phase of iron that is stable only at extremely high pressure. A 1964 study at the University of Rochester mixed 99.8% pure α-iron powder with sodium chloride, and pressed a 0.5-mm diameter pellet between the flat faces of two diamond anvils. The deformation of the NaCl lattice, as measured by x-ray diffraction (XRD), served as a pressure indicator. At a pressure of 13 GPa and room temperature, the body-centered cubic (BCC) ferrite powder transformed to the HCP phase in Figure 1. When the pressure was lowered, ε-Fe transformed back to ferrite (α-Fe) rapidly. A specific volume change of −0.20 cm3/mole ± 0.03 was measured. Hexaferrum, much like austenite, is more dense than ferrite at the phase boundary. A shock wave experiment confirmed the diamond anvil results. Epsilon was chosen for the new phase to correspond with the HCP form of cobalt. The triple point between the alpha, gamma and epsilon phases in the unary phase diagram of iron has been calculated as T = 770 K and P = 11 GPa, although it was determined at a lower temperature of T = 750 K (477 °C) in Figure 1. The Pearson symbol for hexaferrum is hP2 and its space group is P63/mmc. Another study concerning the ferrite-hexaferrum transformation metallographically determined that it is a martensitic rather than equilibrium transformation. While hexaferrum is purely academic in metallurgical engineering, it may have significance in geology. The pressure and temperature of Earth's iron core are on the order of 150–350 GPa and 3000 ± 1000 °C. An extrapolation of the austenite-hexaferrum phase boundary in Figure 1 suggests hexaferrum could be stable or metastable in Earth's core. For this reason, many experimental studies have investigated the properties of HCP iron under extreme pressures and temperatures. Figure 2 shows the compressional behaviour of ε-iron at room temperature up to a pressure as would be encountered halfway through the outer core of the Earth; there are no points at pressures lower than approximately 6 GPa, because this allotrope is not thermodynamically stable at low pressures but will slowly transform into α-iron. References Metallurgy Iron Steel
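As a rough worked example of what the measured volume change implies, the Python sketch below converts it into a density estimate. The ambient molar volume of α-iron used is an assumed textbook value, and the calculation ignores the compression of both phases at 13 GPa, so it is illustrative only.

```python
# Sketch: turning the measured alpha -> epsilon volume change (-0.20 cm^3/mol)
# into a density estimate. V_ALPHA is an assumed ambient-pressure molar volume
# of alpha-Fe, not a value from the article; compression at 13 GPa is ignored.

M_FE = 55.845      # g/mol, molar mass of iron
V_ALPHA = 7.09     # cm^3/mol, assumed ambient molar volume of alpha-Fe
DELTA_V = -0.20    # cm^3/mol, measured change on forming epsilon-Fe

v_epsilon = V_ALPHA + DELTA_V
print(f"rho(alpha)   ~ {M_FE / V_ALPHA:.2f} g/cm^3")
print(f"rho(epsilon) ~ {M_FE / v_epsilon:.2f} g/cm^3  (denser, as stated)")
```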
Hexaferrum
[ "Chemistry", "Materials_science", "Engineering" ]
530
[ "Metallurgy", "Materials science", "nan" ]
30,557,864
https://en.wikipedia.org/wiki/%282%2B1%29-dimensional%20topological%20gravity
In two spatial and one time dimensions, general relativity has no propagating gravitational degrees of freedom. In fact, in a vacuum, spacetime will always be locally flat (or de Sitter or anti-de Sitter depending upon the cosmological constant). This makes (2+1)-dimensional topological gravity (2+1D topological gravity) a topological theory with no gravitational local degrees of freedom. Physicists became interested in the relation between Chern–Simons theory and gravity during the 1980s. During this period, Edward Witten argued that 2+1D topological gravity is equivalent to a Chern–Simons theory with the gauge group SO(2,2) for a negative cosmological constant, and SO(3,1) for a positive one. This theory can be exactly solved, making it a toy model for quantum gravity. The Killing form involves the Hodge dual. Witten later changed his mind, and argued that nonperturbatively 2+1D topological gravity differs from Chern–Simons because the functional measure is only over nonsingular vielbeins. He suggested the CFT dual is the monster conformal field theory, and computed the entropy of BTZ black holes. References Quantum gravity
(2+1)-dimensional topological gravity
[ "Physics" ]
243
[ "Unsolved problems in physics", "Quantum gravity", "Relativity stubs", "Theory of relativity", "Physics beyond the Standard Model" ]
30,560,133
https://en.wikipedia.org/wiki/Run-around%20coil
A run-around coil is a type of energy recovery heat exchanger most often positioned within the supply and exhaust air streams of an air handling system, or in the exhaust gases of an industrial process, to recover the heat energy. Generally, it refers to any intermediate stream used to transfer heat between two streams that are not directly connected for reasons of safety or practicality. It may also be referred to as a run-around loop, a pump-around coil or a liquid coupled heat exchanger. Description A typical run-around coil system comprises two or more multi-row finned tube coils connected to each other by a pumped pipework circuit. The pipework is charged with a heat exchange fluid, normally water, which picks up heat from the exhaust air coil and gives up heat to the supply air coil before returning again. Thus heat from the exhaust air stream is transferred through the pipework coil to the circulating fluid, and then from the fluid through the pipework coil to the supply air stream. The use of this system is generally limited to situations where the air streams are separated and no other type of device can be utilised since the heat recovery efficiency is lower than other forms of air-to-air heat recovery. Gross efficiencies are usually in the range of 40 to 50%, but more significantly seasonal efficiencies of this system can be very low, due to the extra electrical energy used by the pumped fluid circuit. The fluid circuit containing the circulating pump also contains an expansion vessel, to accommodate changes in fluid pressure. In addition, there is a fill device to ensure the system remains charged. There are also controls to bypass and shut down the system when not required, and other safety devices. Pipework runs should be as short as possible, and should be sized for low velocities to minimize frictional losses, hence reducing pump energy consumption. It is possible to recover some of this energy in the form of heat given off by the motor if a glandless pump is used, where a water jacket surrounds the motor stator, thus picking up some of its heat. The pumped fluid will have to be protected from freezing, and is normally treated with a glycol based anti-freeze. This also reduces the specific heat capacity of the fluid and increases the viscosity, increasing pump power consumption, further reducing the seasonal efficiency of the device. For example, a 20% glycol mixture will provide protection down to , but will increase system resistance by 15%. For the finned tube coil design, there is a performance maximum corresponding to an eight- or ten-row coil, above this the fan and pump motor energy consumption increases substantially and seasonal efficiency starts to decrease. The main cause of increased energy consumption lies with the fan, for the same face velocity, fewer coil rows will decrease air pressure drop and increase water pressure drop. The total energy consumption will usually be less than that for a greater number of coil rows with higher air pressure drops and lower water pressure drops. Energy transfer process Normally the heat transfer between airstreams provided by the device is termed as 'sensible', which is the exchange of energy, or enthalpy, resulting in a change in temperature of the medium (air in this case), but with no change in moisture content. 
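The sensible heat recovered by a run-around coil can be estimated with a simple effectiveness calculation. The following Python sketch is illustrative only; the flow rates, temperatures, effectiveness, and pump figures are assumed values for demonstration, not design data from any standard:

# Illustrative run-around coil heat recovery estimate (all values assumed).
rho_air = 1.2      # air density, kg/m^3
cp_air = 1006.0    # specific heat of air, J/(kg*K)

flow = 2.0             # supply/exhaust volume flow, m^3/s (balanced, assumed)
t_exhaust_in = 22.0    # exhaust air entering the coil, deg C (assumed)
t_supply_in = -5.0     # outdoor supply air, deg C (assumed)
effectiveness = 0.45   # gross sensible effectiveness, mid-range of 40 to 50%

m_dot = rho_air * flow                                   # mass flow rate, kg/s
q_max = m_dot * cp_air * (t_exhaust_in - t_supply_in)    # ideal recovery, W
q_recovered = effectiveness * q_max                      # actual recovery, W

# Pump penalty: a 20% glycol mix raises circuit resistance by about 15%,
# which shows up as extra pump power and lowers seasonal efficiency.
pump_power = 800.0                      # base pump power, W (assumed)
pump_power_glycol = pump_power * 1.15

print(f"Recovered heat: {q_recovered / 1000:.1f} kW")
print(f"Pump power with 20% glycol: {pump_power_glycol:.0f} W")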
Other types of air-to-air heat exchangers Thermal wheel, or rotary heat exchanger (including enthalpy wheel and desiccant wheel) Recuperator, or cross plate heat exchanger Heat pipe See also HVAC Energy recovery ventilation Heat recovery ventilation Regenerative heat exchanger Air handler Thermal comfort Indoor air quality CCSI References Heating, ventilation, and air conditioning Mechanical engineering Low-energy building Energy recovery Heating Sustainable building Energy conservation Industrial equipment Thermodynamics Heat transfer
Run-around coil
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
767
[ "Transport phenomena", "Sustainable building", "Physical phenomena", "Heat transfer", "Applied and interdisciplinary physics", "Building engineering", "Construction", "Thermodynamics", "Mechanical engineering", "nan", "Dynamical systems" ]
30,563,979
https://en.wikipedia.org/wiki/Lusin%27s%20separation%20theorem
In descriptive set theory and mathematical logic, Lusin's separation theorem states that if A and B are disjoint analytic subsets of a Polish space, then there is a Borel set C in the space such that A ⊆ C and B ∩ C = ∅. It is named after Nikolai Luzin, who proved it in 1927. The theorem can be generalized to show that for each sequence (An) of pairwise disjoint analytic sets there is a sequence (Bn) of pairwise disjoint Borel sets such that An ⊆ Bn for each n. An immediate consequence is Suslin's theorem, which states that if a set and its complement are both analytic, then the set is Borel. Notes References Descriptive set theory Theorems in the foundations of mathematics Theorems in topology
Lusin's separation theorem
[ "Mathematics" ]
169
[ "Foundations of mathematics", "Mathematical logic", "Theorems in topology", "Topology", "Mathematical problems", "Mathematical logic stubs", "Mathematical theorems", "Theorems in the foundations of mathematics" ]
30,573,317
https://en.wikipedia.org/wiki/Caldwell%20Tanks
Caldwell Tanks is a large privately held company that designs, fabricates, and builds tanks for the water, wastewater, grain, coal and energy industries. Caldwell is the largest elevated tank company in the world. Caldwell has approximately 500 total employees with 206 employees in Louisville at its headquarters campus. Caldwell has two major facilities: fabrication facilities in Louisville, Kentucky, and Newnan, Georgia. Operating divisions Caldwell Tanks - Constructs customized elevated and ground level potable water storage tanks Industrial API Tanks and ASME Vessels - Industrial field-erected projects in industries such as oil, gas, midstream and chemical. Caldwell Energy - Established in 1995, Caldwell Energy provides Turbine Inlet Air-Cooling (TIC) systems History Caldwell Tanks was founded in 1887 by William E. Caldwell. The company was originally known as W.E. Caldwell Co. The firm remained in the Caldwell family until 1986 when it was purchased by James W. Robinson. Robinson appointed a former banker, Bernard S. Fineman, to be president of Caldwell. The W.E. Caldwell Company headquarters was originally located on what is now part of the campus of the University of Louisville. The university's Red Barn multi-purpose activities facility is a remaining building of the original Caldwell Tanks complex. The university purchased the facility from Caldwell in 1969. In April 2011, Caldwell Tanks completed a expansion of its headquarters in Louisville. The expanded space doubled the office space at its Louisville headquarters, adding additional offices, conference rooms, and file-storage rooms. Caldwell also announced plans for a addition to its production facility. Notable projects Earffel Tower - Caldwell's most widely recognized installation was the Earffel (or Earful) Tower, located at Disney's Hollywood Studios theme park from 1989 to 2016. The tank won the Tank of the Year award in 1987 from the Steel Plate Fabricators Association. Solana Generating Station - Caldwell was awarded the contract to build 12 molten salt thermal energy storage tanks for the first large-scale solar plant in the United States capable of storing energy. Western Kentucky University water tower - In Fall 2004, Caldwell completed the new water tank on the Western Kentucky campus. The landmark tank features WKU's mascot Big Red, and it was constructed at a cost of $1,662,000. Victory Junction Gang Camp - Caldwell completed a tall, capacity hot air balloon styled tank for Kyle and Pattie Petty's Victory Junction Gang Camp. St. Charles County, Missouri Water District 2 - Caldwell was awarded the 2005 Steel Tank of the Year Award by the Steel Tank Institute / Steel Plate Fabricators Association (STI/SPFA) for an elevated tank completed in December 2005. The tall tank has a capacity of of water. South Carolina Electric & Gas Company - Caldwell Energy installed the world's largest central chilled plant dedicated to combustion turbine inlet air cooling at the Jasper Power Plant in Hardeeville, South Carolina Collinsville, Illinois - Caldwell completed the "World's Largest Ketchup Bottle" water tower for the G.S. Suppiger catsup bottling plant in 1949. Brown-Forman Distillery - Caldwell completed the Louisville distillery's Old Forester whiskey bottle tank in 1936. The Bottle District - Caldwell constructed the giant Vess bottle in St. Louis, Missouri in 1953. The "World's Largest Bat" - Caldwell fabricated and delivered the largest baseball bat in the world for Louisville Slugger Museum & Factory, standing at 120 ft. 
tall and weighing 68,000 pounds. Use as a filming location Caldwell's Newnan, Georgia facilities have been used as a filming location for films and television series. Portions of The Hunger Games: Mockingjay – Part 1 were filmed on the Caldwell Tanks property from December 2013 to January 2014. Portions of the AMC television series The Walking Dead have also been filmed at Caldwell's facilities in Newnan. See also Brooks Catsup Bottle Water Tower References Manufacturing companies based in Louisville, Kentucky Storage tanks Manufacturing companies established in 1887 American companies established in 1887 1887 establishments in Kentucky
Caldwell Tanks
[ "Chemistry", "Engineering" ]
810
[ "Chemical equipment", "Storage tanks" ]
30,575,830
https://en.wikipedia.org/wiki/Spacecraft%20attitude%20determination%20and%20control
Spacecraft attitude control is the process of controlling the orientation of a spacecraft (vehicle or satellite) with respect to an inertial frame of reference or another entity such as the celestial sphere, certain fields, and nearby objects, etc. Controlling vehicle attitude requires actuators to apply the torques needed to orient the vehicle to a desired attitude, and algorithms to command the actuators based on the current attitude and specification of a desired attitude. Before and while attitude control is performed, spacecraft attitude determination must be carried out, which requires sensors for absolute or relative measurement. The broader integrated field that studies the combination of sensors, actuators and algorithms is called guidance, navigation and control, which also involves non-attitude concepts, such as position determination and navigation. Motivation A spacecraft's attitude must typically be stabilized and controlled for a variety of reasons. It is often needed so that the spacecraft high-gain antenna may be accurately pointed to Earth for communications, so that onboard experiments may accomplish precise pointing for accurate collection and subsequent interpretation of data, so that the heating and cooling effects of sunlight and shadow may be used intelligently for thermal control, and also for guidance: short propulsive maneuvers must be executed in the right direction. Many spacecraft have components that require articulation or pointing. Voyager and Galileo, for example, were designed with scan platforms for pointing optical instruments at their targets largely independently of spacecraft orientation. Many spacecraft, such as Mars orbiters, have solar panels that must track the Sun so they can provide electrical power to the spacecraft. Cassini's main engine nozzles were steerable. Knowing where to point a solar panel, or scan platform, or a nozzle (that is, how to articulate it) requires knowledge of the spacecraft's attitude. Because a single subsystem keeps track of the spacecraft's attitude, the Sun's location, and Earth's location, it can compute the proper direction to point the appendages. It logically falls to the same subsystem, the Attitude and Articulation Control Subsystem (AACS), to manage both attitude and articulation. The name AACS may even be carried over to a spacecraft even if it has no appendages to articulate. Background Attitude is part of the description of how an object is placed in the space it occupies. Attitude and position fully describe how an object is placed in space. (For some applications such as in robotics and computer vision, it is customary to combine position and attitude together into a single description known as Pose.) Attitude can be described using a variety of methods; however, the most common are rotation matrices, quaternions, and Euler angles. While Euler angles are often the most straightforward representation to visualize, they can cause problems for highly-maneuverable systems because of a phenomenon known as gimbal lock. A rotation matrix, on the other hand, provides a full description of the attitude at the expense of requiring nine values instead of three. The use of a rotation matrix can lead to increased computational expense and it can be more difficult to work with. Quaternions offer a decent compromise in that they do not suffer from gimbal lock and only require four values to fully describe the attitude.
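To make the comparison of attitude representations concrete, the short Python sketch below rotates a body-frame vector with a unit quaternion. It is a generic illustration (the scalar-first convention and the sample values are assumptions), not flight code from any spacecraft:

# Minimal quaternion attitude sketch (scalar-first convention assumed).
import math

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def quat_from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about `axis`."""
    n = math.sqrt(sum(a * a for a in axis))
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    """Rotate vector v by quaternion q via q * (0, v) * q_conjugate."""
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return (x, y, z)

# A 90-degree yaw: four values describe the attitude, with no gimbal lock.
q_yaw = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
print(rotate(q_yaw, (1.0, 0.0, 0.0)))  # ~(0, 1, 0)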
Control Types of stabilization Attitude control of spacecraft is maintained using one of two principal approaches: Spin stabilization is accomplished by setting the spacecraft spinning, using the gyroscopic action of the rotating spacecraft mass as the stabilizing mechanism. Propulsion system thrusters are fired only occasionally to make desired changes in spin rate, or in the spin-stabilized attitude. If desired, the spinning may be stopped through the use of thrusters or by yo-yo de-spin. The Pioneer 10 and Pioneer 11 probes in the outer Solar System are examples of spin-stabilized spacecraft. Three-axis stabilization is an alternative method of spacecraft attitude control in which the spacecraft is held fixed in the desired orientation without any rotation. One method is to use small thrusters to continually nudge the spacecraft back and forth within a deadband of allowed attitude error. Thrusters may also be referred to as mass-expulsion control (MEC) systems, or reaction control systems (RCS). The space probes Voyager 1 and Voyager 2 employ this method, and have used up about three quarters of their 100 kg of propellant as of July 2015. Another method for achieving three-axis stabilization is to use electrically powered reaction wheels, also called momentum wheels, which are mounted on three orthogonal axes aboard the spacecraft. They provide a means to trade angular momentum back and forth between spacecraft and wheels. To rotate the vehicle on a given axis, the reaction wheel on that axis is accelerated in the opposite direction. To rotate the vehicle back, the wheel is slowed. Excess momentum that builds up in the system due to external torques from, for example, solar photon pressure or gravity gradients, must be occasionally removed from the system by applying controlled torque to the spacecraft to allow the wheels to return to a desired speed under computer control. This is done during maneuvers called momentum desaturation or momentum unload maneuvers. Most spacecraft use a system of thrusters to apply the torque for desaturation maneuvers. A different approach was used by the Hubble Space Telescope, which had sensitive optics that could be contaminated by thruster exhaust, and instead used magnetic torquers for desaturation maneuvers. There are advantages and disadvantages to both spin stabilization and three-axis stabilization. Spin-stabilized craft provide a continuous sweeping motion that is desirable for fields and particles instruments, as well as some optical scanning instruments, but they may require complicated systems to de-spin antennas or optical instruments that must be pointed at targets for science observations or communications with Earth. Three-axis controlled craft can point optical instruments and antennas without having to de-spin them, but they may have to carry out special rotating maneuvers to best utilize their fields and particle instruments. If thrusters are used for routine stabilization, optical observations such as imaging must be designed knowing that the spacecraft is always slowly rocking back and forth, and not always exactly predictably. Reaction wheels provide a much steadier spacecraft from which to make observations, but they add mass to the spacecraft, they have a limited mechanical lifetime, and they require frequent momentum desaturation maneuvers, which can perturb navigation solutions because of accelerations imparted by the use of thrusters.
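The angular-momentum trade between body and wheel described above follows directly from conservation of angular momentum. The following Python sketch is a generic single-axis illustration with assumed inertia values, not a model of any particular spacecraft:

# Single-axis reaction wheel momentum exchange (all values assumed).
I_body = 100.0   # spacecraft moment of inertia about the axis, kg*m^2
I_wheel = 0.05   # wheel moment of inertia, kg*m^2

# Total angular momentum is conserved in the absence of external torque:
#   I_body * w_body + I_wheel * w_wheel = constant
w_body, w_wheel = 0.0, 0.0
L_total = I_body * w_body + I_wheel * w_wheel

# Spin the wheel up to 2000 rad/s; the body must counter-rotate.
w_wheel = 2000.0
w_body = (L_total - I_wheel * w_wheel) / I_body
print(f"body rate = {w_body:.3f} rad/s")   # -1.0 rad/s, opposite the wheel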
Actuators Attitude control can be obtained by several mechanisms, including: Thrusters Vernier thrusters are the most common actuators, as they may be used for station keeping as well. Thrusters must be organized as a system to provide stabilization about all three axes, and at least two thrusters are generally used in each axis to provide torque as a couple in order to prevent imparting a translation to the vehicle. Their limitations are fuel usage, engine wear, and cycles of the control valves. The fuel efficiency of an attitude control system is determined by its specific impulse (proportional to exhaust velocity) and the smallest torque impulse it can provide (which determines how often the thrusters must fire to provide precise control). Thrusters must be fired in one direction to start rotation, and again in the opposing direction if a new orientation is to be held. Thruster systems have been used on most crewed space vehicles, including Vostok, Mercury, Gemini, Apollo, Soyuz, and the Space Shuttle. To minimize the fuel limitation on mission duration, auxiliary attitude control systems may be used to reduce vehicle rotation to lower levels, such as small ion thrusters that accelerate ionized gases electrically to extreme velocities, using power from solar cells. Reaction/momentum wheels Momentum wheels are electric motor driven rotors made to spin in the direction opposite to that required to re-orient the vehicle. Because momentum wheels make up a small fraction of the spacecraft's mass and are computer controlled, they give precise control. Momentum wheels are generally suspended on magnetic bearings to avoid bearing friction and breakdown problems. Spacecraft Reaction wheels often use mechanical ball bearings. To maintain orientation in three dimensional space a minimum of three reaction wheels must be used, with additional units providing single failure protection. See Euler angles. Control moment gyros These are rotors spun at constant speed, mounted on gimbals to provide attitude control. Although a CMG provides control about the two axes orthogonal to the gyro spin axis, triaxial control still requires two units. A CMG is a bit more expensive in terms of cost and mass, because gimbals and their drive motors must be provided. The maximum torque (but not the maximum angular momentum change) exerted by a CMG is greater than for a momentum wheel, making it better suited to large spacecraft. A major drawback is the additional complexity, which increases the number of failure points. For this reason, the International Space Station uses a set of four CMGs to provide dual failure tolerance. Solar sails Small solar sails (devices that produce thrust as a reaction force induced by reflecting incident light) may be used to make small attitude control and velocity adjustments. This application can save large amounts of fuel on a long-duration mission by producing control moments without fuel expenditure. For example, Mariner 10 adjusted its attitude using its solar cells and antennas as small solar sails. Gravity-gradient stabilization In orbit, a spacecraft with one axis much longer than the other two will spontaneously orient so that its long axis points at the planet's center of mass. This system has the virtue of needing no active control system or expenditure of fuel. The effect is caused by a tidal force. The upper end of the vehicle feels less gravitational pull than the lower end. 
This provides a restoring torque whenever the long axis is not co-linear with the direction of gravity. Unless some means of damping is provided, the spacecraft will oscillate about the local vertical. Sometimes tethers are used to connect two parts of a satellite, to increase the stabilizing torque. A problem with such tethers is that meteoroids as small as a grain of sand can part them. Magnetic torquers Coils or (on very small satellites) permanent magnets exert a moment against the local magnetic field. This method works only where there is a magnetic field against which to react. One classic field "coil" is actually in the form of a conductive tether in a planetary magnetic field. Such a conductive tether can also generate electrical power, at the expense of orbital decay. Conversely, by inducing a counter-current, using solar cell power, the orbit may be raised. Due to massive variability in Earth's magnetic field from an ideal radial field, control laws based on torques coupling to this field will be highly non-linear. Moreover, only two-axis control is available at any given time, meaning that a vehicle reorientation may be necessary to null all rates. Passive attitude control Three main types of passive attitude control exist for satellites. The first one uses gravity gradient, and it leads to four stable states with the long axis (axis with smallest moment of inertia) pointing towards Earth. As this system has four stable states, if the satellite has a preferred orientation, e.g. a camera pointed at the planet, some way to flip the satellite and its tether end-for-end is needed. The second passive system orients the satellite along Earth's magnetic field thanks to a magnet. These purely passive attitude control systems have limited pointing accuracy, because the spacecraft will oscillate around energy minima. This drawback is overcome by adding a damper, which can be a hysteretic material or a viscous damper. The viscous damper is a small can or tank of fluid mounted in the spacecraft, possibly with internal baffles to increase internal friction. Friction within the damper will gradually convert oscillation energy into heat dissipated within the viscous damper. A third form of passive attitude control is aerodynamic stabilization. This is achieved using a drag gradient, as demonstrated on the Get Away Special Passive Attitude Control Satellite (GASPACS) technology demonstration. In low Earth orbit, the force due to drag is many orders of magnitude more dominant than the force imparted due to gravity gradients. When a satellite is utilizing aerodynamic passive attitude control, air molecules from the Earth's upper atmosphere strike the satellite in such a way that the center of pressure remains behind the center of mass, similar to how the feathers on an arrow stabilize the arrow. GASPACS utilized a 1 m inflatable 'AeroBoom', which extended behind the satellite, creating a stabilizing torque along the satellite's velocity vector. Control algorithms Control algorithms are computer programs that receive data from vehicle sensors and derive the appropriate commands to the actuators to rotate the vehicle to the desired attitude. The algorithms range from very simple, e.g. proportional control, to complex nonlinear estimators or many in-between types, depending on mission requirements.
Typically, the attitude control algorithms are part of the software running on the computer hardware, which receives commands from the ground and formats vehicle data telemetry for transmission to a ground station. The attitude control algorithms are written and implemented based on the requirements for a particular attitude maneuver. Aside from the implementation of passive attitude control such as gravity-gradient stabilization, most spacecraft make use of active control, which exhibits a typical attitude control loop. The design of the control algorithm depends on the actuator to be used for the specific attitude maneuver, although using a simple proportional–integral–derivative controller (PID controller) satisfies most control needs. The appropriate commands to the actuators are obtained based on error signals, described as the difference between the measured and desired attitude. The error signals are commonly measured as Euler angles (Φ, θ, Ψ); however, alternatives may be described in terms of the direction cosine matrix or error quaternions. The PID controller, which is most common, reacts to an error signal (deviation) based on attitude as follows: T_c(t) = K_p e(t) + K_i ∫ e(τ) dτ + K_d de(t)/dt, where T_c is the control torque, e is the attitude deviation signal, and K_p, K_i and K_d are the PID controller parameters (an illustrative numerical sketch of this law appears below). A simple implementation of this can be the application of proportional control for nadir pointing, making use of either momentum or reaction wheels as actuators. Based on the change in momentum of the wheels, the control law can be defined per axis in the three axes x, y, z as T_{c,i} = K_{p,i} e_i, with e_i the attitude error about axis i. This control algorithm also affects momentum dumping. Another important and common control algorithm involves the concept of detumbling, which is attenuating the angular momentum of the spacecraft. The need to detumble the spacecraft arises from the uncontrollable state after release from the launch vehicle. Most spacecraft in low Earth orbit (LEO) make use of the magnetic detumbling concept, which utilizes the effect of the Earth's magnetic field. The control algorithm is called the B-Dot controller and relies on magnetic coils or torque rods as control actuators. The control law is based on the measurement of the rate of change of body-fixed magnetometer signals: m = −K dB/dt, where m is the commanded magnetic dipole moment of the magnetic torquer, K is the proportional gain, and dB/dt is the rate of change of the Earth's magnetic field. Determination Spacecraft attitude determination is the process of determining the orientation of a spacecraft (vehicle or satellite). It is a pre-requisite for spacecraft attitude control. A variety of sensors are utilized for relative and absolute attitude determination. Sensors Relative attitude sensors Many sensors generate outputs that reflect the rate of change in attitude. These require a known initial attitude, or external information to use them to determine attitude. Many of this class of sensor have some noise, leading to inaccuracies if not corrected by absolute attitude sensors. Gyroscopes Gyroscopes are devices that sense rotation in three-dimensional space without reliance on the observation of external objects. Classically, a gyroscope consists of a spinning mass, but there are also "ring laser gyros" utilizing coherent light reflected around a closed path. Another type of "gyro" is a hemispherical resonator gyro where a crystal cup shaped like a wine glass can be driven into oscillation just as a wine glass "sings" as a finger is rubbed around its rim. 
The orientation of the oscillation is fixed in inertial space, so measuring the orientation of the oscillation relative to the spacecraft can be used to sense the motion of the spacecraft with respect to inertial space. Motion reference units Motion reference units are a kind of inertial measurement unit with single- or multi-axis motion sensors. They utilize MEMS gyroscopes. Some multi-axis MRUs are capable of measuring roll, pitch, yaw and heave. They have applications outside the aeronautical field, such as: Antenna motion compensation and stabilization Dynamic positioning Heave compensation of offshore cranes High speed craft motion control and damping systems Hydro acoustic positioning Motion compensation of single and multibeam echosounders Ocean wave measurements Offshore structure motion monitoring Orientation and attitude measurements on Autonomous underwater vehicles and Remotely operated underwater vehicles Ship motion monitoring Absolute attitude sensors This class of sensors sense the position or orientation of fields, objects or other phenomena outside the spacecraft. Horizon sensor A horizon sensor is an optical instrument that detects light from the 'limb' of Earth's atmosphere, i.e., at the horizon. Thermal infrared sensing is often used, which senses the comparative warmth of the atmosphere, compared to the much colder cosmic background. This sensor provides orientation with respect to Earth about two orthogonal axes. It tends to be less precise than sensors based on stellar observation. Sometimes referred to as an Earth sensor. Orbital gyrocompass Similar to the way that a terrestrial gyrocompass uses a pendulum to sense local gravity and force its gyro into alignment with Earth's spin vector, and therefore point north, an orbital gyrocompass uses a horizon sensor to sense the direction to Earth's center, and a gyro to sense rotation about an axis normal to the orbit plane. Thus, the horizon sensor provides pitch and roll measurements, and the gyro provides yaw. See Tait-Bryan angles. Sun sensor A Sun sensor is a device that senses the direction to the Sun. This can be as simple as some solar cells and shades, or as complex as a steerable telescope, depending on mission requirements. Earth sensor An Earth sensor is a device that senses the direction to Earth. It is usually an infrared camera; nowadays the main method to detect attitude is the star tracker, but Earth sensors are still integrated in satellites for their low cost and reliability. Star tracker A star tracker is an optical device that measures the position(s) of star(s) using photocell(s) or a camera. It uses magnitude of brightness and spectral type to identify and then calculate the relative position of stars around it. Magnetometer A magnetometer is a device that senses magnetic field strength and, when used in a three-axis triad, magnetic field direction. As a spacecraft navigational aid, sensed field strength and direction is compared to a map of Earth's magnetic field stored in the memory of an on-board or ground-based guidance computer. If spacecraft position is known then attitude can be inferred. Attitude estimation Attitude cannot be measured directly by any single measurement, and so must be calculated (or estimated) from a set of measurements (often using different sensors). 
This can be done either statically (calculating the attitude using only the measurements currently available), or through the use of a statistical filter (most commonly, the Kalman filter) that statistically combines previous attitude estimates with current sensor measurements to obtain an optimal estimate of the current attitude. Static attitude estimation methods Static attitude estimation methods are solutions to Wahba's problem. Many solutions have been proposed, notably Davenport's q-method, QUEST, TRIAD, and singular value decomposition. Crassidis, John L., and John L. Junkins. Optimal Estimation of Dynamic Systems. Chapman and Hall/CRC, 2004. Sequential estimation methods Kalman filtering can be used to sequentially estimate the attitude, as well as the angular rate. Because attitude dynamics (combination of rigid body dynamics and attitude kinematics) are non-linear, a linear Kalman filter is not sufficient. Because attitude dynamics are only mildly non-linear, the Extended Kalman filter is usually sufficient (however Crassidis and Markley demonstrated that the Unscented Kalman filter could be used, and can provide benefits in cases where the initial estimate is poor). Multiple methods have been proposed; however, the Multiplicative Extended Kalman Filter (MEKF) is by far the most common approach. This approach utilizes the multiplicative formulation of the error quaternion, which allows for the unity constraint on the quaternion to be better handled. It is also common to use a technique known as dynamic model replacement, where the angular rate is not estimated directly, but rather the measured angular rate from the gyro is used directly to propagate the rotational dynamics forward in time. This is valid for most applications as gyros are typically far more precise than one's knowledge of disturbance torques acting on the system (which is required for precise estimation of the angular rate). Position/location determination For some sensors and applications (such as spacecraft using magnetometers) the precise location must also be known. While pose estimation can be employed, for spacecraft it is usually sufficient to estimate the position (via Orbit determination) separate from the attitude estimation. For terrestrial vehicles and spacecraft operating near the Earth, the advent of Satellite navigation systems allows for precise position knowledge to be obtained easily. This problem becomes more complicated for deep space vehicles, or terrestrial vehicles operating in Global Navigation Satellite System (GNSS) denied environments (see Navigation). See also Astrionics#Attitude determination and control Longitudinal static stability Directional stability Reaction control system Spacecraft_flight_dynamics#Attitude_control Triad method Wahba's problem References External links Aerospace engineering Orbits Spaceflight concepts Dynamics (mechanics)
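The following Python sketch simulates the single-axis PID law reconstructed in the control-algorithms section above. It is a toy illustration: the inertia, gains, and time step are assumed values, and a real implementation would act through wheel momentum commands and include saturation and momentum-dumping logic:

# Toy single-axis PID attitude control loop (all values assumed).
I = 10.0                      # spacecraft inertia about the control axis, kg*m^2
Kp, Ki, Kd = 4.0, 0.1, 6.0    # assumed PID gains
dt = 0.1                      # time step, s

theta, omega = 0.3, 0.0       # initial attitude error (rad) and rate (rad/s)
integral = 0.0

for step in range(600):
    e = -theta                       # drive the attitude error toward zero
    integral += e * dt
    e_dot = -omega
    torque = Kp * e + Ki * integral + Kd * e_dot   # T_c = Kp*e + Ki*∫e + Kd*ė
    omega += (torque / I) * dt       # rigid-body dynamics, single axis
    theta += omega * dt

print(f"final error = {theta:.5f} rad")   # settles near zero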
Spacecraft attitude determination and control
[ "Physics", "Engineering" ]
4,502
[ "Physical phenomena", "Classical mechanics", "Motion (physics)", "Dynamics (mechanics)", "Aerospace engineering" ]
30,578,054
https://en.wikipedia.org/wiki/Operation%20chart
The operation chart is a graphical and symbolic representation of the manufacturing operations used to produce a product. The operation chart illustrates only the value-adding activities in the manufacturing process; therefore, material handling and storage are not illustrated in this chart. The operation chart records the overall picture of the process and the sequence-wise steps of its operations. Operations and their symbols in the operation chart The operations described in the operation chart are: Processing and assembly operations: A processing operation changes the shape or properties of the material, while joining two or more parts is an assembly operation. These operations are represented by the circle symbol (○) or the letter O. Inspection operations: Inspection operations are represented by the square symbol (□) or the letter I. An inspector checks the material, work parts, and assemblies for quality and quantity. See also Outline of manufacturing References Industrial engineering
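Because an operation chart is an ordered sequence of operation and inspection symbols, it can be represented programmatically. The Python sketch below is an illustrative encoding; the part and all the steps are invented example data:

# Illustrative operation chart as an ordered list of steps (example data only).
SYMBOLS = {"operation": "O", "inspection": "I"}  # circle and square in the chart

chart = [
    ("operation",  "Turn shaft to diameter"),
    ("operation",  "Mill keyway"),
    ("inspection", "Check keyway dimensions"),
    ("operation",  "Assemble gear onto shaft"),
    ("inspection", "Final quality and quantity check"),
]

# Note: material handling and storage steps are deliberately absent,
# since the operation chart shows only value-adding activities.
for n, (kind, desc) in enumerate(chart, start=1):
    print(f"{n:2d}. ({SYMBOLS[kind]}) {desc}")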
Operation chart
[ "Engineering" ]
180
[ "Industrial engineering" ]
30,578,523
https://en.wikipedia.org/wiki/Naor%E2%80%93Reingold%20pseudorandom%20function
In 1997, Moni Naor and Omer Reingold described efficient constructions for various cryptographic primitives in private-key as well as public-key cryptography. Their result is the construction of an efficient pseudorandom function. Let p and l be prime numbers with l | p−1. Select an element g ∈ F_p* of multiplicative order l. Then for each (n+1)-dimensional vector a = (a0, a1, ..., an) ∈ (F_l)^(n+1) they define the function f_a(x) = g^(a0 · a1^x1 · a2^x2 ⋯ an^xn), where x = x1 ... xn is the bit representation of integer x, 0 ≤ x ≤ 2^n−1, with some extra leading zeros if necessary. Example Let p = 7 and l = 3; so l | p−1. Select g = 4 ∈ F_7* of multiplicative order 3 (since 4^3 = 64 ≡ 1 mod 7). For n = 3, a = (1, 1, 2, 1) and x = 5 (the bit representation of 5 is 101), we can compute f_a(5) as follows: f_a(5) = g^(a0 · a1^x1 · a2^x2 · a3^x3) = 4^(1 · 1^1 · 2^0 · 1^1) = 4^1 = 4. Efficiency The evaluation of the function f_a(x) in the Naor–Reingold construction can be done very efficiently. Computing the value of the function at any given point is comparable with one modular exponentiation and n modular multiplications. This function can be computed in parallel by threshold circuits of bounded depth and polynomial size. The Naor–Reingold function can be used as the basis of many cryptographic schemes including symmetric encryption, authentication and digital signatures. Security of the function Assume that an attacker sees several outputs of the function, e.g. f_a(1), f_a(2), ..., f_a(k), and wants to compute f_a(k+1). Assume for simplicity that x1 = 0; then the attacker needs to solve a computational Diffie–Hellman (CDH) problem between two of the values already seen to get f_a(k+1). In general, moving from k to k+1 changes the bit pattern, and unless k+1 is a power of 2 one can split the exponent in f_a(k+1) so that the computation corresponds to computing the Diffie–Hellman key between two of the earlier results. This attacker wants to predict the next sequence element. Such an attack would be very bad, but it is also possible to fight it off by working in groups with a hard Diffie–Hellman problem (DHP). Example: An attacker sees several outputs of the function, e.g. f_a(5) = 4, as in the previous example, and f_a(6). Then, the attacker wants to predict the next sequence element of this function, f_a(7). However, the attacker cannot predict the outcome of f_a(7) from knowing f_a(5) and f_a(6). There are other attacks that would be very bad for a pseudorandom number generator: the user expects to get random numbers from the output, so of course the stream should not be predictable, but even more, it should be indistinguishable from a random string. Let A^f denote an algorithm with access to an oracle for evaluating the function f. Suppose the decisional Diffie–Hellman assumption holds for F_p. Naor and Reingold show that for every probabilistic polynomial-time algorithm A and sufficiently large n, the distinguishing advantage |Pr[A^(f_a) = 1] − Pr[A^R = 1]|, with R a truly random function, is negligible. The first probability is taken over the choice of the seed s = (p, g, a) and the second probability is taken over the random distribution induced on p, g by the instance generator, and the random choice of the function R among the set of all functions. Linear complexity One natural measure of how useful a sequence may be for cryptographic purposes is the size of its linear complexity. The linear complexity of an n-element sequence W(x), x = 0, 1, 2, ..., n−1, over a ring R is the length l of the shortest linear recurrence relation W(x + l) = A_(l−1) W(x + l−1) + ... + A_0 W(x), x = 0, 1, 2, ..., n−l−1, with A_0, ..., A_(l−1) ∈ R, which is satisfied by this sequence. For some δ > 0 and n ≥ (1 + δ) log l, for sufficiently large l, the linear complexity of the sequence f_a(x), 0 ≤ x ≤ 2^n−1, admits a lower bound for all except possibly at most a small proportion of vectors a ∈ (F_l)^(n+1). 
The bound of this work has disadvantages, namely that it does not apply to the very interesting case. Uniformity of distribution The statistical distribution of f_a(x) is exponentially close to the uniform distribution for almost all vectors a ∈ (F_l)^(n+1). Let D_a be the discrepancy of the set {f_a(x) : 0 ≤ x ≤ 2^n−1}; a bound on D_a, exponentially small in terms of the bit length of p, then holds for almost all vectors a. Although this property does not seem to have any immediate cryptographic implications, the inverse fact, namely a non-uniform distribution, if true, would have disastrous consequences for applications of this function. Sequences in elliptic curves The elliptic curve version of this function is of interest as well. In particular, it may help to improve the cryptographic security of the corresponding system. Let p > 3 be prime and let E be an elliptic curve over F_p; then each vector a defines a finite sequence in the subgroup generated by a point P ∈ E(F_p) of order l as F_a(x) = (a0 · a1^x1 ⋯ an^xn) P, where x = x1 ... xn is the bit representation of integer x. The Naor–Reingold elliptic curve sequence is defined as the sequence of these values, u_k = F_a(k). If the decisional Diffie–Hellman assumption holds, the index k is not enough to compute u_k in polynomial time, even if an attacker performs polynomially many queries to a random oracle. See also Decisional Diffie–Hellman assumption Finite field Inversive congruential generator Generalized inversive congruential pseudorandom numbers Notes References Pseudorandom number generators Cryptography
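A direct implementation of the function is short. The following Python sketch reproduces the worked example above (p = 7, l = 3, g = 4, a = (1, 1, 2, 1), n = 3); it is illustrative code, not a reference implementation from Naor and Reingold:

# Naor–Reingold pseudorandom function, illustrative implementation.
def naor_reingold(p, l, g, a, x, n):
    """f_a(x) = g^(a0 * a1^x1 * ... * an^xn) mod p, exponent taken mod l,
    where x1 ... xn are the bits of x (most significant first)."""
    bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]  # x1, ..., xn
    exponent = a[0] % l
    for a_i, x_i in zip(a[1:], bits):
        if x_i:
            exponent = (exponent * a_i) % l   # g has order l, so reduce mod l
    return pow(g, exponent, p)

# The worked example: p = 7, l = 3 (3 | 7-1), g = 4 of order 3, a = (1,1,2,1).
print(naor_reingold(7, 3, 4, (1, 1, 2, 1), 5, 3))  # bits of 5 are 101 -> prints 4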
Naor–Reingold pseudorandom function
[ "Mathematics", "Engineering" ]
1,139
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
30,578,787
https://en.wikipedia.org/wiki/Bogota%20Declaration
The Declaration of the First Meeting of Equatorial Countries, also known as the Bogota Declaration, is a declaration made and signed in 1976 by eight equatorial countries, and was an attempt to assert sovereignty over those portions of the geostationary orbit that continuously lie over the signatory nations' territory. These claims have been one of the few attempts to challenge the 1967 Outer Space Treaty, but they did not receive wider international support or recognition. Subsequently, they were largely abandoned. Background The Outer Space Treaty is a treaty that forms the basis of international space law. The treaty was opened for signature in the United States, the United Kingdom, and the Soviet Union on 27 January 1967, and entered into force on 10 October 1967. In this time period, many countries in Africa and Asia were either newly independent or still in the process of decolonization from their former European colonizers. The treaty's ban on claims of sovereignty in space was interpreted differently by some of the newly independent countries, who saw the great powers of the time using their power to shape the laws that governed extraterritorial domains to their benefit. Bogota Declaration Representatives of Ecuador, Colombia, Brazil, Congo, Zaire (renamed the Democratic Republic of the Congo in 1997), Uganda, Kenya, and Indonesia met in Bogotá, Colombia in 1976 and signed the declaration, thereby claiming control of the segment of the geosynchronous orbital path corresponding to each country, and argued that the segments above the high seas were the "common heritage of mankind" and ought, therefore, to be collectively governed by all nations. They claimed that the space above their territories did not fall under the definition of "outer space" by the 1967 Outer Space Treaty and was, therefore, a "natural resource". This would have led to a space ownership issue of practical importance, given the satellites present in this geostationary orbit, whose slot allocations were managed by the International Telecommunication Union (ITU). These claims were seen as violating the 1967 Outer Space Treaty and did not receive wider international support or recognition. Subsequently, they were largely abandoned. See also Outer space Space law Extraterrestrial real estate American Declaration of the Rights and Duties of Man References Space treaties Outer space International space agreements
Bogota Declaration
[ "Astronomy" ]
452
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
1,431,559
https://en.wikipedia.org/wiki/Core%E2%80%93mantle%20boundary
The core–mantle boundary (CMB) of Earth lies between the planet's silicate mantle and its liquid iron–nickel outer core, at a depth of below Earth's surface. The boundary is observed via the discontinuity in seismic wave velocities at that depth due to the differences between the acoustic impedances of the solid mantle and the molten outer core. P-wave velocities are much slower in the outer core than in the deep mantle while S-waves do not exist at all in the liquid portion of the core. Recent evidence suggests a distinct boundary layer directly above the CMB possibly made of a novel phase of the basic perovskite mineralogy of the deep mantle named post-perovskite. Seismic tomography studies have shown significant irregularities within the boundary zone and appear to be dominated by the African and Pacific Large low-shear-velocity provinces (LLSVP). The uppermost section of the outer core is thought to be about 500–1,800 K hotter than the overlying mantle, creating a thermal boundary layer. The boundary is thought to harbor topography, much like Earth's surface, that is supported by solid-state convection within the overlying mantle. Variations in the thermal properties of the CMB may affect how the outer core's iron-rich fluids flow, which are ultimately responsible for Earth's magnetic field. D″ region An approximately 200 km thick layer of the lower mantle directly above the CMB is referred to as the D″ region ("D double-prime" or "D prime prime") and is sometimes included in discussions regarding the core–mantle boundary zone. The D″ name originates from geophysicist Keith Bullen's designations for the Earth's layers. His system was to label each layer alphabetically, A through G, with the crust as 'A' and the inner core as 'G'. In his 1942 publication of his model, the entire lower mantle was the D layer. In 1949, Bullen found his 'D' layer to actually be two different layers. The upper part of the D layer, about 1,800 km thick, was renamed D′ (D prime) and the lower part (the bottom 200 km) was named D″. Later it was found that D″ is non-spherical. In 1993, Czechowski found that inhomogeneities in D″ form structures analogous to continents (i.e. core-continents). They move in time and determine some properties of hotspots and mantle convection. Later research supported this hypothesis. Seismic discontinuity A seismic discontinuity occurs within Earth's interior at a depth of about 2,900 km (1,800 mi) below the surface, where there is an abrupt change in the speed of seismic waves (generated by earthquakes or explosions) that travel through Earth. At this depth, primary seismic waves (P waves) decrease in velocity while secondary seismic waves (S waves) disappear completely. S waves shear material and cannot transmit through liquids, so it is thought that the unit above the discontinuity is solid, while the unit below is in a liquid or molten form. The discontinuity was discovered by Beno Gutenberg, a seismologist who made several important contributions to the study and understanding of the Earth's interior. The CMB has also been referred to as the Gutenberg discontinuity, the Oldham-Gutenberg discontinuity, or the Wiechert-Gutenberg discontinuity. In modern times, however, the term Gutenberg discontinuity or the "G" is most commonly used in reference to a decrease in seismic velocity with depth that is sometimes observed at about 100 km below the Earth's oceans. 
See also Core–mantle differentiation Ultra low velocity zone References External links Earth's Core–Mantle Boundary Has Core-Rigidity Zone Mineral phase change at the boundary Superplumes at the boundary About.com article on the name of D″ Geophysics Structure of the Earth
Core–mantle boundary
[ "Physics" ]
832
[ "Applied and interdisciplinary physics", "Geophysics" ]
1,432,127
https://en.wikipedia.org/wiki/Mode%20%28statistics%29
In statistics, the mode is the value that appears most often in a set of data values. If X is a discrete random variable, the mode is the value x at which the probability mass function takes its maximum value (i.e., x = argmax P(X = x)). In other words, it is the value that is most likely to be sampled. Like the statistical mean and median, the mode is a way of expressing, in a (usually) single number, important information about a random variable or a population. The numerical value of the mode is the same as that of the mean and median in a normal distribution, and it may be very different in highly skewed distributions. The mode is not necessarily unique in a given discrete distribution since the probability mass function may take the same maximum value at several points x1, x2, etc. The most extreme case occurs in uniform distributions, where all values occur equally frequently. A mode of a continuous probability distribution is often considered to be any value at which its probability density function has a local maximum. When the probability density function of a continuous distribution has multiple local maxima it is common to refer to all of the local maxima as modes of the distribution, so any peak is a mode. Such a continuous distribution is called multimodal (as opposed to unimodal). In symmetric unimodal distributions, such as the normal distribution, the mean (if defined), median and mode all coincide. For samples, if it is known that they are drawn from a symmetric unimodal distribution, the sample mean can be used as an estimate of the population mode. Mode of a sample The mode of a sample is the element that occurs most often in the collection. For example, the mode of the sample [1, 3, 6, 6, 6, 6, 7, 7, 12, 12, 17] is 6. Given the list of data [1, 1, 2, 4, 4] its mode is not unique. A dataset, in such a case, is said to be bimodal, while a set with more than two modes may be described as multimodal. For a sample from a continuous distribution, such as [0.935..., 1.211..., 2.430..., 3.668..., 3.874...], the concept is unusable in its raw form, since no two values will be exactly the same, so each value will occur precisely once. In order to estimate the mode of the underlying distribution, the usual practice is to discretize the data by assigning frequency values to intervals of equal distance, as for making a histogram, effectively replacing the values by the midpoints of the intervals they are assigned to. The mode is then the value where the histogram reaches its peak. For small or middle-sized samples the outcome of this procedure is sensitive to the choice of interval width if chosen too narrow or too wide; typically one should have a sizable fraction of the data concentrated in a relatively small number of intervals (5 to 10), while the fraction of the data falling outside these intervals is also sizable. An alternate approach is kernel density estimation, which essentially blurs point samples to produce a continuous estimate of the probability density function which can provide an estimate of the mode. The following MATLAB (or Octave) code example computes the mode of a sample: X = sort(x); % x is a column vector dataset indices = find(diff([X; realmax]) > 0); % indices where repeated values change [modeL,i] = max(diff([0; indices])); % longest persistence length of repeated values mode = X(indices(i)); The algorithm requires as a first step to sort the sample in ascending order. 
It then computes the discrete derivative of the sorted list and finds the indices where this derivative is positive. Next it computes the discrete derivative of this set of indices, locating the maximum of this derivative of indices, and finally evaluates the sorted sample at the point where that maximum occurs, which corresponds to the last member of the stretch of repeated values. Comparison of mean, median and mode Use Unlike mean and median, the concept of mode also makes sense for "nominal data" (i.e., not consisting of numerical values in the case of mean, or even of ordered values in the case of median). For example, taking a sample of Korean family names, one might find that "Kim" occurs more often than any other name. Then "Kim" would be the mode of the sample. In any voting system where a plurality determines victory, a single modal value determines the victor, while a multi-modal outcome would require some tie-breaking procedure to take place. Unlike median, the concept of mode makes sense for any random variable assuming values from a vector space, including the real numbers (a one-dimensional vector space) and the integers (which can be considered embedded in the reals). For example, a distribution of points in the plane will typically have a mean and a mode, but the concept of median does not apply. The median makes sense when there is a linear order on the possible values. Generalizations of the concept of median to higher-dimensional spaces are the geometric median and the centerpoint. Uniqueness and definedness For some probability distributions, the expected value may be infinite or undefined, but if defined, it is unique. The mean of a (finite) sample is always defined. The median is the value such that the fractions not exceeding it and not falling below it are each at least 1/2. It is not necessarily unique, but never infinite or totally undefined. For a data sample it is the "halfway" value when the list of values is ordered in increasing value, where usually for a list of even length the numerical average is taken of the two values closest to "halfway". Finally, as said before, the mode is not necessarily unique. Certain pathological distributions (for example, the Cantor distribution) have no defined mode at all. For a finite data sample, the mode is one (or more) of the values in the sample. Properties Assuming definedness, and for simplicity uniqueness, the following are some of the most interesting properties. All three measures have the following property: If the random variable (or each value from the sample) is subjected to the linear or affine transformation, which replaces X by aX + b, so are the mean, median and mode. Except for extremely small samples, the mode is insensitive to "outliers" (such as occasional, rare, false experimental readings). The median is also very robust in the presence of outliers, while the mean is rather sensitive. In continuous unimodal distributions the median often lies between the mean and the mode, about one third of the way going from mean to mode. In a formula, median ≈ (2 × mean + mode)/3. This rule, due to Karl Pearson, often applies to slightly non-symmetric distributions that resemble a normal distribution, but it is not always true and in general the three statistics can appear in any order. For unimodal distributions, the mode is within 3^(1/2) standard deviations of the mean, and the root mean square deviation about the mode is between the standard deviation and twice the standard deviation. 
Example for a skewed distribution An example of a skewed distribution is personal wealth: Few people are very rich, but among those some are extremely rich. However, many are rather poor. A well-known class of distributions that can be arbitrarily skewed is given by the log-normal distribution. It is obtained by transforming a random variable X having a normal distribution into the random variable Y = e^X. Then the logarithm of the random variable Y is normally distributed, hence the name. Taking the mean μ of X to be 0, the median of Y will be 1, independent of the standard deviation σ of X. This is so because X has a symmetric distribution, so its median is also 0. The transformation from X to Y is monotonic, and so we find the median e^0 = 1 for Y. When X has standard deviation σ = 0.25, the distribution of Y is weakly skewed. Using formulas for the log-normal distribution, we find: mean = e^(σ^2/2) ≈ 1.032, mode = e^(−σ^2) ≈ 0.939, median = 1. Indeed, the median is about one third on the way from mean to mode. When X has a larger standard deviation, σ = 1, the distribution of Y is strongly skewed. Now mean = e^(1/2) ≈ 1.649, mode = e^(−1) ≈ 0.368, median = 1. Here, Pearson's rule of thumb fails. Van Zwet condition Van Zwet derived an inequality which provides sufficient conditions for this inequality to hold. The inequality Mode ≤ Median ≤ Mean holds if F(Median − x) + F(Median + x) ≥ 1 for all x, where F(·) is the cumulative distribution function of the distribution. Unimodal distributions It can be shown for a unimodal distribution that the median and the mean lie within (3/5)^(1/2) ≈ 0.7746 standard deviations of each other. In symbols, |Median − Mean| ≤ (3/5)^(1/2) σ, where |·| is the absolute value. A similar relation holds between the median and the mode: they lie within 3^(1/2) ≈ 1.732 standard deviations of each other: |Mode − Median| ≤ 3^(1/2) σ. History The term mode originates with Karl Pearson in 1895. Pearson uses the term mode interchangeably with maximum-ordinate. In a footnote he says, "I have found it convenient to use the term mode for the abscissa corresponding to the ordinate of maximum frequency." See also Arg max Central tendency Descriptive statistics Moment (mathematics) Summary statistics Unimodal function References External links A Guide to Understanding & Calculating the Mode Mean, Median and Mode short beginner video from Khan Academy Means Summary statistics Articles with example MATLAB/Octave code
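The log-normal figures quoted in the skewness example are easy to verify numerically. The following Python sketch recomputes the mean, median, and mode of Y = e^X for the two standard deviations used above, together with Pearson's rule-of-thumb estimate of the median:

# Verify the log-normal mean/median/mode values used in the skewness example.
import math

for sigma in (0.25, 1.0):
    mean = math.exp(sigma**2 / 2)    # mean of Y = e^X with X ~ N(0, sigma^2)
    median = 1.0                     # e^0, since the median of X is 0
    mode = math.exp(-sigma**2)
    pearson = (2 * mean + mode) / 3  # Pearson's estimate of the median
    print(f"sigma={sigma}: mean={mean:.3f} median={median} "
          f"mode={mode:.3f} Pearson-estimate={pearson:.3f}")

# For sigma = 0.25 the estimate (~1.001) is close to the true median 1;
# for sigma = 1 it is ~1.222, illustrating that the rule of thumb fails.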
Mode (statistics)
[ "Physics", "Mathematics" ]
1,994
[ "Means", "Mathematical analysis", "Point (geometry)", "Geometric centers", "Symmetry" ]
1,433,747
https://en.wikipedia.org/wiki/Harnack%27s%20principle
In the mathematical field of partial differential equations, Harnack's principle or Harnack's theorem is a corollary of Harnack's inequality which deals with the convergence of sequences of harmonic functions.

Given a sequence of harmonic functions u1, u2, ... on an open connected subset G of the Euclidean space R^n, which are pointwise monotonically nondecreasing in the sense that u1(x) ≤ u2(x) ≤ ⋯ for every point x of G, the limit u(x) = lim n→∞ un(x) automatically exists in the extended real number line for every x. Harnack's theorem says that the limit either is infinite at every point of G or it is finite at every point of G. In the latter case, the convergence is uniform on compact sets and the limit u is a harmonic function on G.

The theorem is a corollary of Harnack's inequality. If un(y) is a Cauchy sequence for any particular value of y, then the Harnack inequality applied to the harmonic function um − un implies, for an arbitrary compact set D containing y, that the supremum of |um − un| over D is arbitrarily small for sufficiently large m and n. This is exactly the definition of uniform convergence on compact sets. In words, the Harnack inequality is a tool which directly propagates the Cauchy property of a sequence of harmonic functions at a single point to the Cauchy property at all points.

Having established uniform convergence on compact sets, the harmonicity of the limit is an immediate corollary of the fact that the mean value property (automatically preserved by uniform convergence) fully characterizes harmonic functions among continuous functions.

The proof of uniform convergence on compact sets holds equally well for any linear second-order elliptic partial differential equation, provided that it is linear so that um − un solves the same equation. The only difference is that the more general Harnack inequality holding for solutions of second-order elliptic PDE must be used, rather than that only for harmonic functions. Having established uniform convergence on compact sets, the mean value property is not available in this more general setting, and so the proof of convergence to a new solution must instead make use of other tools, such as the Schauder estimates.

References

Sources

External links

Harmonic functions
Theorems in complex analysis
Mathematical principles
Harnack's principle
[ "Mathematics" ]
431
[ "Mathematical principles", "Theorems in mathematical analysis", "Theorems in complex analysis" ]
1,434,249
https://en.wikipedia.org/wiki/Planetary%20migration
Planetary migration occurs when a planet or other body in orbit around a star interacts with a disk of gas or planetesimals, resulting in the alteration of its orbital parameters, especially its semi-major axis. Planetary migration is the most likely explanation for hot Jupiters (exoplanets with Jovian masses but orbits of only a few days). The generally accepted theory of planet formation from a protoplanetary disk predicts that such planets cannot form so close to their stars, as there is insufficient mass at such small radii and the temperature is too high to allow the formation of rocky or icy planetesimals.

It has also become clear that terrestrial-mass planets may be subject to rapid inward migration if they form while the gas disk is still present. This may affect the formation of the cores of the giant planets (which have masses of the order of 10 to 1000 Earth masses), if those planets form via the core-accretion mechanism.

Types of disk

Gas disk
Observations suggest that gas in protoplanetary disks orbiting young stars has a lifetime of a few to several million years. If planets with masses of around an Earth mass or greater form while the gas is still present, the planets can exchange angular momentum with the surrounding gas in the protoplanetary disk so that their orbits change gradually. Although the sense of migration is typically inwards in locally isothermal disks, outward migration may occur in disks that possess entropy gradients.

Planetesimal disk
During the late phase of planetary system formation, massive protoplanets and planetesimals gravitationally interact in a chaotic manner, causing many planetesimals to be thrown into new orbits. This results in angular-momentum exchange between the planets and the planetesimals, and leads to migration (either inward or outward). Outward migration of Neptune is believed to be responsible for the resonant capture of Pluto and other Plutinos into the 3:2 resonance with Neptune.

Types of migration
There are many different mechanisms by which planets' orbits can migrate, which are described below as disk migration (Type I migration, Type II migration, or Type III migration), tidal migration, planetesimal-driven migration, gravitational scattering, and Kozai cycles and tidal friction. This list of types is not exhaustive or definitive: depending on what is most convenient for any one type of study, different researchers will distinguish mechanisms in somewhat different ways.

Classification of any one mechanism is mainly based on the circumstances in the disk that enable the mechanism to efficiently transfer energy and/or angular momentum to and from planetary orbits. As the loss or relocation of material in the disk changes the circumstances, one migration mechanism will give way to another mechanism, or perhaps none. If there is no follow-on mechanism, migration (largely) stops and the stellar system becomes (mostly) stable.

Disk migration
Disk migration arises from the gravitational force exerted by a sufficiently massive body embedded in a disk on the surrounding disk's gas, which perturbs its density distribution. By the reaction principle of classical mechanics, the gas exerts an equal and opposite gravitational force on the body, which can also be expressed as a torque. This torque alters the angular momentum of the planet's orbit, resulting in a variation of the semi-major axis and other orbital elements.
An increase over time of the semi-major axis leads to outward migration, i.e., away from the star, whereas the opposite behavior leads to inward migration. Three sub-types of disk migration are distinguished as Types I, II, and III. The numbering is not intended to suggest a sequence or stages.

Type I migration
Small planets undergo Type I disk migration driven by torques arising from Lindblad and co-rotation resonances. Lindblad resonances excite spiral density waves in the surrounding gas, both interior and exterior of the planet's orbit. In most cases, the outer spiral wave exerts a greater torque than does the inner wave, causing the planet to lose angular momentum, and hence migrate toward the star. The migration rate due to these torques is proportional to the mass of the planet and to the local gas density, and results in a migration timescale that tends to be short relative to the million-year lifetime of the gaseous disk. Additional co-rotation torques are also exerted by gas orbiting with a period similar to that of the planet. In a reference frame attached to the planet, this gas follows horseshoe orbits, reversing direction when it approaches the planet from ahead or from behind. The gas reversing course ahead of the planet originates from a larger semi-major axis and may be cooler and denser than the gas reversing course behind the planet. This may result in a region of excess density ahead of the planet and of lesser density behind the planet, causing the planet to gain angular momentum.

The planet mass for which migration can be approximated to Type I depends on the local gas pressure scale height and, to a lesser extent, the kinematic viscosity of the gas. In warm and viscous disks, Type I migration may apply to larger mass planets. In locally isothermal disks and far from steep density and temperature gradients, co-rotation torques are generally overpowered by the Lindblad torques. Regions of outward migration may exist for some planetary mass ranges and disk conditions in both locally isothermal and non-isothermal disks. The locations of these regions may vary during the evolution of the disk, and in the locally isothermal case are restricted to regions with large density and/or temperature radial gradients over several pressure scale-heights. Type I migration in a locally isothermal disk was shown to be compatible with the formation and long-term evolution of some of the observed Kepler planets. The rapid accretion of solid material by the planet may also produce a "heating torque" that causes the planet to gain angular momentum.

Type II migration
A planet massive enough to open a gap in a gaseous disk undergoes a regime referred to as Type II disk migration. When the mass of a perturbing planet is large enough, the tidal torque it exerts on the gas transfers angular momentum to the gas exterior of the planet's orbit, and does the opposite interior to the planet, thereby repelling gas from around the orbit. In a Type I regime, viscous torques can efficiently counter this effect by resupplying gas and smoothing out sharp density gradients. But when the torques become strong enough to overcome the viscous torques in the vicinity of the planet's orbit, a lower-density annular gap is created. The depth of this gap depends on the temperature and viscosity of the gas and on the planet mass. In the simple scenario in which no gas crosses the gap, the migration of the planet follows the viscous evolution of the disk's gas.
In the inner disk, the planet spirals inward on the viscous timescale, following the accretion of gas onto the star. In this case, the migration rate is typically slower than would be the migration of the planet in the Type I regime. In the outer disk, however, migration can be outward if the disk is viscously expanding. A Jupiter-mass planet in a typical protoplanetary disk is expected to undergo migration at approximately the Type II rate, with the transition from Type I to Type II occurring at roughly the mass of Saturn, as a partial gap is opened. Type II migration is one explanation for the formation of hot Jupiters.

In more realistic situations, unless extreme thermal and viscosity conditions occur in a disk, there is an ongoing flux of gas through the gap. As a consequence of this mass flux, torques acting on a planet can be susceptible to local disk properties, akin to torques at work during Type I migration. Therefore, in viscous disks, Type II migration can be typically described as a modified form of Type I migration, in a unified formalism. The transition between Type I and Type II migration is generally smooth, but deviations from a smooth transition have also been found. In some situations, when planets induce eccentric perturbation in the surrounding disk's gas, Type II migration may slow down, stall, or reverse. From a physical viewpoint, Type I and Type II migration are driven by the same type of torques (at Lindblad and co-rotation resonances). In fact, they can be interpreted and modeled as a single regime of migration, that of Type I appropriately modified by the perturbed gas surface density of the disk.

Type III disk migration
Type III disk migration applies to fairly extreme disk/planet cases and is characterized by extremely short migration timescales. Although sometimes referred to as "runaway migration", the migration rate does not necessarily increase over time. Type III migration is driven by the co-orbital torques from gas trapped in the planet's libration regions and from an initial, relatively fast, planetary radial motion. The planet's radial motion displaces gas in its co-orbital region, creating a density asymmetry between the gas on the leading and the trailing side of the planet. Type III migration applies to disks that are relatively massive and to planets that can only open partial gaps in the gas disk. Previous interpretations linked Type III migration to gas streaming across the orbit of the planet in the opposite direction as the planet's radial motion, creating a positive feedback loop. Fast outward migration may also occur temporarily, delivering giant planets to distant orbits, if later Type II migration is ineffective at driving the planets back.

Gravitational scattering
Another possible mechanism that may move planets over large orbital radii is gravitational scattering by larger planets or, in a protoplanetary disk, gravitational scattering by over-densities in the fluid of the disk. In the case of the Solar System, Uranus and Neptune may have been gravitationally scattered onto larger orbits by close encounters with Jupiter and/or Saturn. Systems of exoplanets can undergo similar dynamical instabilities following the dissipation of the gas disk that alter their orbits and in some cases result in planets being ejected or colliding with the star. Planets scattered gravitationally can end on highly eccentric orbits with perihelia close to the star, enabling their orbits to be altered by the tides they raise on the star.
The eccentricities and inclinations of these planets are also excited during these encounters, providing one possible explanation for the observed eccentricity distribution of the closely orbiting exoplanets. The resulting systems are often near the limits of stability. As in the Nice model, systems of exoplanets with an outer disk of planetesimals can also undergo dynamical instabilities following resonance crossings during planetesimal-driven migration. The eccentricities and inclinations of the planets on distant orbits can be damped by dynamical friction with the planetesimals, with the final values depending on the relative masses of the disk and the planets that had gravitational encounters.

Tidal migration
Tides between the star and planet modify the semi-major axis and orbital eccentricity of the planet. If the planet is orbiting very near to its star, the tide of the planet raises a bulge on the star. If the star's rotational period is longer than the planet's orbital period, the location of the bulge lags behind a line between the planet and the center of the star, creating a torque between the planet and the star. As a result, the planet loses angular momentum and its semi-major axis decreases with time.

If the planet is in an eccentric orbit, the strength of the tide is stronger when it is near perihelion. The planet is slowed the most when near perihelion, causing its aphelion to decrease faster than its perihelion, reducing its eccentricity. Unlike disk migration – which lasts a few million years until the gas dissipates – tidal migration continues for billions of years. Tidal evolution of close-in planets produces semi-major axes typically half as large as they were at the time that the gas nebula cleared.

Kozai cycles and tidal friction
A planetary orbit that is inclined relative to the plane of a binary star can shrink due to a combination of Kozai cycles and tidal friction. Interactions with the more distant star cause the planet's orbit to undergo an exchange of eccentricity and inclination due to the Kozai mechanism. This process can increase the planet's eccentricity and lower its perihelion enough to create strong tides between the planet and the star. When it is near the star the planet loses angular momentum, causing its orbit to shrink. The planet's eccentricity and inclination cycle repeatedly, slowing the evolution of the planet's semi-major axis. If the planet's orbit shrinks enough to remove it from the influence of the distant star, the Kozai cycles end. Its orbit will then shrink more rapidly as it is tidally circularized. The orbit of the planet can also become retrograde due to this process. Kozai cycles can also occur in a system with two planets that have differing inclinations due to gravitational scattering between planets, and can result in planets with retrograde orbits.

Planetesimal-driven migration
The orbit of a planet can change due to gravitational encounters with a large number of planetesimals. Planetesimal-driven migration is the result of the accumulation of the transfers of angular momentum during encounters between the planetesimals and a planet. For individual encounters, the amount of angular momentum exchanged and the direction of the change in the planet's orbit depend on the geometry of the encounter. For a large number of encounters, the direction of the planet's migration depends on the average angular momentum of the planetesimals relative to the planet.
If it is higher, for example with a disk outside the planet's orbit, the planet migrates outward; if it is lower, the planet migrates inward. The migration of a planet beginning with a similar angular momentum as the disk depends on potential sinks and sources of the planetesimals. For a single-planet system, planetesimals can only be lost (a sink) due to their ejection, which would cause the planet to migrate inward. In multiple-planet systems the other planets can act as sinks or sources. Planetesimals can be removed from the planet's influence after encountering an adjacent planet or transferred to that planet's influence. These interactions cause the planets' orbits to diverge, as the outer planet tends to remove planetesimals with larger angular momentum from the inner planet's influence or add planetesimals with lower angular momentum, and vice versa. The planet's resonances, where the eccentricities of planetesimals are pumped up until they intersect with the planet, also act as a source. Finally, the planet's migration acts as both a sink and a source of new planetesimals, creating a positive feedback that tends to continue its migration in the original direction.

Planetesimal-driven migration can be damped if planetesimals are lost to various sinks faster than new ones are encountered due to its sources. It may be sustained if the new planetesimals enter its influence faster than they are lost. If the migration is sustained by the planet's own motion alone, it is called runaway migration. If it is instead sustained by the loss of planetesimals to another planet's influence, it is called forced migration. For a single planet orbiting in a planetesimal disk, the shorter timescales of the encounters with planetesimals on shorter-period orbits result in more frequent encounters with the planetesimals with less angular momentum, and hence in the inward migration of the planet. Planetesimal-driven migration in a gas disk, however, can be outward for a particular range of planetesimal sizes, because of the removal of shorter-period planetesimals due to gas drag.

Resonance capture
The migration of planets can lead to planets being captured in resonances and chains of resonances if their orbits converge. The orbits of the planets can converge if the migration of the inner planet is halted at the inner edge of the gas disk, resulting in a system of tightly orbiting inner planets; or if migration is halted in a convergence zone where the torques driving Type I migration cancel, for example near the ice line, in a chain of more distant planets. Gravitational encounters can also lead to the capture of planets with sizable eccentricities in resonances. In the grand tack hypothesis the migration of Jupiter is halted and reversed when it captured Saturn in an outer resonance. The halting of Jupiter's and Saturn's migration and the capture of Uranus and Neptune in further resonances may have prevented the formation of a compact system of super-Earths similar to many of those found by Kepler. The outward migration of planets can also result in the capture of planetesimals in resonance with the outer planet; for example, the resonant trans-Neptunian objects in the Kuiper belt. Although planetary migration is expected to lead to systems with chains of resonant planets, most exoplanets are not in resonances. The resonance chains can be disrupted by gravitational instabilities once the gas disk dissipates. Interactions with leftover planetesimals can break the resonances of low-mass planets, leaving them in orbits slightly outside the resonance.
Tidal interactions with the star, turbulence in the disk, and interactions with the wake of another planet could also disrupt resonances. Resonance capture might be avoided for planets smaller than Neptune with eccentric orbits.

In the Solar System
The migration of the outer planets is a scenario proposed to explain some of the orbital properties of the bodies in the Solar System's outermost regions. Beyond Neptune, the Solar System continues into the Kuiper belt, the scattered disc, and the Oort cloud, three sparse populations of small icy bodies thought to be the points of origin for most observed comets. At their distance from the Sun, accretion was too slow to allow planets to form before the solar nebula dispersed, because the initial disc lacked enough mass density to consolidate into a planet. The Kuiper belt lies between 30 and 55 AU from the Sun, while the farther scattered disc extends to over 100 AU, and the distant Oort cloud begins at about 50,000 AU.

According to this scenario, the Kuiper belt was originally much denser and closer to the Sun: it contained millions of planetesimals and had an outer edge at approximately 30 AU, the present distance of Neptune. After the formation of the Solar System, the orbits of all the giant planets continued to change slowly, influenced by their interaction with the large number of remaining planetesimals. After 500–600 million years (about 4 billion years ago) Jupiter and Saturn divergently crossed the 2:1 orbital resonance, in which Saturn orbited the Sun once for every two Jupiter orbits. This resonance crossing increased the eccentricities of Jupiter and Saturn and destabilized the orbits of Uranus and Neptune. Encounters between the planets followed, causing Neptune to surge past Uranus and plough into the dense planetesimal belt. The planets scattered the majority of the small icy bodies inwards, while moving outwards themselves. These planetesimals then scattered off the next planet they encountered in a similar manner, moving the planets' orbits outwards while they moved inwards. This process continued until the planetesimals interacted with Jupiter, whose immense gravity sent them into highly elliptical orbits or even ejected them outright from the Solar System. This caused Jupiter to move slightly inward. This scattering scenario explains the trans-Neptunian populations' present low mass. In contrast to the outer planets, the inner planets are not believed to have migrated significantly over the age of the Solar System, because their orbits have remained stable following the period of giant impacts.

See also
Nebular hypothesis
Rogue planet
Tidally detached exomoon

Notes

References
Goldreich, P., and Tremaine, S. 1979, Astrophysical Journal, 233, 857
Lin, D. N. C., and Papaloizou, J. 1979, Monthly Notices of the Royal Astronomical Society, 186, 799
Ward, W. R. 1997, Icarus, 126, 261
Tanaka, H., Takeuchi, T., and Ward, W. R. 2002, Astrophysical Journal, 565, 1257

Celestial mechanics
Solar System dynamic theories
Planetary migration
[ "Physics" ]
4,161
[ "Celestial mechanics", "Classical mechanics", "Astrophysics" ]
37,139,183
https://en.wikipedia.org/wiki/Pitting%20resistance%20equivalent%20number
Pitting resistance equivalent number (PREN) is a predictive measurement of a stainless steel's resistance to localized pitting corrosion based on its chemical composition. In general: the higher the PREN value, the more resistant the stainless steel is to localized pitting corrosion by chloride. PREN is frequently specified when stainless steels will be exposed to seawater or other high-chloride solutions. In some instances stainless steels with PREN values > 32 may provide useful resistance to pitting corrosion in seawater, but this depends on conditions being optimal. However, crevice corrosion is also a significant possibility, and a PREN > 40 is typically specified for seawater service. These alloys need to be manufactured and heat treated correctly to be seawater corrosion resistant to the expected level. PREN alone is not an indicator of corrosion resistance. The value should be calculated for each heat to ensure compliance with minimum requirements, because the chemistry varies within the specified composition limits.

PREN formulas (w/w)
There are several PREN formulas. They commonly range from:

PREN = %Cr + 3.3 × %Mo + 16 × %N

to:

PREN = %Cr + 3.3 × %Mo + 30 × %N.

There are a few stainless steels which add tungsten (W); for those the following formula is used:

PREN = %Cr + 3.3 × (%Mo + 0.5 × %W) + 16 × %N

All % values of elements must be expressed by mass, or weight (wt. %), and not by volume. Tolerance on element measurements can be ignored, as the PREN value is indicative only. (A short computational sketch of these formulas is given below.)

Pitting resistance measurement
Exact pitting test procedures are specified in the ASTM G48 standard.

References

Corrosion
Stainless steel
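The formulas above can be evaluated directly in a few lines. The function name, its default nitrogen factor, and the example composition (roughly that of a super duplex grade) are illustrative assumptions, not part of any standard:

```python
def pren(cr, mo, n, w=0.0, n_factor=16):
    """PREN from composition in wt. %: %Cr + 3.3*(%Mo + 0.5*%W) + k*%N.

    With w=0 this reduces to the common form %Cr + 3.3*%Mo + k*%N;
    the nitrogen factor k is typically 16 but can be as high as 30.
    """
    return cr + 3.3 * (mo + 0.5 * w) + n_factor * n

# Approximate super duplex composition, for illustration only:
print(round(pren(cr=25.0, mo=3.5, n=0.27), 1))  # 40.9, above the PREN 40
                                                # threshold cited for seawater
```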
Pitting resistance equivalent number
[ "Chemistry", "Materials_science" ]
366
[ "Metallurgy", "Electrochemistry", "Materials degradation", "Corrosion" ]
37,139,944
https://en.wikipedia.org/wiki/Application%20performance%20engineering
Application performance engineering is a method to develop and test application performance in various settings, including mobile computing, the cloud, and conventional information technology (IT).

Methodology
According to the American National Institute of Standards and Technology, nearly four out of every five dollars spent on the total cost of ownership of an application is directly attributable to finding and fixing issues post-deployment. A full one-third of this cost could be avoided with better software testing. Application performance engineering attempts to test software before it is published. While practices vary among organizations, the method attempts to emulate the real-world conditions that software in development will confront, including network deployment and access by mobile devices. Techniques include network virtualization.

See also
Network virtualization
Performance engineering
Service virtualization
Software performance testing

References

Further reading
Practical Performance Analyst - Performance Engineering Community & Body Of Knowledge
"Application performance engineering," Computerworld. January 28, 2011.
The Mandate for Application Performance Engineering by Jim Metzler.
Application Performance Engineering: A Lifecycle Approach to Achieving Confidence in Application Performance
Application Performance Engineering Hub Blog
Performance Engineering Services
The 2011 Application & Service Delivery Handbook

Application software
Software testing
Application performance engineering
[ "Engineering" ]
229
[ "Software engineering", "Software testing" ]
37,140,999
https://en.wikipedia.org/wiki/Copper%20hydride
Copper hydride is an inorganic compound with the chemical formula CuHn where n ~ 0.95. It is a red solid, rarely isolated as a pure composition, that decomposes to the elements. Copper hydride is mainly produced as a reducing agent in organic synthesis and as a precursor to various catalysts.

History
In 1844, the French chemist Adolphe Wurtz synthesised copper hydride for the first time. He reduced an aqueous solution of copper(II) sulfate with hypophosphorous acid (H3PO2). Copper hydride has the distinction of being the first metal hydride discovered. In 2011, Panitat Hasin and Yiying Wu were the first to synthesise a metal hydride (copper hydride) using the technique of sonication. In 2013, it was established by Donnerer et al. that, at least up to fifty gigapascals, copper hydride cannot be synthesised by pressure alone. However, they were successful in synthesising several copper–hydrogen alloys under pressure.

Chemical properties

Structure
In copper hydride, the elements adopt the wurtzite crystal structure (polymeric), being connected by covalent bonds. Particles of CuH consist of a core of CuH with a shell of water, and this shell may be largely replaced by ethanol. This offers the possibility of modifying the properties of CuH produced by aqueous routes. While all methods for the synthesis of CuH result in the same bulk product, the synthetic path taken engenders differing surface properties. The different behaviors of CuH obtained by aqueous and nonaqueous routes can be ascribed to a combination of very different particle size and dissimilar surface termination, namely, bonded hydroxyls for the aqueous routes and a coordinated donor for the nonaqueous routes.

Chemical reactions
CuH generally behaves as a source of H−. For instance, Wurtz reported the double displacement reaction of CuH with hydrochloric acid:

CuH + HCl → CuCl + H2

When not kept cooled, copper hydride decomposes to produce hydrogen gas and a mixture containing elemental copper:

2 CuH → xCu•(2−x)CuH + ½x H2  (0 < x < 2)

Solid copper hydride is the irreversible autopolymerisation product of the molecular form, and the molecular form cannot be isolated in concentration.

Production
Copper does not react with hydrogen even on heating, thus copper hydrides are made indirectly from copper(I) and copper(II) precursors. Examples include the reduction of copper(II) sulfate with sodium hypophosphite in the presence of sulfuric acid, or more simply with just hypophosphorous acid. Other reducing agents, including classical aluminium hydrides, can be used.

4 Cu2+ + 6 H3PO2 + 6 H2O → 4 CuH + 6 H3PO3 + 8 H+

The reactions produce a red-colored precipitate of CuH, which is generally impure and slowly decomposes to liberate hydrogen, even at 0 °C:

2 CuH → 2 Cu + H2

This slow decomposition also takes place under water; however, there are reports of the material becoming pyrophoric if dried. A new synthesis method was published in 2017 by Lousada et al. In this synthesis, high-purity CuH nanoparticles were obtained from basic copper carbonate, CuCO3·Cu(OH)2. This method is faster and has a higher chemical yield than the copper sulfate based synthesis, and it produces nanoparticles of CuH with higher purity and a smaller size distribution. The obtained CuH can easily be converted to conducting thin films of Cu. These films are obtained by spraying the CuH nanoparticles in their synthesis medium onto an insulating support. After drying, conducting Cu films protected by a layer of mixed copper oxides are spontaneously formed.
Reductive sonication
Copper hydride is also produced by reductive sonication. In this process, the hexaaquacopper(II) ion and hydrogen radicals (H•) react to produce copper hydride and oxonium according to the equation:

[Cu(H2O)6]2+ + 3 H• → 1/n (CuH)n + 2 [H3O]+ + 4 H2O

Hydrogen(•) is obtained in situ from the homolytic sonication of water. Reductive sonication produces molecular copper hydride as an intermediate.

Applications in organic synthesis
Phosphine- and NHC-copper hydride species have been developed as reagents in organic synthesis, albeit of limited use. Most widely used is [(Ph3P)CuH]6 (Stryker's reagent) for the reduction of α,β-unsaturated carbonyl compounds. H2 (at least 80 psi) and hydrosilanes can be used as the terminal reductant, allowing a catalytic amount of [(Ph3P)CuH]6 to be used for conjugate reduction reactions. Chiral phosphine–copper complexes catalyze hydrosilylation of ketones and esters with low enantioselectivities. An enantioselective (80 to 92% ee) reduction of prochiral α,β-unsaturated esters uses Tol-BINAP complexes of copper in the presence of PMHS as the reductant. Subsequently, conditions have been developed for the CuH-catalyzed hydrosilylation of ketones and imines proceeding with excellent levels of chemo- and enantioselectivity. The reactivity of LnCuH species with weakly activated (e.g. styrenes, dienes) and unactivated alkenes (e.g. α-olefins) and alkynes has been recognized and has served as the basis for several copper-catalyzed formal hydrofunctionalization reactions.

"Hydridocopper"
The diatomic species CuH is a gas that has attracted the attention of spectroscopists. It polymerises upon being condensed. A well-known oligomer is octahedro-hexacuprane(6), occurring in Stryker's reagent. Hydridocopper has acidic behavior for the same reason as normal copper hydride. However, it does not form stable aqueous solutions, due in part to its autopolymerisation and its tendency to be oxidised by water. Copper hydride reversibly precipitates from pyridine solution as an amorphous solid. However, repeated dissolution affords the regular crystalline form, which is insoluble. Under standard conditions, molecular copper hydride autopolymerises to form the crystalline form, including under aqueous conditions; hence the aqueous production method devised by Wurtz.

Production
Molecular copper hydride can be formed by reducing copper iodide with lithium aluminium hydride in ether and pyridine:

4 CuI + LiAlH4 → 4 CuH + LiI + AlI3

This was discovered by E. Wiberg and W. Henle in 1952. The solution of this CuH in the pyridine is typically dark red to dark orange. A precipitate is formed if ether is added to this solution. This will redissolve in pyridine. Impurities of the reaction products remain in the product. In this study, it was found that the solidified diatomic substance is distinct from the wurtzite structure. The wurtzite substance was insoluble and was decomposed by lithium iodide, but not the solidified diatomic species. Moreover, the wurtzite substance's decomposition is strongly base catalysed, whereas the solidified diatomic species is not strongly affected at all. Dilts distinguishes between the two copper hydrides as the 'insoluble' and 'soluble' copper hydrides. The soluble hydride is susceptible to pyrolysis under vacuum, which proceeds to completion under 100 °C. Amorphous copper hydride is also produced by anhydrous reduction. In this process copper(I) iodide and tetrahydroaluminate react to produce molecular copper hydride and triiodoaluminium adducts.
The molecular copper hydride is precipitated into amorphous copper hydride with the addition of diethyl ether. Amorphous copper hydride is converted into the Wurtz phase by annealing, accompanied by some decomposition.

History
Hydridocopper was discovered in the vibration-rotation emission of a hollow-cathode lamp in 2000 by Bernath, who detected it at the University of Waterloo. It was first detected as a contaminant while attempting to generate NeH+ using the hollow-cathode lamp. Molecular copper hydride has the distinction of being the first metal hydride to be detected in this way. The (1,0), (2,0) and (2,1) vibrational bands were observed, along with line splitting due to the presence of two copper isotopes, 63Cu and 65Cu. The A1Σ+–X1Σ+ absorption lines from CuH have been claimed to have been observed in sunspots and in the star 19 Piscium.

In vapour experiments, it was found that copper hydride is produced from the elements upon exposure to 310 nanometre radiation:

Cu + H2 ↔ CuH + H•

However, this proved to be unviable as a production method, as the reaction is difficult to control. The activation barrier for the reverse reaction is virtually non-existent, which allows it to readily proceed even at 20 kelvin.

Other copper hydrides
A binary dihydride (CuH2) also exists, in the form of an unstable reactive intermediate in the reduction of copper hydride by atomic hydrogen.

References

Copper(I) compounds
Metal hydrides
Wurtzite structure type
Copper hydride
[ "Chemistry" ]
2,112
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
37,143,605
https://en.wikipedia.org/wiki/IGR%20J17091-3624
IGR J17091-3624 (also IGR J17091) is a stellar mass black hole 28,000 light-years away. It lies in the constellation Scorpius in the Milky Way galaxy.

Discovery
IGR J17091 was discovered by ESA's INTEGRAL satellite in April 2003.

Description
IGR J17091 is a stellar mass black hole with a mass between 3 and 10 solar masses. It is a binary system in which a star orbits the black hole. Its small size may make it a candidate for the smallest black hole discovered. However, as of 2017 its mass was described as "unknown".

Observations by the Chandra X-ray Observatory in 2011 discovered that it produces the fastest winds ever recorded from an accretion disk, at about 32 million km/h (20 million mph), roughly 3% of the speed of light (a short arithmetic check is given below). This is 10 times faster than the next-highest-measured wind speed. According to Ashley King from the University of Michigan, "Contrary to the popular perception of black holes pulling in all of the material that gets close, we estimate up to 95 percent of the matter in the disk around IGR J17091 is expelled by the wind."

IGR J17091 also exhibits peculiar X-ray variability patterns or "heartbeats", which are small, quasi-periodic outbursts repeated over a 5- to 70-second timescale. Similar variability has only been observed in the black hole GRS 1915+105; however, IGR J17091's outbursts are 20 times fainter.

See also
List of black holes
Micro black hole
Hawking radiation

References

External links
NASA's Chandra Finds Fastest Winds From Stellar-Mass Black Hole - Chandra.Harvard.edu
NASA's RXTE Detects 'Heartbeat' of Smallest Black Hole Candidate - NASA.gov, with animation

Scorpius
Stellar black holes
Astronomical X-ray sources
Binary stars
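The quoted wind-speed figures can be cross-checked with a few lines of arithmetic; the constants are standard conversion factors and the script itself is purely illustrative:

```python
MPH_TO_MS = 0.44704            # metres per second per mile per hour
C = 2.998e8                    # speed of light in m/s

wind = 20e6 * MPH_TO_MS        # 20 million mph in m/s (~8.94e6 m/s)
print(f"{wind * 3.6 / 1e6:.1f} million km/h")   # ~32.2 million km/h
print(f"{wind / C:.1%} of the speed of light")  # ~3.0%
```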
IGR J17091-3624
[ "Physics", "Astronomy" ]
407
[ "Black holes", "Stellar black holes", "Unsolved problems in physics", "Constellations", "Scorpius", "Astronomical X-ray sources", "Astronomical objects" ]
37,147,646
https://en.wikipedia.org/wiki/Power%20law%20scheme
The power law scheme was first used by Suhas Patankar (1980). It is used in computational fluid dynamics (CFD) to obtain approximate solutions, and it gives a more accurate approximation to the one-dimensional exact solution than several other CFD schemes. This scheme is based on the analytical solution of the convection–diffusion equation. This scheme is also very effective in removing false diffusion errors.

Working
The power-law scheme interpolates the face value of a variable, φ, using the exact solution to the one-dimensional convection–diffusion equation

d(ρuφ)/dx = d/dx (Γ dφ/dx).

In the above equation Γ is the diffusion coefficient, and both the density ρ and the velocity u remain constant across the interval of integration. Integrating the equation with the boundary conditions

φ = φ0 at x = 0 and φ = φL at x = L,

the variation of the face value with distance x is given by the expression

(φ(x) − φ0)/(φL − φ0) = [exp(Pe·x/L) − 1]/[exp(Pe) − 1],

where Pe is the Peclet number given by

Pe = ρuL/Γ.

The Peclet number is defined to be the ratio of the rate of convection of a physical quantity by the flow to the rate of diffusion of the same quantity driven by an appropriate gradient.

The variation between φ and x is depicted in the figure for a range of values of the Peclet number. It shows that for large Pe, the value of φ at x = L/2 is approximately equal to the value at the upwind boundary, which is the assumption made by the upwind differencing scheme. In this scheme diffusion is set to zero when the cell Pe exceeds 10. This implies that when the flow is dominated by convection, interpolation can be completed by simply letting the face value of a variable be set equal to its upwind or upstream value. When Pe = 0 (no flow, or pure diffusion), the figure shows that the solution φ may be interpolated using a simple linear average between the values at x = 0 and x = L. When the Peclet number has an intermediate value, the interpolated value for φ at x = L/2 must be derived by applying the power law equivalent. The simple average convection coefficient formulation can be replaced with a formula incorporating the power law relationship, in Patankar's notation

aW = Dw · max(0, (1 − 0.1|Pew|)^5) + max(Fw, 0),
aE = De · max(0, (1 − 0.1|Pee|)^5) + max(−Fe, 0),

where the subscripts w and e denote the faces toward the left (west) node W and right (east) node E respectively, D is the diffusion conductance and F the convective mass flux at the corresponding face. The central coefficient is given by

aP = aW + aE + (Fe − Fw).

Final coefficient form of the discrete equation:

aP φP = aW φW + aE φE.

(A short computational sketch of this scheme is given below.)

References

Computational fluid dynamics
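The scheme can be sketched compactly. The function names follow the notation above but are otherwise illustrative, not taken from any CFD library; the exact-profile function handles the Pe = 0 branch explicitly, and the coefficient routine assumes positive diffusion conductances:

```python
import numpy as np

def exact_profile(x, L, pe, phi0, phiL):
    """Exact 1-D convection-diffusion profile between phi0 (x=0) and phiL (x=L)."""
    if pe == 0:
        return phi0 + (phiL - phi0) * x / L      # pure diffusion: linear profile
    return phi0 + (phiL - phi0) * np.expm1(pe * x / L) / np.expm1(pe)

def power_law_weight(pe):
    """Patankar's power-law function A(|Pe|) = max(0, (1 - 0.1|Pe|)^5).

    It multiplies the diffusion conductance of a face and falls to zero for
    |Pe| > 10, recovering pure upwinding in convection-dominated flow.
    """
    return np.maximum(0.0, (1.0 - 0.1 * np.abs(pe)) ** 5)

def coefficients(D_w, F_w, D_e, F_e):
    """Neighbour coefficients for one control volume (D > 0 assumed)."""
    a_W = D_w * power_law_weight(F_w / D_w) + max(F_w, 0.0)
    a_E = D_e * power_law_weight(F_e / D_e) + max(-F_e, 0.0)
    a_P = a_W + a_E + (F_e - F_w)   # discrete equation: a_P*phi_P = a_W*phi_W + a_E*phi_E
    return a_W, a_E, a_P
```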
Power law scheme
[ "Physics", "Chemistry" ]
449
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
37,152,030
https://en.wikipedia.org/wiki/Decade%20box
A decade box is a piece of test equipment that can be used during prototyping of electronic circuits to substitute the interchanging of different values of certain passive components with a single variable output. Decade boxes are made for resistance, capacitance, and inductance, the values of which can be adjusted incrementally by the turning of a knob or switch, with the contacts of the switch moving along a series of the respective components. The interface for these devices usually consists of dials or adjustable tape counters, and they are operated in-circuit and without any external power source. (A small illustrative model is given below.)

References

Analog circuits
Electronic test equipment
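A hypothetical series resistance decade box can be modelled in a few lines, which makes the knob-per-decade idea explicit; the ranges and internal wiring of real instruments vary:

```python
def decade_resistance(dials, steps_ohms=(1, 10, 100, 1_000, 10_000)):
    """Total resistance with each dial (0-9) switching in that many
    unit resistors of its decade, least-significant decade first."""
    if len(dials) > len(steps_ohms) or any(not 0 <= d <= 9 for d in dials):
        raise ValueError("each dial must read 0-9 and fit the box's range")
    return sum(d * step for d, step in zip(dials, steps_ohms))

# Dials at 4 (ohms), 7 (tens), 2 (hundreds): 4 + 70 + 200 = 274 ohms
print(decade_resistance([4, 7, 2]))
```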
Decade box
[ "Technology", "Engineering" ]
128
[ "Analog circuits", "Electronic engineering", "Electronic test equipment", "Measuring instruments" ]
29,044,095
https://en.wikipedia.org/wiki/Black-oil%20equations
The black-oil equations are a set of partial differential equations that describe fluid flow in a petroleum reservoir, constituting the mathematical framework for a black-oil reservoir simulator. The term black-oil refers to the fluid model, in which water is modeled explicitly together with two hydrocarbon components, one (pseudo-)oil phase and one (pseudo-)gas phase. This is in contrast with a compositional formulation, in which each hydrocarbon component (arbitrary number) is handled separately.

In a commonly used notation, the equations of an extended black-oil model are:

∂/∂t (φ Sw/Bw) + ∇ · (uw/Bw) = 0  (water),
∂/∂t [φ (So/Bo + Rv Sg/Bg)] + ∇ · (uo/Bo + Rv ug/Bg) = 0  (oil component),
∂/∂t [φ (Sg/Bg + Rs So/Bo)] + ∇ · (ug/Bg + Rs uo/Bo) = 0  (gas component),

where
φ is the porosity of the porous medium,
Sw is the water saturation,
So and Sg are the saturations of the liquid ("oil") and vapor ("gas") phases in the reservoir,
uo, uw and ug are the Darcy velocities of the liquid phase, the water phase and the vapor phase in the reservoir.

The oil and gas at the surface (standard conditions) could be produced from both liquid and vapor phases existing at the high pressure and temperature of reservoir conditions. This is characterized by the following quantities:
Bo is the oil formation volume factor (ratio of some volume of reservoir liquid to the volume of oil at standard conditions obtained from the same volume of reservoir liquid),
Bw is the water formation volume factor (ratio of the volume of water at reservoir conditions to the volume of water at standard conditions),
Bg is the gas formation volume factor (ratio of some volume of reservoir vapor to the volume of gas at standard conditions obtained from the same volume of reservoir vapor),
Rs is the solution gas in the oil phase (ratio of the volume of gas to the volume of oil at standard conditions, obtained from some amount of liquid phase at reservoir conditions),
Rv is the vaporized oil in the gas phase (ratio of the volume of oil to the volume of gas at standard conditions, obtained from some amount of vapor phase at reservoir conditions).

(A short computational sketch of this surface/reservoir bookkeeping is given below.)

See also
Porous medium
Darcy's law
Relative permeability
Petroleum
Hydrocarbon

References

Partial differential equations
Equations of fluid dynamics
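The formation volume factors and solution ratios defined above amount to bookkeeping between reservoir and surface volumes. The following sketch shows that bookkeeping for converting reservoir-condition phase rates to surface rates; the function and its assumption of consistent units are illustrative, not part of the equations themselves:

```python
def surface_rates(q_liq, q_vap, q_wat, Bo, Bg, Bw, Rs, Rv):
    """Surface oil/gas/water rates from reservoir liquid/vapor/water rates.

    Surface oil = reservoir liquid shrunk by Bo, plus oil vaporised in the
    reservoir vapor (via Rv); surface gas = reservoir vapor expanded by Bg,
    plus gas dissolved in the reservoir liquid (via Rs). Consistent units
    are assumed (e.g. reservoir m3/day in, standard m3/day out).
    """
    oil_sc = q_liq / Bo + Rv * q_vap / Bg
    gas_sc = q_vap / Bg + Rs * q_liq / Bo
    water_sc = q_wat / Bw
    return oil_sc, gas_sc, water_sc
```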
Black-oil equations
[ "Physics", "Chemistry" ]
400
[ "Equations of fluid dynamics", "Equations of physics", "Petroleum stubs", "Petroleum", "Fluid dynamics stubs", "Fluid dynamics" ]
29,049,734
https://en.wikipedia.org/wiki/Nematode%20chemoreceptor
Nematode chemoreceptors are chemoreceptors of nematodes. Animals recognise a wide variety of chemicals using their senses of taste and smell. The nematode Caenorhabditis elegans has only 14 types of chemosensory neuron, yet is able to respond to dozens of chemicals because each neuron detects several stimuli. More than 40 highly divergent transmembrane proteins that could contribute to this functional diversity have been described. Most of the candidate receptor genes are in clusters of similar genes; 11 of these appear to be expressed in small subsets of chemosensory neurons. A single type of neuron can potentially express at least 4 different receptor genes. Some of these might encode receptors for water-soluble attractants, repellents and pheromones, which are divergent members of the G-protein-coupled receptor family.

Sequences of the Sra family of C. elegans receptor-like proteins contain 6–7 hydrophobic, putative transmembrane, regions. These can be distinguished from other 7TM proteins (especially those known to couple G-proteins) by their own characteristic TM signatures. More than 1300 potential chemoreceptor genes have been identified in C. elegans, which are generally prefixed sr for serpentine receptor. The receptor superfamilies include Sra (Sra, Srb, Srab, Sre), Str (Srh, Str, Sri, Srd, Srj, Srm, Srn) and Srg (Srx, Srt, Srg, Sru, Srv, Srxa), as well as the families Srw, Srz, Srbc, Srsx and Srr. Many of these proteins have homologues in Caenorhabditis briggsae. These receptors are distantly related to the rhodopsin-like receptors.

In contrast, the receptor Sro is a true rhodopsin-like receptor. It is a member of the nemopsins, a subgroup of the opsins, but unlike most other opsins it does not have a lysine corresponding to position 296 in cattle rhodopsin. The lysine is replaced by an asparagine. The lysine is needed so that the chromophore retinal can covalently bind to the opsin via a Schiff base, which makes the opsin light sensitive. If the lysine is replaced by another amino acid, then the opsin becomes light insensitive. Therefore, Sro is also thought to be a chemoreceptor.

References

G protein-coupled receptors
Chemoreceptor
Nematode chemoreceptor
[ "Chemistry" ]
563
[ "G protein-coupled receptors", "Signal transduction" ]
29,050,062
https://en.wikipedia.org/wiki/Panel%20edge%20staining
Panel edge staining is a naturally occurring problem that occurs to anodized aluminium and stainless steel panelling and façades. It is semi-permanent staining that dulls the panel or façade's surface (in particular the edges of the panelling), reducing the natural lustre and shine produced by the anodizing processes used on the aluminium. Panel edge staining may also appear on powder coated aluminium, painted aluminium, stainless steel and titanium surfaces.

Causes
Panel edge staining is the by-product of the build-up of dirt and pollution. It is especially noticeable on buildings using metallic façades in Asia and in regions close to the equator (such as Florida or South East Asia), as higher rates of air pollution, high levels of humidity and consistent rainfall encourage panel edge staining to develop. The unique top-to-bottom stain pattern of panel edge staining is caused when the build-up of dirt and pollution is washed from the higher panels to the lower panels of a surface by natural precipitation.

Notes

References
Staining of Facades, by Michael Y. L. Chew and Tan Phay Ping
Maintenance and Restoration of Architectural Aluminum, by Service One, Inc. (technical paper)

Aluminium
Corrosion
Stainless steel
Panel edge staining
[ "Chemistry", "Materials_science", "Engineering" ]
243
[ "Metallurgy", "Corrosion", "Electrochemistry", "Electrochemistry stubs", "Architecture stubs", "Materials degradation", "Physical chemistry stubs", "Chemical process stubs", "Architecture" ]
35,800,432
https://en.wikipedia.org/wiki/Kepler-46
Kepler-46, previously designated KOI-872, is a star located in the constellation Lyra. Observed since 2009 by the Kepler space observatory, it has since been found to possess a planetary system consisting of at least three planets; while it has a similar mass to the Sun (90%), it is significantly older at ten billion years.

Kepler-46 b (previously KOI-872.01) was the first planet discovered in the system. It was found through detailed analysis of Kepler space observatory data. An additional planet, Kepler-46 c, was discovered by an outside group using Kepler public data through analysis of transit timing variations. While only one additional planet was confirmed by the analysis, the study revealed the potential existence of an unconfirmed planet KOI-872.03 (KOI-872 d). Validation by the multiplicity method confirmed the existence of this planet, which was then renamed Kepler-46 d.

Planetary system
Planet b is a gas giant planet with a mass slightly less than that of Jupiter. The second planet in the system was among the first to be discovered through the method of transit timing variations, and through its confirmation of KOI-872 c with a 99% confidence level has shown that the method of detection may be used to detect future extrasolar planets and, possibly, extrasolar moons. This second planet exerted a gravitational force on the first planet, which orbits its host star in just 34 days. While transits usually occur on an extremely regular schedule, additional planets within the system can disrupt the time of the transit, and these disruptions can indicate the presence of a planet, even if the disrupting planet does not pass in front of the host star itself (a minimal illustration is given below). The data show that Kepler-46 c is an approximately Saturn-mass object with an orbital period of 57 days. As the planet does not itself transit its host star, there is no way of knowing its size (it is probably of a similar size to its sibling). The measurements also suggest the existence of another planet orbiting with a period of about 6.8 days, and this planet was confirmed in 2016.

The method in which the planet was detected is similar to the way that the planet Neptune was discovered, in which the newly discovered planet is detected by its pull on another which is already known to exist. In 2021, it was found that the orbital plane of Kepler-46 b is slowly changing, likely under the gravitational influence of the additional giant planet.

References

Notes

Lyra
Planetary systems with three confirmed planets
872
Planetary transit variables
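Transit timing variations are the residuals of the observed mid-transit times against a strictly periodic (linear) ephemeris. The following sketch illustrates the idea on made-up numbers, not on actual Kepler-46 data:

```python
import numpy as np

def ttv_residuals(epochs, observed_times):
    """Observed-minus-calculated (O-C) residuals of mid-transit times.

    Fits the linear ephemeris t = t0 + n*P by least squares; a periodic
    pattern in the residuals can betray an unseen perturbing planet.
    """
    period, t0 = np.polyfit(epochs, observed_times, 1)
    return observed_times - (t0 + period * epochs)

epochs = np.arange(12)
# Hypothetical 33.6-day transiter perturbed sinusoidally by a companion:
times = 120.0 + 33.6 * epochs + 0.01 * np.sin(2 * np.pi * epochs / 5)
print(np.round(ttv_residuals(epochs, times), 4))
```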
Kepler-46
[ "Astronomy" ]
513
[ "Lyra", "Constellations" ]
35,802,271
https://en.wikipedia.org/wiki/Path%20integral%20molecular%20dynamics
Path integral molecular dynamics (PIMD) is a method of incorporating quantum mechanics into molecular dynamics simulations using Feynman path integrals. In PIMD, one uses the Born–Oppenheimer approximation to separate the wavefunction into a nuclear part and an electronic part. The nuclei are treated quantum mechanically by mapping each quantum nucleus onto a classical system of several fictitious particles connected by springs (harmonic potentials) governed by an effective Hamiltonian, which is derived from Feynman's path integral (a short sketch of this mapping is given below). The resulting classical system, although complex, can be solved relatively quickly. There are now a number of commonly used condensed matter computer simulation techniques that make use of the path integral formulation, including centroid molecular dynamics (CMD), ring polymer molecular dynamics (RPMD), and the Feynman–Kleinert quasi-classical Wigner (FK–QCW) method. The same techniques are also used in path integral Monte Carlo (PIMC).

There are two main ways to carry out the dynamics calculations of PIMD. The first is the non-Hamiltonian phase space analysis theory, which has been updated to create an "extended system" of isokinetic equations of motion that overcomes the problems earlier formulations posed for the community. The second uses the Nosé–Hoover chain, which is a chain of thermostat variables instead of a single thermostat variable.

Combination with other simulation techniques
The simulations done by PIMD can broadly characterize biomolecular systems, covering the entire structure and organization of the membrane, including the permeability, protein–lipid interactions, along with "lipid–drug interactions, protein–ligand interactions, and protein structure and dynamics."

Applications
PIMD is "widely used to describe nuclear quantum effects in chemistry and physics". Path integral molecular dynamics can be applied to polymer physics, field theories (both quantum and classical), string theory, stochastic dynamics, quantum mechanics, and quantum gravity. PIMD can also be used to calculate time correlation functions.

References

Further reading

External links

Molecular dynamics
Quantum chemistry
Quantum Monte Carlo
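The spring-and-beads mapping can be written down directly. In the standard primitive discretisation, each nucleus becomes a ring of P beads whose effective potential is the spring energy plus the physical potential averaged over the beads; the sketch below uses that textbook form with reduced units and is not tied to any particular PIMD package:

```python
import numpy as np

HBAR = 1.0  # reduced units for the sketch

def primitive_pi_potential(x, mass, beta, V):
    """Effective ring-polymer potential for one quantum nucleus.

    x holds the P bead positions. Neighbouring beads are joined by springs
    of stiffness m*omega_P**2 with omega_P = sqrt(P)/(beta*hbar), i.e.
    m*P/(beta*hbar)**2, and the physical potential V is averaged over beads.
    """
    P = len(x)
    spring_k = mass * P / (beta * HBAR) ** 2
    springs = 0.5 * spring_k * np.sum((x - np.roll(x, -1)) ** 2)  # closes the ring
    return springs + np.sum(V(x)) / P

# Example: 16 beads of a harmonic nucleus at inverse temperature beta = 8
x = 0.1 * np.random.default_rng(0).standard_normal(16)
print(primitive_pi_potential(x, mass=1.0, beta=8.0, V=lambda q: 0.5 * q ** 2))
```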
Path integral molecular dynamics
[ "Physics", "Chemistry" ]
422
[ "Quantum chemistry", "Molecular physics", "Quantum mechanics", "Computational physics", "Molecular dynamics", "Computational chemistry", "Theoretical chemistry", " molecular", "Atomic", "Quantum Monte Carlo", " and optical physics" ]
35,802,855
https://en.wikipedia.org/wiki/Properties%20of%20metals%2C%20metalloids%20and%20nonmetals
The chemical elements can be broadly divided into metals, metalloids, and nonmetals according to their shared physical and chemical properties. All elemental metals have a shiny appearance (at least when freshly polished); are good conductors of heat and electricity; form alloys with other metallic elements; and have at least one basic oxide. Metalloids are metallic-looking, often brittle solids that are either semiconductors or exist in semiconducting forms, and have amphoteric or weakly acidic oxides. Typical elemental nonmetals have a dull, coloured or colourless appearance; are often brittle when solid; are poor conductors of heat and electricity; and have acidic oxides. Most or some elements in each category share a range of other properties; a few elements have properties that are either anomalous given their category, or otherwise extraordinary.

Properties

Metals
Elemental metals appear lustrous (beneath any patina); form compounds (alloys) when combined with other elements; tend to lose or share electrons when they react with other substances; and each forms at least one predominantly basic oxide.

Most metals are silvery looking, high density, relatively soft and easily deformed solids with good electrical and thermal conductivity, closely packed structures, low ionisation energies and electronegativities, and are found naturally in combined states.

Some metals appear coloured (Cu, Cs, Au), have low densities (e.g. Be, Al) or very high melting points (e.g. W, Nb), are liquids at or near room temperature (e.g. Hg, Ga), are brittle (e.g. Os, Bi), not easily machined (e.g. Ti, Re), or are noble (hard to oxidise, e.g. Au, Pt), or have nonmetallic structures (Mn and Ga are structurally analogous to, respectively, white P and I).

Metals comprise the large majority of the elements, and can be subdivided into several different categories. From left to right in the periodic table, these categories include the highly reactive alkali metals; the less-reactive alkaline earth metals, lanthanides, and radioactive actinides; the archetypal transition metals; and the physically and chemically weak post-transition metals. Specialized subcategories such as the refractory metals and the noble metals also exist.

Metalloids
Metalloids are metallic-looking, often brittle solids; tend to share electrons when they react with other substances; have weakly acidic or amphoteric oxides; and are usually found naturally in combined states. Most are semiconductors, and moderate thermal conductors, and have structures that are more open than those of most metals. Some metalloids (As, Sb) conduct electricity like metals. The metalloids, as the smallest major category of elements, are not subdivided further.

Nonmetals
Nonmetallic elements often have open structures; tend to gain or share electrons when they react with other substances; and do not form distinctly basic oxides. Most are gases at room temperature; have relatively low densities; are poor electrical and thermal conductors; have relatively high ionisation energies and electronegativities; form acidic oxides; and are found naturally in uncombined states in large amounts. Some nonmetals (black P, S, and Se) are brittle solids at room temperature (although each of these also have malleable, pliable or ductile allotropes). From left to right in the periodic table, the nonmetals can be divided into the reactive nonmetals and the noble gases.
The reactive nonmetals near the metalloids show some incipient metallic character, such as the metallic appearance of graphite, black phosphorus, selenium and iodine. The noble gases are almost completely inert.

Comparison of properties

Overview
Properties of elemental metals and nonmetals are quite distinct, as shown in the table below. Metalloids, straddling the metal–nonmetal border, are mostly distinct from either, but in a few properties resemble one or the other, as shown in the shading of the metalloid column below and summarized in the small table at the top of this section. Authors differ in where they divide metals from nonmetals and in whether they recognize an intermediate metalloid category. Some authors count metalloids as nonmetals with weakly nonmetallic properties. Others count some of the metalloids as post-transition metals.

Details

Anomalous properties
Within each category, elements can be found with one or two properties very different from the expected norm, or that are otherwise notable.

Metals

Sodium, potassium, rubidium, caesium, barium, platinum, gold
The common notions that "alkali metal ions (group 1A) always have a +1 charge" and that "transition elements do not form anions" are textbook errors. The synthesis of a crystalline salt of the sodium anion Na− was reported in 1974. Since then further compounds ("alkalides") containing anions of all other alkali metals except Li and Fr, as well as that of Ba, have been prepared. In 1943, Sommer reported the preparation of the yellow transparent compound CsAu. This was subsequently shown to consist of caesium cations (Cs+) and auride anions (Au−), although it was some years before this conclusion was accepted. Several other aurides (KAu, RbAu) have since been synthesized, as well as the red transparent compound Cs2Pt, which was found to contain Cs+ and Pt2− ions.

Manganese
Well-behaved metals have crystal structures featuring unit cells with up to four atoms. Manganese has a complex crystal structure with a 58-atom unit cell, effectively four different atomic radii, and four different coordination numbers (10, 11, 12 and 16). It has been described as resembling "a quaternary intermetallic compound with four Mn atom types bonding as if they were different elements." The half-filled 3d shell of manganese appears to be the cause of the complexity. This confers a large magnetic moment on each atom. Below 727 °C, a unit cell of 58 spatially diverse atoms represents the energetically lowest way of achieving a zero net magnetic moment. The crystal structure of manganese makes it a hard and brittle metal, with low electrical and thermal conductivity. At higher temperatures "greater lattice vibrations nullify magnetic effects" and manganese adopts less-complex structures.

Iron, cobalt, nickel, gadolinium, terbium, dysprosium, holmium, erbium, thulium
The only elements strongly attracted to magnets are iron, cobalt, and nickel at room temperature, gadolinium just below, and terbium, dysprosium, holmium, erbium, and thulium at ultra-cold temperatures (below −54 °C, −185 °C, −254 °C, −254 °C, and −241 °C respectively).

Iridium
The only element encountered with an oxidation state of +9 is iridium, in the [IrO4]+ cation. Other than this, the highest known oxidation state is +8, in Ru, Xe, Os, Ir, and Hs.

Gold
The malleability of gold is extraordinary: a fist-sized lump can be hammered and separated into one million paperback-sized sheets, each 10 nm thick, 1600 times thinner than regular kitchen aluminium foil (0.016 mm thick).
Mercury

Bricks and bowling balls will float on the surface of mercury because mercury's density is 13.5 times that of water. Likewise, a solid mercury bowling ball would weigh around 50 pounds and, if it could be kept cold enough, would float on the surface of liquid gold. The only metal having an ionisation energy higher than some nonmetals (sulfur and selenium) is mercury. Mercury and its compounds have a reputation for toxicity, but on a scale of 1 to 10, dimethylmercury ((CH3)2Hg) (abbr. DMM), a volatile colourless liquid, has been described as a 15. It is so dangerous that scientists have been encouraged to use less-toxic mercury compounds wherever possible. In 1997, Karen Wetterhahn, a professor of chemistry specialising in toxic metal exposure, died of mercury poisoning ten months after a few drops of DMM landed on her "protective" latex gloves. Although Wetterhahn had been following the then-published procedures for handling this compound, it passed through her gloves and skin within seconds. It is now known that DMM is exceptionally permeable to (ordinary) gloves, skin, and tissues, and its toxicity is such that less than one-tenth of a millilitre applied to the skin will be seriously toxic.

Lead

The expression "to go down like a lead balloon" is anchored in the common view of lead as a dense, heavy metal; it is nearly as dense as mercury. However, it is possible to construct a balloon made of lead foil, filled with a helium and air mixture, which will float and be buoyant enough to carry a small load.

Bismuth

Bismuth has the longest half-life of any naturally occurring element; its only primordial isotope, bismuth-209, was found in 2003 to be slightly radioactive, decaying via alpha decay with a half-life more than a billion times the estimated age of the universe. Prior to this discovery, bismuth-209 was thought to be the heaviest naturally occurring stable isotope; this distinction now belongs to lead-208.

Uranium

The only element with a naturally occurring isotope capable of undergoing nuclear fission is uranium. The capacity of uranium-235 to undergo fission was first suggested (and ignored) in 1934, and subsequently discovered in 1938.

Plutonium

Metals normally reduce their electrical conductivity when heated. Plutonium instead increases its electrical conductivity when heated, in the temperature range of around −175 to +125 °C. There is evidence that this behaviour, which is shared by some of the other transuranic elements, is due to more complex relativistic and spin interactions that are not captured by simple models of electrical conductivity.

Metalloids

Boron

Boron is the only element with a partially disordered structure in its most thermodynamically stable crystalline form.

Boron, antimony

These elements are record holders within the field of superacid chemistry. For seven decades, fluorosulfonic acid HSO3F and trifluoromethanesulfonic acid CF3SO3H were the strongest known acids that could be isolated as single compounds. Both are about a thousand times more acidic than pure sulfuric acid. In 2004, a boron compound broke this record a thousandfold with the synthesis of carborane acid H(CHB11Cl11). Another metalloid, antimony, features in the strongest known acid, a mixture 10 billion times stronger than carborane acid. This is fluoroantimonic acid H2F[SbF6], a mixture of antimony pentafluoride SbF5 and hydrofluoric acid HF.

Silicon

The thermal conductivity of silicon is better than that of most metals.
A sponge-like porous form of silicon (p-Si) is typically prepared by the electrochemical etching of silicon wafers in a hydrofluoric acid solution. Flakes of p-Si sometimes appear red; it has a band gap of 1.97–2.1 eV. The many tiny pores in porous silicon give it an enormous internal surface area, up to 1,000 m2/cm3. When exposed to an oxidant, especially a liquid oxidant, the high surface-area-to-volume ratio of p-Si creates a very efficient burn, accompanied by nano-explosions and sometimes by ball-lightning-like plasmoids with, for example, a diameter of 0.1–0.8 m, a velocity of up to 0.5 m/s and a lifetime of up to 1 s. The first ever spectrographic analysis of a ball lightning event (in 2012) revealed the presence of silicon, iron and calcium, elements that were also present in the local soil.

Arsenic

Metals are said to be fusible, and this resulted in some confusion in old chemistry as to whether arsenic was a true metal, a nonmetal, or something in between: at standard atmospheric pressure it sublimes rather than melts, like the nonmetals carbon and red phosphorus.

Antimony

A high-energy explosive form of antimony was first produced in 1858. It is prepared by the electrolysis of any of the heavier antimony trihalides (SbCl3, SbBr3, SbI3) in a hydrochloric acid solution at low temperature. It comprises amorphous antimony with some occluded antimony trihalide (7–20% in the case of the trichloride). When scratched, struck, powdered or heated quickly to 200 °C, it "flares up, emits sparks and is converted explosively into the lower-energy, crystalline grey antimony".

Nonmetals

Hydrogen

Water (H2O), a well-known oxide of hydrogen, is a spectacular anomaly. Extrapolating from the heavier hydrogen chalcogenides, namely hydrogen sulfide H2S, hydrogen selenide H2Se, and hydrogen telluride H2Te, water should be "a foul-smelling, poisonous, inflammable gas... condensing to a nasty liquid [at] around −100 °C". Instead, due to hydrogen bonding, water is "stable, potable, odorless, benign, and... indispensable to life". A less well-known oxide of hydrogen is the trioxide, H2O3. Berthelot proposed the existence of this oxide in 1880, but his suggestion was soon forgotten as there was no way of testing it using the technology of the time. Hydrogen trioxide was prepared in 1994 by replacing the oxygen used in the industrial process for making hydrogen peroxide with ozone. The yield is about 40 per cent, at −78 °C; above around −40 °C it decomposes into water and oxygen. Derivatives of hydrogen trioxide, such as CF3OOOCF3 ("bis(trifluoromethyl) trioxide"), are known; these are metastable at room temperature. Mendeleev went a step further, in 1895, and proposed the existence of hydrogen tetroxide as a transient intermediate in the decomposition of hydrogen peroxide; this was prepared and characterised in 1974, using a matrix isolation technique. Alkali metal ozonide salts of the unknown hydrogen ozonide (HO3) are also known; these have the formula MO3.

Helium

At temperatures below 0.3 and 0.8 K respectively, helium-3 and helium-4 each have a negative enthalpy of fusion. This means that, at the appropriate constant pressures, these substances freeze with the addition of heat. Until 1999 helium was thought to be too small to form a cage clathrate (a compound in which a guest atom or molecule is encapsulated in a cage formed by a host molecule) at atmospheric pressure.
In that year the synthesis of microgram quantities of He@C20H20 represented the first such helium clathrate and (what was described as) the world's smallest helium balloon.

Carbon

Graphite is the most electrically conductive nonmetal, better than some metals. Diamond is the best natural conductor of heat; it even feels cold to the touch. Its thermal conductivity (2,200 W/m·K) is five times greater than that of the most conductive metal (Ag, at 429); 300 times higher than that of the least conductive metal (Pu, at 6.74); nearly 4,000 times that of water (0.58); and 100,000 times that of air (0.0224). This high thermal conductivity is used by jewelers and gemologists to separate diamonds from imitations. Graphene aerogel, produced in 2012 by freeze-drying a solution of carbon nanotubes and graphite oxide sheets and chemically removing oxygen, is seven times lighter than air and ten per cent lighter than helium. It is the lightest solid known (0.16 mg/cm3), conductive and elastic.

Phosphorus

The least stable and most reactive form of phosphorus is the white allotrope. It is a hazardous, highly flammable and toxic substance, spontaneously igniting in air and producing phosphoric acid residue. It is therefore normally stored under water. White phosphorus is also the most common, industrially important, and easily reproducible allotrope, and for these reasons is regarded as the standard state of phosphorus. The most stable form is the black allotrope, which is a metallic-looking, brittle and relatively non-reactive semiconductor (unlike the white allotrope, which has a white or yellowish appearance, is pliable, highly reactive, and an insulator). When assessing periodicity in the physical properties of the elements, it needs to be borne in mind that the quoted properties of phosphorus tend to be those of its least stable form rather than, as is the case with all other elements, the most stable form.

Iodine

The mildest of the halogens, iodine is the active ingredient in tincture of iodine, a disinfectant. This can be found in household medicine cabinets or emergency survival kits. Tincture of iodine will rapidly dissolve gold, a task ordinarily requiring the use of aqua regia (a highly corrosive mixture of nitric and hydrochloric acids).

Notes

Citations

References
Metals Metalloids Nonmetals
Properties of metals, metalloids and nonmetals
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
7,877
[ "Nonmetals", "Metals", "Condensed matter physics", "Materials science" ]
3,873,144
https://en.wikipedia.org/wiki/Chapman%E2%80%93Jouguet%20condition
The Chapman–Jouguet condition holds approximately in detonation waves in high explosives. It states that the detonation propagates at a velocity at which the reacting gases just reach sonic velocity (in the frame of the leading shock wave) as the reaction ceases.

David Chapman and Émile Jouguet originally (c. 1900) stated the condition for an infinitesimally thin detonation. A physical interpretation of the condition is usually based on the later modelling (c. 1943) by Yakov Borisovich Zel'dovich, John von Neumann, and Werner Döring (the so-called ZND detonation model).

In more detail (in the ZND model), in the frame of the leading shock of the detonation wave, gases enter at supersonic velocity and are compressed through the shock into a high-density, subsonic flow. This sudden change in pressure initiates the chemical (or sometimes, as in steam explosions, physical) energy release. The energy release re-accelerates the flow back to the local speed of sound. It can be shown fairly simply, from the one-dimensional gas equations for steady flow, that the reaction must cease at the sonic ("CJ") plane, or there would be discontinuously large pressure gradients at that point.

The sonic plane forms a so-called choke point that enables the lead shock, and the reaction zone, to travel at a constant velocity, undisturbed by the expansion of gases in the rarefaction region beyond the CJ plane.

This simple one-dimensional model is quite successful in explaining detonations. However, observations of the structure of real chemical detonations show a complex three-dimensional structure, with parts of the wave traveling faster than average, and others slower. Indeed, such waves are quenched as their structure is destroyed. The Wood–Kirkwood detonation theory can correct for some of these limitations.

Mathematical description

The Rayleigh line equation and the Hugoniot curve equation obtained from the Rankine–Hugoniot relations for an ideal gas, with the assumptions of constant specific heat and constant molecular weight, respectively are

$$p_2 - p_1 = \dot m^2 \left(\frac{1}{\rho_1} - \frac{1}{\rho_2}\right),$$

$$\frac{\gamma}{\gamma - 1}\left(\frac{p_2}{\rho_2} - \frac{p_1}{\rho_1}\right) - \frac{1}{2}\,(p_2 - p_1)\left(\frac{1}{\rho_1} + \frac{1}{\rho_2}\right) = q,$$

where $\gamma$ is the specific heat ratio. Here the subscripts 1 and 2 identify flow properties (pressure $p$, density $\rho$) upstream and downstream of the wave, $\dot m$ is the constant mass flux, and $q$ is the heat released in the wave. The slopes of the Rayleigh line and the Hugoniot curve, taken with respect to the downstream specific volume $1/\rho_2$, are

$$\left(\frac{\partial p_2}{\partial (1/\rho_2)}\right)_{\mathrm{Rayleigh}} = -\dot m^2, \qquad \left(\frac{\partial p_2}{\partial (1/\rho_2)}\right)_{\mathrm{Hugoniot}} = -\,\frac{(\gamma + 1)\,p_2 + (\gamma - 1)\,p_1}{(\gamma + 1)/\rho_2 - (\gamma - 1)/\rho_1}.$$

At the Chapman–Jouguet point, both slopes are equal, leading to the condition that

$$\frac{p_2 - p_1}{1/\rho_1 - 1/\rho_2} = \gamma p_2 \rho_2.$$

Substituting this back into the Rayleigh equation, we find

$$\dot m^2 = \gamma p_2 \rho_2.$$

Using the definition of mass flux $\dot m = \rho_2 u_2$, where $u$ denotes the flow velocity, we find

$$M_2 \equiv \frac{u_2}{c_2} = 1,$$

where $M$ is the Mach number and $c = \sqrt{\gamma p / \rho}$ is the speed of sound; in other words, the downstream flow is sonic with respect to the Chapman–Jouguet wave. Explicit expressions for the variables can be derived:

$$\frac{p_2}{p_1} = 1 + \frac{2\gamma}{\gamma + 1}\left(\alpha \pm \sqrt{\alpha(\alpha + 1)}\right), \qquad \frac{\rho_1}{\rho_2} = 1 + \frac{2}{\gamma + 1}\left(\alpha \mp \sqrt{\alpha(\alpha + 1)}\right), \qquad \alpha \equiv \frac{(\gamma^2 - 1)\,q}{2 c_1^2},$$

where $c_1$ is the upstream speed of sound. The upper sign applies for the upper Chapman–Jouguet point (detonation) and the lower sign applies for the lower Chapman–Jouguet point (deflagration). Similarly, the upstream Mach number can be found from

$$M_1 = \sqrt{1 + \alpha} \pm \sqrt{\alpha},$$

and the temperature ratio can be found from the relation $T_2/T_1 = (p_2/p_1)(\rho_1/\rho_2)$.

See also

Taylor–von Neumann–Sedov blast wave
Zeldovich–Taylor flow

References

Further reading

Explosives engineering
Combustion
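The closed-form Chapman–Jouguet relations given in the article above are easy to evaluate numerically. The following sketch is an illustration of those formulas only, not code from any cited reference; the function name and the input values (gamma, heat release, ambient pressure and density) are arbitrary assumptions chosen for the example.

```python
import math

def chapman_jouguet(gamma, q, p1, rho1, detonation=True):
    """Ideal-gas Chapman-Jouguet state from the closed-form relations above.

    gamma : specific heat ratio (dimensionless)
    q     : heat release per unit mass (J/kg)
    p1    : upstream pressure (Pa)
    rho1  : upstream density (kg/m^3)
    """
    c1 = math.sqrt(gamma * p1 / rho1)              # upstream speed of sound
    alpha = (gamma**2 - 1.0) * q / (2.0 * c1**2)   # nondimensional heat release
    root = math.sqrt(alpha * (alpha + 1.0))
    sign = 1.0 if detonation else -1.0             # upper / lower CJ point
    M1 = math.sqrt(1.0 + alpha) + sign * math.sqrt(alpha)
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (alpha + sign * root)  # p2/p1
    v_ratio = 1.0 + 2.0 / (gamma + 1.0) * (alpha - sign * root)          # rho1/rho2
    return {
        "M1": M1,
        "p2/p1": p_ratio,
        "rho2/rho1": 1.0 / v_ratio,
        "T2/T1": p_ratio * v_ratio,        # ideal gas: T2/T1 = (p2/p1)(rho1/rho2)
        "wave speed (m/s)": M1 * c1,       # CJ wave speed relative to gas 1
    }

# Illustrative numbers only: a fuel-air-like heat release at ambient conditions.
state = chapman_jouguet(gamma=1.3, q=4.0e6, p1=101325.0, rho1=1.2)
for key, value in state.items():
    print(f"{key:18s} {value:10.3f}")
```

With these assumed inputs the detonation branch gives an upstream Mach number of about 7.2 and a wave speed of roughly 2.4 km/s, in the range typical of gaseous detonations; the downstream flow is sonic by construction. Passing detonation=False selects the lower sign and returns the deflagration branch instead.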
Chapman–Jouguet condition
[ "Chemistry", "Engineering" ]
666
[ "Combustion", "Explosives engineering" ]