Dataset columns:
id: int64 (39 to 79M)
url: string (length 32 to 168)
text: string (length 7 to 145k)
source: string (length 2 to 105)
categories: list (length 1 to 6)
token_count: int64 (3 to 32.2k)
subcategories: list (length 0 to 27)
11,579,078
https://en.wikipedia.org/wiki/Biological%20specificity
Biological specificity is the tendency of a characteristic such as a behavior or a biochemical variation to occur in a particular species. Biochemist Linus Pauling stated that "Biological specificity is the set of characteristics of living organisms or constituents of living organisms of being special or doing something special. Each animal or plant species is special. It differs in some way from all other species...biological specificity is the major problem about understanding life." Biological specificity within Homo sapiens Homo sapiens has many characteristics that show biological specificity in the form of behavior and morphological traits. Morphologically, humans have an enlarged cranial capacity and more gracile features in comparison to other hominins. The reduction of dentition is a feature that allows for the advantage of adaptability in diet and survival. As a species, humans are culture-dependent, and much of human survival relies on culture and social relationships. With the evolutionary changes of a reduced pelvis and an enlarged cranial capacity, events like childbirth are dependent on a safe, social setting to assist in the birth; a birthing mother will seek others when going into labor. This is a uniquely human experience, as other animals are able to give birth on their own and often choose to isolate themselves to do so to protect their young. An example of a genetic adaptation unique to humans is the gene apolipoprotein E (APOE) on chromosome 19. While chimpanzees may have the APOE gene, the study "The apolipoprotein E (APOE) gene appears functionally monomorphic in chimpanzees" shows that the diversity of the APOE gene in humans is unique. The polymorphism in APOE is found only in humans, as they carry the alleles APOE2, APOE3, and APOE4; APOE4, which allows humans to break down fatty protein and eat more protein than their ancestors, is also a genomic risk factor for Alzheimer's disease. There are many behavioral characteristics that are specific to Homo sapiens in addition to childbirth. Specific and elaborate tool creation and use, as well as language, are other areas. Humans do not simply communicate; language is essential to their survival and complex culture. This culture must be learned, is variable, and is highly malleable to fit distinct social parameters. Humans do not simply communicate with a code or general understanding, but adhere to social standards, hierarchies, technologies, and complex systems of regulations, and must maintain many dimensions of relationships in order to survive. This complexity of language and the dependence on culture is uniquely human. Intraspecific behaviors and variations exist within Homo sapiens which add to the complexity of culture and language. Intraspecific variations are differences in behavior or biology within a species. Variation in genetic expression of race and gender and complexities within society lead to social constructs such as roles. These add to power dynamics and hierarchies within the already multifaceted society. Subtopics Characteristics may further be described as being interspecific, intraspecific, and conspecific. Interspecific Interspecificity (literally between/among species), or being interspecific, describes issues between organisms of separate species.
These may include: Interspecies communication, communication between different species of animals, plants, fungi or bacteria Interspecific competition, when individuals of different species compete for the same resource in an ecosystem Interspecific feeding, when adults of one species feed the young of another species Interspecific hybridization, when two species within the same genus generate offspring. Offspring may develop into adults but may be sterile. Interspecific interaction, the effects organisms in a community have on one another Interspecific pregnancy, pregnancy involving an embryo or fetus belonging to another species than the carrier Intraspecific Intraspecificity (literally within species), or being intraspecific, describes behaviors, biochemical variations and other issues within members of a single species. These may include: Intraspecific antagonism, when members of the same species are hostile to one another Intraspecific competition, when members or groups from the same species compete for the same resource in an ecosystem Intraspecific hybridization, hybridization between sub-species within a species. Intraspecific mimicry Conspecific Two or more organisms, populations, or taxa are conspecific if they belong to the same species. Where different species can interbreed and their gametes compete, the conspecific gametes take precedence over heterospecific gametes. This is known as conspecific sperm precedence, or conspecific pollen precedence in plants. Heterospecific The antonym of conspecificity is the term heterospecificity: two organisms are heterospecific if they are considered to belong to different biological species. Related concepts Congeners are organisms within the same genus. See also Evolutionary biology References External links Evolutionary biology concepts
Biological specificity
[ "Biology" ]
1,032
[ "Evolutionary biology concepts" ]
11,580,204
https://en.wikipedia.org/wiki/Dream%20sharing
Dream sharing is the process of documenting or discussing both night and daydreams with others. Dreams are novel but realistic simulations of waking social life. One of the primary purposes of sharing dreams is entertainment. Dream sharing is a strategy that tests and strengthens the bond between people. A dream can be described as a calculated social interaction and a way to bring individuals closer together. Individuals choose to share dreams with those that they know well or want to know well. Dreams are a common denominator amongst humans of all nations and cultures. Increasing the rate of discussion regarding dreams leads to more understanding about the personality of someone otherwise difficult to connect with due to language or cultural barriers. Demographics Currently, dream sharing is more prevalent in certain demographics. Women are found to share and discuss dreams and nightmares more frequently than men. During this discovery, dream and nightmare recall were controlled to be proportional frequencies across the two sexes, signifying that the differences in dream sharing were not due to biological dream factors such as memory, but from the stigma around men sharing personal thoughts with each other. Men remain independent and reject the need for social support. Often these traits are developed from male social stigma. Personality traits such as openness and extraversion were also positively correlated with dream-sharing frequency. Relationships When couples talk about their dreams with each other, it seems to be linked to feeling closer in their relationship. In other words, the more they share their dreams, the stronger their sense of intimacy. This suggests that open communication about dreams may contribute to a deeper connection between romantic partners. Engaging in conversations about dreams enhances the levels of empathy the listener feels toward the dreamer and also fosters a deeper connection by investigating the intricate landscapes of the subconscious mind. As individuals share their dreams, a unique window into their thoughts, emotions, and aspirations opens up, creating a rich tapestry of understanding. This exchange of dreams can cultivate empathy by providing insight into the dreamer's inner world, fostering a more profound appreciation for their experiences and perspectives. Stress relief Dream sharing is also associated with stress relief. The relationship between dreams and stress relief is complex and can vary from person to person. A few ways in which dreaming and sharing dreams might contribute to stress relief are emotional processing, catharsis, symbolic exploration, social connection, and mindfulness and relaxation. History The sharing of dreams dates back at least as far as 4000-3000 BC in permanent form on clay tablets. In ancient Egypt, dreams were among the items recorded in the form of hieroglyphics. In ancient Egyptian culture dream sharing had a religious context as priests doubled as dream interpreters. Those whose dreams were especially vivid or significant were thought to be blessed and were given special status in these ancient societies. Likewise, people who were able to interpret dreams were thought to receive these gifts directly from the gods, and they enjoyed a special status in society as well. The respect for dreams changed radically early in the 19th century, and dreams in that era were often dismissed as reactions to anxiety, outside noises or even bad food and indigestion. 
During this period of time, dreams were thought to have no meaning at all, and interest in dream interpretation all but evaporated. This all changed, however, with the arrival of Sigmund Freud later in the 19th century. Freud stunned the world of psychiatry by stressing the importance of dreams, and he revived the once dead art of dream interpretation. Freud's interpretation Freud represented the view that in order to understand one's unconscious, dreams are to be dissected and discussed. See also Dream diary Dream interpretation References Further reading Sharing Sharing
Dream sharing
[ "Biology" ]
729
[ "Dream", "Behavior", "Sleep" ]
3,350,111
https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin%20theorem
In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process. History Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914. Continuous-time process For continuous time, the Wiener–Khinchin theorem says that if $x(t)$ is a wide-sense-stationary random process whose autocorrelation function (sometimes called autocovariance), defined in terms of statistical expected value, is $r_{xx}(\tau) = \mathbb{E}\left[x(t)\,x^{*}(t-\tau)\right]$, where the asterisk denotes complex conjugate, then there exists a monotone function $F(f)$ in the frequency domain $-\infty < f < \infty$, or equivalently a non-negative Radon measure on the frequency domain, such that $r_{xx}(\tau) = \int_{-\infty}^{\infty} e^{2\pi i f\tau}\, dF(f)$, where the integral is a Riemann–Stieltjes integral. This is a kind of spectral decomposition of the auto-correlation function. $F$ is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum. The ordinary Fourier transform of $x(t)$ does not exist in general, because stochastic random functions are usually not absolutely integrable. Nor is $r_{xx}$ assumed to be absolutely integrable, so it need not have a Fourier transform either. However, if the measure $dF(f)$ is absolutely continuous (e.g. if the process is purely indeterministic), then $F$ is differentiable almost everywhere and has a Radon–Nikodym derivative given by $S(f) = \frac{dF(f)}{df}$. In this case, one can determine $S(f)$, the power spectral density of $x(t)$, by taking the averaged derivative of $F$. Because the left and right derivatives of $F$ exist everywhere, we can put $S(f) = \frac{1}{2}\left(F'(f^{+}) + F'(f^{-})\right)$ everywhere (obtaining that $F$ is the integral of its averaged derivative), and the theorem simplifies to $r_{xx}(\tau) = \int_{-\infty}^{\infty} S(f)\, e^{2\pi i f\tau}\, df$. Assuming that $r_{xx}$ and $S$ are "sufficiently nice" such that the Fourier inversion theorem is valid, the Wiener–Khinchin theorem takes the simple form of saying that $r_{xx}$ and $S$ are a Fourier transform pair, and $S(f) = \int_{-\infty}^{\infty} r_{xx}(\tau)\, e^{-2\pi i f\tau}\, d\tau$. Discrete-time process For the discrete-time case, the power spectral density of the function with discrete values $x[n]$ is $S(\omega) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} r_{xx}[k]\, e^{-i\omega k}$, where $\omega$ is the angular frequency, $i$ is used to denote the imaginary unit (in engineering, sometimes the letter $j$ is used instead) and $r_{xx}[k]$ is the discrete autocorrelation function of $x[n]$, defined in its deterministic or stochastic formulation. Provided $r_{xx}$ is absolutely summable, i.e. $\sum_{k=-\infty}^{\infty} \lvert r_{xx}[k] \rvert < \infty$, the result of the theorem then can be written as $r_{xx}[k] = \int_{-\pi}^{\pi} S(\omega)\, e^{i\omega k}\, d\omega$. Being derived from a discrete-time sequence, the spectral density is periodic in the frequency domain. For this reason, the domain of the function $S$ is usually restricted to $[-\pi, \pi)$ (note the interval is open from one side). Application The theorem is useful for analyzing linear time-invariant systems (LTI systems) when the inputs and outputs are not square-integrable, so their Fourier transforms do not exist. A corollary is that the Fourier transform of the autocorrelation function of the output of an LTI system is equal to the product of the Fourier transform of the autocorrelation function of the input of the system times the squared magnitude of the Fourier transform of the system impulse response.
This works even when the Fourier transforms of the input and output signals do not exist because these signals are not square-integrable, so the system inputs and outputs cannot be directly related by the Fourier transform of the impulse response. Since the Fourier transform of the autocorrelation function of a signal is the power spectrum of the signal, this corollary is equivalent to saying that the power spectrum of the output is equal to the power spectrum of the input times the energy transfer function. This corollary is used in the parametric method for power spectrum estimation. Discrepancies in terminology In many textbooks and in much of the technical literature, it is tacitly assumed that Fourier inversion of the autocorrelation function and the power spectral density is valid, and the Wiener–Khinchin theorem is stated, very simply, as if it said that the Fourier transform of the autocorrelation function was equal to the power spectral density, ignoring all questions of convergence (similar to Einstein's paper). But the theorem (as stated here) was applied by Norbert Wiener and Aleksandr Khinchin to the sample functions (signals) of wide-sense-stationary random processes, signals whose Fourier transforms do not exist. Wiener's contribution was to make sense of the spectral decomposition of the autocorrelation function of a sample function of a wide-sense-stationary random process even when the integrals for the Fourier transform and Fourier inversion do not make sense. Further complicating the issue is that the discrete Fourier transform always exists for digital, finite-length sequences, meaning that the theorem can be blindly applied to calculate autocorrelations of numerical sequences. As mentioned earlier, the relation of this discrete sampled data to a mathematical model is often misleading, and related errors can show up as a divergence when the sequence length is modified. Some authors refer to as the autocovariance function. They then proceed to normalize it by dividing by , to obtain what they refer to as the autocorrelation function. References Further reading Theorems in Fourier analysis Signal processing Probability theorems
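The discrete-time form of the theorem can be checked numerically. The sketch below is illustrative and not from the article: the AR(1) signal, the sample length, and all variable names are assumptions. It verifies that, for a finite sample, the periodogram equals the DFT of the circular sample autocorrelation, which is the finite-length analogue of the autocorrelation/power-spectrum Fourier pair.

```python
import numpy as np

# Hypothetical check of the discrete Wiener-Khinchin relation on a finite sample:
# the periodogram |X[f]|^2 / n equals the DFT of the circular sample autocorrelation.
rng = np.random.default_rng(0)
n = 4096

# Sample path of an AR(1) process x[t] = 0.9 x[t-1] + noise (assumed example signal).
x = np.zeros(n)
eps = rng.standard_normal(n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + eps[t]

# Periodogram: squared magnitude of the DFT, scaled by 1/n.
periodogram = np.abs(np.fft.fft(x)) ** 2 / n

# Circular sample autocorrelation r[k] = (1/n) * sum_t x[t] * x[(t+k) mod n].
r = np.array([np.dot(x, np.roll(x, -k)) for k in range(n)]) / n

# Its DFT is real (r is circularly symmetric) and matches the periodogram.
psd_from_autocorr = np.fft.fft(r).real
print(np.allclose(periodogram, psd_from_autocorr))  # True
```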
Wiener–Khinchin theorem
[ "Mathematics", "Technology", "Engineering" ]
1,119
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Theorems in probability theory", "Mathematical problems", "Mathematical theorems" ]
3,352,292
https://en.wikipedia.org/wiki/Wavelet%20transform
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. Definition A function $\psi \in L^2(\mathbb{R})$ is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is, a complete orthonormal system for the Hilbert space $L^2(\mathbb{R})$ of square-integrable functions on the real line. The Hilbert basis is constructed as the family of functions $\psi_{jk}(x) = 2^{j/2}\,\psi(2^{j}x - k)$ by means of dyadic translations and dilations of $\psi$, for integers $j, k \in \mathbb{Z}$. If, under the standard inner product on $L^2(\mathbb{R})$, this family is orthonormal, then it is an orthonormal system: $\langle \psi_{jk}, \psi_{lm} \rangle = \delta_{jl}\,\delta_{km}$, where $\delta_{jl}$ is the Kronecker delta. Completeness is satisfied if every function $f \in L^2(\mathbb{R})$ may be expanded in the basis as $f(x) = \sum_{j,k=-\infty}^{\infty} c_{jk}\,\psi_{jk}(x)$, with convergence of the series understood to be convergence in norm. Such a representation of $f$ is known as a wavelet series. This implies that an orthonormal wavelet is self-dual. The integral wavelet transform is the integral transform defined as $[W_{\psi}f](a,b) = \frac{1}{\sqrt{|a|}}\int_{-\infty}^{\infty} \overline{\psi\!\left(\frac{x-b}{a}\right)}\, f(x)\, dx$. The wavelet coefficients $c_{jk}$ are then given by $c_{jk} = [W_{\psi}f](2^{-j}, k\,2^{-j})$. Here, $a = 2^{-j}$ is called the binary dilation or dyadic dilation, and $b = k\,2^{-j}$ is the binary or dyadic position. Principle The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape, imposing a restriction on choosing suitable basis functions. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. This is based on the uncertainty principle of signal processing, $\Delta t\,\Delta\omega \geq \tfrac{1}{2}$, where $t$ represents time and $\omega$ angular frequency ($\omega = 2\pi f$, where $f$ is ordinary frequency). The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window is chosen, the larger is the value of $\Delta t$. When $\Delta t$ is large, there is bad time resolution, good frequency resolution, low frequency and a large scaling factor; when $\Delta t$ is small, there is good time resolution, bad frequency resolution, high frequency and a small scaling factor. In other words, the basis function can be regarded as an impulse response of a system with which the function has been filtered. The transformed signal provides information about the time and the frequency. Therefore, wavelet transformation contains information similar to the short-time Fourier transformation, but with additional special properties of the wavelets, which show up at the resolution in time at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. Note, however, that the frequency resolution is decreasing for increasing frequencies while the temporal resolution increases. This consequence of the Fourier uncertainty principle is not correctly displayed in the figure. This shows that wavelet transformation is good in time resolution of high frequencies, while for slowly varying functions, the frequency resolution is remarkable. Another example: the analysis of three superposed sinusoidal signals with STFT and wavelet transformation. Wavelet compression Wavelet compression is a form of data compression well suited for image compression (sometimes also video compression and audio compression). Notable implementations are JPEG 2000, DjVu and ECW for still images, JPEG XS, CineForm, and the BBC's Dirac. The goal is to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy.
Using a wavelet transform, the wavelet compression methods are adequate for representing transients, such as percussion sounds in audio, or high-frequency components in two-dimensional images, for example an image of stars on a night sky. This means that the transient elements of a data signal can be represented by a smaller amount of information than would be the case if some other transform, such as the more widespread discrete cosine transform, had been used. Discrete wavelet transform has been successfully applied for the compression of electrocardiograph (ECG) signals In this work, the high correlation between the corresponding wavelet coefficients of signals of successive cardiac cycles is utilized employing linear prediction. Wavelet compression is not effective for all kinds of data. Wavelet compression handles transient signals well. But smooth, periodic signals are better compressed using other methods, particularly traditional harmonic analysis in the frequency domain with Fourier-related transforms. Compressing data that has both transient and periodic characteristics may be done with hybrid techniques that use wavelets along with traditional harmonic analysis. For example, the Vorbis audio codec primarily uses the modified discrete cosine transform to compress audio (which is generally smooth and periodic), however allows the addition of a hybrid wavelet filter bank for improved reproduction of transients. See Diary Of An x264 Developer: The problems with wavelets (2010) for discussion of practical issues of current methods using wavelets for video compression. Method First a wavelet transform is applied. This produces as many coefficients as there are pixels in the image (i.e., there is no compression yet since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is called transform coding. After that, the coefficients are quantized and the quantized values are entropy encoded and/or run length encoded. A few 1D and 2D applications of wavelet compression use a technique called "wavelet footprints". Evaluation Requirement for image compression For most natural images, the spectrum density of lower frequency is higher. As a result, information of the low frequency signal (reference signal) is generally preserved, while the information in the detail signal is discarded. From the perspective of image compression and reconstruction, a wavelet should meet the following criteria while performing image compression: Being able to transform more original image into the reference signal. Highest fidelity reconstruction based on the reference signal. Should not lead to artifacts in the image reconstructed from the reference signal alone. Requirement for shift variance and ringing behavior Wavelet image compression system involves filters and decimation, so it can be described as a linear shift-variant system. A typical wavelet transformation diagram is displayed below: The transformation system contains two analysis filters (a low pass filter and a high pass filter ), a decimation process, an interpolation process, and two synthesis filters ( and ). The compression and reconstruction system generally involves low frequency components, which is the analysis filters for image compression and the synthesis filters for reconstruction. 
To evaluate such a system, we can input an impulse and observe its reconstruction; the optimal wavelets are those that introduce minimum shift variance and sidelobes into the reconstruction. Even though a wavelet with strict shift invariance is not realistic, it is possible to select a wavelet with only slight shift variance. For example, we can compare the shift variance of two filters: By observing the impulse responses of the two filters, we can conclude that the second filter is less sensitive to the input location (i.e. it is less shift variant). Another important issue for image compression and reconstruction is the system's oscillatory behavior, which might lead to severe undesired artifacts in the reconstructed image. To avoid this, the wavelet filters should have a large peak-to-sidelobe ratio. So far we have discussed the one-dimensional transformation of the image compression system. This analysis can be extended to two dimensions, for which a more general term, shiftable multiscale transforms, has been proposed. Derivation of impulse response As mentioned earlier, the impulse response can be used to evaluate the image compression/reconstruction system. For the input sequence, the reference signal after one level of decomposition is the input convolved with a low-pass filter and then decimated by a factor of two. Similarly, the next reference signal is obtained by again filtering and decimating the result by a factor of two. After L levels of decomposition (and decimation), the analysis response is obtained by retaining one out of every $2^L$ samples. On the other hand, to reconstruct the signal x(n), we can consider a reference signal at the coarsest stage. If the detail signals are equal to zero, then the reference signal at the previous stage is obtained by interpolating the current reference signal and convolving it with the synthesis filter. The procedure is iterated, stage by stage, to obtain the reference signal at the finest stage. After L iterations, the synthesis impulse response is calculated, which relates the coarsest reference signal and the reconstructed signal. To obtain the overall L-level analysis/synthesis system, the analysis and synthesis responses are combined. Finally, the peak to first sidelobe ratio and the average second sidelobe of the overall impulse response can be used to evaluate the wavelet image compression performance. Comparison with Fourier transform and time-frequency analysis Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the common Morlet wavelet is mathematically identical to a short-time Fourier transform using a Gaussian window function. The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses. Other practical applications The wavelet transform can provide us with the frequency of the signals and the time associated with those frequencies, making it very convenient for application in numerous fields. For instance, it is used in signal processing of accelerations for gait analysis, for fault detection, for the analysis of seasonal displacements of landslides, for the design of low-power pacemakers, and also in ultra-wideband (UWB) wireless communications. Time-causal wavelets For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained.
Time-causal wavelets representations have been developed by Szu et al and Lindeberg, with the latter method also involving a memory-efficient time-recursive implementation. Synchro-squeezed transform Synchro-squeezed transform can significantly enhance temporal and frequency resolution of time-frequency representation obtained using conventional wavelet transform. See also Binomial QMF (also known as Daubechies wavelet) Biorthogonal nearly coiflet basis, which shows that wavelet for image compression can also be nearly coiflet (nearly orthogonal) Chirplet transform Complex wavelet transform Constant-Q transform Continuous wavelet transform Daubechies wavelet Discrete wavelet transform DjVu format uses wavelet-based IW44 algorithm for image compression Dual wavelet ECW, a wavelet-based geospatial image format designed for speed and processing efficiency Gabor wavelet Haar wavelet JPEG 2000, a wavelet-based image compression standard Least-squares spectral analysis Morlet wavelet Multiresolution analysis MrSID, the image format developed from original wavelet compression research at Los Alamos National Laboratory (LANL) S transform Scaleograms, a type of spectrogram generated using wavelets instead of a short-time Fourier transform Set partitioning in hierarchical trees Short-time Fourier transform Stationary wavelet transform Time–frequency representation Wavelet References External links Concise Introduction to Wavelets by René Puschinger Wavelets Functional analysis Signal processing Image compression Data compression
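As a concrete illustration of the transform, quantize, and entropy-code pipeline described in the Method section above, here is a minimal sketch using the PyWavelets library. The library choice, the Haar wavelet, the decomposition level, and the keep-the-largest-5% rule are all assumptions made for the example; zeroing small coefficients merely stands in for real quantization and entropy coding.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Hypothetical wavelet-compression sketch: transform, discard small coefficients,
# reconstruct, and report the resulting error.
image = np.random.default_rng(1).random((256, 256))  # placeholder "image"

# 1. Multi-level 2-D discrete wavelet transform (as many coefficients as pixels).
coeffs = pywt.wavedec2(image, wavelet="haar", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

# 2. Keep only the largest 5% of coefficients by magnitude (stand-in for coding).
threshold = np.quantile(np.abs(arr), 0.95)
arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)

# 3. Reconstruct from the sparse coefficient array.
coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
reconstructed = pywt.waverec2(coeffs_sparse, wavelet="haar")

print("kept coefficients:", np.count_nonzero(arr_sparse), "of", arr.size)
print("RMS error:", float(np.sqrt(np.mean((image - reconstructed) ** 2))))
```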
Wavelet transform
[ "Mathematics", "Technology", "Engineering" ]
2,306
[ "Functions and mappings", "Telecommunications engineering", "Functional analysis", "Computer engineering", "Signal processing", "Mathematical objects", "Mathematical relations" ]
3,352,423
https://en.wikipedia.org/wiki/Hydrogen%20fluoride
Hydrogen fluoride (fluorane) is an inorganic compound with chemical formula . It is a very poisonous, colorless gas or liquid that dissolves in water to yield hydrofluoric acid. It is the principal industrial source of fluorine, often in the form of hydrofluoric acid, and is an important feedstock in the preparation of many important compounds including pharmaceuticals and polymers such as polytetrafluoroethylene (PTFE). HF is also widely used in the petrochemical industry as a component of superacids. Due to strong and extensive hydrogen bonding, it boils near room temperature, a much higher temperature than other hydrogen halides. Hydrogen fluoride is an extremely dangerous gas, forming corrosive and penetrating hydrofluoric acid upon contact with moisture. The gas can also cause blindness by rapid destruction of the corneas. History In 1771 Carl Wilhelm Scheele prepared the aqueous solution, hydrofluoric acid in large quantities, although hydrofluoric acid had been known in the glass industry before then. French chemist Edmond Frémy (1814–1894) is credited with discovering hydrogen fluoride (HF) while trying to isolate fluorine. Structure and reactions HF is diatomic in the gas-phase. As a liquid, HF forms relatively strong hydrogen bonds, hence its relatively high boiling point. Solid HF consists of zig-zag chains of HF molecules. The HF molecules, with a short covalent H–F bond of 95 pm length, are linked to neighboring molecules by intermolecular H–F distances of 155 pm. Liquid HF also consists of chains of HF molecules, but the chains are shorter, consisting on average of only five or six molecules. Comparison with other hydrogen halides Hydrogen fluoride does not boil until 20 °C in contrast to the heavier hydrogen halides, which boil between −85 °C (−120 °F) and −35 °C (−30 °F). This hydrogen bonding between HF molecules gives rise to high viscosity in the liquid phase and lower than expected pressure in the gas phase. Aqueous solutions HF is miscible with water (dissolves in any proportion). In contrast, the other hydrogen halides exhibit limiting solubilities in water. Hydrogen fluoride forms a monohydrate HF.H2O with melting point −40 °C (−40 °F), which is 44 °C (79 °F) above the melting point of pure HF. Aqueous solutions of HF are called hydrofluoric acid. When dilute, hydrofluoric acid behaves like a weak acid, unlike the other hydrohalic acids, due to the formation of hydrogen-bonded ion pairs [·F−]. However concentrated solutions are strong acids, because bifluoride anions are predominant, instead of ion pairs. In liquid anhydrous HF, self-ionization occurs: which forms an extremely acidic liquid (). Reactions with Lewis acids Like water, HF can act as a weak base, reacting with Lewis acids to give superacids. A Hammett acidity function (H0) of −21 is obtained with antimony pentafluoride (SbF5), forming fluoroantimonic acid. Production Hydrogen fluoride is typically produced by the reaction between sulfuric acid and pure grades of the mineral fluorite: About 20% of manufactured HF is a byproduct of fertilizer production, which generates hexafluorosilicic acid. This acid can be degraded to release HF thermally and by hydrolysis: Use In general, anhydrous hydrogen fluoride is more common industrially than its aqueous solution, hydrofluoric acid. Its main uses, on a tonnage basis, are as a precursor to organofluorine compounds and a precursor to cryolite for the electrolysis of aluminium. 
Precursor to organofluorine compounds HF reacts with chlorocarbons to give fluorocarbons. An important application of this reaction is the production of tetrafluoroethylene (TFE), precursor to Teflon. Chloroform is fluorinated by HF to produce chlorodifluoromethane (R-22): Pyrolysis of chlorodifluoromethane (at 550-750 °C) yields TFE. HF is a reactive solvent in the electrochemical fluorination of organic compounds. In this approach, HF is oxidized in the presence of a hydrocarbon and the fluorine replaces C–H bonds with C–F bonds. Perfluorinated carboxylic acids and sulfonic acids are produced in this way. 1,1-Difluoroethane is produced by adding HF to acetylene using mercury as a catalyst. The intermediate in this process is vinyl fluoride or fluoroethylene, the monomeric precursor to polyvinyl fluoride. Precursor to metal fluorides and fluorine The electrowinning of aluminium relies on the electrolysis of aluminium fluoride in molten cryolite. Several kilograms of HF are consumed per ton of Al produced. Other metal fluorides are produced using HF, including uranium tetrafluoride. HF is the precursor to elemental fluorine, F2, by electrolysis of a solution of HF and potassium bifluoride. The potassium bifluoride is needed because anhydrous HF does not conduct electricity. Several thousand tons of F2 are produced annually. Catalyst HF serves as a catalyst in alkylation processes in refineries. It is used in the majority of the installed linear alkyl benzene production facilities in the world. The process involves dehydrogenation of n-paraffins to olefins, and subsequent reaction with benzene using HF as catalyst. For example, in oil refineries "alkylate", a component of high-octane petrol (gasoline), is generated in alkylation units, which combine C3 and C4 olefins and iso-butane. Solvent Hydrogen fluoride is an excellent solvent. Reflecting the ability of HF to participate in hydrogen bonding, even proteins and carbohydrates dissolve in HF and can be recovered from it. In contrast, most non-fluoride inorganic chemicals react with HF rather than dissolving. Health effects Hydrogen fluoride is highly corrosive and a powerful contact poison. Exposure requires immediate medical attention. It can cause blindness by rapid destruction of the corneas. Breathing in hydrogen fluoride at high levels or in combination with skin contact can cause death from an irregular heartbeat or from pulmonary edema (fluid buildup in the lungs). References External links Fluorides, Hydrogen Fluoride, and Fluorine at ATSDR. Retrieved September 30, 2019 CDC - NIOSH Pocket Guide to Chemical Hazards Hydrogen Fluoride Fact Sheet at Toxics Use Reduction Institute Fluorides Hazardous air pollutants Hydrogen compounds Industrial gases Inorganic solvents Nonmetal halides Diatomic molecules
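The production reaction given above, fluorite plus sulfuric acid yielding calcium sulfate and HF, lends itself to a quick mass-balance check. The short script below is a hypothetical illustration only: it assumes complete conversion and rounded molar masses, and says nothing about actual plant yields.

```python
# Stoichiometry of CaF2 + H2SO4 -> CaSO4 + 2 HF (idealized, 100% conversion).
M_CaF2 = 40.08 + 2 * 19.00   # g/mol, fluorite
M_HF = 1.008 + 19.00         # g/mol, hydrogen fluoride

def hf_yield_kg(fluorite_kg: float) -> float:
    """Theoretical mass of HF obtainable from a given mass of CaF2."""
    kmol_caf2 = fluorite_kg / M_CaF2   # numerically kmol when mass is in kg
    return 2 * kmol_caf2 * M_HF        # 2 mol HF per mol CaF2

print(f"{hf_yield_kg(1000):.0f} kg HF per tonne of fluorite")  # about 513 kg
```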
Hydrogen fluoride
[ "Physics", "Chemistry" ]
1,505
[ "Molecules", "Salts", "Industrial gases", "Chemical process engineering", "Diatomic molecules", "Fluorides", "Matter" ]
3,352,536
https://en.wikipedia.org/wiki/Exotic%20star
An exotic star is a hypothetical compact star composed of exotic matter (something not made of electrons, protons, neutrons, or muons), and balanced against gravitational collapse by degeneracy pressure or other quantum properties. Types of exotic stars include quark stars (composed of quarks), strange stars (composed of strange quark matter, a condensate of up, down, and strange quarks), and preon stars (composed of speculative material made of preons, which are hypothetical particles and "building blocks" of quarks and leptons, should quarks be decomposable into component sub-particles). Of the various types of exotic star proposed, the best evidenced and understood is the quark star, although its existence is not confirmed. In Newtonian mechanics, objects dense enough to trap any emitted light are called dark stars, as opposed to black holes in general relativity. However, the same name is used for hypothetical ancient "stars" which derived energy from dark matter. Exotic stars are hypothetical – partly because it is difficult to test in detail how such forms of matter may behave, and partly because prior to the fledgling technology of gravitational-wave astronomy, there was no satisfactory means of detecting compact astrophysical objects that do not radiate either electromagnetically or through known particles. While candidate objects are occasionally identified based on indirect evidence, it is not yet possible to distinguish their observational signatures from those of known objects. Quark stars and strange stars A quark star is a hypothesized object that results from the decomposition of neutrons into their constituent up and down quarks under gravitational pressure. It is expected to be smaller and denser than a neutron star, and may survive in this new state indefinitely, if no extra mass is added. Effectively, it is a single, very large hadron. Quark stars that contain strange matter are called strange stars. Based on observations released by the Chandra X-Ray Observatory on 10 April 2002, two objects, named RX J1856.5−3754 and 3C 58, were suggested as quark star candidates. The former appeared to be much smaller and the latter much colder than expected for a neutron star, suggesting that they were composed of material denser than neutronium. However, these observations were met with skepticism by researchers who said the results were not conclusive. After further analysis, RX J1856.5−3754 was excluded from the list of quark star candidates. Electroweak stars An electroweak star is a hypothetical type of exotic star in which the gravitational collapse of the star is prevented by radiation pressure resulting from electroweak burning; that is, the energy released by the conversion of quarks into leptons through the electroweak force. This proposed process might occur in a volume at the star's core approximately the size of an apple, containing about two Earth masses, and reaching temperatures on the order of 10^15 K (1 PK). Electroweak stars could be identified through the equal number of neutrinos emitted of all three generations, taking into account neutrino oscillation. Preon stars A preon star is a proposed type of compact star made of preons, a group of hypothetical subatomic particles. Preon stars would be expected to have huge densities, exceeding kg/m3. They may have greater densities than quark stars, and they would be heavier but smaller than white dwarfs and neutron stars. Preon stars could originate from supernova explosions or the Big Bang.
Such objects could be detected in principle through gravitational lensing of gamma rays. Preon stars are a potential candidate for dark matter. However, current observations from particle accelerators speak against the existence of preons, or at least do not prioritize their investigation, since the only particle detector presently able to explore very high energies (the Large Hadron Collider) is not designed specifically for this and its research program is directed towards other areas, such as studying the Higgs boson, quark–gluon plasma and evidence related to physics beyond the Standard Model. Boson stars A boson star is a hypothetical astronomical object formed out of particles called bosons (conventional stars are formed from mostly protons and electrons, which are fermions, but also contain a large proportion of helium-4 nuclei, which are bosons, and smaller amounts of various heavier nuclei, which can be either). For this type of star to exist, there must be a stable type of boson with self-repulsive interaction; one possible candidate particle is the still-hypothetical "axion" (which is also a candidate for the not-yet-detected "non-baryonic dark matter" particles, which appear to compose roughly 25% of the mass of the Universe). It is theorized that unlike normal stars (which emit radiation due to gravitational pressure and nuclear fusion), boson stars would be transparent and invisible. The immense gravity of a compact boson star would bend light around the object, creating an empty region resembling the shadow of a black hole's event horizon. Like a black hole, a boson star would absorb ordinary matter from its surroundings, but because of the transparency, matter (which would probably heat up and emit radiation) would be visible at its center. Simulations suggest that rotating boson stars would be torus-shaped, as centrifugal forces would give the bosonic matter that form. There is no significant evidence that such stars exist. However, it may become possible to detect them by the gravitational radiation emitted by a pair of co-orbiting boson stars. GW190521, thought to be the most energetic black hole merger ever recorded, may be the head-on collision of two boson stars. The invisible companion to a Sun-like star identified by Gaia mission could be a black hole or either a boson star or an exotic star of other types. Boson stars may have formed through gravitational collapse during the primordial stages of the Big Bang. At least in theory, a supermassive boson star could exist at the core of a galaxy, which may explain many of the observed properties of active galactic cores. Boson stars have also been proposed as candidate dark matter objects, and it has been hypothesized that the dark matter haloes surrounding most galaxies might be viewed as enormous "boson stars." The compact boson stars and boson shells are often studied involving fields like the massive (or massless) complex scalar fields, the U(1) gauge field and gravity with conical potential. The presence of a positive or negative cosmological constant in the theory facilitates a study of these objects in de Sitter and anti-de Sitter spaces. Boson stars composed of elementary particles with spin-1 have been labelled Proca stars. Braaten, Mohapatra, and Zhang have theorized that a new type dense axion-star may exist in which gravity is balanced by the mean-field pressure of the axion Bose–Einstein condensate. 
The possibility that dense axion stars exist has been challenged by other work that does not support this claim. Planck stars In loop quantum gravity, a Planck star is a hypothetically possible astronomical object that is created when the energy density of a collapsing star reaches the Planck energy density. Under these conditions, assuming gravity and spacetime are quantized, there arises a repulsive "force" derived from Heisenberg's uncertainty principle. In other words, if gravity and spacetime are quantized, the accumulation of mass-energy inside the Planck star cannot collapse beyond this limit to form a gravitational singularity because it would violate the uncertainty principle for spacetime itself. Q-stars Q-stars are hypothetical objects that originate from supernovae or the big bang. They are theorized to be massive enough to bend space-time to a degree such that some, but not all light could escape from its surface. These are predicted to be denser than neutron stars or even quark stars. See also Exotic matter Glueball Q star Quasi-star Electroweak force Chiral anomaly Sphaleron Quark nova Quark star Preon star Lepton Quark Degenerate matter Degeneracy pressure Footnotes References Sources External links Degenerate stars Hypothetical stars Compact stars Star types
Exotic star
[ "Physics", "Astronomy" ]
1,707
[ "Exotic matter", "Astronomical classification systems", "Compact stars", "Star types", "Matter" ]
18,193,302
https://en.wikipedia.org/wiki/Photodissociation%20region
In astrophysics, photodissociation regions (or photon-dominated regions, PDRs) are predominantly neutral regions of the interstellar medium in which far ultraviolet photons strongly influence the gas chemistry and act as the most important source of heat. They occur in any region of interstellar gas that is dense and cold enough to remain neutral, but that has too low a column density to prevent the penetration of far-UV photons from distant, massive stars. A typical and well-studied example is the gas at the boundary of a giant molecular cloud. PDRs are also associated with HII regions, reflection nebulae, active galactic nuclei, and Planetary nebulae. All the atomic gas and most of the molecular gas in the galaxy is found in PDRs. The closest PDRs to the Sun are IC 59 and IC 63, near the bright Be star Gamma Cassiopeiae. History The study of photodissociation regions began from early observations of the star-forming regions Orion A and M17 which showed neutral areas bright in infrared radiation lying outside ionised HII regions. References Astrophysics Interstellar media
Photodissociation region
[ "Physics", "Astronomy" ]
226
[ "Interstellar media", "Outer space", "Plasma physics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Plasma physics stubs", "Outer space stubs", "Astronomical sub-disciplines" ]
18,193,964
https://en.wikipedia.org/wiki/K-Poincar%C3%A9%20algebra
In physics and mathematics, the κ-Poincaré algebra, named after Henri Poincaré, is a deformation of the Poincaré algebra into a Hopf algebra. In the bicrossproduct basis, introduced by Majid and Ruegg, its commutation rules read: where are the translation generators, the rotations and the boosts. The coproducts are: The antipodes and the counits: The κ-Poincaré algebra is the dual Hopf algebra to the κ-Poincaré group, and can be interpreted as its "infinitesimal" version. References Hopf algebras Mathematical physics
K-Poincaré algebra
[ "Physics", "Mathematics" ]
133
[ "Algebra stubs", "Applied mathematics", "Theoretical physics", "Mathematical physics", "Algebra" ]
18,195,193
https://en.wikipedia.org/wiki/Thermo-magnetic%20motor
Thermomagnetic motors (also known as Curie wheels, Curie-motors and pyromagnetic motors) convert heat into kinetic energy using the thermomagnetic effect, i.e., the influence of temperature on the magnetization of a magnetic material. Historical background This technology dates back to the 19th century, when a number of scientists submitted patents on so-called "pyro-magnetic generators". These systems operate in a magnetic Brayton cycle, the reverse of the cycle used in magnetocaloric refrigerators. Experiments have produced only extremely inefficient working prototypes; however, thermodynamic analyses indicate that thermomagnetic motors can reach a high efficiency relative to the Carnot efficiency for small temperature differences around the magnetic material's Curie temperature. The thermomagnetic motor principle has been studied as a possible actuator in smart materials and has been used successfully to generate electric energy from ultra-low temperature gradients. See also Thermomagnetic convection References Electric motors Magnetic devices
Thermo-magnetic motor
[ "Technology", "Engineering" ]
211
[ "Electrical engineering", "Engines", "Electric motors" ]
18,198,290
https://en.wikipedia.org/wiki/Soft%20story%20building
A soft story building is a multi-story building in which one or more floors have windows, wide doors, large unobstructed commercial spaces, or other openings in places where a shear wall would normally be required for stability as a matter of earthquake engineering design. A typical soft story building is an apartment building of three or more stories located over a ground level with large openings, such as a parking garage or series of retail businesses with large windows. Buildings are classified as having a soft story if that level is less than 70% as stiff as the floor immediately above it, or less than 80% as stiff as the average stiffness of the three floors above it. Soft story buildings are vulnerable to collapse in a moderate to severe earthquake in a phenomenon known as soft story collapse. The inadequately-braced level is relatively less resistant than surrounding floors to lateral earthquake motion, so a disproportionate amount of the building's overall side-to-side drift is focused on that floor. Subject to disproportionate lateral stress, and less able to withstand the stress, the floor becomes a weak point that may suffer structural damage or complete failure, which in turn results in the collapse of the entire building. 1989 Loma Prieta earthquake Soft-story failure was responsible for nearly half of all homes that became uninhabitable in California's Loma Prieta earthquake of 1989 and was projected to cause severe damage and possible destruction of 160,000 homes in the event of a more significant earthquake in the San Francisco Bay Area. As of 2009, few such buildings in the area had undergone the relatively inexpensive seismic retrofit to correct the condition. In 2013, San Francisco mandated screening of soft-story buildings to determine if retrofitting is necessary and required that retrofitting be completed by 2017 through 2020. Los Angeles earthquake hazard reduction policies After the establishment of the San Francisco Mandatory Seismic Retrofit Program in 2013, Los Angeles adopted a similar ordinance targeting soft-story apartment buildings. This ordinance is to reduce structural damage in the event of an earthquake by reinforcing soft-story areas with steel structures. A soft-story building is described as existing wood-frame buildings with soft, weak, or open-front walls and existing non-ductile concrete buildings in the ordinance. Most of these buildings were built before 1978, before building codes were changed. Los Angeles property owners are being targeted by the size of their buildings. The first group of ordinances went out May 2, 2016, with sixteen or more units and more than three stories. The second is July 22, 2016, with sixteen or more units and two stories. The third is October 17, 2016, with sixteen or fewer units and more than three stories. The fourth is January 30, 2017, for nine to fifteen units. The fifth is May 29, 2017 for seven to eight units. The sixth is August 14, 2017, for four to six units. Then on October 30, 2017 condominiums and commercial buildings will receive their orders to comply. The order to comply is legally and logistically significant because it starts a "ticking clock". In Los Angeles, property owners have two years from the date of the order to bring forth approved plans. After that milestone, they have 3.5 years (counting from when the order was received) to obtain the construction permit. 
Total completion, as confirmed by receiving a certificate of compliance, must be attained within seven years of the date of issue of the original order. In Los Angeles, property owners also have the option to demolish their noncompliant buildings. Demolition plans must be submitted within two years, and a demolition permit must be issued within 3.5 years (from the date of the original compliance order), and the actual demolition must be completed within seven years of the original compliance order. Failure to meet the deadlines can result in the municipality stepping in, evicting any lingering tenants, and then demolishing the building, whereupon the cost of demolition is charged back to the property owner. If the property owner refuses to pay the bill, then the City can seize the entire property and sell it in order to pay for the demolition (including all associated clean-up tasks), as well as any back-taxes that may be owing. However, because land values are often high, it is rare for property owners to neglect orders from the municipality. 2023 Turkey–Syria earthquake In Turkey, multi-story residential buildings often have a soft inset ground floor, which is used in high-density areas in Asia to provide extra space for parking or pedestrians. About 90% of buildings that collapsed in the 1999 İzmit earthquake in Turkey had soft stories, which prompted the creation of a building code for earthquake safety. In 2016, Turkish President Recep Tayyip Erdoğan started issuing amnesty to developers for building regulations, allowing the construction of unsafe multi-story buildings, including ones with soft stories. The 2023 Turkey–Syria earthquake destroyed many soft-story buildings, which were widespread in the country and greatly increased the amount of damage and number of casualties. References Construction Earthquake and seismic risk mitigation Earthquake engineering
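The stiffness screening rule quoted at the start of the article (a story is soft if it is less than 70% as stiff as the story above, or less than 80% as stiff as the average of the three stories above) can be written as a small check. The sketch below is illustrative only: the function name, the story ordering, and the example stiffness values are assumptions, and real screening relies on structural analysis rather than a list of numbers.

```python
# Minimal sketch of the soft-story screening rule described in the article.
def soft_stories(stiffness_by_story: list[float]) -> list[int]:
    """Return indices (0 = ground story) of stories flagged as soft."""
    flagged = []
    for i, k in enumerate(stiffness_by_story[:-1]):   # top story has nothing above it
        above = stiffness_by_story[i + 1:i + 4]       # up to three stories above
        if k < 0.70 * stiffness_by_story[i + 1]:
            flagged.append(i)
        elif len(above) == 3 and k < 0.80 * (sum(above) / 3):
            flagged.append(i)
    return flagged

# Example: an open ground floor (parking) under three stiffer residential floors.
print(soft_stories([40.0, 100.0, 100.0, 95.0]))  # -> [0]
```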
Soft story building
[ "Engineering" ]
1,047
[ "Structural engineering", "Construction", "Civil engineering", "Earthquake engineering", "Earthquake and seismic risk mitigation" ]
18,199,714
https://en.wikipedia.org/wiki/Vidarabine%20phosphate
Vidarabine phosphate is an adenosine monophosphate nucleotide in which the ribose is replaced by an arabinose moiety. It has antiviral and possibly antineoplastic properties. See also Vidarabine References Nucleotides
Vidarabine phosphate
[ "Chemistry", "Biology" ]
55
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
18,200,485
https://en.wikipedia.org/wiki/Minnaert%20function
The Minnaert function is a photometric function used to interpret astronomical observations and remote sensing data for the Earth. It was named after the astronomer Marcel Minnaert. This function expresses the radiance factor (RADF) as a function of the phase angle, the photometric latitude and the photometric longitude, in terms of the Minnaert albedo, an empirical parameter, the scattered radiance in the viewing direction, and the incident radiance. The phase angle is the angle between the light source and the observer with the object as the center. The assumptions made are: the surface is illuminated by a distant point source; the surface is isotropic and flat. Minnaert's contribution is the introduction of the empirical parameter, having a value between 0 and 1, originally for a better interpretation of observations of the Moon. In remote sensing the use of this function is referred to as Minnaert topographic correction, a necessity when interpreting images of rough terrain. References Observational astronomy Photometric systems Equations of astronomy
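Because the article's formula did not survive extraction, the sketch below assumes the commonly used form of the Minnaert law, r = A * mu0**k * mu**(k-1), with mu0 and mu the cosines of the incidence and emission angles, A the Minnaert albedo, and k the empirical parameter (k = 1 recovers a Lambertian surface). Both the notation and the least-squares fitting procedure are assumptions for illustration, not the article's own presentation.

```python
import numpy as np

# Assumed form of the Minnaert law and a simple log-linear fit of its parameters.
def minnaert_radiance_factor(A, k, mu0, mu):
    return A * mu0**k * mu**(k - 1)

# Synthetic "observations" with a little noise (all values are made up).
rng = np.random.default_rng(2)
mu0 = rng.uniform(0.2, 1.0, 200)   # cos(incidence angle)
mu = rng.uniform(0.2, 1.0, 200)    # cos(emission angle)
r = minnaert_radiance_factor(0.3, 0.7, mu0, mu) * np.exp(0.01 * rng.standard_normal(200))

# Since log(r * mu) = log(A) + k * log(mu0 * mu), a straight-line fit recovers A and k.
slope, intercept = np.polyfit(np.log(mu0 * mu), np.log(r * mu), 1)
print("fitted k =", round(slope, 3))                      # close to 0.7
print("fitted A =", round(float(np.exp(intercept)), 3))   # close to 0.3
```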
Minnaert function
[ "Physics", "Astronomy" ]
216
[ "Concepts in astronomy", "Observational astronomy", "Astronomical sub-disciplines", "Equations of astronomy" ]
18,200,701
https://en.wikipedia.org/wiki/Korringa%E2%80%93Kohn%E2%80%93Rostoker%20method
The Korringa–Kohn–Rostoker (KKR) method is used to calculate the electronic band structure of periodic solids. In the derivation of the method using multiple scattering theory by Jan Korringa and the derivation based on the Kohn and Rostoker variational method, the muffin-tin approximation was used. Later calculations are done with full potentials having no shape restrictions. Introduction All solids in their ideal state are single crystals with the atoms arranged on a periodic lattice. In condensed matter physics, the properties of such solids are explained on the basis of their electronic structure. This requires the solution of a complicated many-electron problem, but the density functional theory of Walter Kohn makes it possible to reduce it to the solution of a Schroedinger equation with a one-electron periodic potential. The problem is further simplified with the use of group theory and in particular Bloch's theorem, which leads to the result that the energy eigenvalues depend on the crystal momentum and are divided into bands. Band theory is used to calculate the eigenvalues and wave functions. As compared with other band structure methods, the Korringa-Kohn-Rostoker (KKR) band structure method has the advantage of dealing with small matrices due to the fast convergence of scattering operators in angular momentum space, and disordered systems where it allows to carry out with relative ease the ensemble configuration averages. The KKR method does have a few “bills” to pay, e.g., (1) the calculation of KKR structure constants, the empty lattice propagators, must be carried out by the Ewald's sums for each energy and k-point, and (2) the KKR functions have a pole structure on the real energy axis, which requires a much larger number of k points for the Brillouin Zone (BZ) integration as compared with other band theory methods. The KKR method has been implemented in several codes for electronic structure and spectroscopy calculations, such as MuST, AkaiKKR, sprKKR, FEFF, GNXAS and JuKKR. Mathematical formulation The KKR band theory equations for space-filling non-spherical potentials are derived in books and in the article on multiple scattering theory. The wave function near site is determined by the coefficients . According to Bloch's theorem, these coefficients differ only through a phase factor . The satisfy the homogeneous equations where and . The is the inverse of the scattering matrix calculated with the non-spherical potential for the site. As pointed out by Korringa, Ewald derived a summation process that makes it possible to calculate the structure constants, . The energy eigenvalues of the periodic solid for a particular , , are the roots of the equation . The eigenfunctions are found by solving for the with . By ignoring all contributions that correspond to an angular momentum greater than , they have dimension . In the original derivations of the KKR method, spherically symmetric muffin-tin potentials were used. Such potentials have the advantage that the inverse of the scattering matrix is diagonal in where is the scattering phase shift that appears in the partial wave analysis in scattering theory. The muffin-tin approximation is good for closely packed metals, but it does not work well for ionic solids like semiconductors. It also leads to errors in calculations of interatomic forces. Applications The KKR method may be combined with density functional theory (DFT) and used to study the electronic structure and consequent physical properties of molecules and materials. 
As with any DFT calculation, the electronic problem must be solved self-consistently, before quantities such as the total energy of a collection of atoms, the electron density, the band structure, and forces on individual atoms may be calculated. One major advantage of the KKR formalism over other electronic structure methods is that it provides direct access to the Green's function of a given system. This, and other convenient mathematical quantities recovered from the derivation in terms of multiple scattering theory, facilitate access to a range of physically relevant quantities, including transport properties, magnetic properties, and spectroscopic properties. One particularly powerful method which is unique to Green's function-based methods is the coherent potential approximation (CPA), which is an effective medium theory used to average over configurational disorder, such as is encountered in a substitutional alloy. The CPA captures the broken translational symmetry of the disordered alloy in a physically meaningful way, with the end result that the initially 'sharp' band structure is 'smeared-out', which reflects the finite lifetime of electronic states in such a system. The CPA can also be used to average over many possible orientations of magnetic moments, as is necessary to describe the paramagnetic state of a magnetic material (above its Curie temperature). This is referred to as the disordered local moment (DLM) picture. References Electronic structure methods
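As a concrete illustration of the secular-equation step described above, the sketch below scans an energy grid and looks for sign changes of det M(E, k) = det[m(E) − A(E, k)], whose roots are the band energies E_b(k). It is only a minimal sketch: inverse_t_matrix and structure_constants are hypothetical placeholder callables standing in for the real (and much more involved) KKR quantities, and no actual KKR code is implemented here.

```python
import numpy as np

def find_band_energies(inverse_t_matrix, structure_constants, k, energies):
    """Locate roots of det M(E, k) = 0 by scanning for sign changes.

    inverse_t_matrix(E)       -> (n, n) array, the inverse scattering matrix m(E)
    structure_constants(E, k) -> (n, n) array, the structure constants A(E, k)
    Both callables are placeholders for the real KKR quantities.
    """
    dets = []
    for E in energies:
        M = inverse_t_matrix(E) - structure_constants(E, k)
        dets.append(np.linalg.det(M).real)   # real part of the secular determinant
    roots = []
    for E0, E1, d0, d1 in zip(energies[:-1], energies[1:], dets[:-1], dets[1:]):
        if d0 * d1 < 0:                      # sign change brackets a band energy E_b(k)
            roots.append(0.5 * (E0 + E1))    # crude midpoint estimate; refine by bisection if needed
    return roots
```

In a production calculation the determinant is complex and has the pole structure on the real energy axis mentioned above, so real codes use more careful root tracking and integration techniques; the sketch only shows where the band energies enter the formalism.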
Korringa–Kohn–Rostoker method
[ "Physics", "Chemistry" ]
1,008
[ "Quantum chemistry", "Quantum mechanics", "Computational physics", "Electronic structure methods", "Computational chemistry" ]
18,203,720
https://en.wikipedia.org/wiki/Topological%20degree%20theory
In mathematics, topological degree theory is a generalization of the winding number of a curve in the complex plane. It can be used to estimate the number of solutions of an equation, and is closely connected to fixed-point theory. When one solution of an equation is easily found, degree theory can often be used to prove the existence of a second, nontrivial, solution. There are different types of degree for different types of maps: e.g. the Brouwer degree for continuous maps between subsets of R^n, the Leray-Schauder degree for compact mappings in normed spaces, the coincidence degree, and various other types. There is also a degree for continuous maps between manifolds. Topological degree theory has applications in complementarity problems, differential equations, differential inclusions and dynamical systems. Further reading Topological fixed point theory of multivalued mappings, Lech Górniewicz, Springer, 1999. Topological degree theory and applications, Donal O'Regan, Yeol Je Cho, Yu Qing Chen, CRC Press, 2006. Mapping Degree Theory, Enrique Outerelo, Jesus M. Ruiz, AMS Bookstore, 2009. Topology Algebraic topology Differential topology
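Since the degree generalizes the winding number, a small numerical example helps make the "counting solutions" idea concrete. The sketch below estimates the winding number of f(γ(t)) about the origin for f(z) = z³ − 1 and γ a circle of radius 2; the answer, 3, equals the number of solutions of f(z) = 0 inside the circle. The choice of f and of the contour is purely illustrative.

```python
import cmath
import math

def winding_number(f, radius=2.0, samples=20000):
    """Estimate the winding number of f along the circle |z| = radius about 0
    by accumulating the change in argument of f(z) around the contour."""
    total = 0.0
    prev = cmath.phase(f(radius))            # start at angle t = 0
    for i in range(1, samples + 1):
        z = radius * cmath.exp(2j * math.pi * i / samples)
        cur = cmath.phase(f(z))
        d = cur - prev
        # unwrap the phase so each step measures the small change in argument
        if d > math.pi:
            d -= 2 * math.pi
        elif d < -math.pi:
            d += 2 * math.pi
        total += d
        prev = cur
    return round(total / (2 * math.pi))

print(winding_number(lambda z: z**3 - 1))    # 3: the three cube roots of unity lie inside |z| < 2
```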
Topological degree theory
[ "Physics", "Mathematics" ]
244
[ "Algebraic topology", "Topology stubs", "Fields of abstract algebra", "Topology", "Space", "Differential topology", "Geometry", "Spacetime" ]
18,204,005
https://en.wikipedia.org/wiki/Ulrich%20Kohlenbach
Ulrich Wilhelm Kohlenbach (born 27 July 1962 in Frankfurt am Main) is a German mathematician and professor of algebra and logic at the Technische Universität Darmstadt. His research interests lie in the field of proof mining. Kohlenbach was president of the German Association for Mathematical Logic and for Basic Research in the Exact Sciences (DVMLG) from 2008 to 2012 and president of the Association for Symbolic Logic from 2016 to 2018. Life He graduated ('Abitur') from Lessing-Gymnasium (High School) in 1980 and completed his studies of mathematics, philosophy, and linguistics with a diplom from the Goethe University Frankfurt. During his studies he received a scholarship from the Studienstiftung des deutschen Volkes. At the same university, he received his Ph.D. in 1990 under the supervision of Horst Luckhardt and passed his habilitation ('venia legendi') in mathematics five years later. During the academic year 1996/1997 he was a visiting assistant professor at the University of Michigan. In 1997, he became an associate professor at Aarhus University where he worked until 2004. Kohlenbach is now a full professor at the Technische Universität Darmstadt. He is married to Gabriele Bahl-Kohlenbach with whom he has a daughter. In 2011, he received the prestigious Kurt Gödel Research Prize of the Kurt Gödel Society. He was an invited speaker at the 2018 International Congress of Mathematicians in Rio de Janeiro. In 2024, he was selected as the first ringbearer of the Ernst Zermelo Ring. References External links at Technische Universität Darmstadt Mathematical logicians 1962 births Living people 20th-century German mathematicians 21st-century German mathematicians Goethe University Frankfurt alumni Academic staff of Technische Universität Darmstadt
Ulrich Kohlenbach
[ "Mathematics" ]
371
[ "Mathematical logic", "Mathematical logicians" ]
18,204,479
https://en.wikipedia.org/wiki/Connected%20Urban%20Development
Connected Urban Development (CUD) is a private-public partnership, initiated in 2006 by Cisco in cooperation with the cities of Amsterdam, San Francisco, and Seoul, to work towards a further reduction of carbon emissions through improvements in the efficiency of the urban infrastructure. It follows on Cisco's commitment to the Clinton Global Initiative. The CUD program initially involved three pilot cities: San Francisco, California; Amsterdam, The Netherlands; and Seoul, South Korea. These cities were selected because each had implemented or planned to implement a next-generation broadband (fiber and/or wireless) infrastructure. Other common factors are the significant traffic congestion issues each city faces and the fact that each is led by a visionary mayor already involved in green initiatives. In 2008 the three initial cities were joined by Lisbon, Hamburg, Madrid and Birmingham. Lessons learned from the CUD partnership are intended to serve as a blueprint of best practices and methodologies that other cities can use as a reference. All cities involved rely strongly on the use of information and communication technologies to achieve their goals. CUD is an ongoing program to deliver innovative, sustainable models for urban planning and economic development. The exchange of ideas is promoted through bi-annual conferences in which representatives of all participating cities meet, together with representatives of other interested cities. The first CUD Conference took place in February 2008 in San Francisco; the second is planned for Amsterdam in September 2008. References External links Connected Urban Development Environmental policy Cisco Systems Urban planning
Connected Urban Development
[ "Engineering" ]
297
[ "Urban planning", "Architecture" ]
18,205,060
https://en.wikipedia.org/wiki/Transport%20law
Transport law (or transportation law) is the area of law dealing with transport. The laws can apply very broadly at a transport system level or more narrowly to particular things or activities within that system, such as vehicles and behaviours. Transport law is generally found in two main areas: Legislation or statutory law passed or made by elected officials like Parliaments or made by other officials under delegation Case law decided by courts Legislation typically consists of statutes known as Acts and delegated legislation like regulations, orders or notices. Case law consists of judgments, findings and rulings handed down by courts. Transport system things and activities Transport laws can apply at a global transport system-wide level. A transport system can encompass a wide range of matters which make up the system. These include - heavy and light rail systems including associated land, infrastructure and rolling stock which comprise trains, trams and light rail vehicles roads including freeways, arterial roads and paths vehicles including cars, trucks, buses and bicycles ports and waterways commercial ships and recreational vessels air transport systems and aircraft. A transport system includes not only system infrastructure and conveyances, but also things like - communication systems and other technologies strategic, business and operational plans schedules, timetables and ticketing systems safety systems labour components service components government decision makers like Ministers, departments, authorities, corporations, agencies and other legal persons. The Transport Integration Act of Victoria, Australia provides an example of the use of a broad statutory formulation to circumscribe the operation of a transport law in legislative form. Individual components can be identified from this broad transport system formulation and then regulated discretely. For example, a bus or a car forms part of a broad transport system but is commonly regulated on an individual basis in terms of identification (registration), control of the vehicle (driver licensing and drug and blood alcohol controls), vehicle forms and fittings (vehicle standards) and other safety requirements. Examples of transport legislation Victoria again provides an example of a jurisdiction with a suite of transport legislation which operates both at transport system and modal or activity levels. System level The Transport Integration Act sets out the overall policy framework for transport in Victoria. It also establishes and sets the charters of the key government agencies which make decisions affecting the planning and operation of the State's transport system, and each agency is required by the statute to have regard to the policy framework. As a general rule, transport agencies and officials do not exist in their own right and have no existence or power without conferral from a transport law. Legislation is commonly required for this purpose.
Transport decision makers and agencies established and/or empowered by the Transport Integration Act include - key government figures such as Ministers (currently the Minister for Public Transport, the Minister for Roads and the Minister for Ports) a central government Department - the Department of Transport (Victoria, 2019–) - responsible for system-wide planning, integration and coordination a public transport agency responsible for providing or regulating train, tram, light rail, bus and taxi services - the Public Transport Development Authority a road agency responsible for road construction and maintenance and vehicles and towing services regulation - the Roads Corporation (VicRoads) agencies responsible for discrete parts of the rail system such as land, infrastructure and other assets (Victorian Rail Track (VicTrack)) and regional services (V/Line Corporation) agencies responsible for ports and other waters - the Port of Melbourne Corporation, the Port of Hastings Development Authority, the Victorian Regional Channels Authority and local and other authorities an independent transport safety regulator (Director, Transport Safety) and independent safety investigator (the Chief Investigator, Transport Safety) The Transport Integration Act establishes these agencies and sets their statutory charters. The charters circumscribe the agencies' jurisdiction or power to operate in and to regulate their respective components of the transport system. Mode or activity-based legislation Victoria has a range of statutes which regulate transport modes and transport-related activities throughout the State. These include: the Road Management Act 2004 the Road Safety Act 1986 the Rail Management Act 1996 the Rail Safety Act 2006 the Bus Safety Act 2009 the Bus Services Act 1995 the Accident Towing Services Act 2007 the Major Transport Projects Facilitation Act 2009 the Port Management Act 1995 the Marine Act 1988 the Tourist and Heritage Railways Act 2010 Case law and law from other sources Areas of transport law governed by court decisions and other non transport statutes or laws include property law, contract law, torts law and specialist regulation governing the contract of carriage, and the relationship between carriers and passengers in public transport and shippers and cargo owners in shipping. See also Aviation law Bicycle law Traffic law Traffic court References External links
Transport law
[ "Physics" ]
921
[ "Physical systems", "Transport", "Transport law" ]
16,987,262
https://en.wikipedia.org/wiki/Louisville%20Ridge
The Louisville Ridge, often now referred to as the Louisville Seamount Chain, is an underwater chain of over 70 seamounts located in the southwest portion of the Pacific Ocean. As one of the longest seamount chains on Earth, it stretches from the Pacific-Antarctic Ridge northwest to the Tonga-Kermadec Trench, where it subducts under the Indo-Australian Plate as part of the Pacific Plate. The chain's formation is best explained by movement of the Pacific Plate over the Louisville hotspot, although others had suggested formation by leakage of magma from the shallow mantle up through the Eltanin fracture zone, which it follows closely for some of its course. Depth-sounding data first revealed evidence consistent with a seamount chain in 1972, although some of the seamounts had been assigned to a ridge in 1964 linked to the Eltanin fracture zone system, hence the name. Geology The oldest volcanic rocks of the chain come from Osbourn Seamount at 78.8 ± 1.3 Ma, and ages become younger in a non-linear fashion towards the southeast, with a youngest age of 1.1 Ma. Composition studies of the erupted, dominantly alkali basalt are consistent with a single Louisville mantle source distinct from other hotspots, and the composition has remained homogeneous over at least the last 70 million years. In the past 25 million years magma upwelling rates may have decreased. There is almost certainly a deep plume origin to the hotspot. The Louisville hotspot chain passes through the western and eastern branches of the Wishbone scarp, and while the seamounts show no compositional change as they cross the scarps, the East Wishbone scarp crossing point is associated with a distinct decrease in the volume of the younger seamount eruptives from that point east into the Pacific Plate. Tectonics Volcanic hotspot chains are used to infer the net movements of tectonic plates, and so in the case of the large Pacific Plate the validation of models of its movement, and indeed of the hotspot hypothesis itself, relies on data from several hotspot chains. As well as the Louisville hotspot, there are data over tens of millions of years available from the movements of the Hawaii hotspot and the Arago hotspot. While the model of Pacific Plate movement, including bends in the hotspot track, can be made to fit very well, there has been long debate on the timing of such bends, as mismatches of a few million years appeared to exist. Subduction The area of subduction of the Louisville chain into the Tonga Trench is associated with a relative seismic gap beneath the Tonga forearc. This implies that the subduction of the volcanoes, compared to normal sediment, has a significant impact on the normal relief of stress, but it is unclear whether the subducted volcanoes relieve stress, as suggested by some, or instead increase the potential for sudden release. Further, a postulated historic change in trend of the subducted Louisville chain compared to the present is backed up by compositional analysis of more recent arc volcanism, as the volcanics from the Louisville chain are recycled. A bathymetric high north-west of the Osbourn Seamount has been interpreted as the currently subducting portion of the Louisville chain, but this continuation is not aligned with the existing chain. Ecology Some of the seamounts are known coral reef stony habitats, with typical species including the coral Solenosmilia variabilis, brisingid starfishes (Order Brisingida), and sea-lilies and feather stars (Class Crinoidea).
They can be a fishery resource for species such as the orange roughy (Hoplostethus atlanticus) that can be fished by bottom trawling. Seamounts The Louisville Ridge includes the following: See also Louisville hotspot Hollister Ridge Hotspot (geology) Zealandia References Sources External links Expedition 330 - Louisville Seamount Trail, Integrated Ocean Drilling Program, 13 December 2010 to 11 February 2011 The Louisville Ridge – Tonga Trench collision: Implications for subduction zone dynamics, RV Sonne Research Expedition SO215 Cruise Report, 25 April 2011 to 11 June 2011 Seamounts of the Pacific Ocean Guyots Oceanography Hotspot tracks Seamount chains Underwater ridges of the Pacific Ocean
Louisville Ridge
[ "Physics", "Environmental_science" ]
853
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
16,993,001
https://en.wikipedia.org/wiki/Moody%20chart
In engineering, the Moody chart or Moody diagram (also Stanton diagram) is a graph in non-dimensional form that relates the Darcy–Weisbach friction factor fD, Reynolds number Re, and surface roughness for fully developed flow in a circular pipe. It can be used to predict pressure drop or flow rate down such a pipe. History In 1944, Lewis Ferry Moody plotted the Darcy–Weisbach friction factor against Reynolds number Re for various values of relative roughness ε / D. This chart became commonly known as the Moody chart or Moody diagram. It adapts the work of Hunter Rouse but uses the more practical choice of coordinates employed by R. J. S. Pigott, whose work was based upon an analysis of some 10,000 experiments from various sources. Measurements of fluid flow in artificially roughened pipes by J. Nikuradse were at the time too recent to include in Pigott's chart. The chart's purpose was to provide a graphical representation of the function of C. F. Colebrook in collaboration with C. M. White, which provided a practical form of transition curve to bridge the transition zone between smooth and rough pipes, the region of incomplete turbulence. Description Moody's team used the available data (including that of Nikuradse) to show that fluid flow in rough pipes could be described by four dimensionless quantities: Reynolds number, pressure loss coefficient, diameter ratio of the pipe and the relative roughness of the pipe. They then produced a single plot which showed that all of these collapsed onto a series of lines, now known as the Moody chart. This dimensionless chart is used to work out pressure drop Δp (Pa) (or head loss hf (m)) and flow rate through pipes. Head loss can be calculated using the Darcy–Weisbach equation, in which the Darcy friction factor fD appears: hf = fD · (L/D) · V²/(2g). Pressure drop can then be evaluated as Δp = ρ g hf, or directly from Δp = fD · (L/D) · (ρV²/2), where ρ is the density of the fluid, V is the average velocity in the pipe, fD is the friction factor from the Moody chart, L is the length of the pipe and D is the pipe diameter. The chart plots the Darcy–Weisbach friction factor fD against Reynolds number Re for a variety of relative roughnesses, the ratio ε/D of the mean height of roughness of the pipe to the pipe diameter. The Moody chart can be divided into two regimes of flow: laminar and turbulent. For the laminar flow regime (Re < ~3000), roughness has no discernible effect, and the Darcy–Weisbach friction factor was determined analytically by Poiseuille: fD = 64/Re. For the turbulent flow regime, the relationship between the friction factor fD, the Reynolds number Re, and the relative roughness ε/D is more complex. One model for this relationship is the Colebrook equation (which is an implicit equation in fD): 1/√fD = −2 log10(ε/(3.7 D) + 2.51/(Re √fD)). Fanning friction factor This formula must not be confused with the Fanning equation, using the Fanning friction factor f, equal to one fourth the Darcy–Weisbach friction factor fD. Here the pressure drop is Δp = 4f · (L/D) · (ρV²/2). References See also Friction loss Darcy friction factor formulae Fluid dynamics Hydraulics Piping
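The relations above are easy to turn into a small calculator. The sketch below uses the laminar Poiseuille expression at low Re, otherwise iterates the Colebrook equation by fixed-point iteration, and then evaluates the Darcy–Weisbach pressure drop. The example pipe dimensions and the water-like fluid properties at the bottom are illustrative values, not data from this article.

```python
import math

def darcy_friction_factor(Re, rel_roughness):
    """Darcy-Weisbach friction factor f_D from Re and relative roughness eps/D."""
    if Re < 3000:                       # laminar regime: Poiseuille result
        return 64.0 / Re
    f = 0.02                            # initial guess for turbulent flow
    for _ in range(50):                 # fixed-point iteration of the Colebrook equation
        f = (-2.0 * math.log10(rel_roughness / 3.7 + 2.51 / (Re * math.sqrt(f)))) ** -2
    return f

def pressure_drop(f, length, diameter, density, velocity):
    """Darcy-Weisbach pressure drop in Pa: dp = f (L/D) (rho v^2 / 2)."""
    return f * (length / diameter) * (density * velocity ** 2 / 2.0)

# Illustrative numbers: water-like fluid in a 50 m long, 0.1 m diameter pipe at 2 m/s.
rho, mu, D, L, v = 1000.0, 1.0e-3, 0.1, 50.0, 2.0
Re = rho * v * D / mu
f = darcy_friction_factor(Re, rel_roughness=0.0005)
print(f"Re = {Re:.0f}, f_D = {f:.4f}, dp = {pressure_drop(f, L, D, rho, v):.0f} Pa")
```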
Moody chart
[ "Physics", "Chemistry", "Engineering" ]
629
[ "Building engineering", "Chemical engineering", "Physical systems", "Hydraulics", "Mechanical engineering", "Piping", "Fluid dynamics" ]
16,993,965
https://en.wikipedia.org/wiki/Reduced%20derivative
In mathematics, the reduced derivative is a generalization of the notion of derivative that is well-suited to the study of functions of bounded variation. Although functions of bounded variation have derivatives in the sense of Radon measures, it is desirable to have a derivative that takes values in the same space as the functions themselves. Although the precise definition of the reduced derivative is quite involved, its key properties are quite easy to remember: it is a multiple of the usual derivative wherever it exists; at jump points, it is a multiple of the jump vector. The notion of reduced derivative appears to have been introduced by Alexander Mielke and Florian Theil in 2004. Definition Let X be a separable, reflexive Banach space with norm || || and fix T > 0. Let BV−([0, T]; X) denote the space of all left-continuous functions z : [0, T] → X with bounded variation on [0, T]. For any function of time f, use subscripts +/− to denote the right/left continuous versions of f, i.e. For any sub-interval [a, b] of [0, T], let Var(z, [a, b]) denote the variation of z over [a, b], i.e., the supremum The first step in the construction of the reduced derivative is to "stretch" time so that z can be linearly interpolated at its jump points. To this end, define The "stretched time" function τ̂ is left-continuous (i.e. τ̂ = τ̂−); moreover, τ̂− and τ̂+ are strictly increasing and agree except at the (at most countable) jump points of z. Setting T̂ = τ̂(T), this "stretch" can be inverted by Using this, the stretched version of z is defined by where θ ∈ [0, 1] and The effect of this definition is to create a new function ẑ which "stretches out" the jumps of z by linear interpolation. A quick calculation shows that ẑ is not just continuous, but also lies in a Sobolev space: The derivative of ẑ(τ) with respect to τ is defined almost everywhere with respect to Lebesgue measure. The reduced derivative of z is the pull-back of this derivative by the stretching function τ̂ : [0, T] → [0, T̂]. In other words, Associated with this pull-back of the derivative is the pull-back of Lebesgue measure on [0, T̂], which defines the differential measure μz: Properties The reduced derivative rd(z) is defined only μz-almost everywhere on [0, T]. If t is a jump point of z, then If z is differentiable on (t1, t2), then and, for t ∈ (t1, t2), , For 0 ≤ s < t ≤ T, References Differential calculus Mathematical analysis
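The two qualitative properties quoted at the start of the article can be written symbolically. The display below is only a sketch of those proportionality statements (normalizing constants omitted); it is not a reconstruction of the precise formulas that are missing from the text above.

```latex
\operatorname{rd}(z)(t) \propto \dot{z}(t) \quad \text{where } z \text{ is differentiable}, \qquad
\operatorname{rd}(z)(t) \propto z_{+}(t) - z_{-}(t) \quad \text{at a jump point } t.
```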
Reduced derivative
[ "Mathematics" ]
613
[ "Mathematical analysis", "Differential calculus", "Calculus" ]
16,994,785
https://en.wikipedia.org/wiki/Propellants%2C%20Explosives%20and%20Rocket%20Motor%20Establishment
Propellants, Explosives and Rocket Motor Establishment, usually known for brevity as PERME, operated at two sites: Waltham Abbey Royal Gunpowder Mills, known from 1977 as PERME Waltham Abbey; and the Rocket Propulsion Establishment, established at RAF Westcott in 1946, also known as PERME Westcott. RAF Spadeadam, also known as the Rocket Establishment, was not officially part of PERME but was often confused with it. Research institutes in England Rocket engines of the United Kingdom Rocketry
Propellants, Explosives and Rocket Motor Establishment
[ "Astronomy", "Engineering" ]
98
[ "Rocketry", "Rocketry stubs", "Astronomy stubs", "Aerospace engineering" ]
4,540,869
https://en.wikipedia.org/wiki/Radio%20code
A Radio code is any code that is commonly used over a telecommunication system such as Morse code, brevity codes and procedure words. Brevity code Brevity codes are designed to convey complex information with a few words or codes. Specific brevity codes include: ACP-131 Aeronautical Code signals ARRL Numbered Radiogram Multiservice tactical brevity code Ten-code Phillips Code NOTAM Code Operating signals Brevity codes that are specifically designed for use between communications operators and to support communication operations are referred to as "operating signals". These include: Prosigns for Morse code 92 Code, Western Union telegraph brevity codes Q code, initially developed for commercial radiotelegraph communication, later adopted by other radio services, especially amateur radio. Used since circa 1909. QN Signals, published by the ARRL and used by Amateur radio operators to assist in the transmission of ARRL Radiograms in the National Traffic System. R and S brevity codes, published by the British Post Office in 1908 for coastal wireless stations and ships, superseded in 1912 by Q codes X code, used by European military services as a wireless telegraphy code in the 1930s and 1940s Z code, also used in the early days of radiotelegraph communication. Other Morse code, is commonly used in Amateur radio. Morse code abbreviations are a type of brevity code. Procedure words used in radiotelephony procedure, are a type of radio code. Spelling alphabets, including the ICAO spelling alphabet, are commonly used in communication over radios and telephones. Other meanings Many car audio systems (car radios) have a so-called 'radio code' number which needs to be entered after a power disconnection. This was introduced as a measure to deter theft of these devices. If the code is entered correctly, the radio is activated for use. Entering the code incorrectly several times in a row will cause a temporary or permanent lockout. Some car radios have another check which operates in conjunction with car electronics. If the VIN or another vehicle ID matches the previously stored one, the radio is activated. If the radio cannot verify the vehicle, it is considered to be moved into another vehicle. The radio will then request for the code number or simply refuse to operate and display an error message such as "CANCHECK" or "SECURE". See also Encoding References Broad-concept articles Telecommunications
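The anti-theft behaviour described above (a stored unlock code, with a lockout after repeated wrong entries) can be sketched in a few lines. Everything below, including the attempt limit, the stored code and the class and method names, is hypothetical and only illustrates the general mechanism, not any particular head unit's firmware.

```python
class CarRadioLock:
    """Toy model of the 'radio code' anti-theft check after a power disconnection."""

    def __init__(self, stored_code, max_attempts=3):
        self.stored_code = stored_code      # code from the owner's card (illustrative)
        self.max_attempts = max_attempts    # attempts allowed before lockout (illustrative)
        self.failed = 0
        self.locked_out = False

    def enter_code(self, code):
        if self.locked_out:
            return "SECURE"                 # refuse to operate until the lockout is cleared
        if code == self.stored_code:
            self.failed = 0
            return "RADIO ACTIVE"
        self.failed += 1
        if self.failed >= self.max_attempts:
            self.locked_out = True
            return "SECURE"
        return "WRONG CODE"

radio = CarRadioLock(stored_code="1234")
print(radio.enter_code("0000"))  # WRONG CODE
print(radio.enter_code("1234"))  # RADIO ACTIVE
```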
Radio code
[ "Technology" ]
489
[ "Information and communications technology", "Telecommunications" ]
4,541,666
https://en.wikipedia.org/wiki/Particle%20tracking%20velocimetry
Particle tracking velocimetry (PTV) is a velocimetry method i.e. a technique to measure velocities and trajectories of moving objects. In fluid mechanics research these objects are neutrally buoyant particles that are suspended in fluid flow. As the name suggests, individual particles are tracked, so this technique is a Lagrangian approach, in contrast to particle image velocimetry (PIV), which is an Eulerian method that measures the velocity of the fluid as it passes the observation point, that is fixed in space. There are two experimental PTV methods: the two-dimensional (2-D) PTV. Measurements are made in a 2-D slice, illuminated by a thin laser sheet (a thin plane); a low density of seeded particles allows for tracking each of them individually for several frames. the three-dimensional particle tracking velocimetry (3-D PTV) is a distinctive experimental technique originally developed to study fully turbulent flows. It is now being used widely in various disciplines, ranging from structural mechanics research to medicine and industrial environments. It is based on a multiple camera-system in a stereoscopic arrangement, three-dimensional illumination of an observation volume, recording of the time sequence of stereoscopic images of optical targets (flow tracers illuminated particles), determining their instantaneous 3-D position in space by use of photogrammetric techniques and tracking their movement in time, thus obtaining a set of 3-D trajectories of the optical targets. Time-resolved three-dimensional particle tracking velocimetry is known as 4D-PTV. Description The 3-D particle tracking velocimetry (PTV) belongs to the class of whole-field velocimetry techniques used in the study of turbulent flows, allowing the determination of instantaneous velocity and vorticity distributions over two or three spatial dimensions. 3-D PTV yields a time series of instantaneous 3-component velocity vectors in the form of fluid element trajectories. At any instant, the data density can easily exceed 10 velocity vectors per cubic centimeter. The method is based on stereoscopic imaging (using 2 to 4 cameras) and synchronous recording of the motion of flow tracers, i.e. small particles suspended in the flow, illuminated by a strobed light source. The 3-D particle coordinates as a function of time are then derived by use of image and photogrammetric analysis of each stereoscopic set of frames. The 3-D particle positions are tracked in the time domain to derive the particle trajectories. The ability to follow (track) a spatially dense set of individual particles for a sufficiently long period of time, and to perform statistical analysis of their properties, permits a Lagrangian description of the turbulent flow process. This is a unique advantage of the 3-D PTV method. A typical implementation of the 3D-PTV consists of two, three or four digital cameras, installed in an angular configuration and synchronously recording the diffracted or fluorescent light from the flow tracers seeded in the flow. The flow is illuminated by a collimated laser beam, or by another source of light that is often strobed, synchronously with the camera frame rate, to reduce the effective exposure time of the moving optical targets and "freeze" their position on each frame. There is no restriction on the light to be coherent or monochromatic; only its illuminance has to be sufficient for imaging the tracer particles in the observational volume. 
Particles or tracers could be fluorescent, diffractive, tracked through as many consecutive frames as possible, and on as many cameras as possible to maximize positioning accuracy. In principle, two cameras in a stereoscopic configuration are sufficient in order to determine the three coordinates of a particle in space, but in most practical situations three or four cameras are used to reach a satisfactory 3-D positioning accuracy, as well as increase the trajectory yield when studying fully turbulent flows. 3D-PTV schemes Several versions of 3D-PTV schemes exist. Most of these utilize either 3 CCDs or 4 CCDs. Real time image processing schemes The use of white light for illuminating the observation volume, rather than laser-based illumination, substantially reduces both the cost, and the health and safety requirements. Initial development of the 3-D PTV method started as a joint project between the Institute of Geodesy and Photogrammetry and the Institute of Hydraulics of ETH Zurich. Further developments of the technique include real-time image processing using on-camera FPGA chip. See also Hot-wire anemometry Laser Doppler velocimetry Molecular tagging velocimetry References Maas, H.-G., 1992. Digitale Photogrammetrie in der dreidimensionalen Strömungsmesstechnik, ETH Zürich Dissertation Nr. 9665 Malik, N., Dracos, T., Papantoniou, D., 1993. Particle Tracking in three dimensional turbulent flows - Part II: Particle tracking. Experiments in Fluids Vol. 15, pp. 279–294 Maas, H.-G., Grün, A., Papantoniou, D., 1993. Particle Tracking in three dimensional turbulent flows - Part I: Photogrammetric determination of particle coordinates. Experiments in Fluids Vol. 15, pp. 133–146 Srdic, Andjelka, 1998. Interaction of dense particles with stratified and turbulent environments. Ph.D. Dissertation, Arizona State University. Lüthi, B., Tsinober, A., Kinzelbach W. (2005)- Lagrangian Measurement of Vorticity Dynamics in Turbulent Flow. Journal of Fluid Mechanics. (528), p. 87-118 Nicholas T. Ouellette, Haitao Xu, Eberhard Bodenschatz, A quantitative study of three-dimensional Lagrangian particle tracking algorithms, Experiments in Fluids, Volume 40, Issue 2, Feb 2006, Pages 301 - 313. Measurement Fluid dynamics
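A minimal way to see what "tracking each particle individually for several frames" means in practice is a greedy nearest-neighbour linker between consecutive frames, as sketched below. Real 2-D and 3-D PTV codes use far more sophisticated matching (predictive schemes, multi-camera correspondence, photogrammetric calibration); the maximum displacement threshold and the example positions here are purely illustrative.

```python
import math

def link_frames(frame_a, frame_b, max_disp=1.0):
    """Greedily link particle positions in frame_a to their nearest neighbours in frame_b.

    frame_a, frame_b: lists of (x, y) tuples; returns a list of (i, j) index pairs.
    """
    links, used = [], set()
    for i, p in enumerate(frame_a):
        best_j, best_d = None, max_disp
        for j, q in enumerate(frame_b):
            if j in used:
                continue
            d = math.dist(p, q)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links.append((i, best_j))
            used.add(best_j)
    return links

# Two illustrative frames, each containing three tracer particles.
frame1 = [(0.0, 0.0), (5.0, 5.0), (10.0, 1.0)]
frame2 = [(0.3, 0.1), (5.2, 5.4), (10.1, 0.8)]
print(link_frames(frame1, frame2))  # [(0, 0), (1, 1), (2, 2)]
```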
Particle tracking velocimetry
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
1,260
[ "Physical quantities", "Chemical engineering", "Quantity", "Measurement", "Size", "Piping", "Fluid dynamics" ]
4,542,797
https://en.wikipedia.org/wiki/Water%20well%20pump
A water well pump is a pump that is used in extracting water from a water well. Deep well pumps extract groundwater from subterranean aquifers, offering a reliable source of water independent of municipal networks. These pumps, often submersible and powered by electricity, can access water reserves located much deeper than shallow wells, ensuring a consistent supply even during periods of drought. They include different kinds of pumps, most of them submersible pumps: Hand pump, manually operated Injector, a jet-driven pump Mechanical or rotary lobe pump requiring mechanical parts to pump water Solar-powered water pump Pump driven by air as used by the Amish Pump driven by air as used in the Australian outback Manual pumpless or hand pump wells requiring a human operator The pump replaces the use of a bucket and pulley system to extract water. External links Water well pump article Pumps Water wells
Water well pump
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
178
[ "Pumps", "Hydrology", "Turbomachinery", "Physical systems", "Hydraulics", "Water wells", "Environmental engineering" ]
4,543,610
https://en.wikipedia.org/wiki/Loop%20braid%20group
The loop braid group is a mathematical group structure that is used in some models of theoretical physics to model the exchange of particles with loop-like topologies within three dimensions of space and time. The basic operations which generate a loop braid group for n loops are exchanges of two adjacent loops, and passing one adjacent loop through another. The topology forces these generators to satisfy some relations, which determine the group. To be precise, the loop braid group on n loops is defined as the motion group of n disjoint circles embedded in a compact three-dimensional "box" diffeomorphic to the three-dimensional disk. A motion is a loop in the configuration space, which consists of all possible ways of embedding n circles into the 3-disk. This becomes a group in the same way as loops in any space can be made into a group; first, we define equivalence classes of loops by letting paths g and h be equivalent iff they are related by a (smooth) homotopy, and then we define a group operation on the equivalence classes by concatenation of paths. In his 1962 Ph.D. thesis, David M. Dahm was able to show that there is an injective homomorphism from this group into the automorphism group of the free group on n generators, so it is natural to identify the group with this subgroup of the automorphism group. One may also show that the loop braid group is isomorphic to the welded braid group, as is done for example in a paper by John C. Baez, Derek Wise, and Alissa Crans, which also gives some presentations of the loop braid group using the work of Xiao-Song Lin. See also Braid group References Braid groups
Loop braid group
[ "Mathematics" ]
350
[ "Topology stubs", "Topology" ]
420,302
https://en.wikipedia.org/wiki/Mechanical%20efficiency
In mechanical engineering, mechanical efficiency is a dimensionless ratio that measures the efficiency of a mechanism or machine in transforming the power input to the device into power output. A machine is a mechanical linkage in which force is applied at one point, and the force does work moving a load at another point. At any instant the power input to a machine is equal to the input force multiplied by the velocity of the input point; similarly, the power output is equal to the force exerted on the load multiplied by the velocity of the load. The mechanical efficiency of a machine (often represented by the Greek letter eta, η) is a dimensionless number between 0 and 1 that is the ratio between the power output of the machine and the power input: η = P_output / P_input. Since a machine does not contain a source of energy, nor can it store energy, from conservation of energy the power output of a machine can never be greater than its input, so the efficiency can never be greater than 1. All real machines lose energy to friction; the energy is dissipated as heat. Therefore, their power output is less than their power input, and the efficiency of all real machines is less than 1. A hypothetical machine without friction is called an ideal machine; such a machine would not have any energy losses, so its output power would equal its input power, and its efficiency would be 1 (100%). For hydropower turbines the efficiency is referred to as hydraulic efficiency. See also Mechanical advantage Thermal efficiency Electrical efficiency Internal combustion engine Electric motor Velocity ratio References Mechanisms (engineering) Energy conversion Mechanical quantities
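The ratio above reduces to a one-line calculation. The sketch below computes the efficiency from measured input and output powers and flags the physically impossible case of an output exceeding the input; the example numbers are illustrative.

```python
def mechanical_efficiency(power_out, power_in):
    """Efficiency eta = P_out / P_in; always between 0 and 1 for a real machine."""
    if power_in <= 0:
        raise ValueError("power input must be positive")
    eta = power_out / power_in
    if eta > 1:
        raise ValueError("output power cannot exceed input power (conservation of energy)")
    return eta

# Illustrative example: a gearbox receiving 1500 W and delivering 1320 W to the load.
print(f"{mechanical_efficiency(1320, 1500):.0%}")  # 88%
```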
Mechanical efficiency
[ "Physics", "Mathematics", "Engineering" ]
324
[ "Mechanical quantities", "Physical quantities", "Quantity", "Mechanics", "Mechanical engineering", "Mechanisms (engineering)" ]
420,335
https://en.wikipedia.org/wiki/Ideal%20machine
The term ideal machine refers to a hypothetical mechanical system in which energy and power are not lost or dissipated through friction, deformation, wear, or other inefficiencies. Ideal machines have the theoretical maximum performance, and therefore are used as a baseline for evaluating the performance of real machine systems. A simple machine, such as a lever, pulley, or gear train, is "ideal" if the power input is equal to the power output of the device, which means there are no losses. In this case, the mechanical efficiency is 100%. Mechanical efficiency is the performance of the machine compared to its theoretical maximum as performed by an ideal machine. The mechanical efficiency of a simple machine is calculated by dividing the actual power output by the ideal power output. This is usually expressed as a percentage. Power loss in a real system can occur in many ways, such as through friction, deformation, wear, heat losses, incomplete chemical conversion, magnetic and electrical losses. Criteria A machine consists of a power source and a mechanism for the controlled use of this power. The power source often relies on chemical conversion to generate heat which is then used to generate power. Each stage of the process of power generation has a maximum performance limit which is identified as ideal. Once the power is generated the mechanism components of the machine direct it toward useful forces and movement. The ideal mechanism does not absorb any power, which means the power input is equal to the power output. An example is the automobile engine (internal combustion engine) which burns fuel (an exothermic chemical reaction) inside a cylinder and uses the expanding gases to drive a piston. The movement of the piston rotates the crank shaft. The remaining mechanical components such as the transmission, drive shaft, differential, axles and wheels form the power transmission mechanism that directs the power from the engine into friction forces on the road to move the automobile. The ideal machine has the maximum energy conversion performance combined with a lossless power transmission mechanism that yields maximum performance. See also Mechanics Energy Conservation of energy Entropy References Mechanics Linkages (mechanical)
Ideal machine
[ "Physics", "Engineering" ]
417
[ "Mechanics", "Mechanical engineering" ]
420,361
https://en.wikipedia.org/wiki/Radioactive%20tracer
A radioactive tracer, radiotracer, or radioactive label is a synthetic derivative of a natural compound in which one or more atoms have been replaced by a radionuclide (a radioactive atom). By virtue of its radioactive decay, it can be used to explore the mechanism of chemical reactions by tracing the path that the radioisotope follows from reactants to products. Radiolabeling or radiotracing is thus the radioactive form of isotopic labeling. In biological contexts, experiments that use radioisotope tracers are sometimes called radioisotope feeding experiments. Radioisotopes of hydrogen, carbon, phosphorus, sulfur, and iodine have been used extensively to trace the path of biochemical reactions. A radioactive tracer can also be used to track the distribution of a substance within a natural system such as a cell or tissue, or as a flow tracer to track fluid flow. Radioactive tracers are also used to determine the location of fractures created by hydraulic fracturing in natural gas production. Radioactive tracers form the basis of a variety of imaging systems, such as, PET scans, SPECT scans and technetium scans. Radiocarbon dating uses the naturally occurring carbon-14 isotope as an isotopic label. Methodology Isotopes of a chemical element differ only in the mass number. For example, the isotopes of hydrogen can be written as 1H, 2H and 3H, with the mass number superscripted to the left. When the atomic nucleus of an isotope is unstable, compounds containing this isotope are radioactive. Tritium is an example of a radioactive isotope. The principle behind the use of radioactive tracers is that an atom in a chemical compound is replaced by another atom, of the same chemical element. The substituting atom, however, is a radioactive isotope. This process is often called radioactive labeling. The power of the technique is due to the fact that radioactive decay is much more energetic than chemical reactions. Therefore, the radioactive isotope can be present in low concentration and its presence detected by sensitive radiation detectors such as Geiger counters and scintillation counters. George de Hevesy won the 1943 Nobel Prize for Chemistry "for his work on the use of isotopes as tracers in the study of chemical processes". There are two main ways in which radioactive tracers are used When a labeled chemical compound undergoes chemical reactions one or more of the products will contain the radioactive label. Analysis of what happens to the radioactive isotope provides detailed information on the mechanism of the chemical reaction. A radioactive compound is introduced into a living organism and the radio-isotope provides a means to construct an image showing the way in which that compound and its reaction products are distributed around the organism. Production The commonly used radioisotopes have short half lives and so do not occur in nature in large amounts. They are produced by nuclear reactions. One of the most important processes is absorption of a neutron by an atomic nucleus, in which the mass number of the element concerned increases by 1 for each neutron absorbed. For example, 13C + n → 14C In this case the atomic mass increases, but the element is unchanged. In other cases the product nucleus is unstable and decays, typically emitting protons, electrons (beta particle) or alpha particles. When a nucleus loses a proton the atomic number decreases by 1. For example, 32S + n → 32P + p Neutron irradiation is performed in a nuclear reactor. 
The other main method used to synthesize radioisotopes is proton bombardment. The protons are accelerated to high energy either in a cyclotron or a linear accelerator. Tracer isotopes Hydrogen Tritium (hydrogen-3) is produced by neutron irradiation of 6Li: 6Li + n → 4He + 3H. Tritium has a half-life of about 4,500 days (approximately 12.32 years) and it decays by beta decay. The electrons produced have an average energy of 5.7 keV. Because the emitted electrons have relatively low energy, the detection efficiency by scintillation counting is rather low. However, hydrogen atoms are present in all organic compounds, so tritium is frequently used as a tracer in biochemical studies. Carbon 11C decays by positron emission with a half-life of ca. 20 min. 11C is one of the isotopes often used in positron emission tomography. 14C decays by beta decay, with a half-life of 5730 years. It is continuously produced in the upper atmosphere of the Earth, so it occurs at a trace level in the environment. However, it is not practical to use naturally-occurring 14C for tracer studies. Instead it is made by neutron irradiation of the isotope 13C, which occurs naturally in carbon at about the 1.1% level. 14C has been used extensively to trace the progress of organic molecules through metabolic pathways. Nitrogen 13N decays by positron emission with a half-life of 9.97 min. It is produced by the nuclear reaction 1H + 16O → 13N + 4He. 13N is used in positron emission tomography (PET scan). Oxygen 15O decays by positron emission with a half-life of 122 seconds. It is used in positron emission tomography. Fluorine 18F decays predominantly by positron (β+) emission, with a half-life of 109.8 min. It is made by proton bombardment of 18O in a cyclotron or linear particle accelerator. It is an important isotope in the radiopharmaceutical industry. For example, it is used to make labeled fluorodeoxyglucose (FDG) for application in PET scans. Phosphorus 32P is made by neutron bombardment of 32S: 32S + n → 32P + p. It decays by beta decay with a half-life of 14.29 days. It is commonly used to study protein phosphorylation by kinases in biochemistry. 33P is made in relatively low yield by neutron bombardment of 31P. It is also a beta-emitter, with a half-life of 25.4 days. Though more expensive than 32P, the emitted electrons are less energetic, permitting better resolution in, for example, DNA sequencing. Both isotopes are useful for labeling nucleotides and other species that contain a phosphate group. Sulfur 35S is made by neutron bombardment of 35Cl: 35Cl + n → 35S + p. It decays by beta-decay with a half-life of 87.51 days. It is used to label the sulfur-containing amino acids methionine and cysteine. When a sulfur atom replaces an oxygen atom in a phosphate group on a nucleotide a thiophosphate is produced, so 35S can also be used to trace a phosphate group. Technetium 99mTc is a very versatile radioisotope, and is the most commonly used radioisotope tracer in medicine. It is easy to produce in a technetium-99m generator, by decay of 99Mo: 99Mo → 99mTc + e− + ν̄e (beta decay). The molybdenum isotope has a half-life of approximately 66 hours (2.75 days), so the generator has a useful life of about two weeks. Most commercial 99mTc generators use column chromatography, in which 99Mo in the form of molybdate, MoO42−, is adsorbed onto acid alumina (Al2O3). When the 99Mo decays it forms pertechnetate, TcO4−, which because of its single charge is less tightly bound to the alumina.
Pulling normal saline solution through the column of immobilized 99Mo elutes the soluble 99mTc, resulting in a saline solution containing the 99mTc as the dissolved sodium salt of the pertechnetate. The pertechnetate is treated with a reducing agent such as Sn2+ and a ligand. Different ligands form coordination complexes which give the technetium enhanced affinity for particular sites in the human body. 99mTc decays by gamma emission, with a half-life: 6.01 hours. The short half-life ensures that the body-concentration of the radioisotope falls effectively to zero in a few days. Iodine 123I is produced by proton irradiation of 124Xe. The caesium isotope produced is unstable and decays to 123I. The isotope is usually supplied as the iodide and hypoiodate in dilute sodium hydroxide solution, at high isotopic purity. 123I has also been produced at Oak Ridge National Laboratories by proton bombardment of 123Te. 123I decays by electron capture with a half-life of 13.22 hours. The emitted 159 keV gamma ray is used in single-photon emission computed tomography (SPECT). A 127 keV gamma ray is also emitted. 125I is frequently used in radioimmunoassays because of its relatively long half-life (59 days) and ability to be detected with high sensitivity by gamma counters. 129I is present in the environment as a result of the testing of nuclear weapons in the atmosphere. It was also produced in the Chernobyl and Fukushima disasters. 129I decays with a half-life of 15.7 million years, with low-energy beta and gamma emissions. It is not used as a tracer, though its presence in living organisms, including human beings, can be characterized by measurement of the gamma rays. Other isotopes Many other isotopes have been used in specialized radiopharmacological studies. The most widely used is 67Ga for gallium scans. 67Ga is used because, like 99mTc, it is a gamma-ray emitter and various ligands can be attached to the Ga3+ ion, forming a coordination complex which may have selective affinity for particular sites in the human body. An extensive list of radioactive tracers used in hydraulic fracturing can be found below. Applications In metabolism research, tritium and 14C-labeled glucose are commonly used in glucose clamps to measure rates of glucose uptake, fatty acid synthesis, and other metabolic processes. While radioactive tracers are sometimes still used in human studies, stable isotope tracers such as 13C are more commonly used in current human clamp studies. Radioactive tracers are also used to study lipoprotein metabolism in humans and experimental animals. In medicine, tracers are applied in a number of tests, such as 99mTc in autoradiography and nuclear medicine, including single-photon emission computed tomography (SPECT), positron emission tomography (PET) and scintigraphy. The urea breath test for helicobacter pylori commonly used a dose of 14C labeled urea to detect h. pylori infection. If the labeled urea was metabolized by h. pylori in the stomach, the patient's breath would contain labeled carbon dioxide. In recent years, the use of substances enriched in the non-radioactive isotope 13C has become the preferred method, avoiding patient exposure to radioactivity. In hydraulic fracturing, radioactive tracer isotopes are injected with hydraulic fracturing fluid to determine the injection profile and location of created fractures. Tracers with different half-lives are used for each stage of hydraulic fracturing. 
In the United States amounts per injection of radionuclide are listed in the US Nuclear Regulatory Commission (NRC) guidelines. According to the NRC, some of the most commonly used tracers include antimony-124, bromine-82, iodine-125, iodine-131, iridium-192, and scandium-46. A 2003 publication by the International Atomic Energy Agency confirms the frequent use of most of the tracers above, and says that manganese-56, sodium-24, technetium-99m, silver-110m, argon-41, and xenon-133 are also used extensively because they are easily identified and measured. References External links National Isotope Development Center U.S. Government resources for radioisotopes - production, distribution, and information Isotope Development & Production for Research and Applications (IDPRA) U.S. Department of Energy program sponsoring isotope production and production research and development Radiobiology Radiology Radiopharmaceuticals Radioactivity Biochemistry methods Medicinal radiochemistry
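Because tracer work revolves around half-lives, a short decay calculation is often useful. The sketch below evaluates the surviving fraction N(t)/N0 = 2^(−t/T½) for two of the isotopes discussed above, using the half-lives quoted in the text (99mTc ≈ 6.01 h and 18F ≈ 109.8 min); the elapsed times are illustrative.

```python
def fraction_remaining(t, half_life):
    """Fraction of a radioisotope remaining after time t (same units as half_life)."""
    return 2.0 ** (-t / half_life)

# Half-lives quoted in the article text above.
print(f"99mTc after 24 h: {fraction_remaining(24.0, 6.01):.3f}")      # ~0.063
print(f"18F after 6 h:    {fraction_remaining(6 * 60.0, 109.8):.3f}")  # ~0.103
```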
Radioactive tracer
[ "Physics", "Chemistry", "Biology" ]
2,552
[ "Biochemistry methods", "Medicinal radiochemistry", "Radiobiology", "Radiopharmaceuticals", "Medicinal chemistry", "Nuclear physics", "Biochemistry", "Chemicals in medicine", "Radioactivity" ]
420,737
https://en.wikipedia.org/wiki/Tin%20pest
Tin pest is an autocatalytic, allotropic transformation of the element tin, which causes deterioration of tin objects at low temperatures. Tin pest has also been called tin disease, tin blight, tin plague, or tin leprosy. It is an autocatalytic process, accelerating once it begins. It was first documented in the scientific literature in 1851, having been observed in the pipes of pipe organs in medieval churches that had experienced cool climates. With the adoption of the Restriction of Hazardous Substances Directive (RoHS) regulations in Europe, and similar regulations elsewhere, traditional lead/tin solder alloys in electronic devices have been replaced by nearly pure tin, introducing tin pest and related problems such as tin whiskers. Allotropic transformation At 13.2 °C and below, pure tin transforms from the silvery, ductile metallic allotrope of β-form white tin to the brittle, nonmetallic, α-form grey tin with a diamond cubic structure. The transformation is slow to initiate due to a high activation energy, but the presence of germanium (or crystal structures of similar form and size) or very low temperatures of roughly −30 °C aid the initiation. There is also a large volume increase of about 27% associated with the phase change to the nonmetallic low-temperature allotrope. This frequently makes tin objects (like buttons) decompose into powder during the transformation, hence the name tin pest. The decomposition will catalyze itself, which is why the reaction accelerates once it starts: the mere presence of tin pest leads to more tin pest, and tin objects at low temperatures will simply disintegrate. Possible historical examples Scott expedition to Antarctica In 1910 British polar explorer Robert Scott hoped to be the first to reach the South Pole, but was beaten by Norwegian explorer Roald Amundsen. On foot, the expedition trudged through the frozen deserts of the Antarctic, marching for caches of food and kerosene deposited on the way. In early 1912, at the first cache, there was no kerosene; the cans – soldered with tin – were empty. The cause of the empty tins could have been related to tin pest. The tin cans were recovered and no tin pest was found when analyzed by the Tin Research Institute. Some observers blame poor quality soldering, as tin cans over 80 years old have been discovered in Antarctic buildings with the soldering in good condition. Napoleon's buttons The story is often told of Napoleon's men freezing in the bitter Russian winter, their clothes falling apart as tin pest ate the buttons. This appears to be an urban legend, as there is no evidence of any failing buttons, and thus they cannot have been a contributing factor in the failure of the invasion. Uniform buttons of that era were generally bone for enlisted men, and brass for officers. Critics of the theory point out that any tin that might have been used would have been quite impure, and thus more tolerant of low temperatures. Laboratory tests indicate that the time required for unalloyed tin to develop significant tin pest damage at lowered temperatures is about 18 months, which is more than twice the length of the invasion. Nevertheless, some of the regiments in the campaign did have tin buttons and the temperature reached sufficiently low values (below −40 °C, which is also −40 °F). In the event, none of the many survivors' tales mention problems with buttons, and it has been suggested that the legend is an amalgamation of reports of blocks of Banca tin that completely disintegrated in a customs warehouse in St.
Petersburg in 1868, and earlier Russian reports that cast-in buttons for military uniforms also disintegrated, and the desperate state of Napoleon's army, having turned soldiers into ragged beggars. Modern tin pest since adoption of RoHS With the 2006 adoption of the Restriction of Hazardous Substances Directive (RoHS) regulations in the European Union, California banning most uses of lead, and similar regulations elsewhere, the problem of tin pest has returned, since some manufacturers which previously used tin/lead alloys now use predominantly tin-based alloys. For example, the leads of some electrical and electronic components are plated with pure tin. In cold environments, this can change to α-modification grey tin, which is not electrically conductive, and falls off the leads. After reheating, it changes back to β-modification white tin, which is electrically conductive. This cycle can cause electrical short circuits and failure of equipment. Such problems can be intermittent as the powdered particles of tin move around. Tin pest can be avoided by alloying with small amounts of electropositive metals or semimetals soluble in tin's solid phase, e.g. antimony or bismuth, which prevent the phase change. See also Bronze disease – destruction of bronze artifacts by corrosion Gold–aluminium intermetallic – giving rise to Purple plague or White plague, another failure mode for electronic components due to the formation of a crystalline substance. Zinc pest – decay of zinc by an unrelated intercrystalline corrosion process. References External links Metallurgy Tin Allotropes Corrosion
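The roughly 27% volume increase quoted above can be checked against the densities of the two allotropes. The density figures in the sketch below are approximate literature values, not taken from this article, so the result is only a rough consistency check.

```python
# Approximate literature densities (g/cm^3); assumed values, not from this article.
rho_white_beta = 7.27   # metallic beta (white) tin
rho_grey_alpha = 5.77   # nonmetallic alpha (grey) tin

volume_increase = rho_white_beta / rho_grey_alpha - 1.0
print(f"Volume increase on beta -> alpha transformation: {volume_increase:.0%}")  # ~26%, close to the ~27% quoted
```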
Tin pest
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,035
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Metallurgy", "Materials science", "Corrosion", "Materials", "Electrochemistry", "nan", "Materials degradation", "Matter" ]
420,912
https://en.wikipedia.org/wiki/Advanced%20multi-mission%20operations%20system
The advanced multi-mission operations system (AMMOS) is a common set of services and tools created by the Interplanetary Network Directorate, a division of the Jet Propulsion Laboratory, for use in JPL's operation of spacecraft. These tools include a means by which mission planning and analysis can be undertaken, as well as developing pre-planned command sequences for the spacecraft. AMMOS also provides a means by which downlinked data can be displayed and manipulated, including key mission telemetry such as readings of temperature, pressure, power, and other critical indicators. This common toolset allows space missions to minimize the cost of developing operations infrastructure, which is very important in light of recent restricted spending by space agencies. References External links Official website Aerospace engineering
Advanced multi-mission operations system
[ "Engineering" ]
153
[ "Aerospace engineering" ]
421,121
https://en.wikipedia.org/wiki/Coulomb%20barrier
The Coulomb barrier, named after Coulomb's law, which is in turn named after physicist Charles-Augustin de Coulomb, is the energy barrier due to electrostatic interaction that two nuclei need to overcome so they can get close enough to undergo a nuclear reaction. Potential energy barrier This energy barrier is given by the electric potential energy $U_{\mathrm{coul}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r}$, where ε0 is the permittivity of free space, q1, q2 are the charges of the interacting particles, and r is the interaction radius. A positive value of U is due to a repulsive force, so interacting particles are at higher energy levels as they get closer. A negative potential energy indicates a bound state (due to an attractive force). The Coulomb barrier increases with the atomic numbers (i.e. the number of protons) of the colliding nuclei: $U_{\mathrm{coul}} = \frac{1}{4\pi\varepsilon_0}\,\frac{Z_1 Z_2 e^2}{r}$, where e is the elementary charge and Zi are the corresponding atomic numbers. To overcome this barrier, nuclei have to collide at high velocities, so their kinetic energies drive them close enough for the strong interaction to take place and bind them together. According to the kinetic theory of gases, the temperature of a gas is just a measure of the average kinetic energy of the particles in that gas. For classical ideal gases the velocity distribution of the gas particles is given by the Maxwell–Boltzmann distribution. From this distribution, the fraction of particles with a velocity high enough to overcome the Coulomb barrier can be determined. In practice, temperatures needed to overcome the Coulomb barrier turned out to be smaller than expected due to quantum mechanical tunnelling, as established by Gamow. The consideration of barrier penetration through tunnelling and the speed distribution gives rise to a limited range of conditions where fusion can take place, known as the Gamow window. The absence of the Coulomb barrier enabled the discovery of the neutron by James Chadwick in 1932. Modeling a potential energy barrier There is keen interest in the mechanics and parameters of nuclear fusion, including methods of modeling the Coulomb barrier for scientific and educational purposes. The Coulomb barrier is a type of potential energy barrier, and is central to nuclear fusion. It results from the interplay of two fundamental interactions: the strong interaction at close range, within ≈ 1 fm, and the electromagnetic interaction at far range, beyond the Coulomb barrier. The microscopic range of the strong interaction, on the order of one femtometre, makes it challenging to model, and no classical examples exist on the human scale. The strong close-range attraction and far-range repulsion characteristic of the fusion potential curve are modeled, visually and tactilely, by the magnetic “Coulomb” barrier classroom apparatus. The apparatus won first place in the 2023 national apparatus competition of the American Association of Physics Teachers in Sacramento, California. Essentially, a pair of opposing permanent magnet arrays generates asymmetric alternating N/S magnetic fields that result in repulsion at a distance and attraction within ≈ 1 cm. A related patent method (US11,087,910 B2) further describes the apparatus and outlines criteria for more generally modeling an electromagnetic potential energy barrier. Magnetic and electric forces were unified within the electromagnetic fundamental force by James Clerk Maxwell in 1873 in A Treatise on Electricity and Magnetism.
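Before continuing with the magnetic analogue, the electrostatic formula above is easy to evaluate numerically. The Python sketch below is illustrative only: the ≈ 1 fm interaction radius is an assumed order-of-magnitude value rather than a figure given in this article.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def coulomb_barrier_mev(z1, z2, r_m):
    """Barrier height U = Z1*Z2*e^2 / (4*pi*eps0*r), returned in MeV."""
    u_joules = z1 * z2 * E_CHARGE ** 2 / (4 * math.pi * EPS0 * r_m)
    return u_joules / E_CHARGE / 1.0e6   # J -> eV -> MeV

R_CONTACT = 1.0e-15  # ~1 fm, assumed order-of-magnitude interaction radius

print(f"p + p : {coulomb_barrier_mev(1, 1, R_CONTACT):.2f} MeV")
print(f"C + C : {coulomb_barrier_mev(6, 6, R_CONTACT):.1f} MeV")
```

For two protons this gives a barrier on the order of 1 MeV, which is why fusion in practice relies on very high temperatures together with the tunnelling effect noted above rather than on classical over-the-barrier collisions.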
In the case of the magnetic “Coulomb” barrier, the patent describes alternating/unequal or asymmetric North and South magnetic poles but the patent method language is broad enough to include positive and negative electrostatic poles as well. The implication is that regularly spaced opposite and unequal electrostatic point charges possess the capacity to model an electrostatic potential energy barrier as well. References Nuclear physics Nuclear fusion Nuclear chemistry
Coulomb barrier
[ "Physics", "Chemistry" ]
751
[ "Nuclear chemistry", "Nuclear fusion", "nan", "Nuclear physics" ]
421,149
https://en.wikipedia.org/wiki/Endoscope
An endoscope is an inspection instrument composed of image sensor, optical lens, light source and mechanical device, which is used to look deep into the body by way of openings such as the mouth or anus. A typical endoscope applies several modern technologies including optics, ergonomics, precision mechanics, electronics, and software engineering. With an endoscope, it is possible to observe lesions that cannot be detected by X-ray, making it useful in medical diagnosis. An endoscope uses tubes only a few millimeters thick to transfer illumination in one direction and high-resolution video in the other, allowing minimally invasive surgeries. It is used to examine the internal organs like the throat or esophagus. Specialized instruments are named after their target organ. Examples include the cystoscope (bladder), nephroscope (kidney), bronchoscope (bronchus), arthroscope (joints) and colonoscope (colon), and laparoscope (abdomen or pelvis). They can be used to examine visually and diagnose, or assist in surgery such as an arthroscopy. Etymology "Endo-" is a scientific Latin prefix derived from Ancient Greek ἐνδο- (endo-) meaning "within", and "-scope" comes from the modern Latin "-scopium", from the Greek σκοπεῖν (skopein) meaning to "look at" or "to examine". History The first endoscope was developed in 1806 by German physician Philipp Bozzini with his introduction of a "Lichtleiter" (light conductor) "for the examinations of the canals and cavities of the human body". However, the College of Physicians in Vienna disapproved of such curiosity. The first effective open-tube endoscope was developed by French physician Antonin Jean Desormeaux. He was also the first one to use an endoscope in a successful operation. After the invention of Thomas Edison, the use of electric light was a major step in the improvement of endoscope. The first such lights were external although sufficiently capable of illumination to allow cystoscopy, hysteroscopy and sigmoidoscopy as well as examination of the nasal (and later thoracic) cavities as was being performed routinely in human patients by Sir Francis Cruise (using his own commercially available endoscope) by 1865 in the Mater Misericordiae Hospital in Dublin, Ireland. Later, smaller bulbs became available making internal light possible, for instance in a hysteroscope by Charles David in 1908. Hans Christian Jacobaeus has been given credit for the first large published series of endoscopic explorations of the abdomen and the thorax with laparoscope (1912) and thoracoscope (1910) although the first reported thoracoscopic examination in a human was also by Cruise. Laparoscope was used in the diagnosis of liver and gallbladder disease by Heinz Kalk in the 1930s. Hope reported in 1937 on the use of laparoscopy to diagnose ectopic pregnancy. In 1944, Raoul Palmer placed his patients in the Trendelenburg position after gaseous distention of the abdomen and thus was able to reliably perform gynecologic laparoscope. Georg Wolf, a Berlin manufacturer of rigid endoscopes established in 1906, produced the Sussmann flexible gastroscope in 1911. Karl Storz began producing instruments for ENT specialists in 1945 through his company, Karl Storz GmbH. Fiber optics Basil Hirschowitz, Larry Curtiss, and Wilbur Peters invented the first fiber optic endoscope in 1957. Earlier in the 1950s Harold Hopkins had designed a "fibroscope" consisting of a bundle of flexible glass fibres able to coherently transmit an image. 
This proved useful both medically and industrially, and subsequent research led to further improvements in image quality. The previous practice of a small filament lamp on the tip of the endoscope had left the choice of either viewing in a dim red light or increasing the light output – which carried the risk of burning the inside of the patient. Alongside the advances to the optics, the ability to 'steer' the tip was developed, as well as innovations in remotely operated surgical instruments contained within the body of the endoscope itself. This was the beginning of "key-hole surgery" as we know it today. Rod-lens endoscopes There were physical limits to the image quality of a fibroscope. A bundle of 50,000 fibers would only give a 50,000-pixel image, and continued flexing from use breaks fibers and progressively loses pixels. Eventually, so many are lost that the whole bundle must be replaced, at considerable expense. Harold Hopkins realised that any further optical improvement would require a different approach. Previous rigid endoscopes suffered from low light transmittance and poor image quality. The surgical requirement of passing surgical tools as well as the illumination system within the endoscope's tube, which is itself limited in dimensions by the human body, left very little room for the imaging optics. The tiny lenses of a conventional system required supporting rings that would obscure the bulk of the lens' area. They were also hard to manufacture and assemble and optically nearly useless. The elegant solution that Hopkins invented was to fill the air-spaces between the 'little lenses' with rods of glass. These rods fitted the endoscope's tube exactly, making them self-aligning and requiring no other support. They were much easier to handle and utilised the maximum possible diameter available. With the appropriate curvature and coatings to the rod ends and optimal choices of glass-types, all calculated and specified by Hopkins, the image quality was transformed even with tubes of only 1 mm in diameter. With a high quality 'telescope' of such small diameter the tools and illumination system could be comfortably housed within an outer tube. Once again, it was Karl Storz who produced the first of these new endoscopes as part of a long and productive partnership between the two men. Whilst there are regions of the body that will always require flexible endoscopes (principally the gastrointestinal tract), the rigid rod-lens endoscopes have such exceptional performance that they are still the preferred instrument and have enabled modern key-hole surgery. (Harold Hopkins was recognized and honoured for his advancement of medical optics by the medical community worldwide. It formed a major part of the citation when he was awarded the Rumford Medal by the Royal Society in 1984.) Composition A typical endoscope is composed of the following parts: A rigid or flexible tube as a body. A light transmission system that illuminates the object to be inspected. The light source is usually located outside the scope body. A lens system that transmits the image from the objective lens to the observer, usually a relay lens system in the case of a rigid endoscope or a bundle of optical fibers in the case of a fiberoptic endoscope. An eyepiece that transmits the image to the screen in order to capture it; modern videoscopes, however, require no eyepiece. An additional channel for medical instruments or manipulators (only for a multi-function endoscope, see below in "Classification").
Besides, patients undergoing endoscopy procedure may be offered sedation in order to avoid discomfort. Clinical application Endoscopes may be used to investigate symptoms in the digestive system including nausea, vomiting, abdominal pain, difficulty swallowing, and gastrointestinal bleeding. It is also used in diagnosis, most commonly by performing a biopsy to check for conditions such as anemia, bleeding, inflammation, and cancers of the digestive system. The procedure may also be used for treatment such as cauterization of a bleeding vessel, widening a narrow esophagus, clipping off a polyp or removing a foreign object. Health care workers can use endoscopes to review the following body parts: The gastrointestinal tract: Esophagus: chronic esophagitis, esophageal varices, esophageal hiatal hernia, esophageal leiomyoma, esophageal cancer, cardiac cancer, etc. Stomach and duodenum: chronic gastritis, gastric ulcer, benign gastric tumor, gastric cancer, duodenal ulcer, duodenal tumor. Small intestine: small intestine neoplasms, smooth muscle tumors, sarcomas, polyps, lymphomas, inflammation, etc. Large intestine: nonspecific ulcerative colitis, Crohn's disease, chronic colitis, colonic polyps, colorectal cancer, etc. The pancreas and biliary tract: pancreatic cancer, cholangitis, cholangiocarcinoma, etc. The laparoscopy: liver disease, biliary disease, etc. The respiratory tract: lung cancer, transbronchoscopy lung biopsy, selective bronchography, etc. The urinary tract: cystitis, bladder conjugation, bladder tumor, renal tuberculosis, renal stones, renal tumors, congenital malformations of ureter, ureteral stones, ureteral tumors, etc. The ear, nose and throat: Ear: tympanitis, inner ear deformity, etc. Nose: rhinitis, nasal polyp, etc. Throat: retropharyngeal abscess, specific infection, etc. Classification There are many different types of endoscopes for medical examination, so are their classification methods. Generally speaking, the following three classifications are more common: According to functions of the endoscope: single-function endoscope: A single-function endoscope refers to an observation mirror that only has an optical system with it. multi-function endoscope: For a multi-functional endoscope, in addition to the function of observation, it also has at least one working channel like lighting, surgery, flushing and other functions. According to detection areas reached by the endoscope: enteroscope otoscope colonoscope rhinoscope arthroscope laparoscope etc. According to rigidity of the endoscope: rigid endoscope: A rigid endoscope is a prismatic optical system with advantages of clear imaging, multiple working channels and multiple viewpoints. flexible endoscope: A flexible endoscope is an optical-fiber-based system. Notable features of a flexible endoscope include that the lens can be manipulated by the operator to change direction, but the imaging quality is not as good as a rigid one. Recent developments Robot assisted surgery With the development and application of robotic systems, especially surgical robotics, remote surgery has been introduced, in which the surgeon could be at a site far away from the patient. The first remote surgery was called the Lindbergh Operation. And a wireless oesophageal pH measuring devices can now be placed endoscopically, to record ph trends in an area remotely. Endoscopy VR simulators Virtual reality simulators are being developed for training doctors on various endoscopy skills. 
Disposable endoscopy Disposable endoscopy is an emerging category of endoscopic instruments. Recent developments have allowed the manufacture of endoscopes inexpensive enough to be used on a single patient only. It is meeting a growing demand to lessen the risk of cross contamination and hospital acquired diseases. A European consortium of the SME is working on the DUET (disposable use of endoscopy tool) project to build a disposable endoscope. Capsule endoscopy Capsule endoscopes are pill-sized imaging devices that are swallowed by a patient and then record images of the gastrointestinal tract as they pass through naturally. Images are typically retrieved via wireless data transfer to an external receiver. Augmented reality The endoscopic images can be combined with other image sources to provide the surgeon with additional information. For instance, the position of an anatomical structure or tumor might be shown in the endoscopic video. Image enhancement Emerging endoscope technologies measure additional properties of light such as optical polarization, optical phase, and additional wavelengths of light to improve contrast. Non-medical use Industrial endoscopic nondestructive testing technology The above is mainly about the application of endoscopes in medical inspection. In fact, endoscopes are also widely used in industrial field, especially in non-destructive testing and hole exploration. If internal visual inspection of pipes, boilers, cylinders, motors, reactors, heat exchangers, turbines, and other products with narrow, inaccessible cavities and/or channels is to be performed, then the endoscope is an important, if not an indispensable instrument. In such applications they are commonly known as borescopes. See also Medical device Endoscopy Surgery Anesthesia Minimally invasive procedure Robot assisted surgery References Endoscopes Medical devices
Endoscope
[ "Biology" ]
2,713
[ "Medical devices", "Medical technology" ]
421,341
https://en.wikipedia.org/wiki/Ratio%20test
In mathematics, the ratio test is a test (or "criterion") for the convergence of a series $\sum_{n=1}^{\infty} a_n$, where each term $a_n$ is a real or complex number and $a_n$ is nonzero when $n$ is large. The test was first published by Jean le Rond d'Alembert and is sometimes known as d'Alembert's ratio test or as the Cauchy ratio test. The test The usual form of the test makes use of the limit $L = \lim_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right|$. The ratio test states that: if L < 1 then the series converges absolutely; if L > 1 then the series diverges; if L = 1 or the limit fails to exist, then the test is inconclusive, because there exist both convergent and divergent series that satisfy this case. It is possible to make the ratio test applicable to certain cases where the limit L fails to exist, if limit superior and limit inferior are used. The test criteria can also be refined so that the test is sometimes conclusive even when L = 1. More specifically, let $R = \limsup_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right|$ and $r = \liminf_{n\to\infty} \left| \frac{a_{n+1}}{a_n} \right|$. Then the ratio test states that: if R < 1, the series converges absolutely; if r > 1, the series diverges; or equivalently if $\left| \frac{a_{n+1}}{a_n} \right| \ge 1$ for all large n (regardless of the value of r), the series also diverges; this is because $|a_n|$ is nonzero and increasing and hence does not approach zero; the test is otherwise inconclusive. If the limit L exists, we must have L = R = r. So the original ratio test is a weaker version of the refined one. Examples Convergent because L < 1 Consider the series Applying the ratio test, one computes the limit Since this limit is less than 1, the series converges. Divergent because L > 1 Consider the series Putting this into the ratio test: Thus the series diverges. Inconclusive because L = 1 Consider the three series $\sum_{n=1}^{\infty} 1$, $\sum_{n=1}^{\infty} \frac{1}{n^2}$ and $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$. The first series (1 + 1 + 1 + 1 + ⋯) diverges, the second (the one central to the Basel problem) converges absolutely and the third (the alternating harmonic series) converges conditionally. However, the term-by-term magnitude ratios of the three series are respectively $1$, $\frac{n^2}{(n+1)^2}$ and $\frac{n}{n+1}$. So, in all three, the limit is equal to 1. This illustrates that when L = 1, the series may converge or diverge: the ratio test is inconclusive. In such cases, more refined tests are required to determine convergence or divergence. Proof Below is a proof of the validity of the generalized ratio test. Suppose that $r > 1$. We also suppose that $(a_n)$ has infinitely many non-zero members, otherwise the series is just a finite sum, hence it converges. Then there exists some such that there exists a natural number satisfying and for all , because if no such exists then there exists arbitrarily large satisfying for every , then we can find a subsequence satisfying , but this contradicts the fact that is the limit inferior of as , implying the existence of . Then we notice that for , . Notice that so as and , this implies diverges so the series diverges by the n-th term test. Now suppose $R < 1$. Similar to the above case, we may find a natural number and a such that for . Then The series is the geometric series with common ratio , hence which is finite. The sum is a finite sum and hence it is bounded, this implies the series converges by the monotone convergence theorem and the series converges by the absolute convergence test. When the limit exists and equals L, then R = r = L, and this gives the original ratio test. Extensions for L = 1 As seen in the previous example, the ratio test may be inconclusive when the limit of the ratio is 1. Extensions to the ratio test, however, sometimes allow one to deal with this case. In all the tests below one assumes that Σan is a sum with positive an.
These tests also may be applied to any series with a finite number of negative terms. Any such series may be written as: where aN is the highest-indexed negative term. The first expression on the right is a partial sum which will be finite, and so the convergence of the entire series will be determined by the convergence properties of the second expression on the right, which may be re-indexed to form a series of all positive terms beginning at n=1. Each test defines a test parameter (ρn) which specifies the behavior of that parameter needed to establish convergence or divergence. For each test, a weaker form of the test exists which will instead place restrictions upon limn->∞ρn. All of the tests have regions in which they fail to describe the convergence properties of Σan. In fact, no convergence test can fully describe the convergence properties of the series. This is because if Σan is convergent, a second convergent series Σbn can be found which converges more slowly: i.e., it has the property that limn->∞ (bn/an) = ∞. Furthermore, if Σan is divergent, a second divergent series Σbn can be found which diverges more slowly: i.e., it has the property that limn->∞ (bn/an) = 0. Convergence tests essentially use the comparison test on some particular family of an, and fail for sequences which converge or diverge more slowly. De Morgan hierarchy Augustus De Morgan proposed a hierarchy of ratio-type tests The ratio test parameters () below all generally involve terms of the form . This term may be multiplied by to yield . This term can replace the former term in the definition of the test parameters and the conclusions drawn will remain the same. Accordingly, there will be no distinction drawn between references which use one or the other form of the test parameter. 1. d'Alembert's ratio test The first test in the De Morgan hierarchy is the ratio test as described above. 2. Raabe's test This extension is due to Joseph Ludwig Raabe. Define: (and some extra terms, see Ali, Blackburn, Feld, Duris (none), Duris2) The series will: Converge when there exists a c>1 such that for all n>N. Diverge when for all n>N. Otherwise, the test is inconclusive. For the limit version, the series will: Converge if (this includes the case ρ = ∞) Diverge if . If ρ = 1, the test is inconclusive. When the above limit does not exist, it may be possible to use limits superior and inferior. The series will: Converge if Diverge if Otherwise, the test is inconclusive. Proof of Raabe's test Defining , we need not assume the limit exists; if , then diverges, while if the sum converges. The proof proceeds essentially by comparison with . Suppose first that . Of course if then for large , so the sum diverges; assume then that . There exists such that for all , which is to say that . Thus , which implies that for ; since this shows that diverges. The proof of the other half is entirely analogous, with most of the inequalities simply reversed. We need a preliminary inequality to use in place of the simple that was used above: Fix and . Note that . So ; hence . Suppose now that . Arguing as in the first paragraph, using the inequality established in the previous paragraph, we see that there exists such that for ; since this shows that converges. 3. Bertrand's test This extension is due to Joseph Bertrand and Augustus De Morgan. Defining: Bertrand's test asserts that the series will: Converge when there exists a c>1 such that for all n>N. Diverge when for all n>N. Otherwise, the test is inconclusive. 
For the limit version, the series will: Converge if (this includes the case ρ = ∞) Diverge if . If ρ = 1, the test is inconclusive. When the above limit does not exist, it may be possible to use limits superior and inferior. The series will: Converge if Diverge if Otherwise, the test is inconclusive. 4. Extended Bertrand's test This extension probably appeared at the first time by Margaret Martin in 1941. A short proof based on Kummer's test and without technical assumptions (such as existence of the limits, for example) was provided by Vyacheslav Abramov in 2019. Let be an integer, and let denote the th iterate of natural logarithm, i.e. and for any , . Suppose that the ratio , when is large, can be presented in the form (The empty sum is assumed to be 0. With , the test reduces to Bertrand's test.) The value can be presented explicitly in the form Extended Bertrand's test asserts that the series Converge when there exists a such that for all . Diverge when for all . Otherwise, the test is inconclusive. For the limit version, the series Converge if (this includes the case ) Diverge if . If , the test is inconclusive. When the above limit does not exist, it may be possible to use limits superior and inferior. The series Converge if Diverge if Otherwise, the test is inconclusive. For applications of Extended Bertrand's test see birth–death process. 5. Gauss's test This extension is due to Carl Friedrich Gauss. Assuming an > 0 and r > 1, if a bounded sequence Cn can be found such that for all n: then the series will: Converge if Diverge if 6. Kummer's test This extension is due to Ernst Kummer. Let ζn be an auxiliary sequence of positive constants. Define Kummer's test states that the series will: Converge if there exists a such that for all n>N. (Note this is not the same as saying ) Diverge if for all n>N and diverges. For the limit version, the series will: Converge if (this includes the case ρ = ∞) Diverge if and diverges. Otherwise the test is inconclusive When the above limit does not exist, it may be possible to use limits superior and inferior. The series will Converge if Diverge if and diverges. Special cases All of the tests in De Morgan's hierarchy except Gauss's test can easily be seen as special cases of Kummer's test: For the ratio test, let ζn=1. Then: For Raabe's test, let ζn=n. Then: For Bertrand's test, let ζn=n ln(n). Then: Using and approximating for large n, which is negligible compared to the other terms, may be written: For Extended Bertrand's test, let From the Taylor series expansion for large we arrive at the approximation where the empty product is assumed to be 1. Then, Hence, Note that for these four tests, the higher they are in the De Morgan hierarchy, the more slowly the series diverges. Proof of Kummer's test If then fix a positive number . There exists a natural number such that for every Since , for every In particular for all which means that starting from the index the sequence is monotonically decreasing and positive which in particular implies that it is bounded below by 0. Therefore, the limit exists. This implies that the positive telescoping series is convergent, and since for all by the direct comparison test for positive series, the series is convergent. On the other hand, if , then there is an N such that is increasing for . In particular, there exists an for which for all , and so diverges by comparison with . Tong's modification of Kummer's test A new version of Kummer's test was established by Tong. 
See also for further discussions and new proofs. The provided modification of Kummer's theorem characterizes all positive series, and the convergence or divergence can be formulated in the form of two necessary and sufficient conditions, one for convergence and another for divergence. Series converges if and only if there exists a positive sequence , , such that Series diverges if and only if there exists a positive sequence , , such that and The first of these statements can be simplified as follows: Series converges if and only if there exists a positive sequence , , such that The second statement can be simplified similarly: Series diverges if and only if there exists a positive sequence , , such that and However, it becomes useless, since the condition in this case reduces to the original claim Frink's ratio test Another ratio test that can be set in the framework of Kummer's theorem was presented by Orrin Frink 1948. Suppose is a sequence in , If , then the series converges absolutely. If there is such that for all , then diverges. This result reduces to a comparison of with a power series , and can be seen to be related to Raabe's test. Ali's second ratio test A more refined ratio test is the second ratio test: For define: By the second ratio test, the series will: Converge if Diverge if If then the test is inconclusive. If the above limits do not exist, it may be possible to use the limits superior and inferior. Define: Then the series will: Converge if Diverge if If then the test is inconclusive. Ali's mth ratio test This test is a direct extension of the second ratio test. For and positive define: By the th ratio test, the series will: Converge if Diverge if If then the test is inconclusive. If the above limits do not exist, it may be possible to use the limits superior and inferior. For define: Then the series will: Converge if Diverge if If , then the test is inconclusive. Ali--Deutsche Cohen φ-ratio test This test is an extension of the th ratio test. Assume that the sequence is a positive decreasing sequence. Let be such that exists. Denote , and assume . Assume also that Then the series will: Converge if Diverge if If , then the test is inconclusive. See also Root test Radius of convergence Footnotes References . : §8.14. : §3.3, 5.4. : §3.34. : §2.36, 2.37. Convergence tests Articles containing proofs it:Criteri di convergenza#Criterio del rapporto (o di d'Alembert)
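As a numerical complement to the basic test described above, the following Python sketch watches the term ratio |a_{n+1}/a_n| as n grows for a few sample series. The series chosen here (n/2^n, 1/n, 1/n^2) are illustrative picks of this example, not the (unnamed) examples from the article; the last two show the inconclusive L = 1 case that motivates the De Morgan hierarchy of refinements.

```python
# Numerical illustration of the ratio test: watch |a_{n+1}/a_n| as n grows.
from fractions import Fraction

def ratio(a, n):
    """Exact |a(n+1)/a(n)| for a term function a, returned as a float."""
    return float(abs(a(n + 1) / a(n)))

series = {
    "n/2^n": lambda n: Fraction(n, 2 ** n),   # L = 1/2 -> converges
    "1/n":   lambda n: Fraction(1, n),        # L = 1   -> inconclusive (diverges)
    "1/n^2": lambda n: Fraction(1, n * n),    # L = 1   -> inconclusive (converges)
}

for name, a in series.items():
    samples = [round(ratio(a, n), 5) for n in (10, 100, 1000)]
    print(f"{name:6s} ratios at n = 10, 100, 1000: {samples}")
```

The first series settles quickly toward 1/2 and so converges by the basic test, while the harmonic and Basel-type series both drift toward 1, where only the finer tests above (Raabe, Bertrand, Kummer and their relatives) can separate divergence from convergence.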
Ratio test
[ "Mathematics" ]
2,961
[ "Theorems in mathematical analysis", "Convergence tests", "Articles containing proofs" ]
421,597
https://en.wikipedia.org/wiki/Tractive%20effort
In railway engineering, the term tractive effort describes the pulling or pushing capability of a locomotive. The published tractive force value for any vehicle may be theoretical—that is, calculated from known or implied mechanical properties—or obtained via testing under controlled conditions. The discussion herein covers the term's usage in mechanical applications in which the final stage of the power transmission system is one or more wheels in frictional contact with a railroad track. Defining tractive effort The term tractive effort is often qualified as starting tractive effort, continuous tractive effort and maximum tractive effort. These terms apply to different operating conditions, but are related by common mechanical factors: input torque to the driving wheels, the wheel diameter, coefficient of friction () between the driving wheels and supporting surface, and the weight applied to the driving wheels (). The product of and is the factor of adhesion, which determines the maximum torque that can be applied before the onset of wheelspin or wheelslip. Starting tractive effort Starting tractive effort is the tractive force that can be generated at a standstill. This figure is important on railways because it determines the maximum train weight that a locomotive can set into motion. Maximum tractive effort Maximum tractive effort is defined as the highest tractive force that can be generated under any condition that is not injurious to the vehicle or machine. In most cases, maximum tractive effort is developed at low speed and may be the same as the starting tractive effort. Continuous tractive effort Continuous tractive effort is the tractive force that can be maintained indefinitely, as distinct from the higher tractive effort that can be maintained for a limited period of time before the power transmission system overheats. Due to the relationship between power (), velocity () and force (), described as: or Tractive effort inversely varies with speed at any given level of available power. Continuous tractive effort is often shown in graph form at a range of speeds as part of a tractive effort curve. Vehicles having a hydrodynamic coupling, hydrodynamic torque multiplier or electric motor as part of the power transmission system may also have a maximum continuous tractive effort rating, which is the highest tractive force that can be produced for a short period of time without causing component harm. The period of time for which the maximum continuous tractive effort may be safely generated is usually limited by thermal considerations. such as temperature rise in a traction motor. Tractive effort curves Specifications of locomotives often include tractive effort curves, showing the relationship between tractive effort and velocity. The shape of the graph is shown at right. The line AB shows operation at the maximum tractive effort, the line BC shows continuous tractive effort that is inversely proportional to speed (constant power). Tractive effort curves often have graphs of rolling resistance superimposed on them—the intersection of the rolling resistance graph and tractive effort graph gives the maximum velocity at zero grade (when net tractive effort is zero). 
Rail vehicles In order to start a train and accelerate it to a given speed, the locomotive(s) must develop sufficient tractive force to overcome the train's resistance, which is a combination of axle bearing friction, the friction of the wheels on the rails (which is substantially greater on curved track than on tangent track), and the force of gravity if on a grade. Once in motion, the train will develop additional drag as it accelerates due to aerodynamic forces, which increase with the square of the speed. Drag may also be produced at speed due to truck (bogie) hunting, which will increase the rolling friction between wheels and rails. If acceleration continues, the train will eventually attain a speed at which the available tractive force of the locomotive(s) will exactly offset the total drag, causing acceleration to cease. This top speed will be increased on a downgrade due to gravity assisting the motive power, and will be decreased on an upgrade due to gravity opposing the motive power. Tractive effort can be theoretically calculated from a locomotive's mechanical characteristics (e.g., steam pressure, weight, etc.), or by actual testing with strain sensors on the drawbar and a dynamometer car. Power at rail is a railway term for the available power for traction, that is, the power that is available to propel the train. Steam locomotives An estimate for the tractive effort of a single cylinder steam locomotive can be obtained from the cylinder pressure, cylinder bore, stroke of the piston and the diameter of the wheel. The torque developed by the linear motion of the piston depends on the angle that the driving rod makes with the tangent of the radius on the driving wheel. For a more useful figure, an average value over the rotation of the wheel is used. The driving force is the torque divided by the wheel radius. As an approximation, the following formula can be used (for a two-cylinder locomotive): $t = \frac{c\,p\,d^2 s}{w}$, with $c \approx 0.85$, where t is tractive effort in pounds-force, d is the piston diameter in inches (bore), s is the piston stroke in inches, p is the working pressure in pounds per square inch, and w is the diameter of the driving wheels in inches. The constant 0.85 was the Association of American Railroads (AAR) standard for such calculations, and overestimated the efficiency of some locomotives and underestimated that of others. Modern locomotives with roller bearings were probably underestimated. European designers used a constant of 0.6 instead of 0.85, so the two cannot be compared without a conversion factor. In Britain main-line railways generally used a constant of 0.85 but builders of industrial locomotives often used a lower figure, typically 0.75. The constant c also depends on the cylinder dimensions and the time at which the steam inlet valves are open; if the steam inlet valves are closed immediately after obtaining full cylinder pressure the piston force can be expected to have dropped to less than half the initial force, giving a low c value. If the cylinder valves are left open for longer the value of c will rise nearer to one. Three or four cylinders (simple) The result should be multiplied by 1.5 for a three-cylinder locomotive and by two for a four-cylinder locomotive. Alternatively, tractive effort of all "simple" (i.e.
non-compound) locomotives can be calculated thus: where t is tractive effort in pounds-force n is the number of cylinders d is the piston diameter in inches s is the piston stroke in inches p is the maximum rated boiler pressure in psi w is the diameter of the driving wheels in inches Multiple cylinders (compound) For other numbers and combinations of cylinders, including double and triple expansion engines the tractive effort can be estimated by adding the tractive efforts due to the individual cylinders at their respective pressures and cylinder strokes. Values and comparisons for steam locomotives Tractive effort is the figure often quoted when comparing the powers of steam locomotives, but is misleading because tractive effort shows the ability to start a train, not the ability to haul it. Possibly the highest tractive effort ever claimed was for the Virginian Railway's 2-8-8-8-4 triplex locomotive, which in simple expansion mode had a calculated starting T.E. of 199,560 lbf (887.7 kN)—but the boiler could not produce enough steam to haul at speeds over 5 mph (8 km/h). Of more successful steam locomotives, those with the highest rated starting tractive effort were the Virginian Railway AE-class 2-10-10-2s, at 176,000 lbf (783 kN) in simple-expansion mode (or 162,200 lb if calculated by the usual formula). The Union Pacific Big Boys had a starting T.E. of 135,375 lbf (602 kN); the Norfolk & Western's Y5, Y6, Y6a, and Y6b class 2-8-8-2s had a starting T.E. of 152,206 lbf (677 kN) in simple expansion mode (later modified to 170,000 lbf (756 kN), claim some enthusiasts); and the Pennsylvania Railroad's freight duplex Q2 attained 114,860 lbf (510.9 kN, including booster)—the highest for a rigid-framed locomotive. Later two-cylinder passenger locomotives were generally 40,000 to 80,000 lbf (170 to 350 kN) of T.E. Diesel and electric locomotives For an electric locomotive or a diesel-electric locomotive, starting tractive effort can be calculated from the amount of weight on the driving wheels (which may be less than the total locomotive weight in some cases), combined stall torque of the traction motors, the gear ratio between the traction motors and axles, and driving wheel diameter. For a diesel-hydraulic locomotive, the starting tractive effort is affected by the stall torque of the torque converter, as well as gearing, wheel diameter and locomotive weight. The relationship between power and tractive effort was expressed by Hay (1978) as where t is tractive effort, in newtons (N) P is the power in watts (W) E is the efficiency, with a suggested value of 0.82 to account for losses between the motor and the rail, as well as power diverted to auxiliary systems such as lighting v is the speed in metres per second (m/s) Freight locomotives are designed to produce higher maximum tractive effort than passenger units of equivalent power, necessitated by the much higher weight that is typical of a freight train. In modern locomotives, the gearing between the traction motors and axles is selected to suit the type of service in which the unit will be operated. As traction motors have a maximum speed at which they can rotate without incurring damage, gearing for higher tractive effort is at the expense of top speed. Conversely, the gearing used with passenger locomotives favors speed over maximum tractive effort. Electric locomotives with monomotor bogies are sometimes fitted with two-speed gearing. This allows higher tractive effort for hauling freight trains but at reduced speed. 
Examples include the SNCF classes BB 8500 and BB 25500. See also Braking Drag equation Factor of adhesion, which is simply the weight on the locomotive's driving wheels divided by the starting tractive effort Power classification – British Railways and London, Midland and Scottish railway classification scheme Rail adhesion Tractor pulling, bollard pull – articles relating to tractive effort for other forms of vehicle Notes References Further reading A simple guide to train physics Tractive effort, acceleration and braking Rolling stock Force Vehicles
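The two estimates described above (the approximate steam formula with its empirical constant, and the power-based relation attributed to Hay, t = EP/v) can be combined into a short calculator. This is only a sketch: the locomotive figures in the example are invented for illustration, and the n/2 scaling simply generalises the stated rule of multiplying the two-cylinder result by 1.5 or 2 for three or four cylinders.

```python
# Sketch of the two tractive-effort estimates described above.
# The example locomotive figures are invented for illustration only.

def steam_tractive_effort(n_cyl, bore_in, stroke_in, boiler_psi, wheel_in, c=0.85):
    """Starting tractive effort (lbf) of a simple steam locomotive,
    scaling the two-cylinder formula c*p*d^2*s/w by n_cyl/2."""
    return (n_cyl / 2) * c * boiler_psi * bore_in ** 2 * stroke_in / wheel_in

def electric_tractive_effort(power_w, speed_ms, efficiency=0.82):
    """Tractive effort (N) of a diesel-electric or electric locomotive
    at a given speed, using t = E*P/v with the suggested E = 0.82."""
    return efficiency * power_w / speed_ms

# A made-up two-cylinder freight engine: 24 in x 30 in cylinders,
# 250 psi boiler pressure, 60 in driving wheels.
print(f"Steam, starting TE : {steam_tractive_effort(2, 24, 30, 250, 60):,.0f} lbf")

# A made-up 3,000 kW locomotive at 40 km/h (about 11.1 m/s).
print(f"Diesel-electric TE : {electric_tractive_effort(3_000_000, 40 / 3.6):,.0f} N")
```

Note how the electric figure falls as speed rises: at constant power the available tractive effort varies inversely with velocity, which is exactly the constant-power portion of the tractive effort curve described earlier.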
Tractive effort
[ "Physics", "Mathematics" ]
2,162
[ "Vehicles", "Force", "Physical quantities", "Quantity", "Mass", "Classical mechanics", "Physical systems", "Transport", "Wikipedia categories named after physical quantities", "Matter" ]
421,914
https://en.wikipedia.org/wiki/Additive%20rhythm%20and%20divisive%20rhythm
In music, the terms additive and divisive are used to distinguish two types of both rhythm and meter: A divisive (or, alternately, multiplicative) rhythm is a rhythm in which a larger period of time is divided into smaller rhythmic units or, conversely, some integer unit is regularly multiplied into larger, equal units. This can be contrasted with additive rhythm, in which larger periods of time are constructed by concatenating (joining end to end) a series of units into larger units of unequal length, such as a meter produced by the regular alternation of and . When applied to meters, the terms perfect and imperfect are sometimes used as the equivalents of divisive and additive, respectively . For example, 4 may be evenly divided by 2 or reached by adding 2 + 2. In contrast, 5 is only evenly divisible by 5 and 1 and may be reached by adding 2 or 3. Thus, (or, more commonly, ) is divisive while is additive. The terms additive and divisive originate with Curt Sachs's book Rhythm and Tempo (1953), while the term aksak rhythm was introduced for the former concept at about the same time by Constantin Brăiloiu, in agreement with the Turkish musicologist Ahmet Adnan Saygun. The relationship between additive and divisive rhythms is complex, and the terms are often used in imprecise ways. In his article on rhythm in the second edition of the New Grove Dictionary of Music and Musicians, Justin London states that: Winold recommends that, "metric structure is best described through detailed analysis of pulse groupings on various levels rather than through attempts to represent the organization with a single term". Sub-Saharan African music and most European (Western) music is divisive, while Indian and other Asian musics may be considered as primarily additive. However, many pieces of music cannot be clearly labeled divisive or additive. Divisive rhythm For example: consists of one measure (whole note: 1) divided into a stronger first beat and slightly less strong second beat (half notes: 1, 3), which are in turn divided, by two weaker beats (quarter notes: 1, 2, 3, 4), and again divided into still weaker beats (eighth notes: 1 & 2 & 3 & 4 &). Additive rhythm features nonidentical or irregular durational groups following one another at two levels, within the bar and between bars or groups of bars. This type of rhythm is also referred to in musicological literature by the Turkish word aksak, which means "limping". In the special case of time signatures in which the upper numeral is not divisible by two or three without a fraction, the result may alternatively be called irregular, imperfect, or uneven meter, and the groupings into twos and threes are sometimes called long beats and short beats. The term additive rhythm is also often used to refer to what are also incorrectly called asymmetric rhythms and even irregular rhythms – that is, meters which have a regular pattern of beats of uneven length. For example, the time signature indicates each bar is eight quavers long, and has four beats, each a crotchet (that is, two quavers) long. The asymmetric time signature , on the other hand, while also having eight quavers in a bar, divides them into three beats, the first three quavers long, the second three quavers long, and the last just two quavers long. These kinds of rhythms are used, for example, by Béla Bartók, who was influenced by similar rhythms in Bulgarian Folk Music. The third movement of Bartók's String Quartet No. 5, a scherzo marked alla bulgarese features a " rhythm (4+2+3)". 
Stravinsky's Octet for Wind Instruments "ends with a jazzy 3+3+2 = 8 swung coda". Stravinsky himself found a kinship with additive rhythms in music of the renaissance and baroque periods. For example, he marvelled at the Laudate Pueri from Monteverdi's Vespers of 1610, where the music follows the natural accentuation of the Latin words to create metrical groupings of twos, threes and fours at the very start: "I know of no music before or since…. which so felicitously exploits accentual and metrical variation and irregularity, and no more subtle rhythmic construction of any kind than that which is set in motion at the beginning of the 'Laudate Pueri,’ if, that is, the music is sung according to the verbal accents instead of... the editor's bar-lines". Additive patterns also occur in some music of Philip Glass, and other minimalists, most noticeably the "one-two-one-two-three" chorus parts in Einstein on the Beach. They may also occur in passing in pieces which are on the whole in conventional meters. In jazz, Dave Brubeck's song "Blue Rondo à la Turk" features bars of nine quavers grouped into patterns of at the start. George Harrison's song "Here Comes the Sun" on The Beatles' album Abbey Road features a rhythm "which switches between , and on the bridge". "The special effect of running even eighth notes accented as if triplets against the grain of the underlying backbeat is carried to a point more reminiscent of Stravinsky than of the Beatles". Olivier Messiaen made extensive use of additive rhythmic patterns, much of it stemming from his close study of the rhythms of Indian music. His "Danse de la fureur, pour les sept trompettes" from The Quartet for the End of Time is a bracing example. A gentler exploration of additive patterns can be found in "Le Regard de la Vierge" from the same composer's piano cycle Vingt regards sur l'enfant-Jésus. György Ligeti's Étude No. 13, "L'escalier du diable" features patterns involving quavers grouped in twos and threes. The rhythm at the start of the study follows the pattern , then . According to the composer's note, the time signature "serves only as a guideline, the actual meter consists of 36 quavers (three 'bars'), divided asymmetrically". Sub-Saharan African rhythm A divisive form of cross-rhythm is the basis for most Sub-Saharan African music traditions. Rhythmic patterns are generated by simultaneously dividing a span of musical time by a triple-beat scheme and a duple-beat scheme. In the development of cross rhythm, there are some selected rhythmic materials or beat schemes that are customarily used. These beat schemes, in their generic forms, are simple divisions of the same musical period in equal units, producing varying rhythmic densities or motions. At the center of a core of rhythmic traditions within which the composer conveys his ideas is the technique of cross-rhythm. The technique of cross-rhythm is a simultaneous use of contrasting rhythmic patterns within the same scheme of accents or meter... By the very nature of the desired resultant rhythm, the main beat scheme cannot be separated from the secondary beat scheme. It is the interplay of the two elements that produces the cross-rhythmic texture. "the entire African rhythmic structure... is divisive in nature". Tresillo: divisive and additive interpretations In divisive form, the strokes of tresillo contradict the beats. In additive form, the strokes of tresillo are the beats. From a metrical perspective then, the two ways of perceiving tresillo constitute two different rhythms. 
On the other hand, from the perspective of simply the pattern of attack-points, tresillo is a shared element of traditional folk music from the northwest tip of Africa to southeast tip of Asia. Additive structure "Tresillo" is also found within a wide geographic belt stretching from Morocco in North Africa to Indonesia in South Asia. Use of the pattern in Moroccan music can be traced back to slaves brought north across the Sahara Desert from present-day Mali. This pattern may have migrated east from North Africa to Asia through the spread of Islam. In Middle Eastern and Asian music, the figure is generated through additive rhythm. Divisive structure The most basic duple-pulse figure found in the Music of Africa and music of the African diaspora is a figure the Cubans call tresillo, a Spanish word meaning 'triplet' (three equal beats in the same time as two main beats). However, in the vernacular of Cuban popular music, the term refers to the figure shown below. African-based music has a divisive rhythm structure. Tresillo is generated through cross-rhythm: 8 pulses ÷ 3 = 2 cross-beats (consisting of three pulses each), with a remainder of a partial cross-beat (spanning two pulses). In other words, 8 ÷ 3 = 2, r2. Tresillo is a cross-rhythmic fragment. Because of its irregular pattern of attack-points, "tresillo" in African and African-based musics has been mistaken for a form of additive rhythm. Although the difference between the two ways of notating this rhythm may seem small, they stem from fundamentally different conceptions. Those who wish to convey a sense of the rhythm's background [main beats], and who understand the surface morphology in relation to a regular subsurface articulation, will prefer the divisive format. Those who imagine the addition of three, then three, then two sixteenth notes will treat the well-formedness of 3 + 3 + 2 as fortuitous, a product of grouping rather than of metrical structure. They will be tempted to deny that African music has a bona fide metrical structure because of its frequent departures from normative grouping structure. See also Counting (music) References Sources Reprinted 1988, New York: Columbia University Press. (cloth); (pbk). Rhythm and meter fr:Division du temps (solfège) pl:Rytmika zmienna pl:Rytmika okresowa sv:Asymmetrisk rytm
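The contrast drawn above between a divisive 2+2+2+2 division of eight pulses and an additive (tresillo-like) 3+3+2 grouping can be made concrete with a small sketch that prints the accent pattern of each grouping. The notation used below (X for an accented pulse, . for an unaccented one) is an assumption of this illustration, not a convention taken from the article.

```python
# Toy illustration of the two ways of grouping eight quick pulses that the
# article contrasts: divisive 2+2+2+2 versus additive (tresillo-like) 3+3+2.

def accent_pattern(groups):
    """Mark the first pulse of each group with 'X' and the rest with '.'."""
    return " ".join("X" + "." * (g - 1) for g in groups)

divisive = (2, 2, 2, 2)   # eight quavers split evenly into four beats
additive = (3, 3, 2)      # eight quavers grouped unevenly, as in 3+3+2

print("divisive 2+2+2+2 :", accent_pattern(divisive))
print("additive 3+3+2   :", accent_pattern(additive))
```

Both groupings contain the same eight pulses; only the placement of the accents differs, which is the point made above about tresillo being readable either as an additive meter or as a cross-rhythmic fragment of a divisive one.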
Additive rhythm and divisive rhythm
[ "Physics" ]
2,104
[ "Spacetime", "Rhythm and meter", "Physical quantities", "Time" ]
421,940
https://en.wikipedia.org/wiki/Bragg%27s%20law
In many areas of science, Bragg's law (also referred to as Wulff–Bragg's condition or Laue–Bragg interference) is a special case of Laue diffraction, giving the angles for coherent scattering of waves from a large crystal lattice. It describes how the superposition of wave fronts scattered by lattice planes leads to a strict relation between the wavelength and scattering angle. This law was initially formulated for X-rays, but it also applies to all types of matter waves including neutron and electron waves if there are a large number of atoms, as well as visible light with artificial periodic microscale lattices. History Bragg diffraction (also referred to as the Bragg formulation of X-ray diffraction) was first proposed by Lawrence Bragg and his father, William Henry Bragg, in 1913 after their discovery that crystalline solids produced surprising patterns of reflected X-rays (in contrast to those produced with, for instance, a liquid). They found that these crystals, at certain specific wavelengths and incident angles, produced intense peaks of reflected radiation. Lawrence Bragg explained this result by modeling the crystal as a set of discrete parallel planes separated by a constant parameter d. He proposed that the incident X-ray radiation would produce a Bragg peak if reflections off the various planes interfered constructively. The interference is constructive when the phase difference between the wave reflected off different atomic planes is a multiple of 2π; this condition (see Bragg condition section below) was first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society. Although simple, Bragg's law confirmed the existence of real particles at the atomic scale, as well as providing a powerful new tool for studying crystals. Lawrence Bragg and his father, William Henry Bragg, were awarded the Nobel Prize in Physics in 1915 for their work in determining crystal structures beginning with NaCl, ZnS, and diamond. They are the only father-son team to jointly win. The concept of Bragg diffraction applies equally to neutron diffraction and approximately to electron diffraction. In both cases the wavelengths are comparable with inter-atomic distances (~ 150 pm). Many other types of matter waves have also been shown to diffract, and also light from objects with a larger ordered structure such as opals. Bragg condition Bragg diffraction occurs when radiation of a wavelength comparable to atomic spacings is scattered in a specular fashion (mirror-like reflection) by planes of atoms in a crystalline material, and undergoes constructive interference. When the scattered waves are incident at a specific angle, they remain in phase and constructively interfere. The glancing angle θ (see figure on the right, and note that this differs from the convention in Snell's law, where the angle is measured from the surface normal), the wavelength λ, and the "grating constant" d of the crystal are connected by the relation $n\lambda = 2d\sin\theta$, where n is the diffraction order (n = 1 is first order, n = 2 is second order, n = 3 is third order). This equation, Bragg's law, describes the condition on θ for constructive interference. A map of the intensities of the scattered waves as a function of their angle is called a diffraction pattern. Strong intensities known as Bragg peaks are obtained in the diffraction pattern when the scattering angles satisfy the Bragg condition. This is a special case of the more general Laue equations, and the Laue equations can be shown to reduce to the Bragg condition with additional assumptions.
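As a quick numerical illustration of the relation nλ = 2d sin θ above, the following Python sketch solves for the first-order glancing angle. The Cu Kα wavelength and the sample d-spacings are assumed, textbook-style values rather than figures taken from this article.

```python
# First-order Bragg angles from n*lambda = 2*d*sin(theta).
import math

def bragg_angle_deg(wavelength_nm, d_nm, n=1):
    """Glancing angle theta (degrees) satisfying n*lambda = 2*d*sin(theta),
    or None if no reflection of this order is possible."""
    s = n * wavelength_nm / (2 * d_nm)
    if s > 1:
        return None
    return math.degrees(math.asin(s))

CU_K_ALPHA = 0.15406  # nm, assumed X-ray wavelength

for d_nm in (0.3135, 0.2000, 0.0700):
    theta = bragg_angle_deg(CU_K_ALPHA, d_nm)
    if theta is None:
        print(f"d = {d_nm:.4f} nm : no first-order reflection at this wavelength")
    else:
        print(f"d = {d_nm:.4f} nm : theta = {theta:.2f} deg")
```

The last case shows the geometric limit built into the law: when the wavelength exceeds 2d, sin θ would have to be greater than one and no Bragg peak can occur for that plane spacing.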
Derivation In Bragg's original paper he describes his approach as a Huygens' construction for a reflected wave. Suppose that a plane wave (of any type) is incident on planes of lattice points, with separation , at an angle as shown in the Figure. Points A and C are on one plane, and B is on the plane below. Points ABCC' form a quadrilateral. There will be a path difference between the ray that gets reflected along AC' and the ray that gets transmitted along AB, then reflected along BC. This path difference is The two separate waves will arrive at a point (infinitely far from these lattice planes) with the same phase, and hence undergo constructive interference, if and only if this path difference is equal to any integer value of the wavelength, i.e. where and are an integer and the wavelength of the incident wave respectively. Therefore, from the geometry from which it follows that Putting everything together, which simplifies to which is Bragg's law shown above. If only two planes of atoms were diffracting, as shown in the Figure then the transition from constructive to destructive interference would be gradual as a function of angle, with gentle maxima at the Bragg angles. However, since many atomic planes are participating in most real materials, sharp peaks are typical. A rigorous derivation from the more general Laue equations is available (see page: Laue equations). Beyond Bragg's law The Bragg condition is correct for very large crystals. Because the scattering of X-rays and neutrons is relatively weak, in many cases quite large crystals with sizes of 100 nm or more are used. While there can be additional effects due to crystal defects, these are often quite small. In contrast, electrons interact thousands of times more strongly with solids than X-rays, and also lose energy (inelastic scattering). Therefore samples used in transmission electron diffraction are much thinner. Typical diffraction patterns, for instance the Figure, show spots for different directions (plane waves) of the electrons leaving a crystal. The angles that Bragg's law predicts are still approximately right, but in general there is a lattice of spots which are close to projections of the reciprocal lattice that is at right angles to the direction of the electron beam. (In contrast, Bragg's law predicts that only one or perhaps two would be present, not simultaneously tens to hundreds.) With low-energy electron diffraction where the electron energies are typically 30-1000 electron volts, the result is similar with the electrons reflected back from a surface. Also similar is reflection high-energy electron diffraction which typically leads to rings of diffraction spots. With X-rays the effect of having small crystals is described by the Scherrer equation. This leads to broadening of the Bragg peaks which can be used to estimate the size of the crystals. Bragg scattering of visible light by colloids A colloidal crystal is a highly ordered array of particles that forms over a long range (from a few millimeters to one centimeter in length); colloidal crystals have appearance and properties roughly analogous to their atomic or molecular counterparts. It has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often being considerably greater than the individual particle diameter. 
Periodic arrays of spherical particles give rise to interstitial voids (the spaces between the particles), which act as a natural diffraction grating for visible light waves, when the interstitial spacing is of the same order of magnitude as the incident lightwave. In these cases brilliant iridescence (or play of colours) is attributed to the diffraction and constructive interference of visible lightwaves according to Bragg's law, in a matter analogous to the scattering of X-rays in crystalline solid. The effects occur at visible wavelengths because the interplanar spacing is much larger than for true crystals. Precious opal is one example of a colloidal crystal with optical effects. Volume Bragg gratings Volume Bragg gratings (VBG) or volume holographic gratings (VHG) consist of a volume where there is a periodic change in the refractive index. Depending on the orientation of the refractive index modulation, VBG can be used either to transmit or reflect a small bandwidth of wavelengths. Bragg's law (adapted for volume hologram) dictates which wavelength will be diffracted: where is the Bragg order (a positive integer), the diffracted wavelength, Λ the fringe spacing of the grating, the angle between the incident beam and the normal () of the entrance surface and the angle between the normal and the grating vector (). Radiation that does not match Bragg's law will pass through the VBG undiffracted. The output wavelength can be tuned over a few hundred nanometers by changing the incident angle (). VBG are being used to produce widely tunable laser source or perform global hyperspectral imagery (see Photon etc.). Selection rules and practical crystallography The measurement of the angles can be used to determine crystal structure, see x-ray crystallography for more details. As a simple example, Bragg's law, as stated above, can be used to obtain the lattice spacing of a particular cubic system through the following relation: where is the lattice spacing of the cubic crystal, and , , and are the Miller indices of the Bragg plane. Combining this relation with Bragg's law gives: One can derive selection rules for the Miller indices for different cubic Bravais lattices as well as many others, a few of the selection rules are given in the table below. These selection rules can be used for any crystal with the given crystal structure. KCl has a face-centered cubic Bravais lattice. However, the K+ and the Cl− ion have the same number of electrons and are quite close in size, so that the diffraction pattern becomes essentially the same as for a simple cubic structure with half the lattice parameter. Selection rules for other structures can be referenced elsewhere, or derived. Lattice spacing for the other crystal systems can be found here. See also Bragg plane Crystal lattice Diffraction Distributed Bragg reflector Fiber Bragg grating Dynamical theory of diffraction Electron diffraction Georg Wulff Henderson limit Laue conditions Powder diffraction Radar angels Structure factor X-ray crystallography References Further reading Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976). External links Nobel Prize in Physics – 1915 https://web.archive.org/web/20110608141639/http://www.physics.uoguelph.ca/~detong/phys3510_4500/xray.pdf Learning crystallography Diffraction Neutron X-rays Crystallography
Bragg's law
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,142
[ "Spectrum (physical sciences)", "X-rays", "Electromagnetic spectrum", "Materials science", "Crystallography", "Diffraction", "Condensed matter physics", "Spectroscopy" ]
422,481
https://en.wikipedia.org/wiki/Mass%E2%80%93energy%20equivalence
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula: . In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula. The formula defines the energy of a particle in its rest frame as the product of mass () with the speed of light squared (). Because the speed of light is a large number in everyday units (approximately ), the formula implies that a small amount of mass corresponds to an enormous amount of energy. Rest mass, also called invariant mass, is a fundamental physical property of matter, independent of velocity. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy. The equivalence principle implies that when mass is lost in chemical reactions or nuclear reactions, a corresponding amount of energy will be released. The energy can be released to the environment (outside of the system being considered) as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics. Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his annus mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists. Description Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equal or they differ only by a constant factor, the speed of light squared (). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by , which is on the order of 1017 joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these extreme events, Einstein's formula can be used with as the energy released (removed), and as the change in mass. In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box by the amount equal to their energy divided by . 
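As a quick sanity check on the order of magnitude quoted above, the short sketch below evaluates the rest energy m c² for one kilogram; the speed of light is the standard SI constant rather than a value drawn from the article.

c = 299_792_458.0  # speed of light in m/s (exact SI value)

def rest_energy_joules(mass_kg):
    """Rest energy E = m * c**2 of a body in its rest frame."""
    return mass_kg * c ** 2

print(f"{rest_energy_joules(1.0):.3e} J")  # about 8.99e16 J, i.e. roughly 10^17 J per kilogram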
For an observer in the rest frame, removing energy is the same as removing mass and the formula indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by . Mass in special relativity An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The relativistic mass of an object is given by the relativistic energy divided by . Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The rest mass or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. Physicists typically use the term mass, though experiments have shown an object's gravitational mass depends on its total energy and not just its rest mass. The rest mass is the same for all inertial frames, as it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum. Relativistic mass Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has kinetic energy. If the object moves slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the classical inertial mass (as it appears in Newton's laws of motion). If the object moves quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. Massless particles also have relativistic mass derived from their kinetic energy, equal to their relativistic energy divided by , or . The speed of light is one in a system where length and time are measured in natural units and the relativistic mass and energy would be equal in value and dimension. As it is just another name for the energy, the use of the term relativistic mass is redundant and physicists generally reserve mass to refer to rest mass, or invariant mass, as opposed to relativistic mass. A consequence of this terminology is that the mass is not conserved in special relativity, whereas the conservation of momentum and conservation of energy are both fundamental laws. 
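A short sketch may help separate the two notions of mass discussed above. It computes the relativistic energy γ m c² of a one-kilogram body at several speeds and reports the corresponding "relativistic mass" E / c²; the speeds chosen are arbitrary illustrations.

import math

c = 299_792_458.0  # m/s

def lorentz_factor(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def relativistic_energy(rest_mass_kg, v):
    """Total energy gamma * m * c**2; dividing by c**2 gives the 'relativistic mass'."""
    return lorentz_factor(v) * rest_mass_kg * c ** 2

rest_mass = 1.0  # kg
for v in (0.0, 0.5 * c, 0.9 * c, 0.99 * c):
    energy = relativistic_energy(rest_mass, v)
    print(f"v = {v / c:.2f} c: E = {energy:.3e} J, E / c^2 = {energy / c ** 2:.3f} kg")

At v = 0 the relativistic mass reduces to the rest mass, which is the sense in which the two quantities differ only for a moving body.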
Conservation of mass and energy Conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally proven in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a conservation of relativistic mass. Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy. Massless particles Massless particles have zero rest mass. The Planck–Einstein relation for the energy for photons is given by the equation , where is the Planck constant and is the photon frequency. This frequency and thus the relativistic energy are frame-dependent. If an observer runs away from a photon in the direction the photon travels from a source, and it catches up with the observer, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon would be seen to have. As an observer approaches the speed of light with regard to the source, the redshift of the photon increases, according to the relativistic Doppler effect. The energy of the photon is reduced and as the wavelength becomes arbitrarily large, the photon's energy approaches zero, because of the massless nature of photons, which does not permit any intrinsic energy. Composite systems For closed systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic energy is given by the sum of the relativistic energies of each of the parts, because energies are additive in these systems. If a system is bound by attractive forces, and the energy gained in excess of the work done is removed from the system, then mass is lost with this removed energy. The mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up. This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons. This effect can be understood by looking at the potential energy of the individual components. The individual particles have a force attracting them together, and forcing them apart increases the potential energy of the particles in the same way that lifting an object up on earth does. This energy is equal to the work required to split the particles apart. The mass of the Solar System is slightly less than the sum of its individual masses. For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by ) in the center of momentum frame. The center of momentum frame is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the center of mass frame is a special case of the center of momentum frame where the center of mass is put at the origin. 
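The invariant mass of a composite system can be illustrated with a small sketch. It treats each constituent as an (energy, momentum) tuple and evaluates sqrt(E² − |p|² c²) / c² for the system as a whole; the two-photon example is a standard illustration and the one-joule energies are arbitrary.

import math

c = 299_792_458.0  # m/s

def invariant_mass(particles):
    """particles: iterable of (energy_J, px, py, pz) in SI units.
    Returns sqrt(E_total**2 - |p_total|**2 * c**2) / c**2 for the whole system."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    p_sq = px ** 2 + py ** 2 + pz ** 2
    return math.sqrt(max(E ** 2 - p_sq * c ** 2, 0.0)) / c ** 2

# Two 1 J photons moving in opposite directions: each photon is massless
# (E = |p| c), yet the pair has zero net momentum and a nonzero invariant mass.
photon_right = (1.0,  1.0 / c, 0.0, 0.0)
photon_left  = (1.0, -1.0 / c, 0.0, 0.0)
print(invariant_mass([photon_right, photon_left]))  # ~2.2e-17 kg, i.e. 2 J / c^2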
A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass. Relation to gravity Physics has two concepts of mass, the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass–energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the weak equivalence principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass–energy equivalence, combined with the weak equivalence principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity. The prediction that all forms of energy interact gravitationally has been subject to experimental tests. One of the first observations testing this prediction, called the Eddington experiment, was made during the solar eclipse of May 29, 1919. During the eclipse, the English astronomer and physicist Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light indeed is equivalent to a gravitational mass. Another seminal experiment, the Pound–Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency as stated by the Planck's relation. Efficiency In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat. 
One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the decay of the uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in the universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation. As most of the mass which comprises ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful forms requires that the protons and neutrons be converted to lighter particles, or particles with no mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Despite this, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos. This is the weak SU(2) instanton proposed by the physicists Alexander Belavin, Alexander Markovich Polyakov, Albert Schwarz, and Yu. S. Tyupkin. This process, can in principle destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It was later shown that the process occurs rapidly at extremely high temperatures that would only have been reached shortly after the Big Bang. Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles, whose production is expected to be inefficient. Another method of completely annihilating matter uses the gravitational field of black holes. The British theoretical physicist Stephen Hawking theorized it is possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, larger black holes radiate less than smaller ones, so that usable power can only be produced by small black holes. Extension for systems in motion Unlike a system's energy in an inertial frame, the relativistic energy () of a system depends on both the rest mass () and the total momentum of the system. The extension of Einstein's equation to these systems is given by: or where the term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude, if only a single particle is considered. This equation is called the energy–momentum relation and reduces to when the momentum term is zero. 
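A brief sketch of the relation just described, assuming its standard form E² = (p c)² + (m c²)², with the two limiting cases made explicit; the one-kilogram and one-kilogram-metre-per-second inputs are arbitrary illustrative values.

import math

c = 299_792_458.0  # m/s

def total_energy(rest_mass_kg, momentum_kg_m_per_s):
    """Energy-momentum relation: E**2 = (p * c)**2 + (m * c**2)**2."""
    return math.sqrt((momentum_kg_m_per_s * c) ** 2 + (rest_mass_kg * c ** 2) ** 2)

print(total_energy(1.0, 0.0))  # zero momentum: reduces to the rest energy m * c**2
print(total_energy(0.0, 1.0))  # zero rest mass (a photon): reduces to E = p * c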
For photons where , the equation reduces to . Low-speed approximation Using the Lorentz factor, , the energy–momentum can be rewritten as and expanded as a power series: For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because is small. For low speeds, all but the first two terms can be ignored: In classical mechanics, both the term and the high-speed corrections are ignored. The initial value of the energy is arbitrary, as only the change in energy can be measured and so the term is ignored in classical physics. While the higher-order terms become important at higher speeds, the Newtonian equation is a highly accurate low-speed approximation; adding in the third term yields: . The difference between the two approximations is given by , a number very small for everyday objects. In 2018 NASA announced the Parker Solar Probe was the fastest ever, with a speed of . The difference between the approximations for the Parker Solar Probe in 2018 is , which accounts for an energy correction of four parts per hundred million. The gravitational constant, in contrast, has a standard relative uncertainty of about . Applications Application to nuclear physics The nuclear binding energy is the minimum energy that is required to disassemble the nucleus of an atom into its component parts. The mass of an atom is less than the sum of the masses of its constituents due to the attraction of the strong nuclear force. The difference between the two masses is called the mass defect and is related to the binding energy through Einstein's formula. The principle is used in modeling nuclear fission reactions, and it implies that a great amount of energy can be released by the nuclear fission chain reactions used in both nuclear weapons and nuclear power. A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by ), which was given off as heat when the molecule formed (this heat had mass). Similarly, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion; in this case the mass difference is the energy and heat that is released when the dynamite explodes. Such a change in mass may only happen when the system is open, and the energy and mass are allowed to escape. Thus, if a stick of dynamite is blown up in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton () nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If a transparent window passing only electromagnetic radiation were opened in such an ideal box after the explosion, and a beam of X-rays and other lower-energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process, to room temperature. 
However, any surrounding mass that absorbed the X-rays (and other "heat") would gain this gram of mass from the resulting heating, thus, in this case, the mass "loss" would represent merely its relocation. Practical examples Einstein used the centimetre–gram–second system of units (cgs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: . In the SI system (expressing the ratio in joules per kilogram using the value of in metres per second): (≈ 9.0 × 1016 joules per kilogram). So the energy equivalent of one kilogram of mass is 89.9 petajoules 25.0 billion kilowatt-hours (≈ 25,000 GW·h) 21.5 trillion kilocalories (≈ 21 Pcal) 85.2 trillion BTUs 0.0852 quads or the energy released by combustion of the following: 21 500 kilotons of TNT-equivalent energy (≈ 21 Mt) litres or US gallons of automotive gasoline Any time energy is released, the process can be evaluated from an perspective. For instance, the "gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing gram of mass. Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged: A spring's mass increases whenever it is put into compression or tension. Its mass increase arises from the increased potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring. Raising the temperature of an object (increasing its thermal energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum and iridium. If its temperature is allowed to change by 1 °C, its mass changes by 1.5 picograms (1 pg = ). A spinning ball has greater mass than when it is not spinning. Its increase of mass is exactly the equivalent of the mass of energy of rotation, which is itself the sum of the kinetic energies of all the moving parts of the ball. For example, the Earth itself is more massive due to its rotation, than it would be with no rotation. The rotational energy of the Earth is greater than 1024 Joules, which is over 107 kg. History While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass, though nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields. Once discovered, Einstein's formula was initially written in many different notations, and its interpretation and justification was further developed in several steps. Developments prior to Einstein Eighteenth century theories on the correlation of mass and energy included that devised by the English scientist Isaac Newton in 1717, who speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks: "Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?" 
Swedish scientist and theologian Emanuel Swedenborg, in his Principia of 1734 theorized that all matter is ultimately composed of dimensionless points of "pure and total motion". He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it. During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 the Russian physicist and mathematician Nikolay Umov pointed out a relation between mass and energy for ether in the form of , where . English engineer Samuel Tolver Preston in 1875 and the Italian industrialist and geologist Olinto De Pretto in 1903, following physicist Georges-Louis Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed . Each of these particles has a kinetic energy of up to a small numerical factor, giving a mass–energy relation. In 1905, independently of Einstein, French polymath Gustave Le Bon speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics. Electromagnetic mass There were many attempts in the 19th and the beginning of the 20th century—like those of British physicists J. J. Thomson in 1881 and Oliver Heaviside in 1889, and George Frederick Charles Searle in 1897, German physicists Wilhelm Wien in 1900 and Max Abraham in 1902, and the Dutch physicist Hendrik Antoon Lorentz in 1904—to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz in 1904 gave the following expressions for longitudinal and transverse electromagnetic mass: , where Another way of deriving a type of electromagnetic mass was based on the concept of radiation pressure. In 1900, French polymath Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum and mass By that, Poincaré tried to save the center of mass theorem in Lorentz's theory, though his treatment led to radiation paradoxes. Austrian physicist Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass" to the cavity's mass. He argued that this implies mass dependence on temperature as well. Einstein: mass–energy equivalence Einstein did not write the exact formula in his 1905 Annus Mirabilis paper "Does the Inertia of an object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy by emitting light, its mass diminishes by . This formulation relates only a change in mass to a change in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He has stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy]. 
We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone." Mass–velocity relationship In developing special relativity, Einstein found that the kinetic energy of a moving body is with the velocity, the rest mass, and the Lorentz factor. He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle: Without this second term, there would be an additional contribution in the energy when the particle is not moving. Einstein's view on mass Einstein, following Lorentz and Abraham, used velocity- and direction-dependent mass concepts in his 1905 electrodynamics paper and in another paper in 1906. In Einstein's first 1905 paper on , he treated as what would now be called the rest mass, and it has been noted that in his later years he did not like the idea of "relativistic mass". In older physics terminology, relativistic energy is used in lieu of relativistic mass and the term "mass" is reserved for the rest mass. Historically, there has been considerable debate over the use of the concept of "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. One view is that only rest mass is a viable concept and is a property of the particle; while relativistic mass is a conglomeration of particle properties and properties of spacetime. Another view, attributed to Norwegian physicist Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection. Einstein's 1905 derivation Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles: . Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", one of his Annus Mirabilis papers. Here, Einstein used to represent the speed of light in vacuum and to represent the energy lost by a body in the form of radiation. Consequently, the equation was not originally written as a formula but as a sentence in German saying that "if a body gives off the energy in the form of radiation, its mass diminishes by ." A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. Einstein used a body emitting two light pulses in opposite directions, having energies of before and after the emission as seen in its rest frame. As seen from a moving frame, becomes and becomes . Einstein obtained, in modern notation: . He then argued that can only differ from the kinetic energy by an additive constant, which gives . Neglecting effects higher than third order in after a Taylor series expansion of the right side of this yields: Einstein concluded that the emission reduces the body's mass by , and that the mass of a body is a measure of its energy content. The correctness of Einstein's 1905 derivation of was criticized by German theoretical physicist Max Planck in 1907, who argued that it is only valid to first approximation. 
Another criticism was formulated by American physicist Herbert Ives in 1952 and the Israeli physicist Max Jammer in 1961, asserting that Einstein's derivation is based on begging the question. Other scholars, such as American and Chilean philosophers John Stachel and Roberto Torretti, have argued that Ives' criticism was wrong, and that Einstein's derivation was correct. American physics writer Hans Ohanian, in 2008, agreed with Stachel/Torretti's criticism of Ives, though he argued that Einstein's derivation was wrong for other reasons. Relativistic center-of-mass theorem of 1906 Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred to Poincaré's 1900 paper and wrote: "Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré2, for the sake of clarity I will not rely on that work." In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetual motion problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's , because mass conservation appears as a special case of the energy conservation law. Further developments There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for energy of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be (where is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula , with being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as in June 1907, where is the pressure and the volume to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form and concluded: "A mass is equivalent, as regards inertia, to a quantity of energy . […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: and , with being the relativistic energy (the energy of an object when the object is moving), is the rest energy (the energy when not moving), is the relativistic mass (the rest mass and the extra mass gained when moving), and is the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: and , with being the total energy (rest energy plus kinetic energy) of a moving material point, its rest energy, the relativistic mass, and the invariant mass. In 1911, German physicist Max von Laue gave a more comprehensive proof of from the stress–energy tensor, which was later generalized by German mathematician Felix Klein in 1918. 
Einstein returned to the topic once again after World War II and this time he wrote in the title of his article intended as an explanation for a general reader by analogy. Alternative version An alternative version of Einstein's thought experiment was proposed by American theoretical physicist Fritz Rohrlich in 1990, who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass . If the body is examined in a frame moving with nonrelativistic velocity , it is no longer at rest and in the moving frame it has momentum . Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy . In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum. However, if the same process is considered in a frame that moves with velocity to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blue shifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right. The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox. The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor . The momentum of the light is its energy divided by , and it is increased by a factor of . So the right-moving light is carrying an extra momentum given by: The left-moving light carries a little less momentum, by the same amount . So the total right-momentum in both light pulses is twice . This is the right-momentum that the object lost. The momentum of the object in the moving frame after the emission is reduced to this amount: So the change in the object's mass is equal to the total energy lost divided by . Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass. Radioactivity and nuclear energy It was quickly noted after the discovery of radioactivity in 1897 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change, raising the question of where the energy comes from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by New Zealand physicist Ernest Rutherford and British radiochemist Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904: "If it were ever found possible to control at will the rate of disintegration of the radio-elements, an enormous amount of energy could be obtained from a small quantity of matter." Einstein's equation does not explain the large energies released in radioactive decay, but can be used to quantify them. 
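As an illustration of using the equation to quantify such an energy, the sketch below computes the mass defect of the alpha decay of uranium-238 and converts it to MeV. The atomic masses and the 931.494 MeV per atomic mass unit conversion are approximate tabulated values supplied here for illustration, not figures from the article.

AMU_TO_MEV = 931.494  # approximate energy equivalent of one atomic mass unit, in MeV

# Approximate atomic masses in unified atomic mass units (illustrative values)
m_U238  = 238.050788   # uranium-238
m_Th234 = 234.043601   # thorium-234
m_He4   =   4.002603   # helium-4 (alpha particle)

mass_defect = m_U238 - (m_Th234 + m_He4)          # mass lost in the decay
print(f"Q = {mass_defect * AMU_TO_MEV:.2f} MeV")  # roughly 4.3 MeV per decay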
The theoretical explanation for radioactive decay is given by the nuclear forces responsible for holding atoms together, though these forces were still unknown in 1905. The enormous energy released from radioactive decay had previously been measured by Rutherford and was much more easily measured than the small change in the gross mass of materials as a result. Einstein's equation, by theory, can give these energies by measuring mass differences before and after reactions, but in practice, these mass differences in 1905 were still too small to be measured in bulk. Prior to this, the ease of measuring radioactive decay energies with a calorimeter was thought possibly likely to allow measurement of changes in mass difference, as a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which was known by then to release enough energy to possibly be "weighed," when missing from the system. However, radioactivity seemed to proceed at its own unalterable pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality, proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine." This outlook changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium-7 plus protons giving rise to two alpha particles, allowed Einstein's equation to be tested to an error of ±0.5%. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles. After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured on page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. president in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, on account of Einstein not being given sufficient information to fully work on the problem. While is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known, and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). 
The physicist and Manhattan Project participant Robert Serber noted that somehow "the popular notion took hold long ago that Einstein's theory of relativity, in particular his equation , plays some essential role in the theory of fission. Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." There are other views on the equation's importance to nuclear reactions. In late 1938, the Austrian-Swedish and British physicists Lise Meitner and Otto Robert Frisch—while on a winter walk during which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission—directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements. These, together with use of allowed them to realize on the spot that the basic fission process was energetically possible. Einstein's equation written According to the Einstein Papers Project at the California Institute of Technology and Hebrew University of Jerusalem, there remain only four known copies of this equation as written by Einstein. One of these is a letter written in German to Ludwik Silberstein, which was in Silberstein's archives, and sold at auction for $1.2 million, RR Auction of Boston, Massachusetts said on May 21, 2021. See also Notes References External links Einstein on the Inertia of Energy – MathPages Einstein-on film explaining a mass energy equivalence Mass and Energy – Conversations About Science with Theoretical Physicist Matt Strassler The Equivalence of Mass and Energy – Entry in the Stanford Encyclopedia of Philosophy 1905 introductions 1905 in science 1905 in Germany Albert Einstein Energy (physics) Equations Mass Special relativity
Mass–energy equivalence
[ "Physics", "Mathematics" ]
8,862
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Mathematical objects", "Equations", "Size", "Special relativity", "Energy (physics)", "Theory of relativity", "Wikipedia categories named after physical quantities", "Matter" ]
422,567
https://en.wikipedia.org/wiki/Fehling%27s%20solution
In organic chemistry, Fehling's solution is a chemical reagent used to differentiate between water-soluble carbohydrate and ketone () functional groups, and as a test for reducing sugars and non-reducing sugars, supplementary to the Tollens' reagent test. The test was developed by German chemist Hermann von Fehling in 1849. Laboratory preparation Fehling's solution is prepared by combining two separate solutions: Fehling's A, which is a deep blue aqueous solution of copper(II) sulfate, and Fehling's B, which is a colorless solution of aqueous potassium sodium tartrate (also known as Rochelle salt) made strongly alkaline with sodium hydroxide. These two solutions, stable separately, are combined when needed for the test because the copper(II) complex formed by their combination is not stable: it slowly decomposes into copper hydroxide in the alkaline conditions. The active reagent is a tartrate complex of Cu2+, which serves as an oxidizing agent. The tartrate serves as a ligand. However, the coordination chemistry is complex and various species with different metal to ligand ratio have been determined. Other methods of preparing comparable cupric-ion test-reagent solutions were developed at about the same time as Fehling's. These include the Viollette solution (eponymous for (1823–1894)) and the Soxhlet solution (eponymous for Franz Ritter von Soxhlet (1848–1926)), both containing tartrate, and Soldaïni's solution (eponymous for Arturo Soldaïni), which instead contains carbonate. Barfoed's test is also related and similar to Fehling's test (eponymous for Christen Thomsen Barfoed (1815–1889)). Use of the reagent Fehling's solution can be used to distinguish aldehyde vs ketone functional groups. The compound to be tested is added to the Fehling's solution and the mixture is heated. Aldehydes are oxidized, giving a positive result, but ketones do not react, unless they are α-hydroxy ketones. The bistartratocuprate(II) complex oxidizes the aldehyde to a carboxylate anion, and in the process the copper(II) ions of the complex are reduced to copper(I) ions. Red copper(I) oxide then precipitates out of the reaction mixture, which indicates a positive result i.e. that redox has taken place (this is the same positive result as with Benedict's solution). Fehling's test can be used as a generic test for monosaccharides and other reducing sugars (e.g., maltose). It will give a positive result for aldose monosaccharides (due to the oxidisable aldehyde group) but also for ketose monosaccharides, as they are converted to aldoses by the base in the reagent, and then give a positive result. Fehling's can be used to screen for glucose in urine, thus detecting diabetes. Another use is in the breakdown of starch to convert it to glucose syrup and maltodextrins in order to measure the amount of reducing sugar, thus revealing the dextrose equivalent (DE) of the starch sugar. Formic acid (HCO2H) also gives a positive Fehling's test result, as it does with Tollens' (eponymous for Bernhard Christian Gottfried Tollens (1841 – 1918)) test and Benedict's solution also. The positive tests are consistent with it being readily oxidizable to carbon dioxide. The solution cannot differentiate between benzaldehyde and acetone. 
Net reaction The net reaction between an aldehyde and the copper(II) ions in Fehling's solution may be written as: RCHO + 2 Cu²⁺ + 5 OH⁻ → RCOO⁻ + Cu₂O + 3 H₂O or, with the tartrate ligands included: RCHO + 2 Cu(C₄H₄O₆)₂²⁻ + 5 OH⁻ → RCOO⁻ + Cu₂O + 4 C₄H₄O₆²⁻ + 3 H₂O See also Barfoed's test References External links Biochemistry detection methods Carbohydrate methods Chemical tests Coordination complexes Copper(II) compounds Oxidizing agents Analytical reagents
Fehling's solution
[ "Chemistry", "Biology" ]
947
[ "Biochemistry methods", "Redox", "Coordination complexes", "Coordination chemistry", "Biochemistry detection methods", "Oxidizing agents", "Chemical tests", "Carbohydrate chemistry", "Carbohydrate methods", "Analytical reagents" ]
422,711
https://en.wikipedia.org/wiki/Equivariant%20map
In mathematics, equivariance is a form of symmetry for functions from one space with symmetry to another (such as symmetric spaces). A function is said to be an equivariant map when its domain and codomain are acted on by the same symmetry group, and when the function commutes with the action of the group. That is, applying a symmetry transformation and then computing the function produces the same result as computing the function and then applying the transformation. Equivariant maps generalize the concept of invariants, functions whose value is unchanged by a symmetry transformation of their argument. The value of an equivariant map is often (imprecisely) called an invariant. In statistical inference, equivariance under statistical transformations of data is an important property of various estimation methods; see invariant estimator for details. In pure mathematics, equivariance is a central object of study in equivariant topology and its subtopics equivariant cohomology and equivariant stable homotopy theory. Examples Elementary geometry In the geometry of triangles, the area and perimeter of a triangle are invariants under Euclidean transformations: translating, rotating, or reflecting a triangle does not change its area or perimeter. However, triangle centers such as the centroid, circumcenter, incenter and orthocenter are not invariant, because moving a triangle will also cause its centers to move. Instead, these centers are equivariant: applying any Euclidean congruence (a combination of a translation and rotation) to a triangle, and then constructing its center, produces the same point as constructing the center first, and then applying the same congruence to the center. More generally, all triangle centers are also equivariant under similarity transformations (combinations of translation, rotation, reflection, and scaling), and the centroid is equivariant under affine transformations. The same function may be an invariant for one group of symmetries and equivariant for a different group of symmetries. For instance, under similarity transformations instead of congruences the area and perimeter are no longer invariant: scaling a triangle also changes its area and perimeter. However, these changes happen in a predictable way: if a triangle is scaled by a factor of , the perimeter also scales by and the area scales by . In this way, the function mapping each triangle to its area or perimeter can be seen as equivariant for a multiplicative group action of the scaling transformations on the positive real numbers. Statistics Another class of simple examples comes from statistical estimation. The mean of a sample (a set of real numbers) is commonly used as a central tendency of the sample. It is equivariant under linear transformations of the real numbers, so for instance it is unaffected by the choice of units used to represent the numbers. By contrast, the mean is not equivariant with respect to nonlinear transformations such as exponentials. The median of a sample is equivariant for a much larger group of transformations, the (strictly) monotonic functions of the real numbers. This analysis indicates that the median is more robust against certain kinds of changes to a data set, and that (unlike the mean) it is meaningful for ordinal data. The concepts of an invariant estimator and equivariant estimator have been used to formalize this style of analysis. 
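The statistical example above can be checked directly in a few lines. The sketch below verifies that the sample median commutes with a strictly monotonic transformation (the exponential) while the sample mean does not; the particular sample values are arbitrary.

import math
import statistics

sample = [1.0, 2.0, 4.0, 8.0, 16.0]
transformed = [math.exp(x) for x in sample]

# The median is equivariant under any strictly increasing map:
print(math.isclose(statistics.median(transformed),
                   math.exp(statistics.median(sample))))   # True

# The mean is equivariant only under linear (affine) maps, so this fails:
print(math.isclose(statistics.mean(transformed),
                   math.exp(statistics.mean(sample))))     # False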
Representation theory In the representation theory of finite groups, a vector space equipped with a group that acts by linear transformations of the space is called a linear representation of the group. A linear map that commutes with the action is called an intertwiner. That is, an intertwiner is just an equivariant linear map between two representations. Alternatively, an intertwiner for representations of a group over a field is the same thing as a module homomorphism of -modules, where is the group ring of G. Under some conditions, if X and Y are both irreducible representations, then an intertwiner (other than the zero map) only exists if the two representations are equivalent (that is, are isomorphic as modules). That intertwiner is then unique up to a multiplicative factor (a non-zero scalar from ). These properties hold when the image of is a simple algebra, with centre (by what is called Schur's lemma: see simple module). As a consequence, in important cases the construction of an intertwiner is enough to show the representations are effectively the same. Formalization Equivariance can be formalized using the concept of a -set for a group . This is a mathematical object consisting of a mathematical set and a group action (on the left) of on . If and are both -sets for the same group , then a function is said to be equivariant if for all and all . If one or both of the actions are right actions the equivariance condition may be suitably modified: ; (right-right) ; (right-left) ; (left-right) Equivariant maps are homomorphisms in the category of G-sets (for a fixed G). Hence they are also known as G-morphisms, G-maps, or G-homomorphisms. Isomorphisms of G-sets are simply bijective equivariant maps. The equivariance condition can also be understood as the following commutative diagram. Note that denotes the map that takes an element and returns . Generalization Equivariant maps can be generalized to arbitrary categories in a straightforward manner. Every group G can be viewed as a category with a single object (morphisms in this category are just the elements of G). Given an arbitrary category C, a representation of G in the category C is a functor from G to C. Such a functor selects an object of C and a subgroup of automorphisms of that object. For example, a G-set is equivalent to a functor from G to the category of sets, Set, and a linear representation is equivalent to a functor to the category of vector spaces over a field, VectK. Given two representations, ρ and σ, of G in C, an equivariant map between those representations is simply a natural transformation from ρ to σ. Using natural transformations as morphisms, one can form the category of all representations of G in C. This is just the functor category CG. For another example, take C = Top, the category of topological spaces. A representation of G in Top is a topological space on which G acts continuously. An equivariant map is then a continuous map f : X → Y between representations which commutes with the action of G. See also Curtis–Hedlund–Lyndon theorem, a characterization of cellular automata in terms of equivariant maps References Group actions (mathematics) Representation theory Symmetry
Equivariant map
[ "Physics", "Mathematics" ]
1,460
[ "Group actions", "Fields of abstract algebra", "Geometry", "Representation theory", "Symmetry" ]
423,207
https://en.wikipedia.org/wiki/Electron%20spiral%20toroid
Electron Power Systems, Inc. of Acton, Massachusetts, United States, claims to have developed a technology for maintaining small stable plasma toroids called electron spiral toroids (ESTs) which remain stable in Earth's atmosphere without the use of any special magnetic fields. They claim to have created these toroids in the laboratory, and to have developed a mathematical model for them that is similar to some explanations for ball lightning. An EST may be a special case of a spheromak. Because of the EST's claimed lack of need for an external stabilizing magnetic field, EPS hope to be able to create small efficient fusion reactors by colliding magnetically accelerated ESTs together at speeds high enough to induce ballistic nuclear fusion. Their model for their reported toroidal phenomena was extensively criticised by a December 2000 technical report commissioned by NASA. EPS claim to have met the criticisms of the NASA report and to have demonstrated that their model is mathematically sound, and state that they are ready to proceed with development of their technology.
References
Jean-Luc Cambier and David A. Micheletti. Theoretical Analysis of the Electron Spiral Toroid Concept, NASA/CR-2000-210654.
C. Chen, R. Pakter, and D. C. Seward. Equilibrium and stability properties of self-organized electron spiral toroids. Physics of Plasmas, Vol. 8(10), pp. 4441–4449, October 2001.
External links
Electron Power Systems: The official web site of Electron Power Systems.
Plasma technology and applications
Electron spiral toroid
[ "Physics" ]
311
[ "Plasma technology and applications", "Plasma physics stubs", "Plasma physics" ]
423,387
https://en.wikipedia.org/wiki/Antifreeze%20protein
Antifreeze proteins (AFPs) or ice structuring proteins refer to a class of polypeptides produced by certain animals, plants, fungi and bacteria that permit their survival in temperatures below the freezing point of water. AFPs bind to small ice crystals to inhibit the growth and recrystallization of ice that would otherwise be fatal. There is also increasing evidence that AFPs interact with mammalian cell membranes to protect them from cold damage. This work suggests the involvement of AFPs in cold acclimatization. Non-colligative properties Unlike the widely used automotive antifreeze, ethylene glycol, AFPs do not lower freezing point in proportion to concentration. Rather, they work in a noncolligative manner. This phenomenon allows them to act as an antifreeze at concentrations 1/300th to 1/500th of those of other dissolved solutes. Their low concentration minimizes their effect on osmotic pressure. The unusual properties of AFPs are attributed to their selective affinity for specific crystalline ice forms and the resulting blockade of the ice-nucleation process. Thermal hysteresis AFPs create a difference between the melting point and freezing point (busting temperature of AFP bound ice crystal) known as thermal hysteresis. The addition of AFPs at the interface between solid ice and liquid water inhibits the thermodynamically favored growth of the ice crystal. Ice growth is kinetically inhibited by the AFPs covering the water-accessible surfaces of ice. Thermal hysteresis is easily measured in the lab with a nanolitre osmometer. Organisms differ in their values of thermal hysteresis. The maximum level of thermal hysteresis shown by fish AFP is approximately −3.5 °C (Sheikh Mahatabuddin et al., SciRep)(29.3 °F). In contrast, aquatic organisms are exposed only to −1 to −2 °C below freezing. During the extreme winter months, the spruce budworm resists freezing at temperatures approaching −30 °C. The rate of cooling can influence the thermal hysteresis value of AFPs. Rapid cooling can substantially decrease the nonequilibrium freezing point, and hence the thermal hysteresis value. Consequently, organisms cannot necessarily adapt to their subzero environment if the temperature drops abruptly. Freeze tolerance versus freeze avoidance Species containing AFPs may be classified as Freeze avoidant: These species are able to prevent their body fluids from freezing altogether. Generally, the AFP function may be overcome at extremely cold temperatures, leading to rapid ice growth and death. Freeze tolerant: These species are able to survive body fluid freezing. Some freeze tolerant species are thought to use AFPs as cryoprotectants to prevent the damage of freezing, but not freezing altogether. The exact mechanism is still unknown. However, it is thought AFPs may inhibit recrystallization and stabilize cell membranes to prevent damage by ice. They may work in conjunction with ice nucleating proteins (INPs) to control the rate of ice propagation following freezing. Diversity There are many known nonhomologous types of AFPs. Fish AFPs Antifreeze glycoproteins or AFGPs are found in Antarctic notothenioids and northern cod. They are 2.6-3.3 kD. AFGPs evolved separately in notothenioids and northern cod. In notothenioids, the AFGP gene arose from an ancestral trypsinogen-like serine protease gene. Type I AFP is found in winter flounder, longhorn sculpin and shorthorn sculpin. It is the best documented AFP because it was the first to have its three-dimensional structure determined. 
Type I AFP consists of a single, long, amphipathic alpha helix, about 3.3-4.5 kD in size. There are three faces to the 3D structure: the hydrophobic, hydrophilic, and Thr-Asx face. Type I-hyp AFP (where hyp stands for hyperactive) is found in several righteye flounders. It is approximately 32 kD (two 17 kD dimeric molecules). The protein was isolated from the blood plasma of winter flounder. It is considerably better at depressing freezing temperature than most fish AFPs. The ability is partially derived from its many repeats of the Type I ice-binding site. Type II AFPs are found in sea raven, smelt and herring. They are cysteine-rich globular proteins containing five disulfide bonds. Type II AFPs likely evolved from calcium dependent (c-type) lectins. Sea ravens, smelt, and herring are quite divergent lineages of teleost. If the AFP gene were present in the most recent common ancestor of these lineages, it is peculiar that the gene is scattered throughout those lineages, present in some orders and absent in others. It has been suggested that lateral gene transfer could be attributed to this discrepancy, such that the smelt acquired the type II AFP gene from the herring. Type III AFPs are found in Antarctic eelpout. They exhibit similar overall hydrophobicity at their ice-binding surfaces to type I AFPs. They are approximately 6 kD in size. Type III AFPs likely evolved from a sialic acid synthase (SAS) gene present in Antarctic eelpout. Through a gene duplication event, this gene—which has been shown to exhibit some ice-binding activity of its own—evolved into an effective AFP gene by loss of the N-terminal part. Type IV AFPs are found in longhorn sculpins. They are alpha helical proteins rich in glutamate and glutamine. This protein is approximately 12 kDa in size and consists of a 4-helix bundle. Its only posttranslational modification is a pyroglutamate residue, a cyclized glutamine residue at its N-terminus.
Plant AFPs
The classification of AFPs became more complicated when antifreeze proteins from plants were discovered. Plant AFPs are rather different from the other AFPs in the following aspects: They have much weaker thermal hysteresis activity when compared to other AFPs. Their physiological function is likely in inhibiting the recrystallization of ice rather than in preventing ice formation. Most of them are evolved pathogenesis-related proteins, sometimes retaining antifungal properties.
Insect AFPs
There are a number of AFPs found in insects, including those from Dendroides, Tenebrio and Rhagium beetles, spruce budworm and pale beauty moths, and midges (same order as flies). Insect AFPs share certain similarities, with most having higher activity (i.e. greater thermal hysteresis value, termed hyperactive) and a repetitive structure with a flat ice-binding surface. Those from the closely related Tenebrio and Dendroides beetles are homologous and each 12–13 amino-acid repeat is stabilized by an internal disulfide bond. Isoforms have between 6 and 10 of these repeats that form a coil, or beta-solenoid. One side of the solenoid has a flat ice-binding surface that consists of a double row of threonine residues. Other beetles (genus Rhagium) have longer repeats without internal disulfide bonds that form a compressed beta-solenoid (beta sandwich) with four rows of threonine residues, and this AFP is structurally similar to that modelled for the non-homologous AFP from the pale beauty moth.
In contrast, the AFP from the spruce budworm moth is a solenoid that superficially resembles the Tenebrio protein, with a similar ice-binding surface, but it has a triangular cross-section, with longer repeats that lack the internal disulfide bonds. The AFP from midges is structurally similar to those from Tenebrio and Dendroides, but the disulfide-braced beta-solenoid is formed from shorter 10-amino-acid repeats, and instead of threonine, the ice-binding surface consists of a single row of tyrosine residues. Springtails (Collembola) are not insects, but like insects, they are arthropods with six legs. A species found in Canada, which is often called a "snow flea", produces hyperactive AFPs. Although they are also repetitive and have a flat ice-binding surface, the similarity ends there. Around 50% of the residues are glycine (Gly), with repeats of Gly-Gly-X or Gly-X-X, where X is any amino acid. Each 3-amino-acid repeat forms one turn of a polyproline type II helix. The helices then fold together to form a bundle that is two helices thick, with an ice-binding face dominated by small hydrophobic residues like alanine, rather than threonine. Other insects, such as an Alaskan beetle, produce hyperactive antifreezes that are even less similar, as they are polymers of sugars (xylomannan) rather than polymers of amino acids (proteins). Taken together, this suggests that most of the AFPs and antifreezes arose after the lineages that gave rise to these various insects diverged. The similarities they do share are the result of convergent evolution.
Sea ice organism AFPs
Many microorganisms living in sea ice possess AFPs that belong to a single family. The diatoms Fragilariopsis cylindrus and F. curta play a key role in polar sea ice communities, dominating the assemblages both in the platelet layer and within pack ice. AFPs are widespread in these species, and the presence of AFP genes as a multigene family indicates the importance of this group for the genus Fragilariopsis. AFPs identified in F. cylindrus belong to an AFP family which is represented in different taxa and can be found in other organisms related to sea ice (Colwellia spp., Navicula glaciei, Chaetoceros neogracile, Stephos longipes and Leucosporidium antarcticum) and Antarctic inland ice bacteria (Flavobacteriaceae), as well as in cold-tolerant fungi (Typhula ishikariensis, Lentinula edodes and Flammulina populicola). Several structures for sea ice AFPs have been solved. This family of proteins folds into a beta helix that forms a flat ice-binding surface. Unlike the other AFPs, there is not a singular sequence motif for the ice-binding site. An AFP found in the metagenome of the ciliate Euplotes focardii and associated psychrophilic bacteria has an efficient ice-recrystallization inhibition ability. A concentration of 1 μM of the Euplotes focardii consortium ice-binding protein (EfcIBP) is enough for total inhibition of ice recrystallization at –7.4 °C. This ice-recrystallization inhibition ability helps bacteria to tolerate ice rather than preventing the formation of ice. EfcIBP also produces a thermal hysteresis gap, but this ability is not as efficient as its ice-recrystallization inhibition ability. EfcIBP helps to protect both purified proteins and whole bacterial cells at freezing temperatures. Green fluorescent protein is functional after several cycles of freezing and melting when incubated with EfcIBP. Escherichia coli survives longer periods at 0 °C when the efcIBP gene is inserted into its genome.
EfcIBP has a typical AFP structure consisting of multiple beta-sheets and an alpha-helix. Also, all the ice-binding polar residues are at the same site of the protein.
Evolution
The remarkable diversity and distribution of AFPs suggest the different types evolved recently in response to sea level glaciation occurring 1–2 million years ago in the Northern hemisphere and 10-30 million years ago in Antarctica. Data collected from deep sea ocean drilling has revealed that the Antarctic Circumpolar Current formed over 30 million years ago. The cooling of the Antarctic caused by this current led to a mass extinction of teleost species that were unable to withstand freezing temperatures. Notothenioid species with the antifreeze glycoprotein were able to survive the glaciation event and diversify into new niches. This independent development of similar adaptations is referred to as convergent evolution. Evidence for convergent evolution in Northern cod (Gadidae) and Notothenioids is supported by the findings of different spacer sequences and different organization of introns and exons as well as non-matching AFGP tripeptide sequences, which emerged from duplications of short ancestral sequences which were differently permuted (for the same tripeptide) by each group. These groups diverged approximately 7-15 million years ago. Shortly after (5-15 mya), the AFGP gene evolved from an ancestral pancreatic trypsinogen gene in Notothenioids. AFGP and trypsinogen genes split via sequence divergence, an adaptation which occurred alongside the cooling and eventual freezing of the Antarctic Ocean. The AFGP gene in Northern cod evolved more recently (~3.2 mya), emerging from a noncoding sequence via tandem duplications of a Thr-Ala-Ala unit. Antarctic notothenioid fish and Arctic cod, Boreogadus saida, are part of two distinct orders and have very similar antifreeze glycoproteins. Although the two fish orders have similar antifreeze proteins, cod species contain arginine in AFGP, while Antarctic notothenioids do not. The role of arginine as an enhancer has been investigated in Dendroides canadensis antifreeze protein (DAFP-1) by observing the effect of a chemical modification using 1,2-cyclohexanedione. Previous research has found various enhancers of this beetle's antifreeze protein including a thaumatin-like protein and polycarboxylates. Modifications of DAFP-1 with the arginine-specific reagent resulted in partial or complete loss of thermal hysteresis in DAFP-1, indicating that arginine plays a crucial role in enhancing its ability. Different enhancer molecules of DAFP-1 have distinct thermal hysteresis activity. Amornwittawat et al. 2008 found that the number of carboxylate groups in a molecule influences the enhancing ability of DAFP-1. Optimum TH activity is correlated with a high concentration of enhancer molecules. Li et al. 1998 investigated the effects of pH and solute on thermal hysteresis in antifreeze proteins from Dendroides canadensis. TH activity of DAFP-4 was not affected by pH unless there was a low solute concentration (pH 1), in which case TH decreased. The effect of five solutes (succinate, citrate, malate, malonate, and acetate) on TH activity was reported. Among the five solutes, citrate was shown to have the greatest enhancing effect. This is an example of a proto-ORF model, a rare occurrence where new genes pre-exist as a formed open reading frame before the existence of the regulatory element needed to activate them.
In fishes, horizontal gene transfer is responsible for the presence of Type II AFP proteins in some groups without a recently shared phylogeny. In Herring and smelt, up to 98% of introns for this gene are shared; the method of transfer is assumed to occur during mating via sperm cells exposed to foreign DNA. The direction of transfer is known to be from herring to smelt as herring have 8 times the copies of AFP gene as smelt (1) and the segments of the gene in smelt house transposable elements which are otherwise characteristic of and common in herring but not found in other fishes. There are two reasons why many types of AFPs are able to carry out the same function despite their diversity: Although ice is uniformly composed of water molecules, it has many different surfaces exposed for binding. Different types of AFPs may interact with different surfaces. Although the five types of AFPs differ in their primary structure of amino acids, when each folds into a functioning protein they may share similarities in their three-dimensional or tertiary structure that facilitates the same interactions with ice. Antifreeze glycoprotein activity has been observed across several ray-finned species including eelpouts, sculpins, and cod species. Fish species that possess the antifreeze glycoprotein express different levels of protein activity. Polar cod (Boreogadus saida) exhibit similar protein activity and properties to the Antarctic species, T. borchgrevinki. Both species have higher protein activity than saffron cod (Eleginus gracilis). Ice antifreeze proteins have been reported in diatom species to help decrease the freezing point of organism's proteins. Bayer-Giraldi et al. 2010 found 30 species from distinct taxa with homologues of ice antifreeze proteins. The diversity is consistent with previous research that has observed the presence of these genes in crustaceans, insects, bacteria, and fungi. Horizontal gene transfer is responsible for the presence of ice antifreeze proteins in two sea diatom species, F. cylindrus and F. curta. Mechanisms of action AFPs are thought to inhibit ice growth by an adsorption–inhibition mechanism. They adsorb to nonbasal planes of ice, inhibiting thermodynamically-favored ice growth. The presence of a flat, rigid surface in some AFPs seems to facilitate its interaction with ice via Van der Waals force surface complementarity. Binding to ice Normally, ice crystals grown in solution only exhibit the basal (0001) and prism faces (1010), and appear as round and flat discs. However, it appears the presence of AFPs exposes other faces. It now appears the ice surface 2021 is the preferred binding surface, at least for AFP type I. Through studies on type I AFP, ice and AFP were initially thought to interact through hydrogen bonding (Raymond and DeVries, 1977). However, when parts of the protein thought to facilitate this hydrogen bonding were mutated, the hypothesized decrease in antifreeze activity was not observed. Recent data suggest hydrophobic interactions could be the main contributor. It is difficult to discern the exact mechanism of binding because of the complex water-ice interface. Currently, attempts to uncover the precise mechanism are being made through use of molecular modelling programs (molecular dynamics or the Monte Carlo method). 
Binding mechanism and antifreeze function According to the structure and function study on the antifreeze protein from Pseudopleuronectes americanus, the antifreeze mechanism of the type-I AFP molecule was shown to be due to the binding to an ice nucleation structure in a zipper-like fashion through hydrogen bonding of the hydroxyl groups of its four Thr residues to the oxygens along the direction in ice lattice, subsequently stopping or retarding the growth of ice pyramidal planes so as to depress the freeze point. The above mechanism can be used to elucidate the structure-function relationship of other antifreeze proteins with the following two common features: recurrence of a Thr residue (or any other polar amino acid residue whose side-chain can form a hydrogen bond with water) in an 11-amino-acid period along the sequence concerned, and a high percentage of an Ala residue component therein. History In the 1950s, Norwegian scientist Scholander set out to explain how Arctic fish can survive in water colder than the freezing point of their blood. His experiments led him to believe there was “antifreeze” in the blood of Arctic fish. Then in the late 1960s, animal biologist Arthur DeVries was able to isolate the antifreeze protein through his investigation of Antarctic fish. These proteins were later called antifreeze glycoproteins (AFGPs) or antifreeze glycopeptides to distinguish them from newly discovered nonglycoprotein biological antifreeze agents (AFPs). DeVries worked with Robert Feeney (1970) to characterize the chemical and physical properties of antifreeze proteins. In 1992, Griffith et al. documented their discovery of AFP in winter rye leaves. Around the same time, Urrutia, Duman and Knight (1992) documented thermal hysteresis protein in angiosperms. The next year, Duman and Olsen noted AFPs had also been discovered in over 23 species of angiosperms, including ones eaten by humans. They reported their presence in fungi and bacteria as well. Name change Recent attempts have been made to relabel antifreeze proteins as ice structuring proteins to more accurately represent their function and to dispose of any assumed negative relation between AFPs and automotive antifreeze, ethylene glycol. These two things are completely separate entities, and show loose similarity only in their function. Commercial and medical applications Numerous fields would be able to benefit from the protection of tissue damage by freezing. Businesses are currently investigating the use of these proteins in: Increasing freeze tolerance of crop plants and extending the harvest season in cooler climates Improving farm fish production in cooler climates Lengthening shelf life of frozen foods Improving cryosurgery Enhancing preservation of tissues for transplant or transfusion in medicine Therapy for hypothermia Human Cryopreservation (Cryonics) Unilever has obtained UK, US, EU, Mexico, China, Philippines, Australia and New Zealand approval to use a genetically modified yeast to produce antifreeze proteins from fish for use in ice cream production. They are labeled "ISP" or ice structuring protein on the label, instead of AFP or antifreeze protein. Recent news One recent, successful business endeavor has been the introduction of AFPs into ice cream and yogurt products. This ingredient, labelled ice-structuring protein, has been approved by the Food and Drug Administration. The proteins are isolated from fish and replicated, on a larger scale, in genetically modified yeast. 
There is concern from organizations opposed to genetically modified organisms (GMOs) who believe that antifreeze proteins may cause inflammation. Intake of AFPs in diet is likely substantial in most northerly and temperate regions already. Given the known historic consumption of AFPs, it is safe to conclude their functional properties do not impart any toxicologic or allergenic effects in humans. As well, the transgenic process of ice structuring proteins production is widely used in society. Insulin and rennet are produced using this technology. The process does not impact the product; it merely makes production more efficient and prevents the death of fish that would otherwise be killed to extract the protein. Currently, Unilever incorporates AFPs into some of its American products, including some Popsicle ice pops and a new line of Breyers Light Double Churned ice cream bars. In ice cream, AFPs allow the production of very creamy, dense, reduced fat ice cream with fewer additives. They control ice crystal growth brought on by thawing on the loading dock or kitchen table, which reduces texture quality. In November 2009, the Proceedings of the National Academy of Sciences published the discovery of a molecule in an Alaskan beetle that behaves like AFPs, but is composed of saccharides and fatty acids. A 2010 study demonstrated the stability of superheated water ice crystals in an AFP solution, showing that while the proteins can inhibit freezing, they can also inhibit melting. In 2021, EPFL and Warwick scientists have found an artificial imitation of antifreeze proteins. References Further reading External links Cold, Hard Fact: Fish Antifreeze Produced in Pancreas Antifreeze Proteins: Molecule of the Month , by David Goodsell, RCSB Protein Data Bank Proteins by function Cryobiology
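The two sequence features named in the binding-mechanism section above (a Thr-type residue recurring with an 11-amino-acid period, and a high alanine content) can be screened for computationally. The following Python sketch is illustrative only: the example sequence is an invented placeholder rather than a real AFP, and the 60% alanine threshold is an arbitrary assumption.

def has_type1_like_features(seq, period=11, min_ala_fraction=0.6):
    # Crude screen for the two features described above: a Thr (T) recurring
    # every `period` residues, and a high Ala (A) content.
    seq = seq.upper()
    ala_rich = seq.count("A") / len(seq) >= min_ala_fraction
    # look for a starting offset at which every `period`-th residue is Thr
    periodic_thr = any(
        all(seq[i] == "T" for i in range(start, len(seq), period))
        for start in range(min(period, len(seq)))
    )
    return ala_rich and periodic_thr

# Invented toy sequence: Thr every 11 residues on an alanine-rich background.
toy = ("T" + "A" * 10) * 3
print(has_type1_like_features(toy))            # True
print(has_type1_like_features("MKTAYIAKQR"))   # False: not alanine-rich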
Antifreeze protein
[ "Physics", "Chemistry", "Biology" ]
5,011
[ "Biochemistry", "Physical phenomena", "Phase transitions", "Cryobiology" ]
15,291,723
https://en.wikipedia.org/wiki/Hierarchical%20control%20system
A hierarchical control system (HCS) is a form of control system in which a set of devices and governing software is arranged in a hierarchical tree. When the links in the tree are implemented by a computer network, then that hierarchical control system is also a form of networked control system. Overview A human-built system with complex behavior is often organized as a hierarchy. For example, a command hierarchy has among its notable features the organizational chart of superiors, subordinates, and lines of organizational communication. Hierarchical control systems are organized similarly to divide the decision making responsibility. Each element of the hierarchy is a linked node in the tree. Commands, tasks and goals to be achieved flow down the tree from superior nodes to subordinate nodes, whereas sensations and command results flow up the tree from subordinate to superior nodes. Nodes may also exchange messages with their siblings. The two distinguishing features of a hierarchical control system are related to its layers. Each higher layer of the tree operates with a longer interval of planning and execution time than its immediately lower layer. The lower layers have local tasks, goals, and sensations, and their activities are planned and coordinated by higher layers which do not generally override their decisions. The layers form a hybrid intelligent system in which the lowest, reactive layers are sub-symbolic. The higher layers, having relaxed time constraints, are capable of reasoning from an abstract world model and performing planning. A hierarchical task network is a good fit for planning in a hierarchical control system. Besides artificial systems, an animal's control systems are proposed to be organized as a hierarchy. In perceptual control theory, which postulates that an organism's behavior is a means of controlling its perceptions, the organism's control systems are suggested to be organized in a hierarchical pattern as their perceptions are constructed so. Control system structure The accompanying diagram is a general hierarchical model which shows functional manufacturing levels using computerised control of an industrial control system. Referring to the diagram; Level 0 contains the field devices such as flow and temperature sensors, and final control elements, such as control valves Level 1 contains the industrialised Input/Output (I/O) modules, and their associated distributed electronic processors. Level 2 contains the supervisory computers, which collate information from processor nodes on the system, and provide the operator control screens. Level 3 is the production control level, which does not directly control the process, but is concerned with monitoring production and monitoring targets Level 4 is the production scheduling level. Applications Manufacturing, robotics and vehicles Among the robotic paradigms is the hierarchical paradigm in which a robot operates in a top-down fashion, heavy on planning, especially motion planning. Computer-aided production engineering has been a research focus at NIST since the 1980s. Its Automated Manufacturing Research Facility was used to develop a five layer production control model. In the early 1990s DARPA sponsored research to develop distributed (i.e. networked) intelligent control systems for applications such as military command and control systems. 
NIST built on earlier research to develop its Real-Time Control System (RCS) and Real-time Control System Software which is a generic hierarchical control system that has been used to operate a manufacturing cell, a robot crane, and an automated vehicle. In November 2007, DARPA held the Urban Challenge. The winning entry, Tartan Racing employed a hierarchical control system, with layered mission planning, motion planning, behavior generation, perception, world modelling, and mechatronics. Artificial intelligence Subsumption architecture is a methodology for developing artificial intelligence that is heavily associated with behavior based robotics. This architecture is a way of decomposing complicated intelligent behavior into many "simple" behavior modules, which are in turn organized into layers. Each layer implements a particular goal of the software agent (i.e. system as a whole), and higher layers are increasingly more abstract. Each layer's goal subsumes that of the underlying layers, e.g. the decision to move forward by the eat-food layer takes into account the decision of the lowest obstacle-avoidance layer. Behavior need not be planned by a superior layer, rather behaviors may be triggered by sensory inputs and so are only active under circumstances where they might be appropriate. Reinforcement learning has been used to acquire behavior in a hierarchical control system in which each node can learn to improve its behavior with experience. James Albus, while at NIST, developed a theory for intelligent system design named the Reference Model Architecture (RMA), which is a hierarchical control system inspired by RCS. Albus defines each node to contain these components. Behavior generation is responsible for executing tasks received from the superior, parent node. It also plans for, and issues tasks to, the subordinate nodes. Sensory perception is responsible for receiving sensations from the subordinate nodes, then grouping, filtering, and otherwise processing them into higher level abstractions that update the local state and which form sensations that are sent to the superior node. Value judgment is responsible for evaluating the updated situation and evaluating alternative plans. World Model is the local state that provides a model for the controlled system, controlled process, or environment at the abstraction level of the subordinate nodes. At its lowest levels, the RMA can be implemented as a subsumption architecture, in which the world model is mapped directly to the controlled process or real world, avoiding the need for a mathematical abstraction, and in which time-constrained reactive planning can be implemented as a finite-state machine. Higher levels of the RMA however, may have sophisticated mathematical world models and behavior implemented by automated planning and scheduling. Planning is required when certain behaviors cannot be triggered by current sensations, but rather by predicted or anticipated sensations, especially those that come about as result of the node's actions. See also Command hierarchy, a hierarchical power structure Hierarchical organization, a hierarchical organizational structure References Further reading External links The RCS (Realtime Control System) Library Texai An open source project to create artificial intelligence using an Albus hierarchical control system Control engineering Control theory Artificial intelligence Robot architectures
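To make the structure described above concrete, here is a minimal Python sketch of a hierarchical control node with the four components Albus describes (behavior generation, sensory perception, value judgment, and a world model), in which tasks flow down the tree and processed sensations flow back up. The class layout, method names, and the trivial two-level example are illustrative assumptions, not an actual RCS/RMA implementation.

class Node:
    # One node in a hierarchical control tree (illustrative sketch only).
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.world_model = {}                      # local state at this node's abstraction level

    def sensory_perception(self, raw):
        # group and filter subordinate sensations into a higher-level abstraction
        summary = {"source": self.name, "inputs": raw}
        self.world_model["last_sensation"] = summary
        return summary

    def value_judgment(self, plan):
        # evaluate alternative plans against the world model (trivial placeholder here)
        return plan

    def behavior_generation(self, task):
        # plan for, and issue subtasks to, subordinate nodes; collect their results
        plan = self.value_judgment([f"{task}/{c.name}" for c in self.children] or [task])
        if self.children:
            return [child.execute(sub) for child, sub in zip(self.children, plan)]
        return [f"done:{task}"]

    def execute(self, task):
        results = self.behavior_generation(task)
        return self.sensory_perception(results)    # report upward to the superior node

# A two-level hierarchy: a supervisory node commanding two subordinate cells.
root = Node("supervisor", [Node("cell-A"), Node("cell-B")])
print(root.execute("make-batch-42"))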
Hierarchical control system
[ "Mathematics", "Engineering" ]
1,208
[ "Robotics engineering", "Applied mathematics", "Control theory", "Control engineering", "Robot architectures", "Dynamical systems" ]
15,291,970
https://en.wikipedia.org/wiki/Schur%E2%80%93Weyl%20duality
Schur–Weyl duality is a mathematical theorem in representation theory that relates irreducible finite-dimensional representations of the general linear and symmetric groups. Schur–Weyl duality forms an archetypical situation in representation theory involving two kinds of symmetry that determine each other. It is named after two pioneers of representation theory of Lie groups, Issai Schur, who discovered the phenomenon, and Hermann Weyl, who popularized it in his books on quantum mechanics and classical groups as a way of classifying representations of unitary and general linear groups. Schur–Weyl duality can be proven using the double centralizer theorem.
Statement of the theorem
Consider the tensor space Cn ⊗ Cn ⊗ ... ⊗ Cn with k factors. The symmetric group Sk on k letters acts on this space (on the left) by permuting the factors,
σ·(v1 ⊗ v2 ⊗ ... ⊗ vk) = vσ⁻¹(1) ⊗ vσ⁻¹(2) ⊗ ... ⊗ vσ⁻¹(k).
The general linear group GLn of invertible n×n matrices acts on it by the simultaneous matrix multiplication,
g·(v1 ⊗ v2 ⊗ ... ⊗ vk) = gv1 ⊗ gv2 ⊗ ... ⊗ gvk.
These two actions commute, and in its concrete form, the Schur–Weyl duality asserts that under the joint action of the groups Sk and GLn, the tensor space decomposes into a direct sum of tensor products of irreducible modules (for these two groups) that actually determine each other,
Cn ⊗ Cn ⊗ ... ⊗ Cn = ⊕D πD ⊗ ρD.
The summands are indexed by the Young diagrams D with k boxes and at most n rows, and the representations πD of Sk with different D are mutually non-isomorphic, and the same is true for the representations ρD of GLn.
The abstract form of the Schur–Weyl duality asserts that the two algebras of operators on the tensor space generated by the actions of GLn and Sk are the full mutual centralizers in the algebra of the endomorphisms of the tensor space, End(Cn ⊗ Cn ⊗ ... ⊗ Cn).
Example
Suppose that k = 2 and n is greater than one. Then the Schur–Weyl duality is the statement that the space of two-tensors decomposes into symmetric and antisymmetric parts, each of which is an irreducible module for GLn:
Cn ⊗ Cn = Sym2(Cn) ⊕ Λ2(Cn).
The symmetric group S2 consists of two elements and has two irreducible representations, the trivial representation and the sign representation. The trivial representation of S2 gives rise to the symmetric tensors, which are invariant (i.e. do not change) under the permutation of the factors, and the sign representation corresponds to the skew-symmetric tensors, which flip the sign.
Proof
First consider the following setup:
G a finite group,
A = C[G] the group algebra of G,
U a finite-dimensional right A-module, and
B = EndA(U), which acts on U from the left and commutes with the right action of G (or of A). In other words, B is the centralizer of A in the endomorphism ring End(U).
The proof uses two algebraic lemmas.
Proof: Since U is semisimple by Maschke's theorem, there is a decomposition of U into simple A-modules. Then . Since A is the left regular representation of G, each simple G-module appears in A and we have that (respectively zero) if and only if correspond to the same simple factor of A (respectively otherwise). Hence, we have: Now, it is easy to see that each nonzero vector in generates the whole space as a B-module and so is simple. (In general, a nonzero module is simple if and only if each of its nonzero cyclic submodules coincides with the module.)
Proof: Let . The . Also, the image of W spans the subspace of symmetric tensors . Since , the image of spans . Since is dense in W either in the Euclidean topology or in the Zariski topology, the assertion follows.
The Schur–Weyl duality now follows. We take G to be the symmetric group Sd and U = V⊗d the d-th tensor power of a finite-dimensional complex vector space V.
Let Uλ denote the irreducible Sd-representation corresponding to a partition λ and mλ its dimension.
Then by Lemma 1 is irreducible as a -module. Moreover, when is the left semisimple decomposition, we have: , which is the semisimple decomposition as a -module. Generalizations The Brauer algebra plays the role of the symmetric group in the generalization of the Schur-Weyl duality to the orthogonal and symplectic groups. More generally, the partition algebra and its subalgebras give rise to a number of generalizations of the Schur-Weyl duality. See also Partition algebra Notes References Roger Howe, Perspectives on invariant theory: Schur duality, multiplicity-free actions and beyond. The Schur lectures (1992) (Tel Aviv), 1–182, Israel Math. Conf. Proc., 8, Bar-Ilan Univ., Ramat Gan, 1995. Issai Schur, Über eine Klasse von Matrizen, die sich einer gegebenen Matrix zuordnen lassen. Dissertation. Berlin. 76 S (1901) JMF 32.0165.04 Issai Schur, Über die rationalen Darstellungen der allgemeinen linearen Gruppe. Sitzungsberichte Akad. Berlin 1927, 58–75 (1927) JMF 53.0108.05 Representation theory of groups Hermann Weyl, The Classical Groups. Their Invariants and Representations. Princeton University Press, Princeton, N.J., 1939. xii+302 pp. External links How to constructively/combinatorially prove Schur-Weyl duality? Representation theory Tensors Issai Schur
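The k = 2 example above can be verified numerically: the swap action of S2 commutes with the diagonal GLn action, and Cn ⊗ Cn splits into symmetric and antisymmetric subspaces of dimensions n(n+1)/2 and n(n−1)/2. The following NumPy sketch checks this; the choice n = 3 and the use of a random matrix g are assumptions made only for the demonstration.

import numpy as np

n = 3
rng = np.random.default_rng(0)
g = rng.standard_normal((n, n))        # a generic element of GL_n (invertible with probability 1)

# Swap operator P on the tensor square: P(v tensor w) = w tensor v, as an n^2 x n^2 matrix.
P = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        P[j * n + i, i * n + j] = 1.0

G = np.kron(g, g)                      # simultaneous action g tensor g on the tensor square

# The two actions commute:
assert np.allclose(P @ G, G @ P)

# Projectors onto the symmetric and antisymmetric parts, and their dimensions:
sym = (np.eye(n * n) + P) / 2
alt = (np.eye(n * n) - P) / 2
assert round(np.trace(sym)) == n * (n + 1) // 2    # dim Sym^2 = 6 for n = 3
assert round(np.trace(alt)) == n * (n - 1) // 2    # dim Lambda^2 = 3 for n = 3

# Each part is preserved by GL_n: the projectors commute with g tensor g.
assert np.allclose(sym @ G, G @ sym)
print("Schur-Weyl k = 2 checks passed")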
Schur–Weyl duality
[ "Mathematics", "Engineering" ]
1,139
[ "Representation theory", "Fields of abstract algebra", "Tensors" ]
15,292,435
https://en.wikipedia.org/wiki/Thermally%20stimulated%20current%20spectroscopy
Thermally stimulated current (TSC) spectroscopy (not to be confused with thermally stimulated depolarization current) is an experimental technique which is used to study energy levels in semiconductors or insulators (organic or inorganic). Energy levels are first filled either by optical or electrical injection usually at a relatively low temperature, subsequently electrons or holes are emitted by heating to a higher temperature. A curve of emitted current will be recorded and plotted against temperature, resulting in a TSC spectrum. By analyzing TSC spectra, information can be obtained regarding energy levels in semiconductors or insulators. A driving force is required for emitted carriers to flow when the sample temperature is being increased. This driving force can be an electric field or a temperature gradient. Usually, the driving force adopted is an electric field; however, electron traps and hole traps cannot be distinguished. If the driving force adopted is a temperature gradient, electron traps and hole traps can be distinguished by the sign of the current. TSC based on a temperature gradient is also known as "Thermoelectric Effect Spectroscopy" (TEES) according to 2 scientists (Santic and Desnica) from ex-Yugoslavia; they demonstrated their technique on semi-insulating gallium arsenide (GaAs). (Note: TSC based on a temperature gradient was invented before Santic and Desnica and applied to the study of organic plastic materials. However, Santic and Desnica applied TSC based on a temperature gradient to study a technologically important semiconductor material and coined a new name, TEES, for it.) Historically, Frei and Groetzinger published a paper in German in 1936 with the title "Liberation of electrical energy during the fusion of electrets" (English translation of the original title in German). This may be the first paper on TSC. Before the invention of deep-level transient spectroscopy (DLTS), thermally stimulated current (TSC) spectroscopy was a popular technique to study traps in semiconductors. Nowadays, for traps in Schottky diodes or p-n junctions, DLTS is the standard method to study traps. However, there is an important shortcoming for DLTS: it cannot be used for an insulating material while TSC can be applied to such a situation. (Note: an insulator can be considered as a very large bandgap semiconductor.) In addition, the standard transient capacitance based DLTS method may not be very good for the study of traps in the i-region of a p-i-n diode while the transient current based DLTS (I-DLTS) may be more useful. TSC has been used to study traps in semi-insulating gallium arsenide (GaAs) substrates. It has also been applied to materials used for particle detectors or semiconductor detectors used in nuclear research, for example, high-resistivity silicon, cadmium telluride (CdTe), etc. TSC has also been applied to various organic insulators. TSC is useful for electret research. More advanced modifications of TSC have been applied to study traps in ultrathin high-k dielectric thin films. W. S. Lau (Lau Wai Shing, Republic of Singapore) applied zero-bias thermally stimulated current or zero-temperature-gradient zero-bias thermally stimulated current to ultrathin tantalum pentoxide samples. For samples with some shallow traps which can be filled at low temperature and some deep traps which can be filled only at high temperature, a two-scan TSC may be useful as suggested by Lau in 2007. TSC has also been applied to hafnium oxide. 
The TSC technique is also used to study dielectric materials and polymers. Different theories have been developed to describe the response curve of this technique in order to extract the peak parameters, namely the activation energy and the relaxation time.
References
Heinrich Frei and Gerhart Groetzinger, "Liberation of electrical energy during the fusion of electrets" (English translation of the original title in German), Physikalische Zeitschrift, vol. 37, pp. 720–724 (October 1936). (Note: This may be the first publication on thermally stimulated current.)
W.S. Lau, "Zero-temperature-gradient zero-bias thermally stimulated current technique to characterize defects in semiconductors or insulators", US Patent 6,909,273, filed in 2000 and granted in 2005. (Note: This patent explains two-scan thermally stimulated current spectroscopy.)
Semiconductor analysis Spectroscopy
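One common way to extract an activation energy from a measured TSC curve is the initial-rise method, in which the low-temperature flank of a peak follows I(T) ∝ exp(−Ea/kT), so a plot of ln I versus 1/T is a straight line of slope −Ea/k. The sketch below is not from the article: the use of the initial-rise method, the synthetic data, and the chosen trap depth of 0.45 eV are assumptions made for illustration.

import numpy as np

k_B = 8.617e-5                        # Boltzmann constant in eV/K
E_a_true = 0.45                       # assumed trap activation energy (eV)

# Synthetic "initial rise" of a TSC peak: current grows as exp(-Ea / kT).
T = np.linspace(120.0, 160.0, 25)     # temperatures on the low-T flank of the peak (K)
I = 1e-12 * np.exp(-E_a_true / (k_B * T))    # arbitrary prefactor

# Initial-rise analysis: slope of ln(I) vs 1/T gives -Ea / k_B.
slope, _ = np.polyfit(1.0 / T, np.log(I), 1)
E_a_est = -slope * k_B
print(f"estimated activation energy: {E_a_est:.3f} eV")   # recovers ~0.450 eV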
Thermally stimulated current spectroscopy
[ "Physics", "Chemistry" ]
949
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
15,293,846
https://en.wikipedia.org/wiki/Pople%20notation
The Pople notation is named after the Nobel laureate John Pople and is a simple method of presenting second-order spin coupling systems in NMR. The notation labels each (NMR active) nucleus with a letter of the alphabet. The difference in chemical shift, δ, relative to the J-coupling between nuclei mirrors the separation of the letter labels in the Latin alphabet. The letters used tend to be limited to A, B, M, N, X, Y. For example, AB indicates two nuclei which have similar chemical shifts (Δδ similar to or smaller than J), whereas AX indicates two which lie further apart on the spectrum (Δδ significantly larger than J). A2B would similarly indicate a spin system containing two equivalent nuclei (A) and a third, inequivalent one (B). Nuclei which are in equivalent chemical environments (that is, symmetry-related), but inequivalent magnetic environments are distinguished with a prime; e.g. AA'. This key aspect of the notation, i.e., using a prime to distinguish chemical equivalence alone from full magnetic equivalence, was introduced by Richards and Schaefer in 1958. The notation can be used to represent systems of more than two nuclei, for example AMX represents three nuclei, each moderately separated from the others, and ABX represents two nuclei whose peaks are closely spaced and one other nucleus which is more distant. Examples: PHCl2 is an AX system whereas CH3CH2F is an A3M2X system.
References
Nuclear magnetic resonance
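A small Python sketch of the naming idea for a two-spin system follows. The rule of thumb used here (treat the pair as weakly coupled, AX, when Δν/J is greater than about 10, and as strongly coupled, AB, otherwise) is a common teaching convention rather than a threshold stated in the article, and the example shift and coupling values are invented.

def two_spin_label(delta_nu_hz, j_hz, ratio_threshold=10.0):
    # Label a two-spin system AX or AB from the shift difference (Hz) and J (Hz).
    # The threshold of ~10 is a conventional rule of thumb (assumption).
    if j_hz == 0:
        return "two uncoupled singlets"
    return "AX" if abs(delta_nu_hz) / abs(j_hz) > ratio_threshold else "AB"

# Invented example values: a 2 ppm shift difference at 400 MHz (800 Hz) with J = 7 Hz,
# versus a 0.05 ppm difference (20 Hz) with the same J.
print(two_spin_label(800.0, 7.0))   # AX  (weakly coupled, first-order appearance)
print(two_spin_label(20.0, 7.0))    # AB  (strongly coupled, second-order spectrum)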
Pople notation
[ "Physics", "Chemistry" ]
319
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
15,294,873
https://en.wikipedia.org/wiki/Electroosmotic%20pump
An electroosmotic pump is used for generating flow or pressure by use of an electric field. One application of this is removing liquid water flooding from the channels and gas diffusion layers, and directly hydrating the proton exchange membrane, in the membrane electrode assembly (MEA) of proton exchange membrane fuel cells. Additionally, electroosmotic pumps have gained significant attention due to their potential applications in microfluidic channels, lab-on-a-chip devices, and biomedical engineering.
Principle
Electroosmotic pumps are fabricated from silica nanospheres or hydrophilic porous glass. The pumping action is generated by an external electric field applied to the electric double layer (EDL), and such pumps can generate high pressures (e.g., more than 340 atm (34 MPa) at 12 kV applied potentials) and high flow rates (e.g., 40 ml/min at 100 V in a pumping structure less than 1 cm3 in volume). EO pumps are compact, have no moving parts, and scale favorably with fuel cell design. The EO pump might drop the parasitic load of water management in fuel cells from 20% to 0.5% of the fuel cell power.
Types
Cascaded electroosmotic pumps
High pressures or high flow rates are obtained by positioning several regular electroosmotic pumps in series or parallel respectively.
Porous electroosmotic pump
Pumps based on porous media can be created using sintered glass or microporous polymer membranes with appropriate surface chemistry.
Planar shallow electroosmotic pump
Planar shallow electroosmotic pumps are made of parallel shallow microchannels.
Electroosmotic micropumps
Electroosmotic effects can also be induced without external fields in order to power micron-scale motion. Bimetallic gold/silver patches have been shown to generate local fluid pumping by this mechanism when hydrogen peroxide is added to the solution. A related motion can be induced by silver phosphate particles, which can be tailored to generate reversible firework behavior among other properties. Titanium dioxide (TiO2) micromotors have demonstrated swarming behavior, with or without additional fuel, due to self-generated electrolyte diffusioosmosis.
See also
Capillary electrophoresis
Electroosmotic flow
Glossary of fuel cell terms
Microfluidics
Micropump
Sol-gel
References
External links
Electroosmotic pump and its applications
Principles of electroosmotic pumps
Hydrogen technologies Pumps Microfluidics Fuel cells
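A rough feel for the numbers in the Principle section can be had from the classical Helmholtz–Smoluchowski relation for electroosmotic slip velocity, u = −ε ζ E / μ. The relation itself is standard electrokinetics but is not given in the article, and the zeta potential, field strength, and pore area below are invented round numbers, so the result is only an order-of-magnitude sketch in Python.

epsilon = 80 * 8.854e-12   # permittivity of water, F/m
zeta = -0.05               # assumed zeta potential of the pore walls, V (-50 mV)
mu = 1.0e-3                # dynamic viscosity of water, Pa*s
E = 100.0 / 1.0e-3         # assumed field: 100 V across a 1 mm thick porous frit, V/m

u = -epsilon * zeta * E / mu        # Helmholtz-Smoluchowski slip velocity, m/s
A_open = 0.5e-4                     # assumed total open pore cross-section, m^2 (0.5 cm^2)
Q = u * A_open                      # volumetric flow, m^3/s

print(f"slip velocity: {u * 1e3:.1f} mm/s")        # ~3.5 mm/s with these assumptions
print(f"flow rate: {Q * 1e6 * 60:.0f} ml/min")     # order 10 ml/min, the same ballpark as the figures quoted above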
Electroosmotic pump
[ "Physics", "Chemistry", "Materials_science" ]
513
[ "Pumps", "Microfluidics", "Microtechnology", "Turbomachinery", "Physical systems", "Hydraulics" ]
15,295,573
https://en.wikipedia.org/wiki/Fowler%20process
The Fowler process is an industry and laboratory route to fluorocarbons, by fluorinating hydrocarbons or their partially fluorinated derivatives in the vapor phase over cobalt(III) fluoride. Background The Manhattan Project required the production and handling of uranium hexafluoride for uranium enrichment, whether by diffusion or centrifuge. Uranium hexafluoride is very corrosive, oxidising, volatile solid (sublimes at 56 °C). To handle this material, several new materials were required, including a coolant liquid that could survive contact with uranium hexafluoride. Perfluorocarbons were identified as ideal materials, but at that point no method was available to produce them in any significant quantity. The problem is that fluorine gas is extremely reactive. Simply exposing a hydrocarbon to fluorine will cause the hydrocarbon to ignite. A way to moderate the reaction was required, and the method developed was to react the hydrocarbon with cobalt(III) fluoride, rather than fluorine itself. After World War II, much of the technology that had been kept secret was released into the public domain. The March 1947 issue of Industrial and Engineering Chemistry presented a collection of articles about fluorine chemistry, starting with the generation and handling of fluorine, and going on to discuss the synthesis of organofluorides and related topics. In one of these articles Fowler et al. describe the laboratory preparation of numerous perfluorocarbons by the vapour phase reaction of a hydrocarbon with cobalt(III) fluoride, at a pilot plant scale, in particular, perfluoro-n-heptane and perfluorodimethylcyclohexane (mixture of 1,3-isomer and 1,4 isomer), and on an industrial scale by Du Pont. Chemistry The Fowler process is typically done in two stages, the first stage being fluorination of cobalt(II) fluoride to cobalt(III) fluoride. 2 CoF2 + F2 → 2 CoF3 During the second stage, in this instance to make perfluorohexane, the hydrocarbon feed is introduced and is fluorinated by the cobalt(III) fluoride, which is converted back to cobalt(II) fluoride for reuse. Both stages are performed at high temperature. C6H14 + 28 CoF3 → C6F14 + 14 HF + 28 CoF2 The reaction proceeds through a single electron transfer process, involving a carbocation. This carbocation intermediate can readily undergo rearrangements, which can lead to a complex mixture of products. Feedstocks Typically hydrocarbon compounds are used as the feedstocks. For cyclic perfluorocarbon, the aromatic hydrocarbon is the preferred choice, so for example, toluene is the feedstock for perfluoromethylcyclohexane, rather than methylcyclohexane, as less fluorine is required. Often partially fluorinated feedstocks are used, for example, bis-1,3-(trifluoromethyl)benzene to make perfluoro-1,3-dimethylcyclohexane. Although these are considerably more expensive, they require less fluorine and more importantly, they generally give higher yields, as the carbocation rearrangements are much less likely. Flutec perfluorocarbons In the UK, Imperial Chemical Industries Limited (later ICI) was also developing cobalt(III) fluoride technology during the war, prompted by the work in the US. The process was later commercialized under the tradename Flutec by the Imperial Smelting Company (later ISC Chemicals) at Avonmouth near Bristol. Physical properties were determined by a company called G.V. Planer, under a project in 1965 called the Planar Project. Products were therefore designated PP1, PP2, PP3, etc. The designation has remained to this day. 
ISC Chemicals became part of RTZ in 1968, and that part of the business was transferred to Rhone-Poulenc in 1988. The Flutec business went into decline due to a drop in its main application, vapour phase reflow soldering (used in surface-mount technology), and six years later the Flutec business was purchased by BNFL Fluorochemicals Ltd and transferred to Preston, Lancashire, where it has been developed into several new applications. BNFL Fluorochemicals Ltd became F2 Chemicals Ltd in 1998.
References
Chemical processes Organofluorides
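As a worked example of the two-stage chemistry given in the Chemistry section above, the Python sketch below estimates how much cobalt(III) fluoride must be turned over per kilogram of perfluorohexane, using the stated stoichiometry C6H14 + 28 CoF3 → C6F14 + 14 HF + 28 CoF2. The molar masses are standard values; the 1 kg product basis is an arbitrary choice for illustration.

# Molar masses in g/mol (standard values, rounded)
M_C6F14 = 6 * 12.011 + 14 * 18.998   # perfluorohexane, ~338.0
M_C6H14 = 6 * 12.011 + 14 * 1.008    # n-hexane, ~86.2
M_CoF3  = 58.933 + 3 * 18.998        # cobalt(III) fluoride, ~115.9

basis_kg = 1.0                               # produce 1 kg of C6F14 (arbitrary basis)
mol_product = basis_kg * 1000.0 / M_C6F14    # ~2.96 mol
mol_CoF3 = 28 * mol_product                  # 28 mol of CoF3 per mol of product
mol_hexane = mol_product

print(f"n-hexane feed:    {mol_hexane * M_C6H14 / 1000:.2f} kg")
print(f"CoF3 turned over: {mol_CoF3 * M_CoF3 / 1000:.1f} kg (regenerated with F2 in the first stage)")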
Fowler process
[ "Chemistry" ]
961
[ "Chemical process engineering", "Chemical processes", "nan" ]
10,147,895
https://en.wikipedia.org/wiki/Explosion%20protection
Explosion protection is used to protect all sorts of buildings and civil engineering infrastructure against internal and external explosions or deflagrations. It was widely believed until recently that a building subject to an explosive attack had a chance to remain standing only if it possessed some extraordinary resistive capacity. This belief rested on the assumption that the specific impulse or the time integral of pressure, which is a dominant characteristic of the blast load, is fully beyond control. Techniques Avoidance Avoidance makes it impossible for an explosion or deflagration to occur, for instance by means of suppressing the heat and the pressure needed for an explosion using an aluminum mesh structure such as eXess, by means of consistent displacement of the O2 necessary for an explosion or deflagration to take place, by means of padding gas (f. i. CO2 or N2), or, by means of keeping the concentration of flammable content of an atmosphere consistently below or above the explosive limit, or by means of consistent elimination of ignition sources. Constructional Constructional explosion protection aims at pre-defined, limited or zero damage that results from applied protective techniques in combination with reinforcement of the equipment or structures that must be expected to become subject to internal explosion pressure and flying debris or external violent impact. Method selection The technology of protection can range in price dramatically but where the type of device is rational to use, is typically from the least to the most expensive solution: explosion doors and vents (dependent on quantities and common denominators); inerting: explosion suppression; isolation – or combinations of same. To focus on the most cost effective, doors typically have lower release pressure capabilities; are not susceptible to fatigue failures or subject to changing release pressures with changes in temperature, as "rupture membrane" types are; capable of leak tight service; service temperatures of up to 2,000 °F; and can be more cost effective in small quantities. Rupture membrane type vents can provide a leak tight seal more readily in most cases; have a relatively broad tolerance on their release pressure and are more readily incorporated into systems with discharge ducts. There are several fundamental considerations in the review of a system handling potentially explosive dusts, gases or a mixture of the two. Dependent upon the design basis being used, often National Fire Protection Association Guideline 68, the definition of these may vary somewhat. To facilitate providing the reader with an appreciation of the issues rather than a design primer, the following have been limited to the major ones only. Database combustion and explosion characteristics of dusts The database GESTIS-DUST-EX comprises important combustion and explosion characteristics of more than 7,000 dust samples from nearly all sectors of industry. It serves as a basis for the safe handling of combustible dusts and for the planning of preventive and protective measures against dust explosions in dust-generating and processing plants. The GESTIS-DUST-EX database is produced and maintained by the Institute for Occupational Safety and Health of the German Social Accident Insurance. It was elaborated in co-operation with other institutions and companies. The database is available free of charge to be used for occupational safety and health purposes. 
See also Explosion pressure Explosion prevention standards for electrical equipment Explosion vent Explosives safety Inert gas Pressure relief valve Prestressed structure References
Explosion protection
[ "Chemistry", "Engineering" ]
658
[ "Explosion protection", "Combustion engineering", "Explosions" ]
10,148,651
https://en.wikipedia.org/wiki/Coal%20combustion%20products
Coal combustion products (CCPs), also called coal combustion wastes (CCWs) or coal combustion residuals (CCRs), are categorized in four groups, each based on physical and chemical forms derived from coal combustion methods and emission controls: Fly ash is captured after coal combustion by filters (bag houses), electrostatic precipitators and other air pollution control devices. It comprises 60 percent of all coal combustion waste (labeled here as coal combustion products). It is most commonly used as a high-performance substitute for Portland cement or as clinker for Portland cement production. Cements blended with fly ash are becoming more common. Building material applications range from grouts and masonry products to cellular concrete and roofing tiles. Many asphaltic concrete pavements contain fly ash. Geotechnical applications include soil stabilization, road base, structural fill, embankments and mine reclamation. Fly ash also serves as filler in wood and plastic products, paints and metal castings. Flue-gas desulfurization (FGD) materials are produced by chemical "scrubber" emission control systems that remove sulfur and oxides from power plant flue gas streams. FGD comprises 24 percent of all coal combustion waste. Residues vary, but the most common are FGD gypsum (or "synthetic" gypsum) and spray dryer absorbents. FGD gypsum is used in almost thirty percent of the gypsum panel products manufactured in the U.S. It is also used in agricultural applications to treat undesirable soil conditions and to improve crop performance. Other FGD materials are used in mining and land reclamation activities. Bottom ash and boiler slag can be used as a raw feed for manufacturing portland cement clinker, as well as for skid control on icy roads. The two materials comprise 12 and 4 percent of coal combustion waste respectively. These materials are also suitable for geotechnical applications such as structural fills and land reclamation. The physical characteristics of bottom ash and boiler slag lend themselves as replacements for aggregate in flowable fill and in concrete masonry products. Boiler slag is also used for roofing granules and as blasting grit. Fly ash Fly ash, flue ash, coal ash, or pulverised fuel ash (in the UK)—plurale tantum: coal combustion residuals (CCRs)—is a coal combustion product that is composed of the particulates that are driven out of coal-fired boilers together with the flue gases. Ash that falls to the bottom of the boiler's combustion chamber (commonly called a firebox) is called bottom ash. In modern coal-fired power plants, fly ash is generally captured by electrostatic precipitators or other particle filtration equipment before the flue gases reach the chimneys. Together with bottom ash removed from the bottom of the boiler, it is known as coal ash. Depending upon the source and composition of the coal being burned, the components of fly ash vary considerably, but all fly ash includes substantial amounts of silicon dioxide (SiO2) (both amorphous and crystalline), aluminium oxide (Al2O3) and calcium oxide (CaO), the main mineral compounds in coal-bearing rock strata. The use of fly ash as a lightweight aggregate (LWA) offers a valuable opportunity to recycle one of the largest waste streams in the US. In addition, fly ash can offer many benefits, both economically and environmentally when utilized as a LWA. 
The minor constituents of fly ash depend upon the specific coal bed composition but may include one or more of the following elements or compounds found in trace concentrations (up to hundreds of ppm): gallium, arsenic, beryllium, boron, cadmium, chromium, hexavalent chromium, cobalt, lead, manganese, mercury, molybdenum, selenium, strontium, thallium, and vanadium, along with very small concentrations of dioxins, PAH compounds, and other trace carbon compounds. In the past, fly ash was generally released into the atmosphere, but air pollution control standards now require that it be captured prior to release by fitting pollution control equipment. In the United States, fly ash is generally stored at coal power plants or placed in landfills. About 43% is recycled, often used as a pozzolan to produce hydraulic cement or hydraulic plaster and as a replacement or partial replacement for Portland cement in concrete production. Pozzolans ensure the setting of concrete and plaster and provide concrete with more protection from wet conditions and chemical attack. In the case that fly (or bottom) ash is not produced from coal, for example when solid waste is incinerated in a waste-to-energy facility to produce electricity, the ash may contain higher levels of contaminants than coal ash. In that case the ash produced is often classified as hazardous waste. Chemical composition and classification Fly ash material solidifies while suspended in the exhaust gases and is collected by electrostatic precipitators or filter bags. Since the particles solidify rapidly while suspended in the exhaust gases, fly ash particles are generally spherical in shape and range in size from 0.5 μm to 300 μm. The major consequence of the rapid cooling is that few minerals have time to crystallize, and that mainly amorphous, quenched glass remains. Nevertheless, some refractory phases in the pulverized coal do not melt (entirely), and remain crystalline. In consequence, fly ash is a heterogeneous material. SiO2, Al2O3, Fe2O3 and occasionally CaO are the main chemical components present in fly ashes. The mineralogy of fly ashes is very diverse. The main phases encountered are a glass phase, together with quartz, mullite and the iron oxides hematite, magnetite and/or maghemite. Other phases often identified are cristobalite, anhydrite, free lime, periclase, calcite, sylvite, halite, portlandite, rutile and anatase. The Ca-bearing minerals anorthite, gehlenite, akermanite and various calcium silicates and calcium aluminates identical to those found in Portland cement can be identified in Ca-rich fly ashes. The mercury content can be higher in some ashes, but is generally in the range of 0.01–1 ppm for bituminous coal. The concentrations of other trace elements also vary according to the kind of coal combusted to form it. Classification Two classes of fly ash are defined by American Society for Testing and Materials (ASTM) C618: Class F fly ash and Class C fly ash. The chief difference between these classes is the calcium, silica, alumina, and iron content of the ash. The chemical properties of the fly ash are largely influenced by the chemical content of the coal burned (i.e., anthracite, bituminous, and lignite). Not all fly ashes meet ASTM C618 requirements, although depending on the application, this may not be necessary. Fly ash used as a cement replacement must meet strict construction standards, but no standard environmental regulations have been established in the United States. 
Seventy-five percent of the fly ash must have a fineness of 45 μm or less, and have a carbon content, measured by the loss on ignition (LOI), of less than 4%. In the US, LOI must be under 6%. The particle size distribution of raw fly ash tends to fluctuate constantly, due to the changing performance of the coal mills and the boiler. This means that, if fly ash is to be used in an optimal way to replace cement in concrete production, it must be processed using beneficiation methods like mechanical air classification. But if fly ash is used as a filler to replace sand in concrete production, unbeneficiated fly ash with higher LOI can also be used. Especially important is the ongoing quality verification. This is mainly expressed by quality control seals like the Bureau of Indian Standards mark or the DCL mark of the Dubai Municipality. Class "F": The burning of harder, older anthracite and bituminous coal typically produces Class F fly ash. This fly ash is pozzolanic in nature, and contains less than 7% lime (CaO). Possessing pozzolanic properties, the glassy silica and alumina of Class F fly ash require a cementing agent, such as Portland cement, quicklime, or hydrated lime—mixed with water to react and produce cementitious compounds. Alternatively, adding a chemical activator such as sodium silicate (water glass) to a Class F ash can form a geopolymer. Class "C": Fly ash produced from the burning of younger lignite or sub-bituminous coal, in addition to having pozzolanic properties, also has some self-cementing properties. In the presence of water, Class C fly ash hardens and gets stronger over time. Class C fly ash generally contains more than 20% lime (CaO). Unlike Class F, self-cementing Class C fly ash does not require an activator. Alkali and sulfate (SO4) contents are generally higher in Class C fly ashes. At least one US manufacturer has announced a fly ash brick containing up to 50% Class C fly ash. Testing shows the bricks meet or exceed the performance standards listed in ASTM C 216 for conventional clay brick. It is also within the allowable shrinkage limits for concrete brick in ASTM C 55, Standard Specification for Concrete Building Brick. It is estimated that the production method used in fly ash bricks will reduce the embodied energy of masonry construction by up to 90%. Bricks and pavers were expected to be available in commercial quantities before the end of 2009. Disposal and market sources In the past, fly ash produced from coal combustion was simply entrained in flue gases and dispersed into the atmosphere. This created environmental and health concerns that prompted laws in heavily industrialized countries like the United States that have reduced fly ash emissions to less than 1% of ash produced. Worldwide, more than 65% of fly ash produced from coal power stations is disposed of in landfills and ash ponds. Ash that is stored or deposited outdoors can eventually leach toxic compounds into underground water aquifers. For this reason, much of the current debate around fly ash disposal revolves around creating specially lined landfills that prevent the chemical compounds from being leached into the ground water and local ecosystems. Since coal was the dominant energy source in the United States for many decades, power companies often located their coal plants near metropolitan areas. 
Compounding the environmental issues, coal plants need significant amounts of water to operate their boilers, leading the plants (and later their fly ash storage basins) to be located near metropolitan areas and near rivers and lakes, which are often used as drinking supplies by nearby cities. Many of those fly ash basins were unlined and also at great risk of spilling and flooding from nearby rivers and lakes. For example, Duke Energy in North Carolina has been involved in several major lawsuits related to its coal ash storage and the spilling and leakage of ash into local water basins. The recycling of fly ash has become an increasing concern in recent years due to increasing landfill costs and current interest in sustainable development. In recent years, coal-fired power plants in the US have reported producing large quantities of fly ash, a portion of which was reused in various applications. Environmental benefits of recycling fly ash include reducing the demand for virgin materials that would need quarrying and providing a cheap substitute for materials such as Portland cement. Reuse About 52 percent of CCPs in the U.S. were recycled for "beneficial uses" in 2019, according to the American Coal Ash Association. In Australia about 47% of coal ash was recycled in 2020. The chief benefit of recycling is to stabilize the environmentally harmful components of the CCPs such as arsenic, beryllium, boron, cadmium, chromium, chromium VI, cobalt, lead, manganese, mercury, molybdenum, selenium, strontium, thallium, and vanadium, along with dioxins and polycyclic aromatic hydrocarbons. There is no US governmental registration or labelling of fly ash utilization in the different sectors of the economy – industry, infrastructure and agriculture. Fly ash utilization survey data, acknowledged as incomplete, are published annually by the American Coal Ash Association. Coal ash uses include (approximately in order of decreasing importance): Concrete production, as a substitute material for Portland cement and sand Corrosion control in reinforced concrete (RC) structures Fly-ash pellets, which can replace normal aggregate in concrete mixtures Embankments and other structural fills (usually for road construction) Grout and flowable fill production Waste stabilization and solidification Cement clinker production (as a substitute material for clay) Mine reclamation Stabilization of soft soils Road subbase construction As aggregate substitute material (e.g. 
for brick production) Mineral filler in asphaltic concrete Agricultural uses: soil amendment, fertilizer, cattle feeders, soil stabilization in stock feed yards, and agricultural stakes Loose application on rivers to melt ice Loose application on roads and parking lots for ice control Other applications include cosmetics, toothpaste, kitchen counter tops, floor and ceiling tiles, bowling balls, flotation devices, stucco, utensils, tool handles, picture frames, auto bodies and boat hulls, cellular concrete, geopolymers, roof tiles, roofing granules, decking, fireplace mantles, cinder block, PVC pipe, structural insulated panels, house siding and trim, running tracks, blasting grit, recycled plastic lumber, utility poles and crossarms, railway sleepers, highway noise barriers, marine pilings, doors, window frames, scaffolding, sign posts, crypts, columns, railroad ties, vinyl flooring, paving stones, shower stalls, garage doors, park benches, landscape timbers, planters, pallet blocks, molding, mail boxes, artificial reef, binding agent, paints and undercoatings, metal castings, and filler in wood and plastic products. Portland cement Owing to its pozzolanic properties, fly ash is used as a replacement for Portland cement in concrete. The use of fly ash as a pozzolanic ingredient was recognized as early as 1914, although the earliest noteworthy study of its use was in 1937. Roman structures such as aqueducts or the Pantheon in Rome used volcanic ash or pozzolana (which possesses similar properties to fly ash) as pozzolan in their concrete. As pozzolan greatly improves the strength and durability of concrete, the use of ash is a key factor in their preservation. Use of fly ash as a partial replacement for Portland cement is particularly suitable but not limited to Class C fly ashes. Class "F" fly ashes can have volatile effects on the entrained air content of concrete, causing reduced resistance to freeze/thaw damage. Fly ash often replaces up to 30% by mass of Portland cement, but can be used in higher dosages in certain applications. In some cases, fly ash can add to the concrete's final strength and increase its chemical resistance and durability. Fly ash can significantly improve the workability of concrete. Recently, techniques have been developed to replace partial cement with high-volume fly ash (50% cement replacement). For roller-compacted concrete (RCC)[used in dam construction], replacement values of 70% have been achieved with processed fly ash at the Ghatghar dam project in Maharashtra, India. Due to the spherical shape of fly ash particles, it can increase workability of cement while reducing water demand. Proponents of fly ash claim that replacing Portland cement with fly ash reduces the greenhouse gas "footprint" of concrete, as the production of one ton of Portland cement generates approximately one ton of CO2, compared to no CO2 generated with fly ash. New fly ash production, i.e., the burning of coal, produces approximately 20 to 30 tons of CO2 per ton of fly ash. Since the worldwide production of Portland cement is expected to reach nearly 2 billion tons by 2010, replacement of any large portion of this cement by fly ash could significantly reduce carbon emissions associated with construction, as long as the comparison takes the production of fly ash as a given. Embankment Fly ash properties are unusual among engineering materials. Unlike soils typically used for embankment construction, fly ash has a large uniformity coefficient and it consists of clay-sized particles. 
Engineering properties that affect the use of fly ash in embankments include grain size distribution, compaction characteristics, shear strength, compressibility, permeability, and frost susceptibility. Nearly all the types of fly ash used in embankments are Class F. Soil stabilization Soil stabilization is the permanent physical and chemical alteration of soils to enhance their physical properties. Stabilization can increase the shear strength of a soil and/or control the shrink-swell properties of a soil, thus improving the load-bearing capacity of a sub-grade to support pavements and foundations. Stabilization can be used to treat a wide range of sub-grade materials from expansive clays to granular materials. Stabilization can be achieved with a variety of chemical additives including lime, fly ash, and Portland cement. Proper design and testing is an important component of any stabilization project. This allows for the establishment of design criteria, and determination of the proper chemical additive and admixture rate that achieves the desired engineering properties. Stabilization process benefits can include: Higher resistance (R) values, Reduction in plasticity, Lower permeability, Reduction of pavement thickness, Elimination of excavation – material hauling/handling – and base importation, Aids compaction, Provides "all-weather" access onto and within projects sites. Another form of soil treatment closely related to soil stabilization is soil modification, sometimes referred to as "mud drying" or soil conditioning. Although some stabilization inherently occurs in soil modification, the distinction is that soil modification is merely a means to reduce the moisture content of a soil to expedite construction, whereas stabilization can substantially increase the shear strength of a material such that it can be incorporated into the project's structural design. The determining factors associated with soil modification vs soil stabilization may be the existing moisture content, the end use of the soil structure and ultimately the cost benefit provided. Equipment for the stabilization and modification processes include: chemical additive spreaders, soil mixers (reclaimers), portable pneumatic storage containers, water trucks, deep lift compactors, motor graders. Flowable fill Fly ash is also used as a component in the production of flowable fill (also called controlled low strength material, or CLSM), which is used as self-leveling, self-compact backfill material in lieu of compacted earth or granular fill. The strength of flowable fill mixes can range from 50 to 1,200 lbf/in2 (0.3 to 8.3 MPa), depending on the design requirements of the project in question. Flowable fill includes mixtures of Portland cement and filler material, and can contain mineral admixtures. Fly ash can replace either the Portland cement or fine aggregate (in most cases, river sand) as a filler material. High fly ash content mixes contain nearly all fly ash, with a small percentage of Portland cement and enough water to make the mix flowable. Low fly ash content mixes contain a high percentage of filler material, and a low percentage of fly ash, Portland cement, and water. Class F fly ash is best suited for high fly ash content mixes, whereas Class C fly ash is almost always used in low fly ash content mixes. Asphalt concrete Asphalt concrete is a composite material consisting of an asphalt binder and mineral aggregate commonly used to surface roads. 
Both Class F and Class C fly ash can typically be used as a mineral filler to fill the voids and provide contact points between larger aggregate particles in asphalt concrete mixes. This application is used in conjunction with, or as a replacement for, other binders (such as Portland cement or hydrated lime). For use in asphalt pavement, the fly ash must meet mineral filler specifications outlined in ASTM D242. The hydrophobic nature of fly ash gives pavements better resistance to stripping. Fly ash has also been shown to increase the stiffness of the asphalt matrix, improving rutting resistance and increasing mix durability. Filler for thermoplastics Coal and shale oil fly ashes have been used as a filler for thermoplastics that could be used for injection molding applications. Geopolymers More recently, fly ash has been used as a component in geopolymers, where the reactivity of the fly ash glasses can be used to create a binder similar to a hydrated Portland cement in appearance, but with potentially superior properties, including reduced CO2 emissions, depending on the formulation. Roller compacted concrete Another application of fly ash is in roller compacted concrete dams. Many dams in the US have been constructed with high fly ash contents. Fly ash lowers the heat of hydration, allowing thicker placements to occur. Data for these can be found at the US Bureau of Reclamation. This has also been demonstrated in the Ghatghar Dam Project in India. Bricks There are several techniques for manufacturing construction bricks from fly ash, producing a wide variety of products. One type of fly ash brick is manufactured by mixing fly ash with an equal amount of clay and then firing the mixture in a kiln. This approach has the principal benefit of reducing the amount of clay required. Another type of fly ash brick is made by mixing soil, plaster of Paris, fly ash and water, and allowing the mixture to dry. Because no heat is required, this technique reduces air pollution. More modern manufacturing processes use a greater proportion of fly ash, and a high pressure manufacturing technique, which produces high strength bricks with environmental benefits. In the United Kingdom, fly ash has been used for over fifty years to make concrete building blocks. They are widely used for the inner skin of cavity walls. They are naturally more thermally insulating than blocks made with other aggregates. Ash bricks have been used in house construction in Windhoek, Namibia, since the 1970s. There is, however, a problem with the bricks in that they tend to fail or produce unsightly pop-outs. This happens when the bricks come into contact with moisture and a chemical reaction occurs causing the bricks to expand. In India, fly ash bricks are used for construction. Leading manufacturers use an industrial standard known as "Pulverized fuel ash for lime-Pozzolana mixture" using over 75% post-industrial recycled waste, and a compression process. This produces a strong product with good insulation properties and environmental benefits. Metal matrix composites Fly ash particles have shown potential as a reinforcement in aluminum alloys, improving their physical and mechanical properties. In particular, the compression strength, tensile strength, and hardness increase when the percentage of fly ash content is increased, whereas the density decreases. The presence of fly ash cenospheres in a pure Al matrix decreases its coefficient of thermal expansion (CTE). 
Mineral extraction It may be possible to use vacuum distillation to extract germanium and tungsten from fly ash and recycle them. Waste treatment and stabilization Fly ash, in view of its alkalinity and water absorption capacity, may be used in combination with other alkaline materials to transform sewage sludge into organic fertilizer or biofuel. Catalyst Fly ash, when treated with sodium hydroxide, appears to function well as a catalyst for converting polyethylene into a substance similar to crude oil in a high-temperature process called pyrolysis, and has also been utilized in waste water treatment. In addition, fly ash, mainly class C, may be used in the stabilization/solidification process of hazardous wastes and contaminated soils. For example, the Rhenipal process uses fly ash as an admixture to stabilize sewage sludge and other toxic sludges. This process has been used since 1996 to stabilize large amounts of chromium(VI) contaminated leather sludges in Alcanena, Portugal. Environmental impacts The majority of CCPs are landfilled, placed in mine shafts or stored in ash ponds at coal-fired power plants. Groundwater pollution from unlined ash ponds has been a continuing environmental problem in the United States. Additionally, some of these ponds have had structural failures, causing massive ash spills into rivers, such as the 2014 Dan River coal ash spill. Federal design standards for ash ponds were strengthened in 2015. Following litigation challenges to various provisions of the 2015 regulations, EPA issued two final rules in 2020, labeled as the "CCR Part A" and "CCR Part B" rules. The rules require some facilities to retrofit their impoundments with liners, while other facilities may propose alternative designs and request additional time to achieve compliance. In March 2023, EPA published a proposed rule that would strengthen wastewater limits for discharges to surface waters. Groundwater contamination Coal contains trace levels of many elements (such as arsenic, barium, beryllium, boron, cadmium, chromium, thallium, selenium, molybdenum and mercury), many of which are highly toxic to humans and other life. Therefore, fly ash obtained after combustion of this coal contains enhanced concentrations of these elements, and the potential of the ash to cause groundwater pollution is significant. In the US there are documented cases of groundwater pollution that followed ash disposal or utilization without the necessary protection having been put in place. Examples Maryland Constellation Energy disposed of fly ash generated by its Brandon Shores Generating Station at a former sand and gravel mine in Gambrills, Maryland, from 1996 to 2007. The ash contaminated groundwater with heavy metals. The Maryland Department of the Environment issued a fine of $1 million to Constellation. Nearby residents filed a lawsuit against Constellation and in 2008 the company settled the case for $54 million. North Carolina In 2014, residents living near the Buck Steam Station in Dukeville, North Carolina, were told that "coal ash pits near their homes could be leaching dangerous materials into groundwater". Illinois Illinois has many coal ash dumpsites with coal ash generated by coal-burning electric power plants. Of the state's 24 coal ash dumpsites with available data, 22 have released toxic pollutants, including arsenic, cobalt, and lithium, into groundwater, rivers and lakes. 
The hazardous toxic chemicals dumped into the water in Illinois by these coal ash dumpsites include more than 300,000 pounds of aluminum, 600 pounds of arsenic, nearly 300,000 pounds of boron, over 200 pounds of cadmium, over 15,000 pounds of manganese, roughly 1,500 pounds of selenium, roughly 500,000 pounds of nitrogen, and nearly 40 million pounds of sulfate, according to a report by the Environmental Integrity Project, Earthjustice, the Prairie Rivers Network, and the Sierra Club. Tennessee In 2008, the Kingston Fossil Plant in Roane County spilled 1.1 billion gallons of coal ash into the Emory and Clinch Rivers and damaged nearby residential areas. It is the largest industrial spill in the U.S. Texas Groundwater surrounding every single one of the 16 coal-burning power plants in Texas has been polluted by coal ash, according to a study by the Environmental Integrity Project (EIP). Unsafe levels of arsenic, cobalt, lithium, and other contaminants were found in the groundwater near all the ash dump sites. At 12 of the 16 sites, the EIP analysis found levels of arsenic in the groundwater 10 times higher than the EPA Maximum Contaminant Level; arsenic has been found to cause several types of cancer. At 10 of the sites, lithium, which causes neurological disease, was found in the groundwater at concentrations more than 1,000 micrograms per liter, which is 25 times the maximum acceptable level. The report concludes that the fossil fuel industry in Texas has failed to comply with federal regulations on coal ash processing, and state regulators have failed to protect the groundwater. Ecology The effect of fly ash on the environment can vary based on the thermal power plant where it is produced, as well as the proportion of fly ash to bottom ash in the waste product. This is due to the different chemical make-up of the coal based on the geology of the area the coal is found and the burning process of the coal in the power plant. When the coal is combusted, it creates an alkaline dust. This alkaline dust can have a pH ranging from 8 to as high as 12. Fly ash dust can be deposited on topsoil increasing the pH and affecting the plants and animals in the surrounding ecosystem. Trace elements, such as, iron, manganese, zinc, copper, lead, nickel, chromium, cobalt, arsenic, cadmium, and mercury, can be found at higher concentrations compared to bottom ash and the parent coal. Fly ash can leach toxic constituents that can be anywhere from one hundred to one thousand times greater than the federal standard for drinking water. Fly ash can contaminate surface water through erosion, surface runoff, airborne particles landing on the water surface, contaminated ground water moving into surface waters, flooding drainage, or discharge from a coal ash pond. Fish can be contaminated a couple of different ways. When the water is contaminated by fly ash, the fish can absorb the toxins through their gills. The sediment in the water can also become contaminated. The contaminated sediment can contaminate the food sources for the fish, the fish can then become contaminated from consuming those food sources. This can then lead to contamination of organisms that consume these fish, such as, birds, bear, and even humans. Once exposed to fly ash contaminating the water, aquatic organisms have had increased levels of calcium, zinc, bromine, gold, cerium, chromium, selenium, cadmium, and mercury. 
Soils contaminated by fly ash showed an increase in bulk density and water capacity, but a decrease in hydraulic conductivity and cohesiveness. The effect of fly ash on soils and microorganisms in the soils are influenced by the pH of the ash and trace metal concentrations in the ash. Microbial communities in contaminated soil have shown reductions in respiration and nitrification. These contaminated soils can be detrimental or beneficial to plant development. Fly ash typically has beneficial outcomes when it corrects nutrient deficiencies in the soil. Most detrimental effects were observed when boron phytotoxicity was observed. Plants absorb elements elevated by the fly ash from the soil. Arsenic, molybdenum, and selenium were the only elements found at potentially toxic levels for grazing animals. Terrestrial organisms exposed to fly ash only showed increased levels of selenium. In the UK, fly ash lagoons from old coal-fired power stations have been made into nature reserves such as Newport Wetlands, providing habitat for rare birds and other wildlife. Spills of bulk storage Where fly ash is stored in bulk, it is usually stored wet rather than dry to minimize fugitive dust. The resulting impoundments (ash ponds) are typically large and stable for long periods, but any breach of their dams or bunding is rapid and on a massive scale. In December 2008, the collapse of an embankment at an impoundment for wet storage of fly ash at the Tennessee Valley Authority's Kingston Fossil Plant caused a major release of 5.4 million cubic yards of coal fly ash, damaging three homes and flowing into the Emory River. Cleanup costs may exceed $1.2 billion. This spill was followed a few weeks later by a smaller TVA-plant spill in Alabama, which contaminated Widows Creek and the Tennessee River. In 2014, 39,000 tons of ash and 27 million gallons (100,000 cubic meters) of contaminated water spilled into the Dan River near Eden, NC from a closed North Carolina coal-fired power plant that is owned by Duke Energy. It is currently the third worst coal ash spill ever to happen in the United States. The U.S. Environmental Protection Agency (EPA) published a Coal Combustion Residuals (CCR) regulation in 2015. The agency continued to classify coal ash as non-hazardous (thereby avoiding strict permitting requirements under Subtitle C of the Resource Conservation and Recovery Act (RCRA), but with new restrictions: Existing ash ponds that are contaminating groundwater must stop receiving CCR, and close or retrofit with a liner. Existing ash ponds and landfills must comply with structural and location restrictions, where applicable, or close. A pond no longer receiving CCR is still subject to all regulations unless it is dewatered and covered by 2018. New ponds and landfills must include a geomembrane liner over a layer of compacted soil. The regulation was designed to prevent pond failures and protect groundwater. Enhanced inspection, record keeping and monitoring is required. Procedures for closure are also included and include capping, liners, and dewatering. The CCR regulation has since been subject to litigation. Contaminants Fly ash contains trace concentrations of heavy metals and other substances that are known to be detrimental to health in sufficient quantities. Potentially toxic trace elements in coal include arsenic, beryllium, cadmium, barium, chromium, copper, lead, mercury, molybdenum, nickel, radium, selenium, thorium, uranium, vanadium, and zinc. 
Approximately 10% of the mass of coals burned in the United States consists of unburnable mineral material that becomes ash, so the concentration of most trace elements in coal ash is approximately 10 times the concentration in the original coal. A 1997 analysis by the United States Geological Survey (USGS) found that fly ash typically contained 10 to 30 ppm of uranium, comparable to the levels found in some granitic rocks, phosphate rock, and black shale. In 1980 the U.S. Congress defined coal ash as a "special waste" that would not be regulated under the stringent hazardous waste permitting requirements of RCRA. In its amendments to RCRA, Congress directed EPA to study the special waste issue and make a determination as to whether stricter permit regulation was necessary. In 2000, EPA stated that coal fly ash did not need to be regulated as a hazardous waste. As a result, most power plants were not required to install geomembranes or leachate collection systems in ash ponds. Studies by the USGS and others of radioactive elements in coal ash have concluded that fly ash compares with common soils or rocks and should not be the source of alarm. However, community and environmental organizations have documented numerous environmental contamination and damage concerns. Exposure concerns Crystalline silica and lime along with toxic chemicals represent exposure risks to human health and the environment. Fly ash contains crystalline silica which is known to cause lung disease, in particular silicosis, if inhaled. Crystalline silica is listed by the IARC and US National Toxicology Program as a known human carcinogen. Lime (CaO) reacts with water (H2O) to form calcium hydroxide [Ca(OH)2], giving fly ash a pH somewhere between 10 and 12, a medium to strong base. This can also cause lung damage if present in sufficient quantities. Material Safety Data Sheets recommend a number of safety precautions be taken when handling or working with fly ash. These include wearing protective goggles, respirators and disposable clothing and avoiding agitating the fly ash in order to minimize the amount which becomes airborne. The National Academy of Sciences noted in 2007 that "the presence of high contaminant levels in many CCR (coal combustion residue) leachates may create human health and ecological concerns". Regulation United States Following the 2008 Kingston Fossil Plant coal fly ash slurry spill, EPA began developing regulations that would apply to all ash ponds nationwide. EPA published the CCR rule in 2015. Some of the provisions in the 2015 CCR regulation were challenged in litigation, and the United States Court of Appeals for the District of Columbia Circuit remanded certain portions of the regulation to EPA for further rulemaking. EPA published a proposed rule on August 14, 2019, that would use location-based criteria, rather than a numerical threshold (i.e. impoundment or landfill size) that would require an operator to demonstrate minimal environmental impact so that a site could remain in operation. In response to the court remand, EPA published its "CCR Part A" final rule on August 28, 2020, requiring all unlined ash ponds to retrofit with liners or close by April 11, 2021. Some facilities may apply to obtain additional time—up to 2028—to find alternatives for managing ash wastes before closing their surface impoundments. 
EPA published its "CCR Part B" rule on November 12, 2020, which allows certain facilities to use an alternative liner, based on a demonstration that human health and the environment will not be affected. Further litigation on the CCR regulation is pending as of 2021. In October 2020 EPA published a final effluent guidelines rule that reverses some provisions of its 2015 regulation, which had tightened requirements on toxic metals in wastewater discharged from ash ponds and other power plant wastestreams. The 2020 rule has also been challenged in litigation. In March 2023 EPA published a proposed rule that would reverse some aspects of the 2020 rule and impose more stringent wastewater limitations for some facilities. India The Ministry of Environment, Forest and Climate Change of India first published a gazette notification in 1999 specifying use of fly ash and mandating a target date for all thermal power plants to comply by ensuring 100% utilisation. Subsequent amendments in 2003 and 2009 shifted the deadline for compliance to 2014. As reported by Central Electricity Authority, New Delhi, as of 2015, only 60% of fly ash produced was being utilised. This has resulted in the latest notification in 2015 which has set December 31, 2017, as the revised deadline to achieve 100% utilisation. Out of the approximately 55.7% fly ash utilised, bulk of it (42.3%) goes into cement production whereas only about 0.74% is used as an additive in concrete (See Table 5 [29]). Researchers in India are actively addressing this challenge by working on fly ash as an admixture for concrete and activated pozzolanic cement such as geopolymer [34] to help achieve the target of 100% utilisation. The biggest scope clearly lies in the area of increasing the quantity of fly ash being incorporated in concrete. India produced 280 Million Tonnes of Cement in 2016 . With housing sector consuming 67% of the cement, there is a huge scope for incorporating fly ash in both the increasing share of PPC and low to moderate strength concrete. There is a misconception that the Indian codes IS 456:2000 for Concrete and Reinforced Concrete and IS 3812.1:2013 for Fly Ash restrict the use of Fly Ash to less than 35%. Similar misconceptions exists in countries like US but evidence to the contrary is the use of HVFA in many large projects where design mixes have been used under strict quality control. It is suggested that in order to make the most of the research results presented in the paper, Ultra High Volume Fly ash Concrete (UHVFA) concrete is urgently developed for widespread use in India using local fly ash. Urgent steps are also required to promote alkali activated pozzolan or geopolymer cement based concretes. In the geologic record Due to the ignition of coal deposits by the Siberian Traps during the Permian–Triassic extinction event around 252 million years ago, large amounts of char very similar to modern fly ash were released into the oceans, which is preserved in the geologic record in marine deposits located in the Canadian High Arctic. It has been hypothesised that the fly ash could have resulted in toxic environmental conditions. 
See also Alkali–silica reaction (ASR) Alkali–aggregate reaction Cement Cenosphere – a CCP, often recycled Coal waste Energetically modified cement (EMC) Health effects of coal ash Pozzolanic reaction Silica fume Cenocell References External links Evaluation of Dust Exposures at Lehigh Portland Cement Company, Union Bridge, MD, a NIOSH Report, HETA 2000-0309-2857 Determination of Airborne Crystalline Silica Treatise by NIOSH American Coal Ash Association Fly Ash Info, the Ash Library Website, University of Kentucky United States Geological Survey – Radioactive Elements in Coal and Fly Ash (document) Public Employees for Environmental Responsibility: Coal Combustion Waste UK Quality Ash Association : A site promoting the many uses of fly ash in the UK Coal Ash Is More Radioactive than Nuclear Waste, Scientific American (13 December 2007) UK Quality Ash Association A web site providing further information on the applications for PFA. Asian Coal Ash Association A web site providing further information on technologies and trade related to coal combustion products. Coal technology Waste
Coal combustion products
[ "Physics" ]
8,398
[ "Materials", "Waste", "Matter" ]
10,155,189
https://en.wikipedia.org/wiki/STAT5
Signal transducer and activator of transcription 5 (STAT5) refers to two highly related proteins, STAT5A and STAT5B, which are part of the seven-membered STAT family of proteins. Though STAT5A and STAT5B are encoded by separate genes, the proteins are 90% identical at the amino acid level. STAT5 proteins are involved in cytosolic signalling and in mediating the expression of specific genes. Aberrant STAT5 activity has been shown to be closely connected to a wide range of human cancers, and silencing this aberrant activity is an area of active research in medicinal chemistry. Activation and function In order to be functional, STAT5 proteins must first be activated. This activation is carried out by kinases associated with transmembrane receptors: Ligands binding to these transmembrane receptors on the outside of the cell activate the kinases; The stimulated kinases add a phosphate group to a specific tyrosine residue on the receptor; STAT5 then binds to these phosphorylated tyrosines using its SH2 domain; The bound STAT5 is then phosphorylated by the kinase, the phosphorylation occurring at particular tyrosine residues on the C-terminus of the protein; Phosphorylation causes STAT5 to dissociate from the receptor; The phosphorylated STAT5 finally goes on to form either homodimers, STAT5-STAT5, or heterodimers, STAT5-STATX, with other STAT proteins. The SH2 domains of the STAT5 proteins are once again used for this dimerization. STAT5 can also form homo-tetramers, usually in concert with the histone methyltransferase EZH2, and act as a transcriptional repressor. In a typical activation pathway, the ligand involved is a cytokine and the specific kinase taking part in activation is JAK. The dimerized STAT5 represents the active form of the protein, which is ready for translocation into the nucleus. Once in the nucleus, the dimers bind to STAT5 response elements, inducing transcription of specific sets of genes. Upregulation of gene expression by STAT5 dimers has been observed for genes dealing with: Controlled cell growth and division, or cell proliferation Programmed cell death, or apoptosis Cell specialization, or differentiation and Inflammation. Activated STAT5 dimers are, however, short-lived and undergo rapid deactivation. Deactivation may be carried out by a direct pathway, for example removal of the phosphate groups by phosphatases such as SHP-2 or inhibition by regulators such as PIAS, or by an indirect pathway, which involves reducing cytokine signalling. STAT5 and cancer STAT5 has been found to be constitutively phosphorylated in cancer cells, implying that the protein is always present in its active form. This constant activation is brought about either by mutations or by aberrant cell signalling, resulting in poor regulation, or complete lack of control, of the activation of transcription for genes influenced by STAT5. This leads to constant and increased expression of these genes. For example, mutations may lead to increased expression of anti-apoptotic genes, the products of which actively prevent cell death. The constant presence of these products preserves the cell in spite of it having become cancerous, causing the cell to eventually become malignant. Treatment approaches Attempts at treatment for cancer cells with constitutively phosphorylated STAT5 have included both indirect and direct inhibition of STAT5 activity. 
While more medicinal work has been done on indirect inhibition, this approach can lead to increased toxicity in cells and can also result in non-specific effects, both of which are better handled by direct inhibition. Indirect inhibition targets kinases associated with STAT5, or targets proteases that carry out terminal truncation of proteins. Different inhibitors have been designed to target different kinases: Inhibition of BCR/ABL constitutes the basis of the functioning of drugs like imatinib Inhibition of FLT3 is carried out by drugs like lestaurtinib Inhibition of JAK2 is carried out by the drug CYT387, which was successful in preclinical trials and is currently undergoing clinical trials. Direct inhibition of STAT5 activity makes use of small molecule inhibitors that prevent STAT5 from properly binding to DNA or prevent proper dimerization. The inhibition of DNA binding utilizes RNA interference, antisense oligodeoxynucleotides, and short hairpin RNA. The inhibition of proper dimerization, on the other hand, is brought about by the use of small molecules that target the SH2 domain. Recent work on drug development in the latter field has proved particularly effective. References Gene expression Immune system Proteins Transcription factors Signal transduction
STAT5
[ "Chemistry", "Biology" ]
1,000
[ "Biomolecules by chemical classification", "Immune system", "Gene expression", "Signal transduction", "Organ systems", "Molecular genetics", "Induced stem cells", "Cellular processes", "Molecular biology", "Biochemistry", "Proteins", "Neurochemistry", "Transcription factors" ]
10,158,085
https://en.wikipedia.org/wiki/Yevgeny%20Krinov
Yevgeny Leonidovich Krinov () (3 March 1906 – 2 January 1984), D.G.S., was a Soviet Russian astronomer and geologist, born in Otyassy () village in the Morshansky District of the Tambov Governorate of the Russian Empire. Krinov was a renowned meteorite researcher; the mineral Krinovite, discovered in 1966, was named after him. Scientific work From 1926 through 1930 Yevgeny Krinov worked in the meteor division of the Mineralogy Museum of the Soviet Academy of Sciences. During this period he conducted research into the Tunguska event under the supervision of Leonid Kulik. Krinov took part in the longest expedition to the Tunguska site in the years 1929–1930 as an astronomer. The data that was gathered during this expedition became the basis for his 1949 monograph (in Russian) called The Tunguska Meteorite. In 1975, Yevgeny Krinov ordered the burning of 1500 negatives from a 1938 expedition by Leonid Kulik to the Tunguska event as part of an effort to dispose of hazardous nitrate film. Positive imprints were preserved for further studies in the Russian city of Tomsk. Science awards 1961 - Doctor honoris causa awarded by Soviet Academy of Sciences 1971 - Leonard Medal Legacy A minor planet, 2887 Krinov, discovered in 1977 by Soviet astronomer Nikolai Stepanovich Chernykh, is named after him. Selected bibliography 1947 Spectral Reflective Capacity of Natural Formations 1949 The Tunguska Meteorite (Russian) 1952 Fundamentals of Meteoritics 1959 Sikhote-Alin Iron Meteorite Shower, Vol. I (Russian) 1963 Sikhote-Alin Iron Meteorite Shower, Vol. II (Russian) 1966 Giant Meteorites References External links A list of people from Tambov, Russia, briefly mentioning Yevgeny Krinov A short biography of Yevgeny Krinov 1906 births 1984 deaths Russian astronomers Soviet astronomers Russian geologists Soviet geologists Tunguska event
Yevgeny Krinov
[ "Physics" ]
409
[ "Unsolved problems in physics", "Tunguska event" ]
10,158,584
https://en.wikipedia.org/wiki/Kicker%20magnet
Kicker magnets are dipole magnets used to rapidly switch a particle beam between two paths. Conceptually similar to a railroad switch in function, a kicker magnet must switch on very rapidly, then maintain a stable magnetic field for some minimum time. Switch-off time is also important, but less critical. An injection kicker magnet merges two beams incoming from different directions. Most commonly, there is a beam circulating in a synchrotron, in the form of a particle train which only partially fills the arc. As soon as the circulating particle train has passed the kicker, it is switched on so that an additional batch of particles may be appended to the train. The magnet must then be switched off in time to not affect the head of the train when it next rounds the synchrotron. An ejection kicker magnet does the opposite, diverting a circulating beam so it leaves the synchrotron. Almost always, an ejection kicker is used to eject the entire particle train, emptying the synchrotron. This means that it has the entire tail-to-head gap in the synchrotron to function, and the switch-off time is essentially irrelevant. However, it must hold a stable field for longer (one full rotation of the synchrotron), and must generate a stronger magnetic field, as it is used to eject a higher energy beam that has been accelerated in the synchrotron. The magnets are powered by a high voltage (usually in the range of tens of thousands of volts) source called a power modulator which uses a pulse forming network to produce a short pulse of current (usually in the range of a few nanoseconds to a microsecond and thousands of amperes in amplitude). The current produces a magnetic field in the magnet, which in turn imparts a Lorentz force on the particles as they traverse the magnet's length, causing the beam to deflect into the proper trajectory. Because a kicker magnet applies a particular lateral impulse to the beam, to achieve a fixed deflection angle the strength of the kick must be accurately matched to the momentum of the particles. This is part of the power modulator's job. References Accelerator physics
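As a supplementary illustration of why the kick strength must track the beam momentum, the deflection of a relativistic particle passing through a kicker can be written with the standard small-angle relation below; this is textbook accelerator physics rather than material from the article above, and it assumes a uniform field B over an effective magnetic length L.

```latex
\Delta p_{\perp} \approx q B L,
\qquad
\theta \approx \frac{\Delta p_{\perp}}{p} = \frac{q B L}{p} = \frac{B L}{B\rho},
\qquad
B\rho = \frac{p}{q}\ \text{(magnetic rigidity)}
```

For a fixed geometry (fixed deflection angle and magnet length), the required field therefore scales linearly with the beam momentum, consistent with the statement above that ejection kickers must produce stronger fields for the higher-energy, fully accelerated beam.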
Kicker magnet
[ "Physics" ]
460
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
222,676
https://en.wikipedia.org/wiki/Lewis%20acids%20and%20bases
A Lewis acid (named for the American physical chemist Gilbert N. Lewis) is a chemical species that contains an empty orbital which is capable of accepting an electron pair from a Lewis base to form a Lewis adduct. A Lewis base, then, is any species that has a filled orbital containing an electron pair which is not involved in bonding but may form a dative bond with a Lewis acid to form a Lewis adduct. For example, NH3 is a Lewis base, because it can donate its lone pair of electrons. Trimethylborane [(CH3)3B] is a Lewis acid as it is capable of accepting a lone pair. In a Lewis adduct, the Lewis acid and base share an electron pair furnished by the Lewis base, forming a dative bond. In the context of a specific chemical reaction between NH3 and Me3B, a lone pair from NH3 will form a dative bond with the empty orbital of Me3B to form an adduct NH3•BMe3. The terminology refers to the contributions of Gilbert N. Lewis. The terms nucleophile and electrophile are sometimes interchangeable with Lewis base and Lewis acid, respectively. These terms, especially their abstract noun forms nucleophilicity and electrophilicity, emphasize the kinetic aspect of reactivity, while the Lewis basicity and Lewis acidity emphasize the thermodynamic aspect of Lewis adduct formation. Depicting adducts In many cases, the interaction between the Lewis base and Lewis acid in a complex is indicated by an arrow pointing from the Lewis base toward the Lewis acid, using the notation of a dative bond — for example, Me3B←NH3. Some sources indicate the Lewis base with a pair of dots (the explicit electrons being donated), which allows consistent representation of the transition from the base itself to the complex with the acid. A center dot may also be used to represent a Lewis adduct, such as NH3·BMe3. Another example is boron trifluoride diethyl etherate, BF3·OEt2. In a slightly different usage, the center dot is also used to represent hydrate coordination in various crystals, as in MgSO4·7H2O for hydrated magnesium sulfate, irrespective of whether the water forms a dative bond with the metal. Although there have been attempts to use computational and experimental energetic criteria to distinguish dative bonding from non-dative covalent bonds, for the most part, the distinction merely makes note of the source of the electron pair, and dative bonds, once formed, behave simply as other covalent bonds do, though they typically have considerable polar character. Moreover, in some cases (e.g., sulfoxides and amine oxides, written as R2S→O and R3N→O), the use of the dative bond arrow is just a notational convenience for avoiding the drawing of formal charges. In general, however, the donor–acceptor bond is viewed as simply somewhere along a continuum between idealized covalent bonding and ionic bonding. Lewis acids Lewis acids are diverse and the term is used loosely. The simplest are those that react directly with the Lewis base, such as boron trihalides and the pentahalides of phosphorus, arsenic, and antimony. In the same vein, the methyl cation CH3+ can be considered to be the Lewis acid in methylation reactions. However, the methyl cation never occurs as a free species in the condensed phase, and methylation reactions by reagents like CH3I take place through the simultaneous formation of a bond from the nucleophile to the carbon and cleavage of the bond between carbon and iodine (SN2 reaction). Textbooks disagree on this point: some assert that alkyl halides are electrophiles but not Lewis acids, while others describe alkyl halides (e.g. CH3Br) as a type of Lewis acid. 
The IUPAC states that Lewis acids and Lewis bases react to form Lewis adducts, and defines electrophiles as Lewis acids. Simple Lewis acids Some of the most studied examples of such Lewis acids are the boron trihalides and organoboranes: BF3 + F− → BF4− In this adduct, all four fluoride centres (or more accurately, ligands) are equivalent. BF3 + OMe2 → BF3OMe2 Both BF4− and BF3OMe2 are Lewis base adducts of boron trifluoride. Many adducts violate the octet rule, such as the triiodide anion: I2 + I− → I3− The variability of the colors of iodine solutions reflects the variable abilities of the solvent to form adducts with the Lewis acid I2. Some Lewis acids bind with two Lewis bases, a famous example being the formation of hexafluorosilicate: SiF4 + 2 F− → SiF62− Complex Lewis acids Most compounds considered to be Lewis acids require an activation step prior to formation of the adduct with the Lewis base. Complex compounds such as Et3Al2Cl3 and AlCl3 are treated as trigonal planar Lewis acids but exist as aggregates and polymers that must be degraded by the Lewis base. A simpler case is the formation of adducts of borane. Monomeric BH3 does not exist appreciably, so the adducts of borane are generated by degradation of diborane: B2H6 + 2 H− → 2 BH4− In this case, an intermediate can be isolated. Many metal complexes serve as Lewis acids, but usually only after dissociating a more weakly bound Lewis base, often water. [Mg(H2O)6]2+ + 6 NH3 → [Mg(NH3)6]2+ + 6 H2O H+ as Lewis acid The proton (H+) is one of the strongest but is also one of the most complicated Lewis acids. It is convention to ignore the fact that a proton is heavily solvated (bound to solvent). With this simplification in mind, acid-base reactions can be viewed as the formation of adducts: H+ + NH3 → NH4+ H+ + OH− → H2O Applications of Lewis acids A typical example of a Lewis acid in action is in the Friedel–Crafts alkylation reaction. The key step is the acceptance by AlCl3 of a chloride ion lone-pair, forming AlCl4− and creating the strongly acidic, that is, electrophilic, carbonium ion. RCl + AlCl3 → R+ + AlCl4− Lewis bases A Lewis base is an atomic or molecular species where the highest occupied molecular orbital (HOMO) is highly localized. Typical Lewis bases are conventional amines such as ammonia and alkyl amines. Other common Lewis bases include pyridine and its derivatives. Some of the main classes of Lewis bases are: amines of the formula NH3−xRx, where R = alkyl or aryl (related to these are pyridine and its derivatives); phosphines of the formula PR3−xArx; and compounds of O, S, Se and Te in oxidation state −2, including water, ethers and ketones. The most common Lewis bases are anions. The strength of Lewis basicity correlates with the pKa of the parent acid: acids with high pKa's give good Lewis bases. As usual, a weaker acid has a stronger conjugate base. Examples of Lewis bases based on the general definition of electron pair donor include: simple anions, such as H− and F− other lone-pair-containing species, such as H2O, NH3, HO−, and CH3− complex anions, such as sulfate electron-rich π-system Lewis bases, such as ethyne, ethene, and benzene The strength of Lewis bases has been evaluated for various Lewis acids, such as I2, SbCl5, and BF3. Applications of Lewis bases Nearly all electron pair donors that form compounds by binding transition elements can be viewed as ligands. Thus, a large application of Lewis bases is to modify the activity and selectivity of metal catalysts. 
Chiral Lewis bases, generally multidentate, confer chirality on a catalyst, enabling asymmetric catalysis, which is useful for the production of pharmaceuticals. The industrial synthesis of the anti-hypertension drug mibefradil uses a chiral Lewis base (R-MeOBIPHEP), for example. Hard and soft classification Lewis acids and bases are commonly classified according to their hardness or softness. In this context, hard implies small and nonpolarizable and soft indicates larger atoms that are more polarizable. typical hard acids: H+, alkali/alkaline earth metal cations, boranes, Zn2+ typical soft acids: Ag+, Mo(0), Ni(0), Pt2+ typical hard bases: ammonia and amines, water, carboxylates, fluoride and chloride typical soft bases: organophosphines, thioethers, carbon monoxide, iodide For example, an amine will displace phosphine from the adduct with the acid BF3. In the same way, bases could be classified. For example, bases donating a lone pair from an oxygen atom are harder than bases donating through a nitrogen atom. Although the classification was never quantified, it proved to be very useful in predicting the strength of adduct formation, using the key concepts that hard acid—hard base and soft acid—soft base interactions are stronger than hard acid—soft base or soft acid—hard base interactions. Later investigation of the thermodynamics of the interaction suggested that hard—hard interactions are enthalpy favored, whereas soft—soft are entropy favored. Quantifying Lewis acidity Many methods have been devised to evaluate and predict Lewis acidity. Many are based on spectroscopic signatures such as shifts of NMR signals or IR bands, e.g. the Gutmann-Beckett method and the Childs method. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigned E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is −ΔH = EAEB + CACB + W The W term represents a constant energy contribution for an acid–base reaction such as the cleavage of a dimeric acid or base. The equation predicts reversals of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths, and that single-property scales are limited to a smaller range of acids or bases. History The concept originated with Gilbert N. Lewis, who studied chemical bonding. In 1923, Lewis wrote: "An acid substance is one which can employ an electron lone pair from another molecule in completing the stable group of one of its own atoms." The Brønsted–Lowry acid–base theory was published in the same year. The two theories are distinct but complementary. A Lewis base is also a Brønsted–Lowry base, but a Lewis acid does not need to be a Brønsted–Lowry acid. The classification into hard and soft acids and bases (HSAB theory) followed in 1963. The strength of Lewis acid-base interactions, as measured by the standard enthalpy of formation of an adduct, can be predicted by the Drago–Wayland two-parameter equation. Reformulation of Lewis theory Lewis had suggested in 1916 that two atoms are held together in a chemical bond by sharing a pair of electrons. When each atom contributed one electron to the bond, it was called a covalent bond. 
When both electrons came from one of the atoms, it was called a dative covalent bond or coordinate bond. The distinction is not very clear-cut. For example, in the formation of an ammonium ion from ammonia and a hydrogen ion, the ammonia molecule donates a pair of electrons to the proton; the identity of the electrons is lost in the ammonium ion that is formed. Nevertheless, Lewis suggested that an electron-pair donor be classified as a base and an electron-pair acceptor be classified as an acid. A more modern definition of a Lewis acid is an atomic or molecular species with a localized empty atomic or molecular orbital of low energy. This lowest unoccupied molecular orbital (LUMO) can accommodate a pair of electrons. Comparison with Brønsted–Lowry theory A Lewis base is often a Brønsted–Lowry base as it can donate a pair of electrons to H+; the proton is a Lewis acid as it can accept a pair of electrons. The conjugate base of a Brønsted–Lowry acid is also a Lewis base as loss of H+ from the acid leaves those electrons which were used for the A–H bond as a lone pair on the conjugate base. However, a Lewis base can be very difficult to protonate, yet still react with a Lewis acid. For example, carbon monoxide is a very weak Brønsted–Lowry base but it forms a strong adduct with BF3. In another comparison of Lewis and Brønsted–Lowry acidity by Brown and Kanner, 2,6-di-t-butylpyridine reacts to form the hydrochloride salt with HCl but does not react with BF3. This example demonstrates that steric factors, in addition to electron configuration factors, play a role in determining the strength of the interaction between the bulky di-t-butylpyridine and the tiny proton. See also Acid Base (chemistry) Acid–base reaction Brønsted–Lowry acid–base theory Chiral Lewis acid Frustrated Lewis pair Gutmann–Beckett method ECW model Philosophy of chemistry References Further reading Acid–base chemistry Acids Bases (chemistry)
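A minimal numeric sketch of the ECW relation described in the Quantifying Lewis acidity section above: the snippet evaluates −ΔH = EAEB + CACB + W for one acid–base pair. The parameter values used here are illustrative placeholders, not tabulated ECW constants.

```python
def ecw_adduct_enthalpy(e_a: float, c_a: float, e_b: float, c_b: float, w: float = 0.0) -> float:
    """Return -dH (kcal/mol) of adduct formation in the ECW model.

    E terms are the electrostatic contributions, C terms the covalent
    contributions, and W is a constant cost such as cleaving a dimeric acid.
    """
    return e_a * e_b + c_a * c_b + w


# Hypothetical parameters, for illustration only (not literature values).
minus_delta_h = ecw_adduct_enthalpy(e_a=1.6, c_a=1.0, e_b=1.2, c_b=2.0)
print(f"-dH = {minus_delta_h:.2f} kcal/mol")
```

Because each acid and each base carries two parameters, relative strengths can reverse depending on which term dominates, which is why no single ordering of Lewis acid or base strength exists.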
Lewis acids and bases
[ "Chemistry" ]
2,889
[ "Acid–base chemistry", "Acids", "Equilibrium chemistry", "Bases (chemistry)" ]
222,685
https://en.wikipedia.org/wiki/Dilithium
Dilithium, Li2, is a strongly electrophilic, diatomic molecule comprising two lithium atoms covalently bonded together. Li2 has been observed in the gas phase. It has a bond order of 1, an internuclear separation of 267.3 pm and a bond energy of 102 kJ/mol or 1.06 eV in each bond. The electron configuration of Li2 may be written as σ2. Being the third-lightest stable neutral homonuclear diatomic molecule (after dihydrogen and dihelium), dilithium is an extremely important model system for studying fundamentals of physics, chemistry, and electronic structure theory. It is the most thoroughly characterized compound in terms of the accuracy and completeness of the empirical potential energy curves of its electronic states. Analytic empirical potential energy curves have been constructed for the X-state, a-state, A-state, c-state, B-state, 2d-state, l-state, E-state, and the F-state. The most reliable of these potential energy curves are of the Morse/Long-range variety. Li2 potentials are often used to extract atomic properties. For example, the C3 value for atomic lithium extracted from the A-state potential of Li2 by Le Roy et al. is more precise than any previously measured atomic oscillator strength. This lithium oscillator strength is related to the radiative lifetime of atomic lithium and is used as a benchmark for atomic clocks and measurements of fundamental constants. See also Morse/Long-range potential Dilithium (Star Trek) References Further reading Lithium Homonuclear diatomic molecules Allotropes
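As a rough illustration of the kind of analytic potential curve discussed above (an ordinary Morse function, not one of the published Morse/Long-range fits), the sketch below uses the ground-state well depth and bond length quoted in the article; the width parameter a is an arbitrary illustrative value, not a fitted constant.

```python
import math

D_E = 1.06    # well depth in eV, from the bond energy quoted above
R_E = 2.673   # equilibrium separation in angstroms (267.3 pm)
A = 0.8       # width parameter in 1/angstrom; arbitrary, for illustration only

def morse_potential(r: float) -> float:
    """Morse potential V(r) = De * (1 - exp(-a*(r - re)))**2, measured from the well minimum, in eV."""
    return D_E * (1.0 - math.exp(-A * (r - R_E))) ** 2

for r in (2.0, 2.673, 3.5, 6.0):
    print(f"r = {r:.3f} angstrom -> V = {morse_potential(r):.3f} eV")
```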
Dilithium
[ "Physics", "Chemistry" ]
375
[ "Periodic table", "Properties of chemical elements", "Allotropes", "Materials", "Matter" ]
222,834
https://en.wikipedia.org/wiki/Model%20airport
A model airport is a scale model of an airport. While airport models have existed, in a way, for as long as airfields have been open to the public, early model airports were essentially public showcases of the airport and its surroundings, usually located inside the airports themselves. Since Herpa Wings's introduction of their airport set series to their line of airline-related toys, there has been an increase in the number of aircraft modelers who have made mock airports to showcase their private collections of model aircraft. Often, the collector will model their airport after a real-life airport. Model airports can be made to look very realistic, with many real airport features such as terminals, control towers, cargo terminals, hangars, passenger bridges and more. Companies such as Gemini Jets, Herpa Wings, and JC Wings have produced ground support equipment in various scales. Collectors who make model airports may use die-cast models for their creation. Among the brands of die-cast aircraft models most commonly used on these airports are Aeroclassics, Herpa Wings, Dragon Models Limited, Gemini Jets, Phoenix Models, JC Wings, and NG Model. In 2011, what may be the world's largest model airport opened for public view at Miniatur Wunderland in Hamburg, Germany. The model, named Knuffingen International Airport, is based on Hamburg International Airport. Another popular park in Europe, Madurodam in the Netherlands, includes a model airport featuring models of several airlines such as KLM, Emirates, Lufthansa, EVA Air, Turkish Airlines, UPS Airlines, Transavia, Thai Airways, Korean Air, Delta Air Lines, and an A380 of Singapore Airlines, alongside a DHL-branded Airbus A300. The Madurodam airport is based on Amsterdam's Schiphol Airport. In addition, the TWA Hotel in New York features a model airport that demonstrates how the TWA airline's operations center looked at New York's John F. Kennedy Airport in the 1960s and 1970s. This model airport was made using TWA model aircraft, runways, and buildings in 1:400 scale. See also Diorama Model building References Scale modeling
Model airport
[ "Physics" ]
443
[ "Scale modeling" ]
223,011
https://en.wikipedia.org/wiki/Teratology
Teratology is the study of abnormalities of physiological development in organisms during their life span. It is a sub-discipline in medical genetics which focuses on the classification of congenital abnormalities in dysmorphology caused by teratogens; teratogens are also studied in pharmacology and toxicology. Teratogens are substances that may cause non-heritable birth defects via a toxic effect on an embryo or fetus. Defects include malformations, disruptions, deformations, and dysplasia that may cause stunted growth, delayed mental development, or other congenital disorders that lack structural malformations. The related term developmental toxicity includes all manifestations of abnormal development that are caused by environmental insult. The extent to which teratogens will impact an embryo is dependent on several factors, such as how long the embryo has been exposed, the stage of development the embryo was in when exposed (gestational timing), the genetic makeup of the embryo, and the transfer rate of the teratogen. The dose of the teratogen, the route of exposure to the teratogen, and the chemical nature of the teratogenic agent also contribute to the level of teratogenicity. Etymology The term was borrowed in 1842 from the French tératologie, where it was formed in 1830 from the Greek τέρας (word stem τερατ-), meaning "sign sent by the gods, portent, marvel, monster", and -λογία (-ology), used to designate a discourse, treatise, science, theory, or study of some topic. Old literature referred to abnormalities of all kinds under the Latin term Lusus naturae (lit. "freak of nature"). As early as the 17th century, teratology referred to a discourse on prodigies and marvels of anything so extraordinary as to seem abnormal. In the 19th century, it acquired a meaning more closely related to biological deformities, mostly in the field of botany. Currently, its most instrumental meaning is that of the medical study of teratogenesis, congenital malformations or individuals with significant malformations. Historically, people have used many pejorative terms to describe or label cases of significant physical malformations. In the 1960s, David W. Smith of the University of Washington Medical School (one of the researchers who became known in 1973 for the discovery of fetal alcohol syndrome) popularized the term teratology. With the growth of understanding of the origins of birth defects, the field of teratology overlaps with other fields of science, including developmental biology, embryology, and genetics. Until the 1940s, teratologists regarded birth defects as primarily hereditary. In 1941, the first well-documented cases of environmental agents being the cause of severe birth defects were reported. Teratogenesis Wilson's principles In 1959 and in his 1973 monograph Environment and Birth Defects, embryologist James Wilson put forth six principles of teratogenesis to guide the study and understanding of teratogenic agents and their effects on developing organisms. These principles were derived from, and expanded on, those laid out by zoologist Camille Dareste in the late 1800s: Susceptibility to teratogenesis depends on the genotype of the conceptus and the manner in which this interacts with adverse environmental factors. Susceptibility to teratogenesis varies with the developmental stage at the time of exposure to an adverse influence. There are critical periods of susceptibility to agents and organ systems affected by these agents.
Teratogenic agents act in specific ways on developing cells and tissues to initiate sequences of abnormal developmental events. The access of adverse influences to developing tissues depends on the nature of the influence. Several factors affect the ability of a teratogen to contact a developing conceptus, such as the nature of the agent itself, route and degree of maternal exposure, rate of placental transfer and systemic absorption, and composition of the maternal and embryonic/fetal genotypes. There are four manifestations of deviant development (death, malformation, growth retardation and functional defect). Manifestations of deviant development increase in frequency and degree as dosage increases from the No Observable Adverse Effect Level (NOAEL) to a dose producing 100% lethality (LD100). Causes Common causes of teratogenesis include: Genetic disorders and chromosomal abnormalities Maternal health factors Nutrition during pregnancy (e.g., spina bifida resulting from folate deficiency) Metabolic disorders such as diabetes and thyroid disease Stress Chemical agents Prescription and recreational drugs (e.g., alcohol, thalidomide) Environmental toxins and contaminants (e.g., heavy metals such as mercury and lead, polychlorinated biphenyls (PCBs)) Vertically transmitted infections such as rubella and syphilis Ionizing radiation such as X-rays and that emitted from nuclear fallout Temperatures outside the accepted range for a given organism Human pregnancy In humans, congenital disorders resulted in about 510,000 deaths globally in 2010. About 3% of newborns have a "major physical anomaly", meaning a physical anomaly that has cosmetic or functional significance. Congenital disorders are responsible for 20% of infant deaths. The most common congenital diseases are heart defects, Down syndrome, and neural tube defects. Trisomy 21 is the most common type of Down syndrome; about 95% of infants born with Down syndrome have this form, in which there are three separate copies of chromosome 21. Translocation Down syndrome is not as common, as only 3% of infants with Down syndrome are diagnosed with this type. Ventricular septal defect (VSD) is the most common type of heart defect in infants. If an infant has a large VSD, it can result in heart failure. Infants with a smaller VSD have a 96% survival rate and those with a moderate VSD have about an 86% survival rate. Lastly, a neural tube defect (NTD) is a defect that forms in the brain and spine during early development. If the spinal cord is exposed and touching the skin, it can require surgery to prevent an infection. Medicines Acitretin Acitretin is highly teratogenic and noted for the possibility of severe birth defects. It should not be used by pregnant women or women planning to get pregnant within three years following the use of acitretin. Sexually active women of childbearing age who use acitretin should also use at least two forms of birth control concurrently. Men and women who use it should not donate blood for three years after using it, because of the possibility that the blood might be used in a pregnant patient and cause birth defects. In addition, it may cause nausea, headache, itching, dry, red or flaky skin, dry or red eyes, dry or chapped lips, swollen lips, dry mouth, thirst, cystic acne or hair loss. Etretinate Etretinate (trade name Tegison) is a medication developed by Hoffmann–La Roche that was approved by the FDA in 1986 to treat severe psoriasis. It is a second-generation retinoid.
It was subsequently removed from the Canadian market in 1996 and the United States market in 1998 due to the high risk of birth defects. It remains on the market in Japan as Tigason. Vaccination In humans, vaccination has become readily available, and is important for the prevention of various communicable diseases such as polio and rubella, among others. There has been no association between congenital malformations and vaccination; for example, a population-wide study in Finland in which expectant mothers received the oral polio vaccine found no difference in infant outcomes when compared with mothers from reference cohorts who had not received the vaccine. However, on grounds of theoretical risk, it is still not recommended to vaccinate for polio while pregnant unless there is risk of infection. An important exception to this relates to provision of the influenza vaccine while pregnant. During the 1918 and 1957 influenza pandemics, mortality from influenza in pregnant women was 45%. In a 2005 study of vaccination during pregnancy, Munoz et al. demonstrated that there was no adverse outcome observed in the new infants or mothers, suggesting that the balance of risk between infection and vaccination favored preventative vaccination. Reproductive hormones and hormone replacement therapy A fetus can be affected during pregnancy by exposure to various substances. Among these are female reproductive hormones and hormone replacement drugs such as estrogen and progesterone, which are essential for reproductive health but whose synthetic alternatives pose concerns. Such exposure can cause a multitude of congenital abnormalities and deformities, many of which can ultimately affect the fetus and even the mother's reproductive system in the long term. A study conducted from 2015 to 2018 found an increased risk of both maternal and neonatal complications developing as a result of hormone replacement therapy cycles conducted during pregnancy, especially with regard to hormones such as estrogen, testosterone and thyroid hormone. When hormones such as estrogen and testosterone are replaced, the fetus may be stunted in growth, born prematurely with a lower birth weight, or develop mental retardation, while the mother's ovarian reserve may be depleted and ovarian follicular recruitment increased. Withdrawn drugs Thalidomide Thalidomide was once prescribed therapeutically from the 1950s to the early 1960s in Europe as an anti-nausea medication to alleviate morning sickness among pregnant women. While the exact mechanism of action of thalidomide is not known, it is thought to be related to inhibition of angiogenesis through interaction with the insulin-like growth factor (IGF-1) and fibroblast growth factor 2 (FGF-2) pathways. In the 1960s, it became apparent that thalidomide altered embryo development and led to limb deformities such as thumb absence, underdevelopment of entire limbs, or phocomelia. Thalidomide may have caused teratogenic effects in over 10,000 babies worldwide. Recreational drugs Alcohol In the US, alcohol is subject to the FDA drug labeling Pregnancy Category X (Contraindicated in pregnancy). Alcohol is known to cause fetal alcohol spectrum disorder. There is a wide range of effects that prenatal alcohol exposure (PAE) can have on a developing fetus.
Some of the most prominent possible outcomes include the development of fetal alcohol syndrome, a reduction in brain volume, stillbirths, spontaneous abortions, impairments of the nervous system, and much more. Fetal alcohol syndrome has numerous symptoms, which may include cognitive impairments and impairment of the facial features. PAE remains the leading cause of birth defects and neurodevelopmental abnormalities in the United States, affecting 9.1 to 50 per 1000 live births in the U.S. and 68.0 to 89.2 per 1000 in populations with high levels of alcohol use. Tobacco Consuming tobacco products while pregnant or breastfeeding can have significant negative impacts on the health and development of the unborn child and newborn infant. Lead exposure during pregnancy Long before modern science, it was understood that heavy metals could cause negative effects to those who were exposed. The Greek physician Pedanius Dioscorides described the effects of lead exposure as something that "makes the mind give way." Lead exposure in adults can lead to cardiological, renal, reproductive, and cognitive issues that are often irreversible; however, lead exposure during pregnancy can be detrimental to the long-term health of the fetus. Exposure to lead during pregnancy is well known to have teratogenic effects on the development of a fetus. Specifically, fetal exposure to lead can cause cognitive impairment, premature births, unplanned abortions, ADHD, and much more. Lead exposure during the first trimester of pregnancy leads to the greatest predictability of cognitive development issues after birth. Low socioeconomic status correlates to a higher probability of lead exposure. A well-known recent example of lead exposure and the impacts it can have on a population was the 2014 water crisis in Flint, Michigan. Researchers have found that female fetuses developed at a higher rate than male fetuses in Flint when compared to surrounding areas. The higher rate of female births indicated a problem because male fetuses are more sensitive to pregnancy hazards than female fetuses. Other animals Fossil record Evidence for congenital deformities found in the fossil record is studied by paleopathologists, specialists in ancient disease and injury. Fossils bearing evidence of congenital deformity are scientifically significant because they can help scientists infer the evolutionary history of life's developmental processes. For instance, because a Tyrannosaurus rex specimen has been discovered with a block vertebra, it means that vertebrae have been developing the same basic way since at least the most recent common ancestor of dinosaurs and mammals. Other notable fossil deformities include a hatchling specimen of the bird-like dinosaur Troodon, the tip of whose jaw was twisted. Another notably deformed fossil was a specimen of the Choristodera Hyphalosaurus, which had two heads, the oldest known example of polycephaly. Thalidomide and chick limb development Thalidomide is a teratogen known to be significantly detrimental to organ and limb development during embryogenesis. It has been observed in chick embryos that exposure to thalidomide can induce limb outgrowth deformities, due to increased oxidative stress interfering with the Wnt signaling pathway, increasing apoptosis, and damaging immature blood vessels in developing limb buds. Retinoic acid and mouse limb development Retinoic acid (RA) is significant in embryonic development.
It helps direct limb patterning in the developing embryos of mice and other vertebrates. For example, during regeneration of a newt limb, an increased amount of RA shifts the regenerating limb toward a more proximal identity relative to the distal blastema, and the extent of proximalization of the limb increases with the amount of RA present during the regeneration process. A study looked at intracellular RA activity in mice in relation to the RA-regulating CYP26 enzymes, which play a critical role in metabolizing RA. This study also helps to reveal that RA is significant in various aspects of limb development in an embryo; however, irregular control or excess amounts of RA can have teratogenic impacts, causing malformations of the developing limb. The study looked specifically at CYP26B1, which is highly expressed in regions of limb development in mice. The lack of CYP26B1 was shown to cause a spread of RA signal towards the distal section of the limb, causing proximo-distal patterning irregularities of the limb. In addition to this spreading of RA, a deficiency in CYP26B1 also induced apoptosis in the developing mouse limb and delayed the maturation of chondrocytes, the cells that secrete the cartilage matrix that is significant for limb structure. The study also looked at limb development in wild-type mice, that is, mice with no CYP26B1 deficiency, which had an excess amount of RA present in the embryo. The results showed an impact on limb patterning similar to that seen in mice with the CYP26B1 deficiency, meaning that a proximo-distal patterning deficiency was still observed when excess RA was present. This suggests that RA plays the role of a morphogen specifying proximo-distal patterning of limb development in mouse embryos, and that CYP26B1 is needed to prevent apoptosis of those limb tissues and to further the proper development of mouse limbs in vivo. Rat development and lead exposure There has been evidence of teratogenic effects of lead in rats as well. An experiment was conducted where pregnant rats were given drinking water, before and during pregnancy, that contained lead. Many detrimental effects and signs of teratogenesis were found, such as negative impacts on the formation of the cerebellum, fetal mortality, and developmental issues for various parts of the body. Plants In botany, teratology investigates the theoretical implications of abnormal specimens. For example, the discovery of abnormal flowers—for example, flowers with leaves instead of petals, or flowers with staminoid pistils—furnished important evidence for the "foliar theory", the theory that all flower parts are highly specialised leaves. In plants, such specimens are denoted as 'lusus naturae' ('sports of nature', abbreviated as 'lus.'); and occasionally as 'ter.', 'monst.', or 'monstr.'.
Types of deformations in plants Plants can have mutations that lead to different types of deformations, such as: Fasciation: Development of the apex (growing tip) in a flat plane perpendicular to the axis of elongation Variegation: Degeneration of genes, manifesting itself among other things by anomalous pigmentation Virescence: Anomalous development of a green pigmentation in unexpected parts of the plant Phyllody: Floral organs or fruits transformed into leaves Witch's broom: Unusually high multiplication of branches in the upper part of the plant, mainly in a tree Pelorism: Zygomorphic flowers regress to their ancestral actinomorphic symmetry Proliferation: Repetitive growth of an entire organ, such as a flower Research Studies designed to test the teratogenic potential of environmental agents use animal model systems (e.g., rat, mouse, rabbit, dog, and monkey). Early teratologists exposed pregnant animals to environmental agents and observed the fetuses for gross visceral and skeletal abnormalities. While this is still part of the teratological evaluation procedures today, the field of teratology is moving to a more molecular level, seeking the mechanism(s) of action by which these agents act. One example of this is the use of mammalian animal models to evaluate the molecular role of teratogens in the development of embryonic populations, such as the neural crest, which can lead to the development of neurocristopathies. Genetically modified mice are commonly used for this purpose. In addition, pregnancy registries are large, prospective studies that monitor exposures women receive during their pregnancies and record the outcome of their births. These studies provide information about possible risks of medications or other exposures in human pregnancies. Prenatal alcohol exposure (PAE) can produce craniofacial malformations, a phenotype that is visible in fetal alcohol syndrome. Current evidence suggests that craniofacial malformations occur via apoptosis of neural crest cells, interference with neural crest cell migration, and disruption of sonic hedgehog (Shh) signaling. Understanding how a teratogen causes its effect is not only important in preventing congenital abnormalities but also has the potential for developing new therapeutic drugs safe for use with pregnant women. See also Carcinogen Congenital abnormalities Mutagen Polydactyly Retinoic acid Teratoma Thalidomide References Graham, Dr. Olga: (York University) AUTISM: THE TERATOGEN FALLOUT ISBN 978-0-9689383-1-7 External links Society of Teratology European Teratology Society Organization of Teratology Information Specialists March of Dimes Foundation A Telling of Wonders: Teratology in Western Medicine through 1800 (New York Academy of Medicine Historical Collections) The Reproductive Toxicology Center Database Alcohol and health Developmental biology Radiation health effects Substance-related disorders Teratogens
Teratology
[ "Chemistry", "Materials_science", "Biology" ]
4,037
[ "Radiation health effects", "Behavior", "Developmental biology", "Reproduction", "Radiation effects", "Teratogens", "Radioactivity" ]
223,160
https://en.wikipedia.org/wiki/Gantt%20chart
A Gantt chart is a bar chart that illustrates a project schedule. It was designed and popularized by Henry Gantt around the years 1910–1915. Modern Gantt charts also show the dependency relationships between activities and the current schedule status. Definition A Gantt chart is a type of bar chart that illustrates a project schedule. This chart lists the tasks to be performed on the vertical axis, and time intervals on the horizontal axis. The width of the horizontal bars in the graph shows the duration of each activity. Gantt charts illustrate the start and finish dates of the terminal elements and summary elements of a project. Terminal elements and summary elements constitute the work breakdown structure of the project. Modern Gantt charts also show the dependency (i.e., precedence network) relationships between activities. Gantt charts can be used to show current schedule status using percent-complete shadings and a vertical "TODAY" line. Gantt charts are sometimes equated with bar charts. Gantt charts are usually created initially using an early start time approach, where each task is scheduled to start immediately when its prerequisites are complete. This method maximizes the float time available for all tasks. History Widely used in project planning in the present day, Gantt charts were considered revolutionary when introduced. The first known tool of this type was developed in 1896 by Karol Adamiecki, who called it a harmonogram. Adamiecki, however, published his chart only in Russian and Polish, which limited both its adoption and recognition of his authorship. In 1912, Schürch published what could be considered Gantt charts while discussing a construction project. Charts of the type published by Schürch appear to have been in common use in Germany at the time; however, the prior development leading to Schürch's work is unclear. Unlike later Gantt charts, Schürch's charts did not display interdependencies, leaving them to be inferred by the reader. These were also static representations of a planned schedule. The chart is named after Henry Gantt (1861–1919), who designed his chart around the years 1910–1915. Gantt originally created his tool for systematic, routine operations. He designed this visualization tool to more easily measure productivity levels of employees and gauge which employees were under- or over-performing. Gantt also frequently included graphics and other visual indicators in his charts to track performance. One of the first major applications of Gantt charts was by the United States during World War I, at the instigation of General William Crozier. The earliest Gantt charts were drawn on paper and therefore had to be redrawn entirely in order to adjust to schedule changes. For many years, project managers used pieces of paper or blocks for Gantt chart bars so they could be adjusted as needed. Gantt's collaborator Walter Polakov introduced Gantt charts to the Soviet Union in 1929 when he was working for the Supreme Soviet of the National Economy. They were used in developing the First Five-Year Plan, with Russian translations supplied to explain their use. In the 1980s, personal computers allowed widespread creation of complex and elaborate Gantt charts. The first desktop applications were intended mainly for project managers and project schedulers. With the advent of the Internet and increased collaboration over networks at the end of the 1990s, Gantt charts became a common feature of web-based applications, including collaborative groupware.
By 2012, almost all Gantt charts were made by software, which can easily adjust to schedule changes. In 1999, Gantt charts were identified as "one of the most widely used management tools for project scheduling and control". Example In this example there are seven tasks, labeled a through g. Some tasks can be done concurrently (a and b) while others cannot be done until their predecessor task is complete (c and d cannot begin until a is complete). Additionally, each task has three time estimates: the optimistic time estimate (O), the most likely or normal time estimate (M), and the pessimistic time estimate (P). The expected time (TE) is estimated using the beta probability distribution for the time estimates, using the formula (O + 4M + P) ÷ 6. Once this step is complete, one can draw a Gantt chart or a network diagram. Progress Gantt charts In a progress Gantt chart, tasks are shaded in proportion to the degree of their completion: a task that is 60% complete would be 60% shaded, starting from the left. A vertical line is drawn at the time index when the progress Gantt chart is created, and this line can then be compared with shaded tasks. If everything is on schedule, all task portions left of the line will be shaded, and all task portions right of the line will not be shaded. This provides a visual representation of whether the project and its tasks are ahead of or behind schedule. Linked Gantt charts Linked Gantt charts contain lines indicating the dependencies between tasks. However, linked Gantt charts quickly become cluttered in all but the simplest cases. Critical path network diagrams are superior for visually communicating the relationships between tasks. Nevertheless, Gantt charts are often preferred over network diagrams because Gantt charts are easily interpreted without training, whereas critical path diagrams require training to interpret. Gantt chart software typically provides mechanisms to link task dependencies, although this data may or may not be visually represented. Gantt charts and network diagrams are often used for the same project, both being generated from the same data by a software application. See also Critical path method Data and information visualization Event chain methodology Float (project management) List of project management software, which includes specific Gantt chart software. Program evaluation and review technique (PERT) Progress bar Event chain diagram Citations References Further reading External links Long-running discussion regarding limitations of the Gantt chart format, and alternatives, on Edward Tufte's website Charts Diagrams Planning Project management techniques Schedule (project management) Technical drawing
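A minimal sketch of the expected-time calculation used in the example above, combining each task's optimistic, most likely, and pessimistic estimates as (O + 4M + P) ÷ 6. The task names and durations below are placeholders, not the values from the article's example.

```python
def expected_time(optimistic: float, most_likely: float, pessimistic: float) -> float:
    """PERT (beta-distribution) estimate: TE = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6


# Placeholder estimates in days for a few tasks; not taken from the article.
tasks = {
    "a": (2, 4, 6),
    "b": (3, 5, 9),
    "c": (4, 5, 7),
}

for name, (o, m, p) in tasks.items():
    print(f"task {name}: TE = {expected_time(o, m, p):.2f} days")
```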
Gantt chart
[ "Physics", "Engineering" ]
1,260
[ "Design engineering", "Physical quantities", "Time", "Civil engineering", "Spacetime", "Technical drawing", "Schedule (project management)" ]
223,239
https://en.wikipedia.org/wiki/Euratom
The European Atomic Energy Community (EAEC or Euratom) is an international organisation established by the Euratom Treaty on 25 March 1957 with the original purpose of creating a specialist market for nuclear power in Europe, by developing nuclear energy and distributing it to its member states while selling the surplus to non-member states. However, over the years its scope has been considerably increased to cover a large variety of areas associated with nuclear power and ionising radiation as diverse as safeguarding of nuclear materials, radiation protection and construction of the International Fusion Reactor ITER. It is legally distinct from the European Union (EU) although it has the same membership, and is governed by many of the EU's institutions; but it is the only remaining community organisation that is independent of the EU and therefore outside the regulatory control of the European Parliament. Since 2014, Switzerland has also participated in Euratom programmes as an associated state. The United Kingdom ceased to be a full member of the organisation on 31 January 2020. However, under the terms of the UK–EU Trade and Cooperation Agreement, the United Kingdom participates in Euratom as an associated state following the end of the transition period on 31 December 2020. History The driving force behind the creation of Euratom was France's desire to develop nuclear energy and nuclear weapons without having to rely on the United States and/or the United Kingdom. The costs of nuclear development were also large, motivating France to share the costs with the other members of the European Coal and Steel Community (ECSC). During the negotiations to create Euratom, the United States and the United Kingdom sought to gain influence over nuclear development in Europe. The US and the UK created the European Nuclear Energy Agency (ENEA) as a way to limit the value of Euratom and gain influence over the spread of nuclear technology. The Soviet Union launched a propaganda campaign against Euratom, as it sought to stoke fears among Europeans that the organization would enable West Germany to develop nuclear weapons. The Common Assembly proposed extending the powers of the ECSC to cover other sources of energy. However, Jean Monnet, ECSC architect and President, wanted a separate community to cover nuclear power. Louis Armand was put in charge of a study into the prospects of nuclear energy use in Europe; his report concluded that further nuclear development was needed to fill the deficit left by the exhaustion of coal deposits and to reduce dependence on oil producers. However, the Benelux states and Germany were also keen on creating a general single market, although it was opposed by France due to its protectionism, and Jean Monnet thought it too large and difficult a task. In the end, Monnet proposed the creation of separate atomic energy and economic communities to reconcile both groups. The Intergovernmental Conference on the Common Market and Euratom at the Château of Val-Duchesse in 1956 drew up the essentials of the new treaties. Euratom would foster cooperation in the nuclear field, at the time a very popular area, and would, along with the EEC, share the Common Assembly and Court of Justice of the ECSC, but not its executives. Euratom would have its own Council and Commission, with fewer powers than the High Authority of the European Coal and Steel Community. 
On 25 March 1957, the Treaties of Rome (the Euratom Treaty and the EEC Treaty) were signed by the six ECSC members and on 1 January 1958 they came into force. To save on resources, these separate executives created by the Rome Treaties were merged in 1965 by the Merger Treaty. The institutions of the EEC would take over responsibilities for the running of the ECSC and Euratom, with all three then becoming known as the European Communities even if each legally existed separately. In 1993, the Maastricht Treaty created the European Union, which absorbed the Communities into the European Community pillar, yet Euratom still maintained a distinct legal personality. The European Constitution was intended to consolidate all previous treaties and increase democratic accountability in them. The Euratom treaty had not been amended as the other treaties had, so the European Parliament had been granted few powers over it. However, the reason it had gone unamended was the same reason the Constitution left it to remain separate from the rest of the EU: anti-nuclear sentiment among the European electorate, which may unnecessarily turn voters against the treaty. The Euratom treaty thus remains in force relatively unamended from its original signing. EU evolution timeline This overall timeline includes the establishment and development of Euratom, and shows that currently, it is the only former EC body that has not been incorporated into the EU. Cooperation Since 2014, Switzerland has participated in Euratom programmes as an associated state. Since January 2021, the United Kingdom participates in Euratom programmes as an associated state under the terms of the UK-EU Trade and Cooperation Agreement. As of 2024, Euratom maintains Co-operation Agreements of various scopes with ten countries: Armenia, Australia, Canada, India, Japan, Kazakhstan, South Africa, Ukraine, United States, and Uzbekistan. Withdrawal of the United Kingdom The United Kingdom announced its intention to withdraw from the EAEC on 26 January 2017, following on from its decision to withdraw from the European Union. Formal notice to withdraw from the EAEC was provided in March 2017, within the Article 50 notification letter, where the withdrawal was made explicit. Withdrawal only became effective following negotiations on the terms of the exit, which lasted two years and ten months. A report by the House of Commons Business, Energy and Industrial Strategy Committee, published in May 2017, questioned the legal necessity of leaving Euratom and called for a temporary extension of membership to allow time for new arrangements to be made. In June 2017, the European Commission's negotiations task force published a Position paper transmitted to EU27 on nuclear materials and safeguard equipment (Euratom), titled "Essential Principles on nuclear materials and safeguard equipment". The following month, a briefing paper from the House of Commons Library assessed the implications of leaving Euratom. In 2017, an article in The Independent questioned the availability of nuclear fuel to the UK after 2019 if the UK were to withdraw, and the need for new treaties relating to the transportation of nuclear materials. A 2017 article in the New Scientist stated that radioisotope supply for cancer treatments would also need to be considered in new treaties. UK politicians speculated that the UK could stay in Euratom. In 2017, some argued that this would require – beyond the consent of the EU27 – amendment or revocation of the Article 50 letter of March 2017. 
The Nuclear Safeguards Act 2018, making provision for safeguards after withdrawal from Euratom, received royal assent on 26 June 2018. The UK-EU Trade and Cooperation Agreement, outlining the UK's relationship with the European Union from 1 January 2021, makes provision for the United Kingdom's participation "as an associated country of all parts of the Euratom programme". Achievements In the history of European regulation, Article 37 of the Euratom Treaty represents pioneering legislation concerning binding transfrontier obligations with respect to environmental impact and protection of humans. President The five-member Commission was led by only three presidents while it had independent executives (1958–1967), all from France. See also EU Directorate General Joint Research Centre – often incorrectly referred to as Euratom due to EURATOM being its origin. Energy Community Energy policy of the European Union History of the European Union Institutions of the European Union International Atomic Energy Agency Nuclear energy in the European Union The nuclear part of the Seventh Framework Programme for research and technological development, the European Union's chief instrument for funding research. References External links Documents of the European Atomic Energy Community are consultable at the Historical Archives of the EU in Florence History of the Rome Treaties Online collection by the CVCE European Commission Fusion Research European Commission Fission Research International nuclear energy organizations International organizations based in Europe Energy policies and initiatives of the European Union European Union and science and technology Organizations established in 1958 Intergovernmental organizations established by treaty 1958 establishments in Europe Radiation protection organizations
Euratom
[ "Engineering" ]
1,647
[ "International nuclear energy organizations", "Nuclear organizations", "Radiation protection organizations" ]
223,352
https://en.wikipedia.org/wiki/Alternator
An alternator is an electrical generator that converts mechanical energy to electrical energy in the form of alternating current. For reasons of cost and simplicity, most alternators use a rotating magnetic field with a stationary armature. Occasionally, a linear alternator or a rotating armature with a stationary magnetic field is used. In principle, any AC electrical generator can be called an alternator, but usually, the term refers to small rotating machines driven by automotive and other internal combustion engines. An alternator that uses a permanent magnet for its magnetic field is called a magneto. Alternators in power stations driven by steam turbines are called turbo-alternators. Large 50 or 60 Hz three-phase alternators in power plants generate most of the world's electric power, which is distributed by electric power grids. History Alternating current generating systems were known in simple forms from the discovery of the magnetic induction of electric current in the 1830s. Rotating generators naturally produced alternating current, but since there was little use for it, it was normally converted into direct current via the addition of a commutator in the generator. The early machines were developed by pioneers such as Michael Faraday and Hippolyte Pixii. Faraday developed the "rotating rectangle", whose operation was heteropolar – each active conductor passed successively through regions where the magnetic field was in opposite directions. Lord Kelvin and Sebastian Ferranti also developed early alternators, producing frequencies between 100 and 300 Hz. The late 1870s saw the introduction of the first large-scale electrical systems with central generation stations to power Arc lamps, used to light whole streets, factory yards, or the interior of large warehouses. Some, such as Yablochkov arc lamps introduced in 1878, ran better on alternating current, and the development of these early AC generating systems was accompanied by the first use of the word "alternator". Supplying the proper amount of voltage from generating stations in these early systems was left up to the engineer's skill in "riding the load". In 1883 the Ganz Works invented the constant voltage generator that could produce a stated output voltage, regardless of the value of the actual load. The introduction of transformers in the mid-1880s led to the widespread use of alternating current and the use of alternators needed to produce it. After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for various alternating current frequencies between sixteen and about one hundred hertz for use with arc lighting, incandescent lighting, and electric motors. Specialized radio frequency alternators like the Alexanderson alternator were developed as longwave radio transmitters around World War 1 and used in a few high power wireless telegraphy stations before vacuum tube transmitters replaced them. Principle of operation A conductor moving relative to a magnetic field develops an electromotive force (EMF) in it (Faraday's Law). This EMF reverses its polarity when it moves under magnetic poles of opposite polarity. Typically, a rotating magnet, called the rotor, turns within a stationary set of conductors, called the stator, wound in coils on an iron core. The field cuts across the conductors, generating an induced EMF (electromotive force), as the mechanical input causes the rotor to turn. 
The rotating magnetic field induces an AC voltage in the stator windings. Since the currents in the stator windings vary in step with the position of the rotor, an alternator is a synchronous generator. The rotor's magnetic field may be produced by permanent magnets or by a field coil electromagnet. Automotive alternators use a rotor winding, which allows control of the alternator's generated voltage by varying the current in the rotor field winding. Permanent magnet machines avoid the loss due to magnetizing current in the rotor, but are restricted in size due to the cost of the magnet material. Since the permanent magnet field is constant, the terminal voltage varies directly with the speed of the generator. Brushless AC generators are usually larger than those used in automotive applications. An automatic voltage control device controls the field current to keep the output voltage constant. If the output voltage from the stationary armature coils drops due to an increase in demand, more current is fed into the rotating field coils through the voltage regulator (VR). This increases the magnetic field around the field coils, which induces a greater voltage in the armature coils. Thus, the output voltage is brought back up to its original value. Alternators used in central power stations also control the field current to regulate reactive power and to help stabilize the power system against the effects of momentary faults. Often, there are three sets of stator windings, physically offset so that the rotating magnetic field produces a three-phase current, displaced by one-third of a period with respect to each other. Synchronous speeds One cycle of alternating current is produced each time a pair of field poles passes over a point on the stationary winding. The relation between speed and frequency is f = NP/120, where f is the frequency in Hz (cycles per second), P is the number of poles (2, 4, 6, …), and N is the rotational speed in revolutions per minute (r/min). Old descriptions of alternating current systems sometimes give the frequency in terms of alternations per minute, counting each half-cycle as one alternation; so 12,000 alternations per minute corresponds to 100 Hz. An alternator's output frequency depends on the number of poles and the rotational speed. The speed corresponding to a particular frequency is called the synchronous speed. For example, a two-pole alternator must turn at 3,600 r/min to produce 60 Hz, while a four-pole machine produces 60 Hz at 1,800 r/min and 50 Hz at 1,500 r/min. Classifications Alternators may be classified by the method of excitation, number of phases, the type of rotation, cooling method, and their application. By excitation There are two main ways to produce the magnetic field used in the alternators: by using permanent magnets, which create their own persistent magnetic field, or by using field coils. The alternators that use permanent magnets are specifically called magnetos. In other alternators, wound field coils form an electromagnet to produce the rotating magnetic field. A device that uses permanent magnets to produce alternating current is called a permanent magnet alternator (PMA). A permanent magnet generator (PMG) may produce either alternating current or direct current if it has a commutator. Direct-connected direct-current (DC) generator This method of excitation consists of a smaller direct-current (DC) generator fixed on the same shaft as the alternator. The DC generator generates a small amount of electricity, just enough to excite the field coils of the connected alternator to generate electricity.
A variation of this system is a type of alternator that uses direct current from a battery for initial excitation upon start-up, after which the alternator becomes self-excited. Direct-connected alternating-current (AC) generator This method of excitation consists of a smaller alternating-current (AC) generator fixed on the same shaft as the alternator. The AC stator generates a small amount of field coil excitation current, which is induced in the rotor and rectified to DC by a bridge rectifier built in to the windings where it excites the field coils of the larger connected alternator to generate electricity. This system has the advantage of not requiring brushes, which increases service life, although with a slightly lower overall efficiency. A variation of this system is a type of alternator that uses direct current from a battery for initial excitation upon start-up, after which the alternator becomes self-excited. Transformation and rectification This method depends on residual magnetism retained in the iron core to generate a weak magnetic field, which would allow a weak voltage to be generated. This voltage is used to excite the field coils so the alternator can generate stronger voltage as part of its build up process. After the initial AC voltage buildup, the field is supplied with rectified voltage from the alternator. Brushless alternators A brushless alternator is composed of two alternators built end-to-end on one shaft. Until 1966, alternators used brushes with rotating field. With the advancement in semiconductor technology, brushless alternators are possible. Smaller brushless alternators may look like one unit, but the two parts are readily identifiable in the large versions. The main alternator is the larger of the two sections, and the smaller one is the exciter. The exciter has stationary field coils and a rotating armature (power coils). The main alternator uses the opposite configuration with a rotating field and stationary armature. A bridge rectifier, called the rotating rectifier assembly, is mounted on the rotor. Neither brushes nor slip rings are used, which reduces the number of wearing parts. The main alternator has a rotating field and a stationary armature (power generation windings). Varying the amount of current through the stationary exciter field coils varies the 3-phase output from the exciter. This output is rectified by a rotating rectifier assembly mounted on the rotor, and the resultant DC supplies the rotating field of the main alternator and hence alternator output. The result is that a small DC exciter current indirectly controls the output of the main alternator. By number of phases Another way to classify alternators is by the number of phases of their output voltage. The output can be single phase or polyphase. Three-phase alternators are the most common, but polyphase alternators can be two-phase, six-phase, or more. By rotating part The revolving part of alternators can be the armature or the magnetic field. The revolving armature type has the armature wound on the rotor, where the winding moves through a stationary magnetic field. The revolving armature type is not often used. The revolving field type has a magnetic field on the rotor to rotate through a stationary armature winding. 
The advantage is that then the rotor circuit carries much less power than the armature circuit, making the slip ring connections smaller and less costly; only two contacts are needed for the direct-current rotor, whereas often a rotor winding has three phases, and multiple sections which would each require a slip-ring connection. The stationary armature can be wound for any convenient medium voltage level, up to tens of thousands of volts; manufacture of slip ring connections for more than a few thousand volts is costly and inconvenient. Cooling methods Many alternators are cooled by ambient air, forced through the enclosure by an attached fan on the shaft that drives the alternator. In vehicles such as transit buses, a heavy demand on the electrical system may require a large alternator to be oil-cooled. In marine applications water-cooling is also used. Expensive automobiles may use water-cooled alternators to meet high electrical system demands. Specific applications Synchronous generators Most power generation stations use synchronous machines as their generators. The connection of these generators to the utility grid requires synchronization conditions to be met. Automotive alternators Alternators are used in modern internal combustion engine automobiles to charge the battery and to power the electrical system when its engine is running. Until the 1960s, automobiles used DC dynamo generators with commutators. With the availability of affordable silicon-diode rectifiers, alternators were used instead. Diesel-electric locomotive alternators In later diesel-electric locomotives and diesel electric multiple units, the prime mover turns an alternator which provides electricity for the traction motors (AC or DC). The traction alternator usually incorporates integral silicon diode rectifiers to provide the traction motors with up to 1,200 volts DC. The first diesel electric locomotives, and many of those still in service, use DC generators as, before silicon power electronics, it was easier to control the speed of DC traction motors. Most of these had two generators: one to generate the excitation current for a larger main generator. Optionally, the generator also supplies head-end power (HEP) or power for electric train heating. The HEP option requires a constant engine speed, typically 900 r/min for a 480 V 60 Hz HEP application, even when the locomotive is not moving. Marine alternators Marine alternators used in yachts are similar to automotive alternators, with appropriate adaptations to the salt-water environment. Marine alternators are designed to be explosion proof (ignition protected) so that brush sparking will not ignite explosive gas mixtures in an engine room environment. Depending on the type of system installed, they may be 12 or 24 volts. Larger marine diesels may have two or more alternators to cope with the heavy electrical demand of a modern yacht. On single alternator circuits, the power may be split between the engine starting battery and the domestic or house battery (or batteries) by use of a split-charge diode (battery isolator) or a voltage-sensitive relay. Due to the high cost of large house battery banks, Marine alternators generally use external regulators. Multistep regulators control the field current to maximize the charging effectiveness (time to charge) and battery life. Multistep regulators can be programmed for different battery types. 
Two temperature sensors can be added: one for the battery to adjust the charging voltage and an over-temperature sensor on the actual alternator to protect it from overheating. Aviation Radio alternators High-frequency alternators of the variable-reluctance type were applied commercially to radio transmission in low-frequency radio bands. These were used for transmitting Morse code and, experimentally, for transmitting voice and music. In the Alexanderson alternator, both the field winding and armature winding are stationary, and current is induced in the armature by the changing magnetic reluctance of the rotor (which has no windings or current-carrying parts). Such machines were made to produce radio frequency current for radio transmissions, although the efficiency was low. See also Bottle dynamo Dynamo Electric generator Engine-generator Flux switching alternator Folsom Powerhouse State Historic Park Hub dynamo Induction generator Jedlik's dynamo Linear alternator Magneto Polyphase coil Revolving armature alternator Single-phase generator References External links White, Thomas H.,"Alternator-Transmitter Development (1891–1920)". EarlyRadioHistory.us. Alternators at Integrated Publishing (TPub.com) Wooden Low-RPM Alternator, ForceField, Fort Collins, Colorado, USA Understanding 3 phase alternators at WindStuffNow Alternator, Arc and Spark. The first Wireless Transmitters (G0UTY homepage) Electrical generators Energy conversion
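Referring back to the relation f = NP/120 given in the Synchronous speeds section above, this small sketch converts between pole count, rotational speed, and output frequency.

```python
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    """Rotational speed N (r/min) needed to generate frequency_hz with the given number of poles."""
    return 120.0 * frequency_hz / poles

def output_frequency_hz(speed_rpm: float, poles: int) -> float:
    """Frequency f (Hz) produced by a machine with the given number of poles at speed_rpm."""
    return poles * speed_rpm / 120.0

print(synchronous_speed_rpm(60, 2))   # 3600.0 r/min for a 2-pole, 60 Hz machine
print(synchronous_speed_rpm(50, 4))   # 1500.0 r/min for a 4-pole, 50 Hz machine
print(output_frequency_hz(1800, 4))   # 60.0 Hz
```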
Alternator
[ "Physics", "Technology" ]
3,066
[ "Physical systems", "Electrical generators", "Machines" ]
223,361
https://en.wikipedia.org/wiki/Linear%20regulator
In electronics, a linear regulator is a voltage regulator used to maintain a steady voltage. The resistance of the regulator varies in accordance with both the input voltage and the load, resulting in a constant voltage output. The regulating circuit varies its resistance, continuously adjusting a voltage divider network to maintain a constant output voltage and continually dissipating the difference between the input and regulated voltages as waste heat. By contrast, a switching regulator uses an active device that switches on and off to maintain an average value of output. Because the regulated voltage of a linear regulator must always be lower than input voltage, efficiency is limited and the input voltage must be high enough to always allow the active device to reduce the voltage by some amount. Linear regulators may place the regulating device in parallel with the load (shunt regulator) or may place the regulating device between the source and the regulated load (a series regulator). Simple linear regulators may only contain as little as a Zener diode and a series resistor; more complicated regulators include separate stages of voltage reference, error amplifier and power pass element. Because a linear voltage regulator is a common element of many devices, single-chip regulators ICs are very common. Linear regulators may also be made up of assemblies of discrete solid-state or vacuum tube components. Despite their name, linear regulators are non-linear circuits because they contain non-linear components (such as Zener diodes, as shown below in the simple shunt regulator) and because the output voltage is ideally constant (and a circuit with a constant output that does not depend on its input is a non-linear circuit). Overview The transistor (or other device) is used as one half of a voltage divider to establish the regulated output voltage. The output voltage is compared to a reference voltage to produce a control signal to the transistor which will drive its gate or base. With negative feedback and good choice of compensation, the output voltage is kept reasonably constant. Linear regulators are often inefficient: since the transistor is acting like a resistor, it will waste electrical energy by converting it to heat. In fact, the power loss due to heating in the transistor is the current multiplied by the voltage difference between input and output voltage. The same function can often be performed much more efficiently by a switched-mode power supply, but a linear regulator may be preferred for light loads or where the desired output voltage approaches the source voltage. In these cases, the linear regulator may dissipate less power than a switcher. The linear regulator also has the advantage of not requiring magnetic devices (inductors or transformers) which can be relatively expensive or bulky, being often of simpler design, and cause less electromagnetic interference. Some designs of linear regulators use only transistors, diodes and resistors, which are easier to fabricate into an integrated circuit, further reducing their weight, footprint on a PCB, and price. All linear regulators require an input voltage at least some minimum amount higher than the desired output voltage. That minimum amount is called the dropout voltage. For example, a common regulator such as the 7805 has an output voltage of 5 V, but can only maintain this if the input voltage remains above about 7 V, before the output voltage begins sagging below the rated output. 
Its dropout voltage is therefore 7 V − 5 V = 2 V. When the supply voltage is less than about 2 V above the desired output voltage, as is the case in low-voltage microprocessor power supplies, so-called low dropout regulators (LDOs) must be used. When the output regulated voltage must be higher than the available input voltage, no linear regulator will work (not even a low dropout regulator). In this situation, a boost converter or a charge pump must be used. If the input voltage falls below the minimum needed for regulation, most linear regulators will still provide an output, but it will be unregulated, sitting approximately the dropout voltage below the input, until the input voltage drops significantly. Linear regulators exist in two basic forms: shunt regulators and series regulators. Most linear regulators have a maximum rated output current. This is generally limited by either power dissipation capability, or by the current carrying capability of the output transistor. Shunt regulators The shunt regulator works by providing a path from the supply voltage to ground through a variable resistance (the main transistor is in the "bottom half" of the voltage divider). The current through the shunt regulator is diverted away from the load and flows directly to ground, making this form usually less efficient than the series regulator. It is, however, simpler, sometimes consisting of just a voltage-reference diode, and is used in very low-powered circuits where the wasted current is too small to be of concern. This form is very common for voltage reference circuits. A shunt regulator can usually only sink (absorb) current. Series regulators Series regulators are the more common form; they are more efficient than shunt designs. The series regulator works by providing a path from the supply voltage to the load through a variable resistance, usually a transistor (in this role it is usually termed the series pass transistor); it is in the "top half" of the voltage divider - the bottom half being the load. The power dissipated by the regulating device is equal to the power supply output current times the voltage drop in the regulating device. For efficiency and reduced stress on the pass transistor, designers try to minimize the voltage drop, but not all circuits regulate well once the input (unregulated) voltage comes close to the required output voltage; those that do are termed low dropout regulators. A series regulator can usually only source (supply) current, unlike shunt regulators. Simple shunt regulator The image shows a simple shunt voltage regulator that operates by way of the Zener diode's action of maintaining a constant voltage across itself when the current through it is sufficient to take it into the Zener breakdown region. The resistor R1 supplies the Zener current as well as the load current IR2 (R2 is the load). R1 can be calculated as R1 = (VS − VZ) / (IZ + IR2), where VS is the supply voltage, VZ is the Zener voltage, IZ is the Zener current, and IR2 is the required load current. This regulator is used for very simple low-power applications where the currents involved are very small and the load is permanently connected across the Zener diode (such as voltage reference or voltage source circuits). Once R1 has been calculated, removing R2 will allow the full load current (plus the Zener current) through the diode and may exceed the diode's maximum current rating, thereby damaging it. The regulation of this circuit is also not very good because the Zener current (and hence the Zener voltage) will vary, increasing with the supply voltage VS and decreasing as the load current increases.
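To make the sizing rule above concrete, the short Python sketch below computes R1 and the worst-case Zener dissipation for a hypothetical shunt regulator; the supply, Zener and load values are invented for the example and are not taken from the article.

# Hypothetical Zener shunt regulator sizing (values assumed for illustration only).
V_S = 12.0     # supply voltage, volts (assumed)
V_Z = 5.1      # Zener voltage, volts (assumed)
I_Z = 0.005    # Zener bias current chosen to keep the diode in breakdown, amperes (assumed)
I_R2 = 0.010   # required load current, amperes (assumed)

R1 = (V_S - V_Z) / (I_Z + I_R2)   # series resistor, ohms
P_R1 = (V_S - V_Z) ** 2 / R1      # dissipation in R1, watts

# Worst case for the diode: the load R2 is removed and all of R1's current flows in the Zener.
I_Z_max = (V_S - V_Z) / R1        # amperes
P_Z_max = V_Z * I_Z_max           # watts; must stay below the diode's power rating

print(f"R1 = {R1:.0f} ohm, P(R1) = {P_R1:.2f} W, worst-case Zener dissipation = {P_Z_max:.2f} W")

With these assumed figures R1 works out to about 460 ohms and the worst-case Zener dissipation to roughly 77 mW, illustrating why the load is normally left permanently connected across the diode.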
In some designs, the Zener diode may be replaced with another similarly functioning device, especially in an ultra-low-voltage scenario, like (under forward bias) several normal diodes or LEDs in series. Simple series regulator Adding an emitter follower stage to the simple shunt regulator forms a simple series voltage regulator and substantially improves the regulation of the circuit. Here, the load current IR2 is supplied by the transistor whose base is now connected to the Zener diode. Thus the transistor's base current (IB) forms the load current for the Zener diode and is much smaller than the current through R2. This regulator is classified as "series" because the regulating element, viz., the transistor, appears in series with the load. R1 sets the Zener current (IZ) and is determined as where, VZ is the Zener voltage, IB is the transistor's base current, K = 1.2 to 2 (to ensure that R1 is low enough for adequate IB) and where, IR2 is the required load current and is also the transistor's emitter current (assumed to be equal to the collector current) and hFE(min) is the minimum acceptable DC current gain for the transistor. This circuit has much better regulation than the simple shunt regulator, since the base current of the transistor forms a very light load on the Zener, thereby minimising variation in Zener voltage due to variation in the load. Note that the output voltage will always be about 0.65 V less than the Zener due to the transistor's VBE drop. Although this circuit has good regulation, it is still sensitive to the load and supply variation. This can be resolved by incorporating negative feedback circuitry into it. This regulator is often used as a "pre-regulator" in more advanced series voltage regulator circuits. The circuit is readily made adjustable by adding a potentiometer across the Zener, moving the transistor base connection from the top of the Zener to the pot wiper. It may be made step adjustable by switching in different Zeners. Finally it is occasionally made microadjustable by adding a low value pot in series with the Zener; this allows a little voltage adjustment, but degrades regulation (see also capacitance multiplier). Fixed regulators Three-terminal linear regulators, used for generating "fixed" voltages, are readily available. They can generate plus or minus 3.3 V, 5 V, 6 V, 9 V, 12 V, or 15 V, with their performance generally peaking around a load of 1.5 Amperes. The "78xx" series (7805, 7812, etc.) regulate positive voltages while the "79xx" series (7905, 7912, etc.) regulate negative voltages. Often, the last two digits of the device number are the output voltage (e.g., a 7805 is a +5 V regulator, while a 7915 is a −15 V regulator). There are variants on the 78xx series ICs, such as 78L and 78S, some of which can supply up to 2 A. Adjusting fixed regulators By adding another circuit element to a fixed voltage IC regulator, it is possible to adjust the output voltage. Two example methods are: A Zener diode or resistor may be added between the IC's ground terminal and ground. Resistors are acceptable where ground current is constant, but are ill-suited to regulators with varying ground current. By switching in different Zener diodes, diodes or resistors, the output voltage can be adjusted in a step-wise fashion. A potentiometer can be placed in series with the ground terminal to increase the output voltage variably. However, this method degrades regulation, and is not suitable for regulators with varying ground current. 
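The design relations quoted above for the simple series (emitter-follower) regulator can be gathered into a similar sketch. It follows one common textbook reading of those relations, namely IB = IR2 / hFE(min) and R1 = (Vin − VZ) / (K·IB); the numerical values are hypothetical and only illustrate the arithmetic.

# Hypothetical sizing for the simple series regulator described above (values assumed).
V_in = 12.0     # unregulated input voltage, volts (assumed)
V_Z = 5.6       # Zener voltage, volts (assumed)
V_BE = 0.65     # base-emitter drop of the pass transistor, volts
I_R2 = 0.100    # required load (emitter) current, amperes (assumed)
h_FE_min = 50   # minimum acceptable DC current gain of the transistor (assumed)
K = 1.5         # margin factor, typically 1.2 to 2, so R1 is low enough for adequate base current

I_B = I_R2 / h_FE_min            # base current the Zener branch must supply
R1 = (V_in - V_Z) / (K * I_B)    # resistor feeding the Zener and the transistor base, ohms
V_out = V_Z - V_BE               # output sits roughly one V_BE below the Zener voltage

print(f"I_B = {I_B * 1000:.1f} mA, R1 = {R1:.0f} ohm, V_out = {V_out:.2f} V")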
Variable regulators An adjustable regulator generates a fixed low nominal voltage between its output and its adjust terminal (equivalent to the ground terminal in a fixed regulator). This family of devices includes low power devices like LM723 and medium power devices like LM317 and L200. Some of the variable regulators are available in packages with more than three pins, including dual in-line packages. They offer the capability to adjust the output voltage by using external resistors of specific values. For output voltages not provided by standard fixed regulators and load currents of less than 7 A, commonly available adjustable three-terminal linear regulators may be used. The LM317 series (+1.25 V) regulates positive voltages while the LM337 series (−1.25 V) regulates negative voltages. The adjustment is performed by constructing a potential divider with its ends between the regulator output and ground, and its centre-tap connected to the 'adjust' terminal of the regulator. The ratio of resistances determines the output voltage using the same feedback mechanisms described earlier. Dual tracking regulators Single IC dual tracking adjustable regulators are available for applications such as op-amp circuits needing matched positive and negative DC supplies. Some have selectable current limiting as well. Some regulators require a minimum load. One example of a single IC dual tracking adjustable regulator is the LM125, which is a precision, dual, tracking, monolithic voltage regulator. It provides separate positive and negative regulated outputs, simplifying dual power supply designs. Operation requires few or no external components, depending on the application. Internal settings provide fixed output voltages at ±15V Protection Linear IC voltage regulators may include a variety of protection methods: Current limiting such as constant-current limiting or foldback Thermal shutdown Safe operating area protection Sometimes external protection is used, such as crowbar protection. Using a linear regulator Linear regulators can be constructed using discrete components but are usually encountered in integrated circuit forms. The most common linear regulators are three-terminal integrated circuits in the TO-220 package. Common voltage regulators are the LM78xx-series (for positive voltages) and LM79xx-series (for negative voltages). Robust automotive voltage regulators, such as LM2940 / MIC2940A / AZ2940, can handle reverse battery connections and brief +50/-50V transients too. Some Low-dropout regulator (LDO) alternatives, such as MCP1700 / MCP1711 / TPS7A05 / XC6206, have a very low quiescent current of less than 5 μA (approximately 1,000 times less than the LM78xx series) making them better suited for battery-powered devices. Common fixed voltages are 1.8 V, 2.5 V, 3.3 V (for low-voltage CMOS logic circuits), 5 V (for transistor-transistor logic circuits) and 12 V (for communications circuits and peripheral devices such as disk drives). In fixed voltage regulators the reference pin is tied to ground, whereas in variable regulators the reference pin is connected to the centre point of a fixed or variable voltage divider fed by the regulator's output. A variable voltage divider such as a potentiometer allows the user to adjust the regulated voltage. 
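For adjustable three-terminal regulators of the kind just described, the output voltage set by the external divider can be estimated as in the sketch below. The 1.25 V reference and the small adjust-pin current are typical datasheet figures for LM317-class parts rather than values taken from this article, and the resistor values are hypothetical.

# Output voltage of an LM317-style adjustable regulator set by its feedback divider.
V_ref = 1.25     # nominal voltage between the output and adjust terminals, volts (typical datasheet value)
I_adj = 50e-6    # adjust-terminal current, amperes (typical datasheet value)
R1 = 240.0       # resistor from output to adjust terminal, ohms (hypothetical)
R2 = 720.0       # resistor from adjust terminal to ground, ohms (hypothetical)

V_out = V_ref * (1 + R2 / R1) + I_adj * R2
print(f"V_out is approximately {V_out:.2f} V")   # about 5 V with these values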
See also Brokaw bandgap reference List of LM-series integrated circuits Low-dropout regulator Voltage regulator References External links ECE 327: Procedures for Voltage Regulators Lab — Gives schematics, explanations, and analyses for Zener shunt regulator, series regulator, feedback series regulator, feedback series regulator with current limiting, and feedback series regulator with current foldback. Also discusses the proper use of the LM317 integrated circuit bandgap voltage reference and bypass capacitors. ECE 327: Report Strategies for Voltage Regulators Lab — Gives more-detailed quantitative analysis of behavior of several shunt and series regulators in and out of normal operating ranges. ECE 327: LM317 Bandgap Voltage Reference Example — Brief explanation of the temperature-independent bandgap reference circuit within the LM317. "Zener regulator" at Hyperphysics Voltage regulation
Linear regulator
[ "Physics" ]
3,063
[ "Voltage", "Physical quantities", "Voltage regulation" ]
223,407
https://en.wikipedia.org/wiki/Switched-mode%20power%20supply
A switched-mode power supply (SMPS), also called switching-mode power supply, switch-mode power supply, switched power supply, or simply switcher, is an electronic power supply that incorporates a switching regulator to convert electrical power efficiently. Like other power supplies, a SMPS transfers power from a DC or AC source (often mains power, see AC adapter) to DC loads, such as a personal computer, while converting voltage and current characteristics. Unlike a linear power supply, the pass transistor of a switching-mode supply continually switches between low-dissipation, full-on and full-off states, and spends very little time in the high-dissipation transitions, which minimizes wasted energy. Voltage regulation is achieved by varying the ratio of on-to-off time (also known as duty cycle). In contrast, a linear power supply regulates the output voltage by continually dissipating power in the pass transistor. The switched-mode power supply's higher electrical efficiency is an important advantage. Switched-mode power supplies can also be substantially smaller and lighter than a linear supply because the transformer can be much smaller. This is because it operates at a high switching frequency which ranges from several hundred kHz to several MHz in contrast to the 50 or 60 Hz mains frequency used by the transformer in a linear power supply. Despite the reduced transformer size, the power supply topology and electromagnetic compatibility requirements in commercial designs result in a usually much greater component count and corresponding circuit complexity. Switching regulators are used as replacements for linear regulators when higher efficiency, smaller size or lighter weight is required. They are, however, more complicated; switching currents can cause electrical noise problems if not carefully suppressed, and simple designs may have a poor power factor. History 1836 Induction coils use switches to generate high voltages. 1910 An inductive discharge ignition system invented by Charles F. Kettering and his company Dayton Engineering Laboratories Company (Delco) goes into production for Cadillac. The Kettering ignition system is a mechanically switched version of a flyback boost converter; the transformer is the ignition coil. Variations of this ignition system were used in all non-diesel internal combustion engines until the 1960s when it began to be replaced first by solid-state electronically switched versions, then capacitive discharge ignition systems. 1926 On 23 June, British inventor Philip Ray Coursey applies for a patent in his country and United States, for his "Electrical Condenser". The patent mentions high frequency welding and furnaces, among other uses. Electromechanical relays are used to stabilize the voltage output of generators. See . Car radios used electromechanical vibrators to transform the 6 V battery supply to a suitable plate voltage for the vacuum tubes. 1959 Transistor oscillation and rectifying converter power supply system is filed by Joseph E. Murphy and Francis J. Starzec, from General Motors Company 1960s The Apollo Guidance Computer, developed in the early 1960s by the MIT Instrumentation Laboratory for NASA's ambitious Moon missions (1966–1972), incorporated early switched-mode power supplies. Bob Widlar of Fairchild Semiconductor designs the μA723 IC voltage regulator. One of its applications is as a switched-mode regulator. 1970 Tektronix starts using high-efficiency power supplies in its 7000-series oscilloscopes produced from about 1970 to 1995. 
1970 Robert Boschert develops simpler, low-cost switched-mode power supply circuits. By 1977, Boschert Inc. grew to a 650-person company. After a series of mergers, acquisitions, and spin offs (Computer Products, Zytec, Artesyn, Emerson Electric) the company is now part of Advanced Energy. 1972 HP-35, Hewlett-Packard's first pocket calculator, is introduced with transistor switching power supply for light-emitting diodes, clocks, timing, ROM, and registers. 1973 Xerox uses switching power supplies in the Alto minicomputer 1976 Robert Mammano, a co-founder of Silicon General Semiconductors, develops the first integrated circuit for SMPS control, model SG1524. After a series of mergers and acquisitions (Linfinity, Symetricom, Microsemi), the company is now part of Microchip Technology. 1977 The Apple II is designed with a switched-mode power supply. 1980 The HP8662A 10 kHz–1.28 GHz synthesized signal generator was designed with a switched-mode power supply. Explanation A linear power supply (non-SMPS) uses a linear regulator to provide the desired output voltage by dissipating power in ohmic losses (e.g., in a resistor or in the collector–emitter region of a pass transistor in its active mode). A linear regulator regulates either output voltage or current by dissipating the electric power in the form of heat, and hence its maximum power efficiency is voltage-out/voltage-in since the volt difference is wasted. In contrast, a SMPS changes output voltage and current by switching ideally lossless storage elements, such as inductors and capacitors, between different electrical configurations. Ideal switching elements (approximated by transistors operated outside of their active mode) have no resistance when "on" and carry no current when "off", and so converters with ideal components would operate with 100% efficiency (i.e., all input power is delivered to the load; no power is wasted as dissipated heat). In reality, these ideal components do not exist, so a switching power supply cannot be 100% efficient, but it is still a significant improvement in efficiency over a linear regulator. For example, if a DC source, an inductor, a switch, and the corresponding electrical ground are placed in series and the switch is driven by a square wave, the peak-to-peak voltage of the waveform measured across the switch can exceed the input voltage from the DC source. This is because the inductor responds to changes in current by inducing its own voltage to counter the change in current, and this voltage adds to the source voltage while the switch is open. If a diode-and-capacitor combination is placed in parallel to the switch, the peak voltage can be stored in the capacitor, and the capacitor can be used as a DC source with an output voltage greater than the DC voltage driving the circuit. This boost converter acts like a step-up transformer for DC signals. A buck–boost converter works in a similar manner, but yields an output voltage which is opposite in polarity to the input voltage. Other buck circuits exist to boost the average output current with a reduction of voltage. In a SMPS, the output current flow depends on the input power signal, the storage elements and circuit topologies used, and also on the pattern used (e.g., pulse-width modulation with an adjustable duty cycle) to drive the switching elements. The spectral density of these switching waveforms has energy concentrated at relatively high frequencies. 
As such, switching transients and ripple introduced onto the output waveforms can be filtered with a small LC filter. Advantages and disadvantages The main advantage of the switching power supply is greater efficiency (up to c. 98–99%) and lower heat generation than linear regulators because the switching transistor dissipates little power when acting as a switch. Other advantages include smaller size, and lighter weight from the elimination of heavy and expensive line-frequency transformers. Standby power loss is often much less than that of transformers. Disadvantages include greater complexity, the generation of high-amplitude, high-frequency energy that the low-pass filter must block to avoid electromagnetic interference (EMI), and a ripple voltage at the switching frequency and its harmonic frequencies. Very low-cost SMPSs may couple electrical switching noise back onto the mains power line, causing interference with devices connected to the same phase, such as A/V equipment. Non-power-factor-corrected SMPSs also cause harmonic distortion. SMPS and linear power supply comparison There are two main types of regulated power supplies available: SMPS and linear. The following table compares linear with switching power supplies in general: Theory of operation Input rectifier stage If the SMPS has an AC input, then the first stage is to convert the input to DC. This is called 'rectification'. An SMPS with a DC input does not require this stage. In some power supplies (mostly computer ATX power supplies), the rectifier circuit can be configured as a voltage doubler by the addition of a switch operated either manually or automatically. This feature permits operation from power sources that are normally at 115 VAC or at 230 VAC. The rectifier produces an unregulated DC voltage which is then sent to a large filter capacitor. The current drawn from the mains supply by this rectifier circuit occurs in short pulses around the AC voltage peaks. These pulses have significant high frequency energy which reduces the power factor. To correct for this, many newer SMPS will use a special power factor correction (PFC) circuit to make the input current follow the sinusoidal shape of the AC input voltage, correcting the power factor. Power supplies that use active PFC usually are auto-ranging, supporting a wide range of input voltages with no input voltage selector switch. An SMPS designed for AC input can usually be run from a DC supply, because the DC would pass through the rectifier unchanged. If the power supply is designed for 115 VAC and has no voltage selector switch, the required DC voltage would be about 163 V (115 × √2). This type of use may be harmful to the rectifier stage, however, as it will only use half of the diodes in the rectifier for the full load. This could possibly result in overheating of these components, causing them to fail prematurely. On the other hand, if the power supply has a voltage selector switch, based on the Delon circuit, for 115/230 V (computer ATX power supplies typically are in this category), the selector switch would have to be put in the 230 V position, and the required voltage would be about 325 V (230 × √2). The diodes in this type of power supply will handle the DC current just fine because they are rated to handle double the nominal input current when operated in the 115 V mode, due to the operation of the voltage doubler. This is because the doubler, when in operation, uses only half of the bridge rectifier and runs twice as much current through it. Inverter stage This section refers to the block marked chopper in the diagram.
The inverter stage converts DC, whether directly from the input or from the rectifier stage described above, to AC by running it through a power oscillator, whose output transformer is very small with few windings, at a frequency of tens or hundreds of kilohertz. The frequency is usually chosen to be above 20 kHz, to make it inaudible to humans. The switching is implemented as a multistage (to achieve high gain) MOSFET amplifier. MOSFETs are a type of transistor with a low on-resistance and a high current-handling capacity. Voltage converter and output rectifier If the output is required to be isolated from the input, as is usually the case in mains power supplies, the inverted AC is used to drive the primary winding of a high-frequency transformer. This converts the voltage up or down to the required output level on its secondary winding. The output transformer in the block diagram serves this purpose. If a DC output is required, the AC output from the transformer is rectified. For output voltages above ten volts or so, ordinary silicon diodes are commonly used. For lower voltages, Schottky diodes are commonly used as the rectifier elements; they have the advantages of faster recovery times than silicon diodes (allowing low-loss operation at higher frequencies) and a lower voltage drop when conducting. For even lower output voltages, MOSFETs may be used as synchronous rectifiers; compared to Schottky diodes, these have even lower conducting state voltage drops. The rectified output is then smoothed by a filter consisting of inductors and capacitors. For higher switching frequencies, components with lower capacitance and inductance are needed. Simpler, non-isolated power supplies contain an inductor instead of a transformer. This type includes boost converters, buck converters, and buck–boost converters. These belong to the simplest class of single input, single output converters which use one inductor and one active switch. The buck converter reduces the input voltage in direct proportion to the ratio of conductive time to the total switching period, called the duty cycle. For example, an ideal buck converter with a 10 V input operating at a 50% duty cycle will produce an average output voltage of 5 V. A feedback control loop is employed to regulate the output voltage by varying the duty cycle to compensate for variations in input voltage. The output voltage of a boost converter is always greater than the input voltage and the buck–boost output voltage is inverted but can be greater than, equal to, or less than the magnitude of its input voltage. There are many variations and extensions to this class of converters but these three form the basis of almost all isolated and non-isolated DC-to-DC converters. By adding a second inductor the Ćuk and SEPIC converters can be implemented, or, by adding additional active switches, various bridge converters can be realized. Other types of SMPSs use a capacitor–diode voltage multiplier instead of inductors and transformers. These are mostly used for generating high voltages at low currents (Cockcroft-Walton generator). The low voltage variant is called charge pump. Regulation A feedback circuit monitors the output voltage and compares it with a reference voltage. Depending on design and safety requirements, the controller may contain an isolation mechanism (such as an opto-coupler) to isolate it from the DC output. Switching supplies in computers, TVs and VCRs have these opto-couplers to tightly control the output voltage. 
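The feedback loop described above works by adjusting the duty cycle D; for the three basic non-isolated converters mentioned earlier, the ideal (lossless, continuous-conduction) relations between input and output voltage are summarised in the Python sketch below. These are the standard textbook relations, consistent with the 10 V, 50% duty-cycle buck example given above.

# Ideal continuous-conduction voltage relations for the basic non-isolated converters.
def buck(v_in, d):
    # Output reduced in direct proportion to the duty cycle.
    return v_in * d

def boost(v_in, d):
    # Output always greater than the input (0 <= d < 1).
    return v_in / (1 - d)

def buck_boost(v_in, d):
    # Output polarity inverted; magnitude may be above or below the input.
    return -v_in * d / (1 - d)

v_in, d = 10.0, 0.5
print(buck(v_in, d), boost(v_in, d), buck_boost(v_in, d))   # 5.0 20.0 -10.0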
Open-loop regulators do not have a feedback circuit. Instead, they rely on feeding a constant voltage to the input of the transformer or inductor, and assume that the output will be correct. Regulated designs compensate for the impedance of the transformer or coil. Monopolar designs also compensate for the magnetic hysteresis of the core. The feedback circuit needs power to run before it can generate power, so an additional non-switching power supply for stand-by is added. Transformer design Any switched-mode power supply that gets its power from an AC power line (called an "off-line" converter) requires a transformer for galvanic isolation. Some DC-to-DC converters may also include a transformer, although isolation may not be critical in these cases. SMPS transformers run at high frequencies. Most of the cost savings (and space savings) in off-line power supplies result from the smaller size of the high-frequency transformer compared to the 50/60 Hz transformers formerly used. There are additional design tradeoffs. The terminal voltage of a transformer is proportional to the product of the core area, magnetic flux, and frequency. By using a much higher frequency, the core area (and so the mass of the core) can be greatly reduced. However, core losses increase at higher frequencies. Cores generally use ferrite material which has a low loss at the high frequencies and high flux densities used. The laminated iron cores of lower-frequency (<400 Hz) transformers would be unacceptably lossy at switching frequencies of a few kilohertz. Also, more energy is lost during transitions of the switching semiconductor at higher frequencies. Furthermore, more attention to the physical layout of the circuit board is required as parasitics become more significant, and the amount of electromagnetic interference will be more pronounced. Copper loss At low frequencies (such as the line frequency of 50 or 60 Hz), designers can usually ignore the skin effect. For these frequencies, the skin effect is only significant when the conductors are large, more than in diameter. Switching power supplies must pay more attention to the skin effect because it is a source of power loss. At 500 kHz, the skin depth in copper is about – a dimension smaller than the typical wires used in a power supply. The effective resistance of conductors increases, because current concentrates near the surface of the conductor and the inner portion carries less current than at low frequencies. The skin effect is exacerbated by the harmonics present in the high-speed pulse-width modulation (PWM) switching waveforms. The appropriate skin depth is not just the depth at the fundamental, but also the skin depths at the harmonics. In addition to the skin effect, there is also a proximity effect, which is another source of power loss. Power factor Simple off-line switched-mode power supplies incorporate a simple full-wave rectifier connected to a large energy-storing capacitor. Such SMPSs draw current from the AC line in short pulses when the mains instantaneous voltage exceeds the voltage across this capacitor. During the remaining portion of the AC cycle the capacitor provides energy to the power supply. As a result, the input current of such basic switched-mode power supplies has high harmonic content and relatively low power factor. 
This creates extra load on utility lines, increases heating of building wiring, the utility transformers, and standard AC electric motors, and may cause stability problems in some applications such as in emergency generator systems or aircraft generators. Harmonics can be removed by filtering, but the filters are expensive. Unlike displacement power factor created by linear inductive or capacitive loads, this distortion cannot be corrected by addition of a single linear component. Additional circuits are required to counteract the effect of the brief current pulses. Putting a current regulated boost chopper stage after the off-line rectifier (to charge the storage capacitor) can correct the power factor, but increases the complexity and cost. In 2001, the European Union put into effect the standard IEC 61000-3-2 to set limits on the harmonics of the AC input current up to the 40th harmonic for equipment above 75 W. The standard defines four classes of equipment depending on its type and current waveform. The most rigorous limits (class D) are established for personal computers, computer monitors, and TV receivers. To comply with these requirements, modern switched-mode power supplies normally include an additional power factor correction (PFC) stage. Types Switched-mode power supplies can be classified according to the circuit topology. The most important distinction is between isolated converters and non-isolated ones. Non-isolated topologies Non-isolated converters are simplest, with the three basic types using a single inductor for energy storage. In the voltage relation column, D is the duty cycle of the converter, and can vary from 0 to 1. The input voltage (V1) is assumed to be greater than zero; if it is negative, for consistency, negate the output voltage (V2). When equipment is human-accessible, voltage limits of ≤ 30 V (r.m.s.) AC or ≤ 42.4 V peak or ≤ 60 V DC and power limits of 250 VA apply for safety certification (UL, CSA, VDE approval). The buck, boost, and buck–boost topologies are all strongly related. Input, output and ground come together at one point. One of the three passes through an inductor on the way, while the other two pass through switches. One of the two switches must be active (e.g., a transistor), while the other can be a diode. Sometimes, the topology can be changed simply by re-labeling the connections. A 12 V input, 5 V output buck converter can be converted to a 7 V input, −5 V output buck–boost by grounding the output and taking the output from the ground pin. Likewise, SEPIC and Zeta converters are both minor rearrangements of the Ćuk converter. The neutral point clamped (NPC) topology is used in power supplies and active filters and is mentioned here for completeness. Switchers become less efficient as duty cycles become extremely short. For large voltage changes, a transformer (isolated) topology may be better. Isolated topologies All isolated topologies include a transformer, and thus can produce an output of higher or lower voltage than the input by adjusting the turns ratio. For some topologies, multiple windings can be placed on the transformer to produce multiple output voltages. Some converters use the transformer for energy storage, while others use a separate inductor. Flyback converter logarithmic control loop behavior might be harder to control than other types. The forward converter has several variants, varying in how the transformer is "reset" to zero magnetic flux every cycle. 
Chopper controller: The output voltage is coupled to the input thus very tightly controlled Quasi-resonant zero-current/zero-voltage switch In a quasi-resonant zero-current/zero-voltage switch (ZCS/ZVS) "each switch cycle delivers a quantized 'packet' of energy to the converter output, and switch turn-on and turn-off occurs at zero current and voltage, resulting in an essentially lossless switch." Quasi-resonant switching, also known as valley switching, reduces EMI in the power supply by two methods: By switching the bipolar switch when the voltage is at a minimum (in the valley) to minimize the hard switching effect that causes EMI. By switching when a valley is detected, rather than at a fixed frequency, introduces a natural frequency jitter that spreads the RF emissions spectrum and reduces overall EMI. Efficiency and EMI Higher input voltage and synchronous rectification mode makes the conversion process more efficient. The power consumption of the controller also has to be taken into account. Higher switching frequency allows component sizes to be shrunk, but can produce more RFI. A resonant forward converter produces the lowest EMI of any SMPS approach because it uses a soft-switching resonant waveform compared with conventional hard switching. Failure modes SMPSs tend to be temperature sensitive. For every 10-15 °C beyond 25 °C, failure rate doubles. Most failures can be attributed to improper design and poor component selections. Power supplies with capacitors that have reached the end of their life or suffer from manufacturing defects such as the capacitor plague will fail eventually. When either the capacitance decreases or the ESR increases, the regulator compensates by increasing the switching frequency, thereby subjecting the switching semiconductors to ever greater thermal stress. Eventually the switching semiconductors fail, usually in a conductive manner. For power supplies without fail-safe protection, this may subject connected loads to the full input voltage and current, and wild oscillations can occur in the output. Failure of the switching transistor is common. Due to the large switching voltages this transistor must handle (around for a non-power-factor-corrected mains supply, otherwise usually around ), these transistors often short out, in turn immediately blowing the main internal power fuse. Power supplies in consumer products are frequently damaged by lightning strikes on power lines as well as internal short circuits caused by insects attracted to the heat and electrostatic fields. Those events may damage any part of the power supply. Precautions The main filter capacitor will often store up to long after the input power has been disconnected. Not all power supplies contain a small "bleeder" resistor which slowly discharges the capacitor. Contact with this capacitor can result in a severe electrical shock. The primary and secondary sides may be connected with a capacitor to reduce EMI and compensate for various capacitive couplings in the converter circuit, where the transformer is one. This may result in electric shock in some cases. The current flowing from line or neutral through a resistor to any accessible part must, according to , be less than for IT equipment. Applications Switched-mode power supply units (PSUs) in domestic products such as personal computers often have universal inputs, meaning that they can accept power from mains supplies throughout the world, although a manual voltage range switch may be required. 
Switch-mode power supplies can tolerate a wide range of power frequencies and voltages. Due to their high volumes mobile phone chargers have always been particularly cost sensitive. The first chargers were linear power supplies, but they quickly moved to the cost-effective ringing choke converter (RCC) SMPS topology, when new levels of efficiency were required. Recently, the demand for even lower no-load power requirements in the application has meant that flyback topology is being used more widely; primary side sensing flyback controllers are also helping to cut the bill of materials (BOM) by removing secondary-side sensing components such as optocouplers. Switched-mode power supplies are used for DC-to-DC conversion as well. In heavy vehicles that use a nominal cranking supply, 12 V for accessories may be furnished through a DC/DC switch-mode supply. This has the advantage over tapping the battery at the 12 V position (using half the cells) that the entire 12 V load is evenly divided between all cells of the 24 V battery. In industrial settings such as telecommunications racks, bulk power may be distributed at a low DC voltage (e.g. from a battery backup system) and individual equipment items will have DC/DC switched-mode converters to supply required voltages. A common use for switched-mode power supplies is an extra-low-voltage source for lighting. For this application, they are often called "electronic transformers". Terminology The term switch mode was widely used until Motorola claimed ownership of the trademark SWITCHMODE for products aimed at the switching-mode power supply market and started to enforce its trademark. Switching-mode power supply, switching power supply, and switching regulator refer to this type of power supply. See also Auto transformer Boost converter Buck converter Conducted electromagnetic interference DC to DC converter Inrush current Joule thief Leakage inductance Resonant converter Switching amplifier Transformer Vibrator (electronic) 80 Plus Explanatory notes Notes References Further reading Application Note giving an extensive introduction in Buck, Boost, CUK, Inverter applications. (download as PDF from http://www.linear.com/designtools/app_notes.php) External links Switching Power Supply Topologies Poster - Texas Instruments Load Power Sources for Peak Efficiency, by James Colotti, published in EDN 1979 October 5 Notes on the Troubleshooting and Repair of Small Switchmode Power Supplies, by Samuel M. Goldwasser as part of Sci.Electronics.Repair FAQ Power supplies Power electronics Electric power conversion Voltage regulation
Switched-mode power supply
[ "Physics", "Engineering" ]
5,604
[ "Physical quantities", "Voltage regulation", "Electronic engineering", "Voltage", "Power electronics" ]
224,300
https://en.wikipedia.org/wiki/Bowen%20ratio
The Bowen ratio is used to describe the type of heat transfer for a surface that has moisture. Heat transfer can either occur as sensible heat (differences in temperature without evapotranspiration) or latent heat (the energy required during a change of state, without a change in temperature). The Bowen ratio is generally used to calculate heat lost (or gained) in a substance; it is the ratio of the energy fluxes carried by sensible heating and latent heating respectively. The ratio was named by Harald Sverdrup after Ira Sprague Bowen (1898–1973), an astrophysicist whose theoretical work on evaporation to air from water bodies made first use of it, and it is used most commonly in meteorology and hydrology. Formulation The Bowen ratio is calculated by the equation B = Qh/Qe, where Qh is sensible heating and Qe is latent heating. In this context, when the magnitude of B is less than one, a greater proportion of the available energy at the surface is passed to the atmosphere as latent heat than as sensible heat, and the converse is true for values of B greater than one. As Qe approaches zero, however, B becomes unbounded, making the Bowen ratio a poor choice of variable for use in formulae, especially for arid surfaces. For this reason the evaporative fraction is sometimes a more appropriate choice of variable representing the relative contributions of the turbulent energy fluxes to the surface energy budget. The Bowen ratio is related to the evaporative fraction, EF, through the equation EF = Qe/(Qe + Qh) = 1/(1 + B). The Bowen ratio is an indicator of the type of surface. The Bowen ratio, B, is less than one over surfaces with abundant water supplies. References External links National Science Digital Library - Bowen Ratio 1926 introductions Engineering ratios Heat transfer Atmospheric thermodynamics
Bowen ratio
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
358
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Metrics", "Engineering ratios", "Quantity", "Thermodynamics" ]
224,301
https://en.wikipedia.org/wiki/Boundary%20layer
In physics and fluid mechanics, a boundary layer is the thin layer of fluid in the immediate vicinity of a bounding surface formed by the fluid flowing along the surface. The fluid's interaction with the wall induces a no-slip boundary condition (zero velocity at the wall). The flow velocity then monotonically increases above the surface until it returns to the bulk flow velocity. The thin layer consisting of fluid whose velocity has not yet returned to the bulk flow velocity is called the velocity boundary layer. The air next to a human is heated, resulting in gravity-induced convective airflow, which results in both a velocity and thermal boundary layer. A breeze disrupts the boundary layer, and hair and clothing protect it, making the human feel cooler or warmer. On an aircraft wing, the velocity boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. In the Earth's atmosphere, the atmospheric boundary layer is the air layer (~ 1 km) near the ground. It is affected by the surface; day-night heat flows caused by the sun heating the ground, moisture, or momentum transfer to or from the surface. Types of boundary layers Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. The thin shear layer which develops on an oscillating body is an example of a Stokes boundary layer, while the Blasius boundary layer refers to the well-known similarity solution near an attached flat plate held in an oncoming unidirectional flow and Falkner–Skan boundary layer, a generalization of Blasius profile. When a fluid rotates and viscous forces are balanced by the Coriolis effect (rather than convective inertia), an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction. The layer of air over the wing's surface that is slowed down or stopped by viscosity, is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. Laminar boundary layer flow The laminar boundary is a very smooth flow, while the turbulent boundary layer contains swirls or "eddies." The laminar flow creates less skin friction drag than the turbulent flow, but is less stable. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness. Turbulent boundary layer flow At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow. From a drag standpoint, it is advisable to have the transition from laminar to turbulent flow as far aft on the wing as possible, or have a large amount of the wing surface within the laminar portion of the boundary layer. The low energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The Prandtl boundary layer concept The aerodynamic boundary layer was first hypothesized by Ludwig Prandtl in a paper presented on August 12, 1904, at the third International Congress of Mathematicians in Heidelberg, Germany. 
It simplifies the equations of fluid flow by dividing the flow field into two areas: one inside the boundary layer, dominated by viscosity and creating the majority of drag experienced by the boundary body; and one outside the boundary layer, where viscosity can be neglected without significant effects on the solution. This allows a closed-form solution for the flow in both areas by making significant simplifications of the full Navier–Stokes equations. The same hypothesis is applicable to other fluids (besides air) with moderate to low viscosity such as water. For the case where there is a temperature difference between the surface and the bulk fluid, it is found that the majority of the heat transfer to and from a body takes place in the vicinity of the velocity boundary layer. This again allows the equations to be simplified in the flow field outside the boundary layer. The pressure distribution throughout the boundary layer in the direction normal to the surface (such as an airfoil) remains relatively constant throughout the boundary layer, and is the same as on the surface itself. The thickness of the velocity boundary layer is normally defined as the distance from the solid body to the point at which the viscous flow velocity is 99% of the freestream velocity (the surface velocity of an inviscid flow). Displacement thickness is an alternative definition stating that the boundary layer represents a deficit in mass flow compared to inviscid flow with slip at the wall. It is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object be zero and the fluid temperature be equal to the temperature of the surface. The flow velocity will then increase rapidly within the boundary layer, governed by the boundary layer equations, below. The thermal boundary layer thickness is similarly the distance from the body at which the temperature is 99% of the freestream temperature. The ratio of the two thicknesses is governed by the Prandtl number. If the Prandtl number is 1, the two boundary layers are the same thickness. If the Prandtl number is greater than 1, the thermal boundary layer is thinner than the velocity boundary layer. If the Prandtl number is less than 1, which is the case for air at standard conditions, the thermal boundary layer is thicker than the velocity boundary layer. In high-performance designs, such as gliders and commercial aircraft, much attention is paid to controlling the behavior of the boundary layer to minimize drag. Two effects have to be considered. First, the boundary layer adds to the effective thickness of the body, through the displacement thickness, hence increasing the pressure drag. Secondly, the shear forces at the surface of the wing create skin friction drag. At high Reynolds numbers, typical of full-sized aircraft, it is desirable to have a laminar boundary layer. This results in a lower skin friction due to the characteristic velocity profile of laminar flow. However, the boundary layer inevitably thickens and becomes less stable as the flow develops along the body, and eventually becomes turbulent, the process known as boundary layer transition. One way of dealing with this problem is to suck the boundary layer away through a porous surface (see Boundary layer suction). 
This can reduce drag, but is usually impractical due to its mechanical complexity and the power required to move the air and dispose of it. Natural laminar flow (NLF) techniques push the boundary layer transition aft by reshaping the airfoil or fuselage so that its thickest point is more aft and less thick. This reduces the velocities in the leading part and the same Reynolds number is achieved with a greater length. At lower Reynolds numbers, such as those seen with model aircraft, it is relatively easy to maintain laminar flow. This gives low skin friction, which is desirable. However, the same velocity profile which gives the laminar boundary layer its low skin friction also causes it to be badly affected by adverse pressure gradients. As the pressure begins to recover over the rear part of the wing chord, a laminar boundary layer will tend to separate from the surface. Such flow separation causes a large increase in the pressure drag, since it greatly increases the effective size of the wing section. In these cases, it can be advantageous to deliberately trip the boundary layer into turbulence at a point prior to the location of laminar separation, using a turbulator. The fuller velocity profile of the turbulent boundary layer allows it to sustain the adverse pressure gradient without separating. Thus, although the skin friction is increased, overall drag is decreased. This is the principle behind the dimpling on golf balls, as well as vortex generators on aircraft. Special wing sections have also been designed which tailor the pressure recovery so laminar separation is reduced or even eliminated. This represents an optimum compromise between the pressure drag from flow separation and skin friction from induced turbulence. When using half-models in wind tunnels, a peniche is sometimes used to reduce or eliminate the effect of the boundary layer. Boundary layer equations The deduction of the boundary layer equations was one of the most important advances in fluid dynamics. Using an order of magnitude analysis, the well-known governing Navier–Stokes equations of viscous fluid flow can be greatly simplified within the boundary layer. Notably, the characteristic of the partial differential equations (PDE) becomes parabolic, rather than the elliptical form of the full Navier–Stokes equations. This greatly simplifies the solution of the equations. By making the boundary layer approximation, the flow is divided into an inviscid portion (which is easy to solve by a number of methods) and the boundary layer, which is governed by an easier to solve PDE. The continuity and Navier–Stokes equations for a two-dimensional steady incompressible flow in Cartesian coordinates are given by where and are the velocity components, is the density, is the pressure, and is the kinematic viscosity of the fluid at a point. The approximation states that, for a sufficiently high Reynolds number the flow over a surface can be divided into an outer region of inviscid flow unaffected by viscosity (the majority of the flow), and a region close to the surface where viscosity is important (the boundary layer). Let and be streamwise and transverse (wall normal) velocities respectively inside the boundary layer. 
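For reference, the equations the preceding passage refers to are the standard steady, two-dimensional, incompressible continuity and Navier–Stokes equations; written in LaTeX with the usual symbols (u and v the velocity components, ρ the density, p the pressure, ν the kinematic viscosity), they read:

\[
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,
\]
\[
u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y}
  = -\frac{1}{\rho}\frac{\partial p}{\partial x}
  + \nu\left(\frac{\partial^{2} u}{\partial x^{2}} + \frac{\partial^{2} u}{\partial y^{2}}\right),
\]
\[
u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y}
  = -\frac{1}{\rho}\frac{\partial p}{\partial y}
  + \nu\left(\frac{\partial^{2} v}{\partial x^{2}} + \frac{\partial^{2} v}{\partial y^{2}}\right).
\]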
Using scale analysis, it can be shown that the above equations of motion reduce within the boundary layer to become and if the fluid is incompressible (as liquids are under standard conditions): The order of magnitude analysis assumes the streamwise length scale significantly larger than the transverse length scale inside the boundary layer. It follows that variations in properties in the streamwise direction are generally much lower than those in the wall normal direction. Apply this to the continuity equation shows that , the wall normal velocity, is small compared with the streamwise velocity. Since the static pressure is independent of , then pressure at the edge of the boundary layer is the pressure throughout the boundary layer at a given streamwise position. The external pressure may be obtained through an application of Bernoulli's equation. Let be the fluid velocity outside the boundary layer, where and are both parallel. This gives upon substituting for the following result For a flow in which the static pressure also does not change in the direction of the flow so remains constant. Therefore, the equation of motion simplifies to become These approximations are used in a variety of practical flow problems of scientific and engineering interest. The above analysis is for any instantaneous laminar or turbulent boundary layer, but is used mainly in laminar flow studies since the mean flow is also the instantaneous flow because there are no velocity fluctuations present. This simplified equation is a parabolic PDE and can be solved using a similarity solution often referred to as the Blasius boundary layer. Prandtl's transposition theorem Prandtl observed that from any solution which satisfies the boundary layer equations, further solution , which is also satisfying the boundary layer equations, can be constructed by writing where is arbitrary. Since the solution is not unique from mathematical perspective, to the solution can be added any one of an infinite set of eigenfunctions as shown by Stewartson and Paul A. Libby. Von Kármán momentum integral Von Kármán derived the integral equation by integrating the boundary layer equation across the boundary layer in 1921. The equation is where is the wall shear stress, is the suction/injection velocity at the wall, is the displacement thickness and is the momentum thickness. Kármán–Pohlhausen Approximation is derived from this equation. Energy integral The energy integral was derived by Wieghardt. where is the energy dissipation rate due to viscosity across the boundary layer and is the energy thickness. Von Mises transformation For steady two-dimensional boundary layers, von Mises introduced a transformation which takes and (stream function) as independent variables instead of and and uses a dependent variable instead of . The boundary layer equation then become The original variables are recovered from This transformation is later extended to compressible boundary layer by von Kármán and HS Tsien. Crocco's transformation For steady two-dimensional compressible boundary layer, Luigi Crocco introduced a transformation which takes and as independent variables instead of and and uses a dependent variable (shear stress) instead of . The boundary layer equation then becomes The original coordinate is recovered from Turbulent boundary layers The treatment of turbulent boundary layers is far more difficult due to the time-dependent variation of the flow properties. 
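Before the turbulent case is developed further, it is worth restating two of the laminar results quoted above whose displayed formulas did not survive in the text: the Prandtl boundary-layer equations for steady incompressible flow with external velocity U(x), and the von Kármán momentum integral (given here in its standard form for an impermeable wall; the version discussed above carries an additional wall suction/injection term).

\[
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,
\qquad
u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y}
  = U\frac{\mathrm{d}U}{\mathrm{d}x} + \nu\frac{\partial^{2} u}{\partial y^{2}},
\]
\[
\frac{\tau_w}{\rho U^{2}}
  = \frac{\mathrm{d}\theta}{\mathrm{d}x}
  + \bigl(2\theta + \delta^{*}\bigr)\frac{1}{U}\frac{\mathrm{d}U}{\mathrm{d}x},
\]

where τw is the wall shear stress, θ the momentum thickness and δ* the displacement thickness.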
One of the most widely used techniques in which turbulent flows are tackled is to apply Reynolds decomposition. Here the instantaneous flow properties are decomposed into a mean and fluctuating component with the assumption that the mean of the fluctuating component is always zero. Applying this technique to the boundary layer equations gives the full turbulent boundary layer equations not often given in literature: Using a similar order-of-magnitude analysis, the above equations can be reduced to leading order terms. By choosing length scales for changes in the transverse-direction, and for changes in the streamwise-direction, with , the x-momentum equation simplifies to: This equation does not satisfy the no-slip condition at the wall. Like Prandtl did for his boundary layer equations, a new, smaller length scale must be used to allow the viscous term to become leading order in the momentum equation. By choosing as the y-scale, the leading order momentum equation for this "inner boundary layer" is given by: In the limit of infinite Reynolds number, the pressure gradient term can be shown to have no effect on the inner region of the turbulent boundary layer. The new "inner length scale" is a viscous length scale, and is of order , with being the velocity scale of the turbulent fluctuations, in this case a friction velocity. Unlike the laminar boundary layer equations, the presence of two regimes governed by different sets of flow scales (i.e. the inner and outer scaling) has made finding a universal similarity solution for the turbulent boundary layer difficult and controversial. To find a similarity solution that spans both regions of the flow, it is necessary to asymptotically match the solutions from both regions of the flow. Such analysis will yield either the so-called log-law or power-law. Similar approaches to the above analysis has also been applied for thermal boundary layers, using the energy equation in compressible flows. The additional term in the turbulent boundary layer equations is known as the Reynolds shear stress and is unknown a priori. The solution of the turbulent boundary layer equations therefore necessitates the use of a turbulence model, which aims to express the Reynolds shear stress in terms of known flow variables or derivatives. The lack of accuracy and generality of such models is a major obstacle in the successful prediction of turbulent flow properties in modern fluid dynamics. A constant stress layer exists in the near wall region. Due to the damping of the vertical velocity fluctuations near the wall, the Reynolds stress term will become negligible and we find that a linear velocity profile exists. This is only true for the very near wall region. Heat and mass transfer In 1928, the French engineer André Lévêque observed that convective heat transfer in a flowing fluid is affected only by the velocity values very close to the surface. For flows of large Prandtl number, the temperature/mass transition from surface to freestream temperature takes place across a very thin region close to the surface. Therefore, the most important fluid velocities are those inside this very thin region in which the change in velocity can be considered linear with normal distance from the surface. In this way, for when , then where θ is the tangent of the Poiseuille parabola intersecting the wall. Although Lévêque's solution was specific to heat transfer into a Poiseuille flow, his insight helped lead other scientists to an exact solution of the thermal boundary-layer problem. 
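In symbols, the near-wall approximation underlying Lévêque's analysis is simply

u \approx \theta\, y \quad (y \to 0), \qquad \theta = \left.\frac{\partial u}{\partial y}\right|_{y=0},

where, for Poiseuille flow, the wall slope θ does not vary along the channel.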
Schuh observed that in a boundary-layer, u is again a linear function of y, but that in this case, the wall tangent is a function of x. He expressed this with a modified version of Lévêque's profile, This results in a very good approximation, even for low numbers, so that only liquid metals with much less than 1 cannot be treated this way. In 1962, Kestin and Persen published a paper describing solutions for heat transfer when the thermal boundary layer is contained entirely within the momentum layer and for various wall temperature distributions. For the problem of a flat plate with a temperature jump at , they propose a substitution that reduces the parabolic thermal boundary-layer equation to an ordinary differential equation. The solution to this equation, the temperature at any point in the fluid, can be expressed as an incomplete gamma function. Schlichting proposed an equivalent substitution that reduces the thermal boundary-layer equation to an ordinary differential equation whose solution is the same incomplete gamma function. Analytic solutions can be derived with the time-dependent self-similar Ansatz for the incompressible boundary layer equations including heat conduction. As is well known from several textbooks, heat transfer tends to decrease with the increase in the boundary layer. Recently, it was observed on a practical and large scale that wind flowing through a photovoltaic generator tends to "trap" heat in the PV panels under a turbulent regime due to the decrease in heat transfer. Despite being frequently assumed to be inherently turbulent, this accidental observation demonstrates that natural wind behaves in practice very close to an ideal fluid, at least in an observation resembling the expected behaviour in a flat plate, potentially reducing the difficulty in analysing this kind of phenomenon on a larger scale. Convective transfer constants from boundary layer analysis Paul Richard Heinrich Blasius derived an exact solution to the above laminar boundary layer equations. The thickness of the boundary layer is a function of the Reynolds number for laminar flow. = the thickness of the boundary layer: the region of flow where the velocity is less than 99% of the far field velocity ; is position along the semi-infinite plate, and is the Reynolds Number given by ( density and dynamic viscosity). The Blasius solution uses boundary conditions in a dimensionless form: at at and Note that in many cases, the no-slip boundary condition holds that , the fluid velocity at the surface of the plate equals the velocity of the plate at all locations. If the plate is not moving, then . A much more complicated derivation is required if fluid slip is allowed. In fact, the Blasius solution for laminar velocity profile in the boundary layer above a semi-infinite plate can be easily extended to describe Thermal and Concentration boundary layers for heat and mass transfer respectively. Rather than the differential x-momentum balance (equation of motion), this uses a similarly derived Energy and Mass balance: Energy: Mass: For the momentum balance, kinematic viscosity can be considered to be the momentum diffusivity. In the energy balance this is replaced by thermal diffusivity , and by mass diffusivity in the mass balance. In thermal diffusivity of a substance, is its thermal conductivity, is its density and is its heat capacity. Subscript AB denotes diffusivity of species A diffusing into species B. Under the assumption that , these equations become equivalent to the momentum balance. 
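As a numerical illustration of the boundary-layer-thickness relation quoted above, the Blasius result is commonly written δ ≈ 5.0 x / √Re_x; a minimal sketch with assumed air properties:

import math

# Laminar flat-plate boundary-layer thickness from the Blasius solution,
# delta(x) = 5.0 * x / sqrt(Re_x), valid while the flow remains laminar (Re_x below ~5e5).
rho = 1.225      # kg/m^3, air density (assumed)
mu  = 1.81e-5    # Pa*s, dynamic viscosity of air (assumed)
U   = 5.0        # m/s, free-stream velocity (assumed)

def blasius_thickness(x):
    """99% boundary-layer thickness (m) at a distance x (m) from the leading edge."""
    re_x = rho * U * x / mu
    return 5.0 * x / math.sqrt(re_x)

for x in (0.1, 0.5, 1.0):
    print(f"x = {x:4.1f} m   Re_x = {rho * U * x / mu:9.2e}   delta = {blasius_thickness(x) * 1000:5.2f} mm")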
Thus, for Prandtl number and Schmidt number the Blasius solution applies directly. Accordingly, this derivation uses a related form of the boundary conditions, replacing with or (absolute temperature or concentration of species A). The subscript S denotes a surface condition. at at and Using the streamline function Blasius obtained the following solution for the shear stress at the surface of the plate. And via the boundary conditions, it is known that We are given the following relations for heat/mass flux out of the surface of the plate So for where are the regions of flow where and are less than 99% of their far field values. Because the Prandtl number of a particular fluid is not often unity, German engineer E. Polhausen who worked with Ludwig Prandtl attempted to empirically extend these equations to apply for . His results can be applied to as well. He found that for Prandtl number greater than 0.6, the thermal boundary layer thickness was approximately given by: and therefore From this solution, it is possible to characterize the convective heat/mass transfer constants based on the region of boundary layer flow. Fourier's law of conduction and Newton's Law of Cooling are combined with the flux term derived above and the boundary layer thickness. This gives the local convective constant at one point on the semi-infinite plane. Integrating over the length of the plate gives an average Following the derivation with mass transfer terms ( = convective mass transfer constant, = diffusivity of species A into species B, ), the following solutions are obtained: These solutions apply for laminar flow with a Prandtl/Schmidt number greater than 0.6. Naval architecture Many of the principles that apply to aircraft also apply to ships, submarines, and offshore platforms, with water as the primary fluid of concern rather than air. As water is not an ideal fluid, ships moving in water experience resistance. The fluid particles cling to the hull of the ship due to the adhesive force between water and the ship, creating a boundary layer where the speed of flow of the fluid forms a small but steep speed gradient, with the fluid in contact with the ship ideally has a relative velocity of 0, and the fluid at the border of the boundary layer being the free-stream speed, or the relative speed of the fluid around the ship. While the front of the ship faces normal pressure forces due to the fluid surrounding it, the aft portion sees a lower acting component of pressure due to the boundary layer. This leads to higher resistance due to pressure known as 'viscous pressure drag' or 'form drag'. For ships, unlike aircraft, one deals with incompressible flows, where change in water density is negligible (a pressure rise close to 1000kPa leads to a change of only 2–3 kg/m3). This field of fluid dynamics is called hydrodynamics. A ship engineer designs for hydrodynamics first, and for strength only later. The boundary layer development, breakdown, and separation become critical because the high viscosity of water produces high shear stresses. Boundary layer turbine This effect was exploited in the Tesla turbine, patented by Nikola Tesla in 1913. It is referred to as a bladeless turbine because it uses the boundary layer effect and not a fluid impinging upon the blades as in a conventional turbine. Boundary layer turbines are also known as cohesion-type turbine, bladeless turbine, and Prandtl layer turbine (after Ludwig Prandtl). 
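Returning to the flat-plate convection results above, the local and average coefficients follow from the Pohlhausen-type correlations, usually quoted as Nu_x = 0.332 Re_x^{1/2} Pr^{1/3} and Nu_L = 0.664 Re_L^{1/2} Pr^{1/3}; a minimal sketch with assumed air properties:

import math

# Laminar flat-plate forced convection (valid for Pr > 0.6); property values assumed for air near 300 K.
k  = 0.026      # W/(m K), thermal conductivity
nu = 1.57e-5    # m^2/s, kinematic viscosity
Pr = 0.71       # Prandtl number
U  = 5.0        # m/s, free-stream velocity
L  = 0.5        # m, plate length

re_L   = U * L / nu
nu_avg = 0.664 * math.sqrt(re_L) * Pr ** (1.0 / 3.0)   # average Nusselt number over the plate
h_avg  = nu_avg * k / L                                # average convective coefficient, W/(m^2 K)
print(f"Re_L = {re_L:.3e}, average Nu = {nu_avg:.1f}, average h = {h_avg:.1f} W/(m^2 K)")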
Predicting transient boundary layer thickness in a cylinder using dimensional analysis By using the transient and viscous force equations for a cylindrical flow you can predict the transient boundary layer thickness by finding the Womersley Number (). Transient Force = Viscous Force = Setting them equal to each other gives: Solving for delta gives: In dimensionless form: where = Womersley Number; = density; = velocity; ?; = length of transient boundary layer; = viscosity; = characteristic length. Predicting convective flow conditions at the boundary layer in a cylinder using dimensional analysis By using the convective and viscous force equations at the boundary layer for a cylindrical flow you can predict the convective flow conditions at the boundary layer by finding the dimensionless Reynolds Number (). Convective force: Viscous force: Setting them equal to each other gives: Solving for delta gives: In dimensionless form: where = Reynolds Number; = density; = velocity; = length of convective boundary layer; = viscosity; = characteristic length. Boundary layer ingestion Boundary layer ingestion promises an increase in aircraft fuel efficiency with an aft-mounted propulsor ingesting the slow fuselage boundary layer and re-energising the wake to reduce drag and improve propulsive efficiency. To operate in distorted airflow, the fan is heavier and its efficiency is reduced, and its integration is challenging. It is used in concepts like the Aurora D8 or the French research agency Onera's Nova, saving 5% in cruise by ingesting 40% of the fuselage boundary layer. Airbus presented the Nautilius concept at the ICAS congress in September 2018: to ingest all the fuselage boundary layer, while minimizing the azimuthal flow distortion, the fuselage splits into two spindles with 13-18:1 bypass ratio fans. Propulsive efficiencies are up to 90% like counter-rotating open rotors with smaller, lighter, less complex and noisy engines. It could lower fuel burn by over 10% compared to a usual underwing 15:1 bypass ratio engine. See also Boundary layer separation Boundary-layer thickness Thermal boundary layer thickness and shape Boundary layer suction Boundary layer control Boundary microphone Blasius boundary layer Falkner–Skan boundary layer Ekman layer Planetary boundary layer Perturbation theory Logarithmic law of the wall Shape factor (boundary layer flow) Shear stress Surface layer References A.D. Polyanin and V.F. Zaitsev, Handbook of Nonlinear Partial Differential Equations, Chapman & Hall/CRC Press, Boca Raton – London, 2004. A.D. Polyanin, A.M. Kutepov, A.V. Vyazmin, and D.A. Kazenin, Hydrodynamics, Mass and Heat Transfer in Chemical Engineering, Taylor & Francis, London, 2002. Hermann Schlichting, Klaus Gersten, E. Krause, H. Jr. Oertel, C. Mayes "Boundary-Layer Theory" 8th edition Springer 2004 John D. Anderson Jr., "Ludwig Prandtl's Boundary Layer", Physics Today, December 2005 H. Tennekes and J. L. Lumley, "A First Course in Turbulence", The MIT Press, (1972). Lectures in Turbulence for the 21st Century by William K. George External links National Science Digital Library – Boundary Layer Moore, Franklin K., "Displacement effect of a three-dimensional boundary layer". NACA Report 1124, 1953. Benson, Tom, "Boundary layer". NASA Glenn Learning Technologies. Boundary layer separation Boundary layer equations: Exact Solutions – from EqWorld Jones, T.V. BOUNDARY LAYER HEAT TRANSFER Aircraft wing design Heat transfer
Boundary layer
[ "Physics", "Chemistry" ]
5,556
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Boundary layers", "Thermodynamics", "Fluid dynamics" ]
224,312
https://en.wikipedia.org/wiki/Lapse%20rate
The lapse rate is the rate at which an atmospheric variable, normally temperature in Earth's atmosphere, falls with altitude. Lapse rate arises from the word lapse (in its "becoming less" sense, not its "interruption" sense). In dry air, the adiabatic lapse rate (i.e., decrease in temperature of a parcel of air that rises in the atmosphere without exchanging energy with surrounding air) is 9.8 °C/km (5.4 °F per 1,000 ft). The saturated adiabatic lapse rate (SALR), or moist adiabatic lapse rate (MALR), is the decrease in temperature of a parcel of water-saturated air that rises in the atmosphere. It varies with the temperature and pressure of the parcel and is often in the range 3.6 to (2 to ), as obtained from the International Civil Aviation Organization (ICAO). The environmental lapse rate is the decrease in temperature of air with altitude for a specific time and place (see below). It can be highly variable between circumstances. Lapse rate corresponds to the vertical component of the spatial gradient of temperature. Although this concept is most often applied to the Earth's troposphere, it can be extended to any gravitationally supported parcel of gas. Definition A formal definition from the Glossary of Meteorology is: The decrease of an atmospheric variable with height, the variable being temperature unless otherwise specified. Typically, the lapse rate is the negative of the rate of temperature change with altitude change: where (sometimes ) is the lapse rate given in units of temperature divided by units of altitude, T is temperature, and z is altitude. Environmental lapse rate The environmental lapse rate (ELR), is the actual rate of decrease of temperature with altitude in the atmosphere at a given time and location. The ELR is the observed lapse rate, and is to be distinguished from the adiabatic lapse rate which is a theoretical construct. The ELR is forced towards the adiabatic lapse rate whenever air is moving vertically. As an average, the International Civil Aviation Organization (ICAO) defines an international standard atmosphere (ISA) with a temperature lapse rate of or from sea level to 11 km or . From 11 km up to 20 km or , the constant temperature is , which is the lowest assumed temperature in the ISA. The standard atmosphere contains no moisture. Unlike the idealized ISA, the temperature of the actual atmosphere does not always fall at a uniform rate with height. For example, there can be an inversion layer in which the temperature increases with altitude. Cause The temperature profile of the atmosphere is a result of the interaction between radiative heating from sunlight, cooling to space via thermal radiation, and upward heat transport via natural convection (which carries hot air and latent heat upward). Above the tropopause, convection does not occur and all cooling is radiative. Within the troposphere, the lapse rate is a essentially the consequence of a balance between (a) radiative cooling of the air, which by itself would lead to a high lapse rate; and (b) convection, which is activated when the lapse rate exceeds a critical value; convection stabilizes the environmental lapse rate and prevents it from substantially exceeding the adiabatic lapse rate. Sunlight hits the surface of the earth (land and sea) and heats them. The warm surface heats the air above it. In addition, nearly a third of absorbed sunlight is absorbed within the atmosphere, heating the atmosphere directly. 
Thermal conduction helps transfer heat from the surface to the air; this conduction occurs within the few millimeters of air closest to the surface. However, above that thin interface layer, thermal conduction plays a negligible role in transferring heat within the atmosphere; this is because the thermal conductivity of air is very low. The air is radiatively cooled by greenhouse gases (water vapor, carbon dioxide, etc.) and clouds emitting longwave thermal radiation to space. If radiation were the only way to transfer energy within the atmosphere, then the lapse rate near the surface would be roughly 40 °C/km and the greenhouse effect of gases in the atmosphere would keep the ground at roughly . However, when air gets hot or humid, its density decreases. Thus, air which has been heated by the surface tends to rise and carry internal energy upward, especially if the air has been moistened by evaporation from water surfaces. This is the process of convection. Vertical convective motion stops when a parcel of air at a given altitude has the same density as the other air at the same elevation. Convection carries hot, moist air upward and cold, dry air downward, with a net effect of transferring heat upward. This makes the air below cooler than it would otherwise be and the air above warmer. When convection happens, this shifts the environmental lapse rate towards the adiabatic lapse rate, which is a thermal gradient characteristic of vertically moving air packets. Because convection is available to transfer heat within the atmosphere, the lapse rate in the troposphere is reduced to around 6.5 °C/km and the greenhouse effect is reduced to a point where Earth has its observed surface temperature of around . Convection and adiabatic expansion As convection causes parcels of air to rise or fall, there is little heat transfer between those parcels and the surrounding air. Air has low thermal conductivity, and the bodies of air involved are very large; so transfer of heat by conduction is negligibly small. Also, intra-atmospheric radiative heat transfer is relatively slow and so is negligible for moving air. Thus, when air ascends or descends, there is little exchange of heat with the surrounding air. A process in which no heat is exchanged with the environment is referred to as an adiabatic process. Air expands as it moves upward, and contracts as it moves downward. The expansion of rising air parcels, and the contraction of descending air parcels, are adiabatic processes, to a good approximation. When a parcel of air expands, it pushes on the air around it, doing thermodynamic work. Since the upward-moving and expanding parcel does work but gains no heat, it loses internal energy so that its temperature decreases. Downward-moving and contracting air has work done on it, so it gains internal energy and its temperature increases. Adiabatic processes for air have a characteristic temperature-pressure curve. As air circulates vertically, the air takes on that characteristic gradient. When the air contains little water, this lapse rate is known as the dry adiabatic lapse rate: the rate of temperature decrease is ( per 1,000 ft) (3.0 °C/1,000 ft). The reverse occurs for a sinking parcel of air. When the environmental lapse rate is less than the adiabatic lapse rate the atmosphere is stable and convection will not occur. Only the troposphere (up to approximately of altitude) in the Earth's atmosphere undergoes convection: the stratosphere does not generally convect. 
However, some exceptionally energetic convection processes, such as volcanic eruption columns and overshooting tops associated with severe supercell thunderstorms, may locally and temporarily inject convection through the tropopause and into the stratosphere. Energy transport in the atmosphere is more complex than the interaction between radiation and dry convection. The water cycle (including evaporation, condensation, precipitation) transports latent heat and affects atmospheric humidity levels, significantly influencing the temperature profile, as described below. Mathematics of the adiabatic lapse rate The following calculations derive the temperature as a function of altitude for a packet of air which is ascending or descending without exchanging heat with its environment. Dry adiabatic lapse rate Thermodynamics defines an adiabatic process as one in which no heat is exchanged, $\delta Q = 0$; the first law of thermodynamics can then be written as $c_v\,dT + p\,d\alpha = 0$. Also, since the density satisfies the ideal gas law $p = \rho R T$ (with specific volume $\alpha = 1/\rho$) and $c_p = c_v + R$, we can show that: $c_p\,dT - \frac{1}{\rho}\,dp = 0$, where $c_p$ is the specific heat at constant pressure. Assuming an atmosphere in hydrostatic equilibrium: $\frac{dp}{dz} = -\rho g$, where g is the standard gravity. Combining these two equations to eliminate the pressure, one arrives at the result for the dry adiabatic lapse rate (DALR), $\Gamma_d = -\frac{dT}{dz} = \frac{g}{c_p} \approx 9.8\,^{\circ}\mathrm{C/km}$. The DALR ($\Gamma_d$) is the temperature gradient experienced in an ascending or descending packet of air that is not saturated with water vapor, i.e., with less than 100% relative humidity. Moist adiabatic lapse rate The presence of water within the atmosphere (usually the troposphere) complicates the process of convection. Water vapor contains latent heat of vaporization. As a parcel of air rises and cools, it eventually becomes saturated; that is, the vapor pressure of water in equilibrium with liquid water has decreased (as temperature has decreased) to the point where it is equal to the actual vapor pressure of water. With further decrease in temperature the water vapor in excess of the equilibrium amount condenses, forming cloud, and releasing heat (latent heat of condensation). Before saturation, the rising air follows the dry adiabatic lapse rate. After saturation, the rising air follows the moist (or wet) adiabatic lapse rate. The release of latent heat is an important source of energy in the development of thunderstorms. While the dry adiabatic lapse rate is a constant 9.8 °C/km, the moist adiabatic lapse rate varies strongly with temperature; a typical value is around 5 °C/km. 
The formula for the saturated adiabatic lapse rate (SALR) or moist adiabatic lapse rate (MALR) is given by: where: {| border="0" cellpadding="2" |- | style="text-align:right;" | , | wet adiabatic lapse rate, K/m |- | style="text-align:right;" | , | Earth's gravitational acceleration = 9.8076 m/s2 |- | style="text-align:right;" | , | heat of vaporization of water = |- | style="text-align:right;" | , | specific gas constant of dry air = 287 J/kg·K |- | style="text-align:right;" | , | specific gas constant of water vapour = 461.5 J/kg·K |- | style="text-align:right;" | , | the dimensionless ratio of the specific gas constant of dry air to the specific gas constant for water vapour = 0.622 |- | style="text-align:right;" | , | the water vapour pressure of the saturated air |- | style="text-align:right;" | , | the mixing ratio of the mass of water vapour to the mass of dry air |- | style="text-align:right;" | , | the pressure of the saturated air |- |- | style="text-align:right;" | , | temperature of the saturated air, K |- | style="text-align:right;" | , | the specific heat of dry air at constant pressure, = 1003.5J/kg·K |} The SALR or MALR () is the temperature gradient experienced in an ascending or descending packet of air that is saturated with water vapor, i.e., with 100% relative humidity. Effect on weather The varying environmental lapse rates throughout the Earth's atmosphere are of critical importance in meteorology, particularly within the troposphere. They are used to determine if the parcel of rising air will rise high enough for its water to condense to form clouds, and, having formed clouds, whether the air will continue to rise and form bigger shower clouds, and whether these clouds will get even bigger and form cumulonimbus clouds (thunder clouds). As unsaturated air rises, its temperature drops at the dry adiabatic rate. The dew point also drops (as a result of decreasing air pressure) but much more slowly, typically about per 1,000 m. If unsaturated air rises far enough, eventually its temperature will reach its dew point, and condensation will begin to form. This altitude is known as the lifting condensation level (LCL) when mechanical lift is present and the convective condensation level (CCL) when mechanical lift is absent, in which case, the parcel must be heated from below to its convective temperature. The cloud base will be somewhere within the layer bounded by these parameters. The difference between the dry adiabatic lapse rate and the rate at which the dew point drops is around per 1,000 m. Given a difference in temperature and dew point readings on the ground, one can easily find the LCL by multiplying the difference by 125 m/°C. If the environmental lapse rate is less than the moist adiabatic lapse rate, the air is absolutely stable — rising air will cool faster than the surrounding air and lose buoyancy. This often happens in the early morning, when the air near the ground has cooled overnight. Cloud formation in stable air is unlikely. If the environmental lapse rate is between the moist and dry adiabatic lapse rates, the air is conditionally unstable — an unsaturated parcel of air does not have sufficient buoyancy to rise to the LCL or CCL, and it is stable to weak vertical displacements in either direction. 
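To make the formula and the constants tabulated above concrete, a minimal numerical sketch (the saturation vapour-pressure fit used here is an assumed standard approximation, not part of the table):

import math

# Constants as tabulated above (SI units).
g    = 9.8076     # m/s^2
H_v  = 2.501e6    # J/kg, heat of vaporization of water
R_sd = 287.0      # J/(kg K), specific gas constant of dry air
c_pd = 1003.5     # J/(kg K), specific heat of dry air at constant pressure
eps  = 0.622      # ratio of gas constants, R_sd / R_sw

def saturation_vapour_pressure(T):
    """Saturation vapour pressure in Pa (Bolton/Tetens-type fit, an assumed approximation)."""
    t_c = T - 273.15
    return 611.2 * math.exp(17.67 * t_c / (t_c + 243.5))

def moist_adiabatic_lapse_rate(T, p):
    """Saturated adiabatic lapse rate (K/m) at temperature T (K) and pressure p (Pa)."""
    e_s = saturation_vapour_pressure(T)
    r = eps * e_s / (p - e_s)                          # mixing ratio of water vapour to dry air
    num = g * (1.0 + H_v * r / (R_sd * T))
    den = c_pd + (H_v ** 2) * r * eps / (R_sd * T ** 2)
    return num / den

gamma_w = moist_adiabatic_lapse_rate(288.15, 101325.0)
print(f"moist rate ~ {gamma_w * 1000:.1f} K/km, dry rate ~ {g / c_pd * 1000:.1f} K/km")
# -> roughly 4.7 K/km saturated versus 9.8 K/km dry for a surface parcel at 15 degrees C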
If the parcel is saturated it is unstable and will rise to the LCL or CCL, and either be halted due to an inversion layer of convective inhibition, or if lifting continues, deep, moist convection (DMC) may ensue, as a parcel rises to the level of free convection (LFC), after which it enters the free convective layer (FCL) and usually rises to the equilibrium level (EL). If the environmental lapse rate is larger than the dry adiabatic lapse rate, it has a superadiabatic lapse rate, the air is absolutely unstable — a parcel of air will gain buoyancy as it rises both below and above the lifting condensation level or convective condensation level. This often happens in the afternoon mainly over land masses. In these conditions, the likelihood of cumulus clouds, showers or even thunderstorms is increased. Meteorologists use radiosondes to measure the environmental lapse rate and compare it to the predicted adiabatic lapse rate to forecast the likelihood that air will rise. Charts of the environmental lapse rate are known as thermodynamic diagrams, examples of which include Skew-T log-P diagrams and tephigrams. (See also Thermals). The difference in moist adiabatic lapse rate and the dry rate is the cause of foehn wind phenomenon (also known as "Chinook winds" in parts of North America). The phenomenon exists because warm moist air rises through orographic lifting up and over the top of a mountain range or large mountain. The temperature decreases with the dry adiabatic lapse rate, until it hits the dew point, where water vapor in the air begins to condense. Above that altitude, the adiabatic lapse rate decreases to the moist adiabatic lapse rate as the air continues to rise. Condensation is also commonly followed by precipitation on the top and windward sides of the mountain. As the air descends on the leeward side, it is warmed by adiabatic compression at the dry adiabatic lapse rate. Thus, the foehn wind at a certain altitude is warmer than the corresponding altitude on the windward side of the mountain range. In addition, because the air has lost much of its original water vapor content, the descending air creates an arid region on the leeward side of the mountain. Impact on the greenhouse effect If the environmental lapse rate was zero, so that the atmosphere was the same temperature at all elevations, then there would be no greenhouse effect. This doesn't mean the lapse rate and the greenhouse effect are the same thing, just that the lapse rate is a prerequisite for the greenhouse effect. The presence of greenhouse gases on a planet causes radiative cooling of the air, which leads to the formation of a non-zero lapse rate. So, the presence of greenhouse gases leads to there being a greenhouse effect at a global level. However, this need not be the case at a localized level. The localized greenhouse effect is stronger in locations where the lapse rate is stronger. In Antarctica, thermal inversions in the atmosphere (so that air at higher altitudes is warmer) sometimes cause the localized greenhouse effect to become negative (signifying enhanced radiative cooling to space instead of inhibited radiative cooling as is the case for a positive greenhouse effect). Lapse rate in an isolated column of gas A question has sometimes arisen as to whether a temperature gradient will arise in a column of still air in a gravitational field without external energy flows. 
This issue was addressed by James Clerk Maxwell in 1902, who established that if any temperature gradient forms, then that temperature gradient must be universal (i.e., the gradient must be same for all materials) or the second law of thermodynamics would be violated. Maxwell also concluded that the universal result must be one in which the temperature is uniform, i.e., the lapse rate is zero. Santiago and Visser (2019) confirm the correctness of Maxwell's conclusion (zero lapse rate) provided relativistic effects are neglected. When relativity is taken into account, gravity gives rise to an extremely small lapse rate, the Tolman gradient (derived by R. C. Tolman in 1930). At Earth's surface, the Tolman gradient would be about m, where is the temperature of the gas at the elevation of Earth's surface. Santiago and Visser remark that "gravity is the only force capable of creating temperature gradients in thermal equilibrium states without violating the laws of thermodynamics" and "the existence of Tolman's temperature gradient is not at all controversial (at least not within the general relativity community)." See also Adiabatic process Atmospheric thermodynamics Fluid dynamics Foehn wind Lapse rate climate feedback Scale height Notes References Further reading www.air-dispersion.com External links Definition, equations and tables of lapse rate from the Planetary Data system. National Science Digital Library glossary: Lapse Rate Environmental lapse rate Absolute stable air An introduction to lapse rate calculation from first principles from U. Texas Atmospheric thermodynamics Climate change feedbacks Fluid mechanics Meteorological quantities Spatial gradient Atmospheric temperature Vertical position
Lapse rate
[ "Physics", "Mathematics", "Engineering" ]
3,881
[ "Functions and mappings", "Physical quantities", "Quantity", "Mathematical objects", "Meteorological quantities", "Vertical distributions", "Civil engineering", "Mathematical relations", "Fluid mechanics" ]
224,636
https://en.wikipedia.org/wiki/Supersymmetry
Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. There have been multiple experiments on supersymmetry that have failed to provide evidence that it exists in nature. If evidence is found, supersymmetry could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics. A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. The names of bosonic partners of fermions are prefixed with s-, because they are scalar particles. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. In the simplest supersymmetry theories, with perfectly "unbroken" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass. Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified. History A supersymmetry relating mesons and baryons was first proposed, in the context of hadronic physics, by Hironari Miyazawa in 1966. This supersymmetry did not involve spacetime, that is, it concerned internal symmetry, and was broken badly. Miyazawa's work was largely ignored at the time. J. L. Gervais and B. Sakita (in 1971), Yu. A. Golfand and E. P. Likhtman (also in 1971), and D. V. Volkov and V. P. Akulov (1972), independently rediscovered supersymmetry in the context of quantum field theory, a radically new type of symmetry of spacetime and fundamental fields, which establishes a relationship between elementary particles of different quantum nature, bosons and fermions, and unifies spacetime and internal symmetries of microscopic phenomena. Supersymmetry with a consistent Lie-algebraic graded structure on which the Gervais−Sakita rediscovery was based directly first arose in 1971 in the context of an early version of string theory by Pierre Ramond, John H. Schwarz and André Neveu. 
In 1974, Julius Wess and Bruno Zumino identified the characteristic renormalization features of four-dimensional supersymmetric field theories, which identified them as remarkable QFTs, and they and Abdus Salam and their fellow researchers introduced early particle physics applications. The mathematical structure of supersymmetry (graded Lie superalgebras) has subsequently been applied successfully to other topics of physics, ranging from nuclear physics, critical phenomena, quantum mechanics to statistical physics, and supersymmetry remains a vital part of many proposed theories in many branches of physics. In particle physics, the first realistic supersymmetric version of the Standard Model was proposed in 1977 by Pierre Fayet and is known as the Minimal Supersymmetric Standard Model or MSSM for short. It was proposed to solve, amongst other things, the hierarchy problem. Supersymmetry was coined by Abdus Salam and John Strathdee in 1974 as a simplification of the term super-gauge symmetry used by Wess and Zumino, although Zumino also used the same term at around the same time. The term supergauge was in turn coined by Neveu and Schwarz in 1971 when they devised supersymmetry in the context of string theory. Applications Extension of possible symmetry groups One reason that physicists explored supersymmetry is because it offers an extension to the more familiar symmetries of quantum field theory. These symmetries are grouped into the Poincaré group and internal symmetries and the Coleman–Mandula theorem showed that under certain assumptions, the symmetries of the S-matrix must be a direct product of the Poincaré group with a compact internal symmetry group or if there is not any mass gap, the conformal group with a compact internal symmetry group. In 1971 Golfand and Likhtman were the first to show that the Poincaré algebra can be extended through introduction of four anticommuting spinor generators (in four dimensions), which later became known as supercharges. In 1975, the Haag–Łopuszański–Sohnius theorem analyzed all possible superalgebras in the general form, including those with an extended number of the supergenerators and central charges. This extended super-Poincaré algebra paved the way for obtaining a very large and important class of supersymmetric field theories. The supersymmetry algebra Traditional symmetries of physics are generated by objects that transform by the tensor representations of the Poincaré group and internal symmetries. Supersymmetries, however, are generated by objects that transform by the spin representations. According to the spin-statistics theorem, bosonic fields commute while fermionic fields anticommute. Combining the two kinds of fields into a single algebra requires the introduction of a Z2-grading under which the bosons are the even elements and the fermions are the odd elements. Such an algebra is called a Lie superalgebra. The simplest supersymmetric extension of the Poincaré algebra is the Super-Poincaré algebra. Expressed in terms of two Weyl spinors, has the following anti-commutation relation: and all other anti-commutation relations between the Qs and commutation relations between the Qs and Ps vanish. In the above expression are the generators of translation and σμ are the Pauli matrices. There are representations of a Lie superalgebra that are analogous to representations of a Lie algebra. Each Lie algebra has an associated Lie group and a Lie superalgebra can sometimes be extended into representations of a Lie supergroup. 
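Written out in a common convention, the defining relation quoted above, together with the vanishing (anti)commutators, reads:

\{ Q_\alpha, \bar{Q}_{\dot{\beta}} \} = 2\, (\sigma^{\mu})_{\alpha\dot{\beta}}\, P_\mu, \qquad \{ Q_\alpha, Q_\beta \} = \{ \bar{Q}_{\dot{\alpha}}, \bar{Q}_{\dot{\beta}} \} = 0, \qquad [ P_\mu, Q_\alpha ] = [ P_\mu, \bar{Q}_{\dot{\alpha}} ] = 0.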
Supersymmetric quantum mechanics Supersymmetric quantum mechanics adds the SUSY superalgebra to quantum mechanics as opposed to quantum field theory. Supersymmetric quantum mechanics often becomes relevant when studying the dynamics of supersymmetric solitons, and due to the simplified nature of having fields which are only functions of time (rather than space-time), a great deal of progress has been made in this subject and it is now studied in its own right. SUSY quantum mechanics involves pairs of Hamiltonians which share a particular mathematical relationship, which are called partner Hamiltonians. (The potential energy terms which occur in the Hamiltonians are then known as partner potentials.) An introductory theorem shows that for every eigenstate of one Hamiltonian, its partner Hamiltonian has a corresponding eigenstate with the same energy. This fact can be exploited to deduce many properties of the eigenstate spectrum. It is analogous to the original description of SUSY, which referred to bosons and fermions. We can imagine a "bosonic Hamiltonian", whose eigenstates are the various bosons of our theory. The SUSY partner of this Hamiltonian would be "fermionic", and its eigenstates would be the theory's fermions. Each boson would have a fermionic partner of equal energy. In finance In 2021, supersymmetric quantum mechanics was applied to option pricing and the analysis of markets in finance, and to financial networks. Supersymmetry in quantum field theory In quantum field theory, supersymmetry is motivated by solutions to several theoretical problems, for generally providing many desirable mathematical properties, and for ensuring sensible behavior at high energies. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become mathematically tractable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity. Another theoretically appealing property of supersymmetry is that it offers the only "loophole" to the Coleman–Mandula theorem, which prohibits spacetime and internal symmetries from being combined in any nontrivial way, for quantum field theories with very general assumptions. The Haag–Łopuszański–Sohnius theorem demonstrates that supersymmetry is the only way spacetime and internal symmetries can be combined consistently. While supersymmetry has not been discovered at high energy, see Section Supersymmetry in particle physics, supersymmetry was found to be effectively realized at the intermediate energy of hadronic physics where baryons and mesons are superpartners. An exception is the pion that appears as a zero mode in the mass spectrum and thus protected by the supersymmetry: It has no baryonic partner. The realization of this effective supersymmetry is readily explained in quark–diquark models: Because two different color charges close together (e.g., blue and red) appear under coarse resolution as the corresponding anti-color (e.g. anti-green), a diquark cluster viewed with coarse resolution (i.e., at the energy-momentum scale used to study hadron structure) effectively appears as an antiquark. Therefore, a baryon containing 3 valence quarks, of which two tend to cluster together as a diquark, behaves likes a meson. Supersymmetry in condensed matter physics SUSY concepts have provided useful extensions to the WKB approximation. 
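A minimal numerical sketch of the partner-Hamiltonian degeneracy described above, assuming the superpotential W(x) = x and units ħ = 2m = 1 purely for illustration:

import numpy as np

# Discretized SUSY quantum mechanics: the superpotential W gives partner potentials
# V1 = W^2 - W' (for H1 = A†A) and V2 = W^2 + W' (for H2 = A A†), with A = d/dx + W(x).
N, half_width = 1000, 10.0
x = np.linspace(-half_width, half_width, N)
dx = x[1] - x[0]
W, dW = x, np.ones_like(x)        # W(x) = x, W'(x) = 1 (assumed for illustration)

def hamiltonian(V):
    """Finite-difference -d^2/dx^2 + V(x) with hard walls at the ends of the grid."""
    main = 2.0 / dx ** 2 + V
    off = -np.ones(N - 1) / dx ** 2
    return np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E1 = np.linalg.eigvalsh(hamiltonian(W ** 2 - dW))[:6]   # lowest eigenvalues of H1
E2 = np.linalg.eigvalsh(hamiltonian(W ** 2 + dW))[:5]   # lowest eigenvalues of H2
print("H1 spectrum:", np.round(E1, 3))   # ~ 0, 2, 4, 6, 8, 10
print("H2 spectrum:", np.round(E2, 3))   # ~ 2, 4, 6, 8, 10 -- degenerate with H1 above its ground state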
Additionally, SUSY has been applied to disorder averaged systems both quantum and non-quantum (through statistical mechanics), the Fokker–Planck equation being an example of a non-quantum theory. The 'supersymmetry' in all these systems arises from the fact that one is modelling one particle and as such the 'statistics' do not matter. The use of the supersymmetry method provides a mathematical rigorous alternative to the replica trick, but only in non-interacting systems, which attempts to address the so-called 'problem of the denominator' under disorder averaging. For more on the applications of supersymmetry in condensed matter physics see Efetov (1997). In 2021, a group of researchers showed that, in theory, SUSY could be realised at the edge of a Moore–Read quantum Hall state. However, to date, no experiments have been done yet to realise it at an edge of a Moore–Read state. In 2022, a different group of researchers created a computer simulation of atoms in 1 dimensions that had supersymmetric topological quasiparticles. Supersymmetry in optics In 2013, integrated optics was found to provide a fertile ground on which certain ramifications of SUSY can be explored in readily-accessible laboratory settings. Making use of the analogous mathematical structure of the quantum-mechanical Schrödinger equation and the wave equation governing the evolution of light in one-dimensional settings, one may interpret the refractive index distribution of a structure as a potential landscape in which optical wave packets propagate. In this manner, a new class of functional optical structures with possible applications in phase matching, mode conversion and space-division multiplexing becomes possible. SUSY transformations have been also proposed as a way to address inverse scattering problems in optics and as a one-dimensional transformation optics. Supersymmetry in dynamical systems All stochastic (partial) differential equations, the models for all types of continuous time dynamical systems, possess topological supersymmetry. In the operator representation of stochastic evolution, the topological supersymmetry is the exterior derivative which is commutative with the stochastic evolution operator defined as the stochastically averaged pullback induced on differential forms by SDE-defined diffeomorphisms of the phase space. The topological sector of the so-emerging supersymmetric theory of stochastic dynamics can be recognized as the Witten-type topological field theory. The meaning of the topological supersymmetry in dynamical systems is the preservation of the phase space continuity—infinitely close points will remain close during continuous time evolution even in the presence of noise. When the topological supersymmetry is broken spontaneously, this property is violated in the limit of the infinitely long temporal evolution and the model can be said to exhibit (the stochastic generalization of) the butterfly effect. From a more general perspective, spontaneous breakdown of the topological supersymmetry is the theoretical essence of the ubiquitous dynamical phenomenon variously known as chaos, turbulence, self-organized criticality etc. The Goldstone theorem explains the associated emergence of the long-range dynamical behavior that manifests itself as noise, butterfly effect, and the scale-free statistics of sudden (instantonic) processes, such as earthquakes, neuroavalanches, and solar flares, known as the Zipf's law and the Richter scale. 
Supersymmetry in mathematics SUSY is also sometimes studied mathematically for its intrinsic properties. This is because it describes complex fields satisfying a property known as holomorphy, which allows holomorphic quantities to be exactly computed. This makes supersymmetric models useful "toy models" of more realistic theories. A prime example of this has been the demonstration of S-duality in four-dimensional gauge theories that interchanges particles and monopoles. The proof of the Atiyah–Singer index theorem is much simplified by the use of supersymmetric quantum mechanics. Supersymmetry in string theory Supersymmetry is an integral part of string theory, a possible theory of everything. There are two types of string theory, supersymmetric string theory or superstring theory, and non-supersymmetric string theory. By definition of superstring theory, supersymmetry is required in superstring theory at some level. However, even in non-supersymmetric string theory, a type of supersymmetry called misaligned supersymmetry is still required in the theory in order to ensure no physical tachyons appear. Any string theories without some kind of supersymmetry, such as bosonic string theory and the , , and heterotic string theories, will have a tachyon and therefore the spacetime vacuum itself would be unstable and would decay into some tachyon-free string theory usually in a lower spacetime dimension. There is no experimental evidence that either supersymmetry or misaligned supersymmetry holds in our universe, and many physicists have moved on from supersymmetry and string theory entirely due to the non-detection of supersymmetry at the LHC. Despite the null results for supersymmetry at the LHC so far, some particle physicists have nevertheless moved to string theory in order to resolve the naturalness crisis for certain supersymmetric extensions of the Standard Model. According to the particle physicists, there exists a concept of "stringy naturalness" in string theory, where the string theory landscape could have a power law statistical pull on soft SUSY breaking terms to large values (depending on the number of hidden sector SUSY breaking fields contributing to the soft terms). If this is coupled with an anthropic requirement that contributions to the weak scale not exceed a factor between 2 and 5 from its measured value (as argued by Agrawal et al.), then the Higgs mass is pulled up to the vicinity of 125 GeV while most sparticles are pulled to values beyond the current reach of LHC. (The Higgs was determined to have a mass of 125 GeV ±0.15 GeV in 2022.) An exception occurs for higgsinos which gain mass not from SUSY breaking but rather from whatever mechanism solves the SUSY mu problem. Light higgsino pair production in association with hard initial state jet radiation leads to a soft opposite-sign dilepton plus jet plus missing transverse energy signal. Supersymmetry in particle physics In particle physics, a supersymmetric extension of the Standard Model is a possible candidate for undiscovered particle physics, and seen by some physicists as an elegant solution to many current problems in particle physics if confirmed correct, which could resolve various areas where current theories are believed to be incomplete and where limitations of current theories are well established. 
In particular, one supersymmetric extension of the Standard Model, the Minimal Supersymmetric Standard Model (MSSM), became popular in theoretical particle physics, as the Minimal Supersymmetric Standard Model is the simplest supersymmetric extension of the Standard Model that could resolve major hierarchy problems within the Standard Model, by guaranteeing that quadratic divergences of all orders will cancel out in perturbation theory. If a supersymmetric extension of the Standard Model is correct, superpartners of the existing elementary particles would be new and undiscovered particles and supersymmetry is expected to be spontaneously broken. There is no experimental evidence that a supersymmetric extension to the Standard Model is correct, or whether or not other extensions to current models might be more accurate. It is only since around 2010 that particle accelerators specifically designed to study physics beyond the Standard Model have become operational (i.e. the Large Hadron Collider (LHC)), and it is not known where exactly to look, nor the energies required for a successful search. However, the negative results from the LHC since 2010 have already ruled out some supersymmetric extensions to the Standard Model, and many physicists believe that the Minimal Supersymmetric Standard Model, while not ruled out, is no longer able to fully resolve the hierarchy problem. Supersymmetric extensions of the Standard Model Incorporating supersymmetry into the Standard Model requires doubling the number of particles since there is no way that any of the particles in the Standard Model can be superpartners of each other. With the addition of new particles, there are many possible new interactions. The simplest possible supersymmetric model consistent with the Standard Model is the Minimal Supersymmetric Standard Model (MSSM) which can include the necessary additional new particles that are able to be superpartners of those in the Standard Model. One of the original motivations for the Minimal Supersymmetric Standard Model came from the hierarchy problem. Due to the quadratically divergent contributions to the Higgs mass squared in the Standard Model, the quantum mechanical interactions of the Higgs boson causes a large renormalization of the Higgs mass and unless there is an accidental cancellation, the natural size of the Higgs mass is the greatest scale possible. Furthermore, the electroweak scale receives enormous Planck-scale quantum corrections. The observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine tuning. This problem is known as the hierarchy problem. Supersymmetry close to the electroweak scale, such as in the Minimal Supersymmetric Standard Model, would solve the hierarchy problem that afflicts the Standard Model. It would reduce the size of the quantum corrections by having automatic cancellations between fermionic and bosonic Higgs interactions, and Planck-scale quantum corrections cancel between partners and superpartners (owing to a minus sign associated with fermionic loops). The hierarchy between the electroweak scale and the Planck scale would be achieved in a natural manner, without extraordinary fine-tuning. If supersymmetry were restored at the weak scale, then the Higgs mass would be related to supersymmetry breaking which can be induced from small non-perturbative effects explaining the vastly different scales in the weak interactions and gravitational interactions. 
Another motivation for the Minimal Supersymmetric Standard Model comes from grand unification, the idea that the gauge symmetry groups should unify at high-energy. In the Standard Model, however, the weak, strong and electromagnetic gauge couplings fail to unify at high energy. In particular, the renormalization group evolution of the three gauge coupling constants of the Standard Model is somewhat sensitive to the present particle content of the theory. These coupling constants do not quite meet together at a common energy scale if we run the renormalization group using the Standard Model. After incorporating minimal SUSY at the electroweak scale, the running of the gauge couplings are modified, and joint convergence of the gauge coupling constants is projected to occur at approximately 1016 GeV. The modified running also provides a natural mechanism for radiative electroweak symmetry breaking. In many supersymmetric extensions of the Standard Model, such as the Minimal Supersymmetric Standard Model, there is a heavy stable particle (such as the neutralino) which could serve as a weakly interacting massive particle (WIMP) dark matter candidate. The existence of a supersymmetric dark matter candidate is related closely to R-parity. Supersymmetry at the electroweak scale (augmented with a discrete symmetry) typically provides a candidate dark matter particle at a mass scale consistent with thermal relic abundance calculations. The standard paradigm for incorporating supersymmetry into a realistic theory is to have the underlying dynamics of the theory be supersymmetric, but the ground state of the theory does not respect the symmetry and supersymmetry is broken spontaneously. The supersymmetry break can not be done permanently by the particles of the MSSM as they currently appear. This means that there is a new sector of the theory that is responsible for the breaking. The only constraint on this new sector is that it must break supersymmetry permanently and must give superparticles TeV scale masses. There are many models that can do this and most of their details do not matter. In order to parameterize the relevant features of supersymmetry breaking, arbitrary soft SUSY breaking terms are added to the theory which temporarily break SUSY explicitly but could never arise from a complete theory of supersymmetry breaking. Searches and constraints for supersymmetry SUSY extensions of the standard model are constrained by a variety of experiments, including measurements of low-energy observables – for example, the anomalous magnetic moment of the muon at Fermilab; the WMAP dark matter density measurement and direct detection experiments – for example, XENON-100 and LUX; and by particle collider experiments, including B-physics, Higgs phenomenology and direct searches for superpartners (sparticles), at the Large Electron–Positron Collider, Tevatron and the LHC. In fact, CERN publicly states that if a supersymmetric model of the Standard Model "is correct, supersymmetric particles should appear in collisions at the LHC." Historically, the tightest limits were from direct production at colliders. The first mass limits for squarks and gluinos were made at CERN by the UA1 experiment and the UA2 experiment at the Super Proton Synchrotron. LEP later set very strong limits, which in 2006 were extended by the D0 experiment at the Tevatron. 
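As a rough illustration of the gauge-coupling behaviour described above, a one-loop sketch of the running inverse couplings; the inputs at M_Z are approximate and, purely for simplicity, the superpartners are assumed to contribute from the weak scale upward:

import numpy as np

# One-loop running: d(1/alpha_i)/d(ln mu) = -b_i / (2*pi), coefficients in SU(5) normalization.
b_sm   = np.array([41.0 / 10.0, -19.0 / 6.0, -7.0])   # Standard Model
b_mssm = np.array([33.0 / 5.0,   1.0,        -3.0])   # MSSM (sparticles taken light, a simplification)

alpha_inv_mz = np.array([59.0, 29.6, 8.5])   # approximate 1/alpha_1, 1/alpha_2, 1/alpha_3 at M_Z
m_z = 91.19                                  # GeV

def run(b, mu):
    """Evolve the inverse couplings from M_Z up to the scale mu (GeV)."""
    return alpha_inv_mz - b / (2.0 * np.pi) * np.log(mu / m_z)

for mu in (1e3, 1e10, 1e16):
    print(f"mu = {mu:.0e} GeV   SM: {np.round(run(b_sm, mu), 1)}   MSSM: {np.round(run(b_mssm, mu), 1)}")
# In the MSSM the three values approach one another near 1e16 GeV; in the Standard Model they do not quite meet.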
From 2003 to 2015, WMAP's and Planck's dark matter density measurements have strongly constrained supersymmetric extensions of the Standard Model, which, if they explain dark matter, have to be tuned to invoke a particular mechanism to sufficiently reduce the neutralino density. Prior to the beginning of the LHC, in 2009, fits of available data to CMSSM and NUHM1 indicated that squarks and gluinos were most likely to have masses in the 500 to 800 GeV range, though values as high as 2.5 TeV were allowed with low probabilities. Neutralinos and sleptons were expected to be quite light, with the lightest neutralino and the lightest stau most likely to be found between 100 and 150 GeV. The first runs of the LHC surpassed existing experimental limits from the Large Electron–Positron Collider and Tevatron and partially excluded the aforementioned expected ranges. In 2011–12, the LHC discovered a Higgs boson with a mass of about 125 GeV, and with couplings to fermions and bosons which are consistent with the Standard Model. The MSSM predicts that the mass of the lightest Higgs boson should not be much higher than the mass of the Z boson, and, in the absence of fine tuning (with the supersymmetry breaking scale on the order of 1 TeV), should not exceed 135 GeV. The LHC found no previously unknown particles other than the Higgs boson which was already suspected to exist as part of the Standard Model, and therefore no evidence for any supersymmetric extension of the Standard Model. Indirect methods include the search for a permanent electric dipole moment (EDM) in the known Standard Model particles, which can arise when the Standard Model particle interacts with the supersymmetric particles. The current best constraint on the electron electric dipole moment put it to be smaller than 10−28 e·cm, equivalent to a sensitivity to new physics at the TeV scale and matching that of the current best particle colliders. A permanent EDM in any fundamental particle points towards time-reversal violating physics, and therefore also CP-symmetry violation via the CPT theorem. Such EDM experiments are also much more scalable than conventional particle accelerators and offer a practical alternative to detecting physics beyond the standard model as accelerator experiments become increasingly costly and complicated to maintain. The current best limit for the electron's EDM has already reached a sensitivity to rule out so called 'naive' versions of supersymmetric extensions of the Standard Model. Research in the late 2010s and early 2020s from experimental data on the cosmological constant, LIGO noise, and pulsar timing, suggests it's very unlikely that there are any new particles with masses much higher than those which can be found in the standard model or the LHC. However, this research has also indicated that quantum gravity or perturbative quantum field theory will become strongly coupled before 1 PeV, leading to other new physics in the TeVs. Current status The negative findings in the experiments disappointed many physicists, who believed that supersymmetric extensions of the Standard Model (and other theories relying upon it) were by far the most promising theories for "new" physics beyond the Standard Model, and had hoped for signs of unexpected results from the experiments. 
In particular, the LHC result seems problematic for the Minimal Supersymmetric Standard Model, as the value of 125 GeV is relatively large for the model and can only be achieved with large radiative loop corrections from top squarks, which many theorists consider to be "unnatural" (see naturalness and fine tuning). In response to the so-called "naturalness crisis" in the Minimal Supersymmetric Standard Model, some researchers have abandoned naturalness and the original motivation to solve the hierarchy problem naturally with supersymmetry, while other researchers have moved on to other supersymmetric models such as split supersymmetry. Still others have moved to string theory as a result of the naturalness crisis. Former enthusiastic supporter Mikhail Shifman went as far as urging the theoretical community to search for new ideas and accept that supersymmetry was a failed theory in particle physics. However, some researchers suggested that this "naturalness" crisis was premature because various calculations were too optimistic about the limits of masses which would allow a supersymmetric extension of the Standard Model as a solution. General supersymmetry Supersymmetry appears in many related contexts of theoretical physics. It is possible to have multiple supersymmetries and also have supersymmetric extra dimensions. Extended supersymmetry It is possible to have more than one kind of supersymmetry transformation. Theories with more than one supersymmetry transformation are known as extended supersymmetric theories. The more supersymmetry a theory has, the more constrained are the field content and interactions. Typically the number of copies of a supersymmetry is a power of 2 (1, 2, 4, 8...). In four dimensions, a spinor has four degrees of freedom, so the minimal number of supersymmetry generators in four dimensions is four; having eight copies of supersymmetry then means that there are 32 supersymmetry generators. The maximal number of supersymmetry generators possible is 32. Theories with more than 32 supersymmetry generators automatically have massless fields with spin greater than 2. It is not known how to make massless fields with spin greater than two interact, so the maximal number of supersymmetry generators considered is 32. This is due to the Weinberg–Witten theorem. This corresponds to an N = 8 supersymmetry theory. Theories with 32 supersymmetries automatically have a graviton.
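For orientation, the supersymmetry generators discussed above satisfy the standard four-dimensional super-Poincaré algebra; in the extended case the N copies are labelled by indices I, J = 1, ..., N. This is the usual textbook form, quoted here only for reference:

```latex
\{ Q_\alpha^{\,I}, \bar Q_{\dot\beta\, J} \} = 2\,\sigma^\mu_{\alpha\dot\beta}\, P_\mu\, \delta^I_{\ J},
\qquad
\{ Q_\alpha^{\,I}, Q_\beta^{\,J} \} = \epsilon_{\alpha\beta}\, Z^{IJ},
\qquad
[\, P_\mu ,\, Q_\alpha^{\,I} \,] = 0,
```

with antisymmetric central charges Z^{IJ} (absent for N = 1). Each multiplet in the list that follows furnishes a representation of this algebra.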
For four dimensions there are the following theories, with the corresponding multiplets (CPT adds a copy, whenever they are not invariant under such symmetry), listed here by the helicities of their component states:
N = 1: chiral multiplet (0, 1/2); vector multiplet (1/2, 1); gravitino multiplet (1, 3/2); graviton multiplet (3/2, 2).
N = 2: hypermultiplet (−1/2, 0, 1/2); vector multiplet (0, 1/2, 1); supergravity multiplet (1, 3/2, 2).
N = 4: vector multiplet (−1, −1/2, 0, 1/2, 1); supergravity multiplet (0, 1/2, 1, 3/2, 2).
N = 8: supergravity multiplet (−2, −3/2, −1, −1/2, 0, 1/2, 1, 3/2, 2).
Supersymmetry in alternate numbers of dimensions It is possible to have supersymmetry in dimensions other than four. Because the properties of spinors change drastically between different dimensions, each dimension has its own characteristics. In d dimensions, the size of spinors is approximately 2^(d/2) or 2^((d − 1)/2). Since the maximum number of supersymmetries is 32, the greatest number of dimensions in which a supersymmetric theory can exist is eleven. Fractional supersymmetry Fractional supersymmetry is a generalization of the notion of supersymmetry in which the minimal positive amount of spin does not have to be 1/2 but can be an arbitrary 1/N for integer value of N. Such a generalization is possible in two or fewer spacetime dimensions. See also 4D N = 1 global supersymmetry Anyon Next-to-Minimal Supersymmetric Standard Model Quantum group Split supersymmetry Supercharge Supermultiplet Supergeometry Supergravity Supergroup Superpartner Superspace Supersplit supersymmetry Supersymmetric gauge theory Supersymmetry nonrenormalization theorems Wess–Zumino model References Further reading Supersymmetry and Supergravity page in String Theory Wiki lists more books and reviews. Theoretical introductions, free and online . . . Monographs Baer, H., and Tata, X., Weak Scale Supersymmetry, Cambridge University Press, Cambridge, (1999). . Binetruy, P., Supersymmetry: Theory, Experiment, and Cosmology, Oxford University Press, Oxford, (2012). . Cecotti, S., Supersymmetric Field Theories: Geometric Structures and Dualities, Cambridge University Press, Cambridge, (2015). .
Drees, M., Godbole, R., and Roy, P., Theory & Phenomenology of Sparticles, World Scientific, Singapore (2005). . Dreiner, H.K., Haber, H.E., Martin, S.P., From Spinors to Supersymmetry, Cambridge University Press, Cambridge, (2023). . Duplij, S., Siegel, W., and Bagger, J.,Concise Encyclopedia of Supersymmetry, Springer, (2003). . Freud, P.G.O., Introduction to Supersymmetry, Cambridge University Press, Cambridge, (1988). . Junker, G., Supersymmetric Methods in Quantum and Statistical Physics, Springer, (2011). . Kane, G.L., Supersymmetry: Unveiling the Ultimate Laws of Nature, Basic Books, New York (2001). . Kane, G.L., and Shifman, M., eds. The Supersymmetric World: The Beginnings of the Theory, World Scientific, Singapore (2000). . Müller-Kirsten, H.J.W., and Wiedemann, A., Introduction to Supersymmetry, 2nd ed., World Scientific, Singapore (2010). . Nath, P., Supersymmetry, Supergravity and Unification, Cambridge University Press, Cambridge, (2016). . Raby, S., Supersymmetric Grand Unified Theories, Springer, (2017). . Tachikawa, Y., N=2 Supersymmetric Dynamics for Pedestrians, Springer, (2014). . Terning, J., Modern Supersymmetry: Dynamics and Duality, Oxford University Press, Oxford, (2009). . Wegner, F., Supermathematics and its Applications in Statistical Physics, Springer, (2016). . Weinberg, S., The Quantum Theory of Fields, Volume 3: Supersymmetry, Cambridge University Press, Cambridge, (1999). . Wess, J. and Bagger, J., Supersymmetry and Supergravity, Princeton University Press, Princeton, (1992). . On experiments Brookhaven National Laboratory (Jan 8, 2004). New g−2 measurement deviates further from Standard Model. Press Release. Fermi National Accelerator Laboratory (Sept 25, 2006). Fermilab's CDF scientists have discovered the quick-change behavior of the B-sub-s meson. Press Release. External links Supersymmetry – European Organization for Nuclear Research (CERN) The status of supersymmetry – Symmetry Magazine (Fermilab/SLAC), January 12, 2021 As Supersymmetry Fails Tests, Physicists Seek New Ideas – Quanta Magazine, November 20, 2012 What is Supersymmetry? – Fermilab, May 21, 2013 Why Supersymmetry? – Fermilab, May 31, 2013 The Standard Model and Supersymmetry – World Science Festival, March 4, 2015 SUSY running out of hiding places – BBC, December 11, 2012 Concepts in physics Concepts in the philosophy of science History of science Physics beyond the Standard Model Quantum field theory Theoretical physics
Supersymmetry
[ "Physics", "Technology" ]
8,092
[ "Quantum field theory", "History of science and technology", "History of science", "Theoretical physics", "Unsolved problems in physics", "Quantum mechanics", "Particle physics", "nan", "Supersymmetry", "Physics beyond the Standard Model", "Symmetry" ]
225,256
https://en.wikipedia.org/wiki/Silicon%20carbide
Silicon carbide (SiC), also known as carborundum (), is a hard chemical compound containing silicon and carbon. A wide bandgap semiconductor, it occurs in nature as the extremely rare mineral moissanite, but has been mass-produced as a powder and crystal since 1893 for use as an abrasive. Grains of silicon carbide can be bonded together by sintering to form very hard ceramics that are widely used in applications requiring high endurance, such as car brakes, car clutches and ceramic plates in bulletproof vests. Large single crystals of silicon carbide can be grown by the Lely method and they can be cut into gems known as synthetic moissanite. Electronic applications of silicon carbide such as light-emitting diodes (LEDs) and detectors in early radios were first demonstrated around 1907. SiC is used in semiconductor electronics devices that operate at high temperatures or high voltages, or both. Natural occurrence Naturally occurring moissanite is found in only minute quantities in certain types of meteorite, corundum deposits, and kimberlite. Virtually all the silicon carbide sold in the world, including moissanite jewels, is synthetic. Natural moissanite was first found in 1893 as a small component of the Canyon Diablo meteorite in Arizona by Ferdinand Henri Moissan, after whom the material was named in 1905. Moissan's discovery of naturally occurring SiC was initially disputed because his sample may have been contaminated by silicon carbide saw blades that were already on the market at that time. While rare on Earth, silicon carbide is remarkably common in space. It is a common form of stardust found around carbon-rich stars, and examples of this stardust have been found in pristine condition in primitive (unaltered) meteorites. The silicon carbide found in space and in meteorites is almost exclusively the beta-polymorph. Analysis of SiC grains found in the Murchison meteorite, a carbonaceous chondrite meteorite, has revealed anomalous isotopic ratios of carbon and silicon, indicating that these grains originated outside the solar system. History Early experiments Non-systematic, less-recognized and often unverified syntheses of silicon carbide include: César-Mansuète Despretz's passing an electric current through a carbon rod embedded in sand (1849) Robert Sydney Marsden's dissolution of silica in molten silver in a graphite crucible (1881) Paul Schuetzenberger's heating of a mixture of silicon and silica in a graphite crucible (1881) Albert Colson's heating of silicon under a stream of ethylene (1882). Wide-scale production Wide-scale production is credited to Edward Goodrich Acheson in 1891. Acheson was attempting to prepare artificial diamonds when he heated a mixture of clay (aluminium silicate) and powdered coke (carbon) in an iron bowl. He called the blue crystals that formed carborundum, believing it to be a new compound of carbon and aluminium, similar to corundum. Moissan also synthesized SiC by several routes, including dissolution of carbon in molten silicon, melting a mixture of calcium carbide and silica, and by reducing silica with carbon in an electric furnace. Acheson patented the method for making silicon carbide powder on February 28, 1893. Acheson also developed the electric batch furnace by which SiC is still made today and formed the Carborundum Company to manufacture bulk SiC, initially for use as an abrasive. 
In 1900 the company settled with the Electric Smelting and Aluminum Company when a judge's decision gave "priority broadly" to its founders "for reducing ores and other substances by the incandescent method". The first use of SiC was as an abrasive. This was followed by electronic applications. In the beginning of the 20th century, silicon carbide was used as a detector in the first radios. In 1907 Henry Joseph Round produced the first LED by applying a voltage to a SiC crystal and observing yellow, green and orange emission at the cathode. The effect was later rediscovered by O. V. Losev in the Soviet Union, in 1923. Production Because natural moissanite is extremely scarce, most silicon carbide is synthetic. Silicon carbide is used as an abrasive, as well as a semiconductor and diamond simulant of gem quality. The simplest process to manufacture silicon carbide is to combine silica sand and carbon in an Acheson graphite electric resistance furnace at a high temperature. Fine SiO2 particles in plant material (e.g. rice husks) can be converted to SiC by heating in the excess carbon from the organic material. The silica fume, which is a byproduct of producing silicon metal and ferrosilicon alloys, can also be converted to SiC by heating it with graphite at high temperature. The material formed in the Acheson furnace varies in purity, according to its distance from the graphite resistor heat source. Colorless, pale yellow and green crystals have the highest purity and are found closest to the resistor. The color changes to blue and black at greater distance from the resistor, and these darker crystals are less pure. Nitrogen and aluminium are common impurities, and they affect the electrical conductivity of SiC. Pure silicon carbide can be made by the Lely process, in which SiC powder is sublimed into high-temperature species of silicon, carbon, silicon dicarbide (SiC2), and disilicon carbide (Si2C) in an argon gas ambient at 2,500 °C and redeposited into flake-like single crystals, sized up to 2 × 2 cm, at a slightly colder substrate. This process yields high-quality single crystals, mostly of 6H-SiC phase (because of high growth temperature). A modified Lely process involving induction heating in graphite crucibles yields even larger single crystals of 4 inches (10 cm) in diameter, having a section 81 times larger compared to the conventional Lely process. Cubic SiC is usually grown by the more expensive process of chemical vapor deposition (CVD) of silane, hydrogen and nitrogen. Homoepitaxial and heteroepitaxial SiC layers can be grown employing both gas and liquid phase approaches. To form complexly shaped SiC, preceramic polymers can be used as precursors which form the ceramic product through pyrolysis at temperatures in the range 1,000–1,100 °C. Precursor materials to obtain silicon carbide in such a manner include polycarbosilanes, poly(methylsilyne) and polysilazanes. Silicon carbide materials obtained through the pyrolysis of preceramic polymers are known as polymer derived ceramics or PDCs. Pyrolysis of preceramic polymers is most often conducted under an inert atmosphere at relatively low temperatures. Relative to the CVD process, the pyrolysis method is advantageous because the polymer can be formed into various shapes prior to thermalization into the ceramic. SiC can also be made into wafers by cutting a single crystal either using a diamond wire saw or by using a laser. SiC is a useful semiconductor used in power electronics.
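The carbothermal route described above (silica sand plus carbon in the Acheson furnace) is commonly summarized by the overall reaction below; intermediate steps via gaseous SiO are omitted. The reaction is strongly endothermic, which is why electric resistance heating is used.

```latex
\mathrm{SiO_2} \;+\; 3\,\mathrm{C} \;\longrightarrow\; \mathrm{SiC} \;+\; 2\,\mathrm{CO}\uparrow
```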
Structure and properties Silicon carbide exists in about 250 crystalline forms. Through inert atmospheric pyrolysis of preceramic polymers, silicon carbide in a glassy amorphous form is also produced. The polymorphism of SiC is characterized by a large family of similar crystalline structures called polytypes. They are variations of the same chemical compound that are identical in two dimensions and differ in the third. Thus, they can be viewed as layers stacked in a certain sequence. Alpha silicon carbide (α-SiC) is the most commonly encountered polymorph, and is formed at temperatures greater than 1,700 °C and has a hexagonal crystal structure (similar to wurtzite). The beta modification (β-SiC), with a zinc blende crystal structure (similar to diamond), is formed at temperatures below 1,700 °C. Until recently, the beta form has had relatively few commercial uses, although there is now increasing interest in its use as a support for heterogeneous catalysts, owing to its higher surface area compared to the alpha form. Pure SiC is colorless. The brown to black color of the industrial product results from iron impurities. The rainbow-like luster of the crystals is due to the thin-film interference of a passivation layer of silicon dioxide that forms on the surface. The high sublimation temperature of SiC (approximately 2,700 °C) makes it useful for bearings and furnace parts. Silicon carbide does not melt but begins to sublimate near 2,700 °C like graphite, having an appreciable vapor pressure near that temperature. It is also highly inert chemically, partly due to the formation of a thin passivated layer of SiO2. There is currently much interest in its use as a semiconductor material in electronics, where its high thermal conductivity, high electric field breakdown strength and high maximum current density make it more promising than silicon for high-powered devices. SiC has a very low coefficient of thermal expansion of about 2.3 × 10⁻⁶ K⁻¹ near 300 K (for 4H and 6H SiC) and experiences no phase transitions in the temperature range 5 K to 340 K that would cause discontinuities in the thermal expansion coefficient. Electrical conductivity Silicon carbide is a semiconductor, which can be doped n-type by nitrogen or phosphorus and p-type by beryllium, boron, aluminium, or gallium. Metallic conductivity has been achieved by heavy doping with boron, aluminium or nitrogen. Superconductivity has been detected in 3C-SiC:Al, 3C-SiC:B and 6H-SiC:B at similar temperatures ~1.5 K. A crucial difference is however observed for the magnetic field behavior between aluminium and boron doping: 3C-SiC:Al is type-II. In contrast, 3C-SiC:B is type-I, as is 6H-SiC:B. Thus the superconducting properties seem to depend more on dopant (B vs. Al) than on polytype (3C- vs 6H-). In an attempt to explain this dependence, it was noted that B substitutes at C sites in SiC, but Al substitutes at Si sites. Therefore, Al and B "see" different environments, in both polytypes. Uses Abrasive and cutting tools In manufacturing, it is used for its hardness in abrasive machining processes such as grinding, honing, water-jet cutting and sandblasting. SiC provides a much sharper and harder alternative for sand blasting as compared to aluminium oxide. Particles of silicon carbide are laminated to paper to create sandpapers and the grip tape on skateboards. In the arts, silicon carbide is a popular abrasive in modern lapidary due to the durability and low cost of the material.
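To put the low thermal expansion coefficient quoted in the Structure and properties section above in perspective, here is a small back-of-the-envelope sketch; the part length and temperature swing are arbitrary illustrative numbers, not values from the text.

```python
alpha_sic = 2.3e-6      # K^-1, linear CTE of 4H/6H SiC near 300 K (value quoted above)
length_mm = 100.0       # illustrative part length
delta_T   = 100.0       # illustrative temperature swing in kelvin

delta_L_um = alpha_sic * delta_T * length_mm * 1e3   # expansion in micrometres
print(f"A {length_mm:.0f} mm SiC part grows by ~{delta_L_um:.0f} um over a {delta_T:.0f} K swing")
# ~23 um, i.e. about 0.02% strain; this dimensional stability is one reason SiC is
# attractive for telescope mirrors and precision structures, as discussed later on.
```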
In 1982 an exceptionally strong composite of aluminium oxide and silicon carbide whiskers was discovered. Development of this laboratory-produced composite to a commercial product took only three years. In 1985, the first commercial cutting tools made from this alumina and silicon carbide whisker-reinforced composite were introduced into the market. Structural material In the 1980s and 1990s, silicon carbide was studied in several research programs for high-temperature gas turbines in Europe, Japan and the United States. The components were intended to replace nickel superalloy turbine blades or nozzle vanes. However, none of these projects resulted in a production quantity, mainly because of its low impact resistance and its low fracture toughness. Like other hard ceramics (namely alumina and boron carbide), silicon carbide is used in composite armor (e.g. Chobham armor), and in ceramic plates in bulletproof vests. Dragon Skin, which was produced by Pinnacle Armor, used disks of silicon carbide. Improved fracture toughness in SiC armor can be facilitated through the phenomenon of abnormal grain growth or AGG. The growth of abnormally long silicon carbide grains may serve to impart a toughening effect through crack-wake bridging, similar to whisker reinforcement. Similar AGG-toughening effects have been reported in Silicon nitride (Si3N4). Silicon carbide is used as a support and shelving material in high temperature kilns such as for firing ceramics, glass fusing, or glass casting. SiC kiln shelves are considerably lighter and more durable than traditional alumina shelves. In December 2015, infusion of silicon carbide nano-particles in molten magnesium was mentioned as a way to produce a new strong and plastic alloy suitable for use in aeronautics, aerospace, automobile and micro-electronics. Automobile parts Silicon-infiltrated carbon-carbon composite is used for high performance "ceramic" brake discs, as they are able to withstand extreme temperatures. The silicon reacts with the graphite in the carbon-carbon composite to become carbon-fiber-reinforced silicon carbide (C/SiC). These brake discs are used on some road-going sports cars, supercars, as well as other performance cars including the Porsche Carrera GT, the Bugatti Veyron, the Chevrolet Corvette ZR1, the McLaren P1, Bentley, Ferrari, Lamborghini and some specific high-performance Audi cars. Silicon carbide is also used in a sintered form for diesel particulate filters. It is also used as an oil additive to reduce friction, emissions, and harmonics. Foundry crucibles SiC is used in crucibles for holding melting metal in small and large foundry applications. Electric systems The earliest electrical application of SiC was as a surge protection in lightning arresters in electric power systems. These devices must exhibit high resistance until the voltage across them reaches a certain threshold VT at which point their resistance must drop to a lower level and maintain this level until the applied voltage drops below VT flushing current into the ground. It was recognized early on that SiC had such a voltage-dependent resistance, and so columns of SiC pellets were connected between high-voltage power lines and the earth. When a lightning strike to the line raises the line voltage sufficiently, the SiC column will conduct, allowing strike current to pass harmlessly to the earth instead of along the power line. 
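The surge-arrester behaviour just described relies on a strongly voltage-dependent resistance. The toy sketch below uses the empirical power law I = k·(V/V_ref)^α often quoted for nonlinear resistors; the constants are purely illustrative and are not measured SiC values.

```python
def column_current(v, k=1e-3, v_ref=10e3, alpha=5.0):
    """Empirical nonlinear-resistor law I = k * (V / V_ref)**alpha (illustrative constants only)."""
    return k * (v / v_ref) ** alpha

for v in (5e3, 10e3, 20e3, 40e3):        # volts across the arrester column
    print(f"V = {v / 1e3:5.1f} kV  ->  I = {column_current(v):12.4g} A")
# Below the nominal threshold only a tiny leakage current flows; during a surge the
# current grows steeply with voltage, so the strike energy is shunted to earth
# rather than travelling down the power line.
```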
The SiC columns proved to conduct significantly at normal power-line operating voltages and thus had to be placed in series with a spark gap. This spark gap is ionized and rendered conductive when lightning raises the voltage of the power line conductor, thus effectively connecting the SiC column between the power conductor and the earth. Spark gaps used in lightning arresters are unreliable, either failing to strike an arc when needed or failing to turn off afterwards, in the latter case due to material failure or contamination by dust or salt. Usage of SiC columns was originally intended to eliminate the need for the spark gap in lightning arresters. Gapped SiC arresters were used for lightning-protection and sold under the GE and Westinghouse brand names, among others. The gapped SiC arrester has been largely displaced by no-gap varistors that use columns of zinc oxide pellets. Electronic circuit elements Silicon carbide was the first commercially important semiconductor material. A crystal radio "carborundum" (synthetic silicon carbide) detector diode was patented by Henry Harrison Chase Dunwoody in 1906. It found much early use in shipboard receivers. Power electronic devices In 1993, the silicon carbide was considered a semiconductor in both research and early mass production providing advantages for fast, high-temperature and/or high-voltage devices. The first devices available were Schottky diodes, followed by junction-gate FETs and MOSFETs for high-power switching. Bipolar transistors and thyristors were described. A major problem for SiC commercialization has been the elimination of defects: edge dislocations, screw dislocations (both hollow and closed core), triangular defects and basal plane dislocations. As a result, devices made of SiC crystals initially displayed poor reverse blocking performance, though researchers have been tentatively finding solutions to improve the breakdown performance. Apart from crystal quality, problems with the interface of SiC with silicon dioxide have hampered the development of SiC-based power MOSFETs and insulated-gate bipolar transistors. Although the mechanism is still unclear, nitriding has dramatically reduced the defects causing the interface problems. In 2008, the first commercial JFETs rated at 1,200 V were introduced to the market, followed in 2011 by the first commercial MOSFETs rated at 1200 V. JFETs are now available rated 650 V to 1,700 V with resistance as low as 25 mΩ. Beside SiC switches and SiC Schottky diodes (also Schottky barrier diode, SBD) in the popular TO-247 and TO-220 packages, companies started even earlier to implement the bare chips into their power electronic modules. SiC SBD diodes found wide market spread being used in PFC circuits and IGBT power modules. Conferences such as the International Conference on Integrated Power Electronics Systems (CIPS) report regularly about the technological progress of SiC power devices. Major challenges for fully unleashing the capabilities of SiC power devices are: Gate drive: SiC devices often require gate drive voltage levels that are different from their silicon counterparts and may be even unsymmetric, for example, +20 V and −5 V. Packaging: SiC chips may have a higher power density than silicon power devices and are able to handle higher temperatures exceeding the silicon limit of 150 °C. New die attach technologies such as sintering are required to efficiently get the heat out of the devices and ensure a reliable interconnection. 
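A rough sketch of why the on-resistance and temperature figures above matter in a converter design: the 25 mΩ value is taken from the text, while the load current, thermal resistance and ambient temperature are invented illustrative numbers.

```python
r_ds_on   = 25e-3   # ohm, on-resistance class quoted above for SiC JFETs/MOSFETs
i_load    = 20.0    # A, assumed load current (illustrative)
r_th_ja   = 1.5     # K/W, assumed junction-to-ambient thermal resistance (illustrative)
t_ambient = 50.0    # deg C, assumed ambient temperature (illustrative)

p_conduction = i_load ** 2 * r_ds_on              # conduction loss only; switching loss ignored
t_junction   = t_ambient + p_conduction * r_th_ja
print(f"Conduction loss ~{p_conduction:.0f} W, junction ~{t_junction:.0f} degC")
# ~10 W and ~65 degC here, comfortably below the 150 degC silicon limit mentioned
# above; the headroom SiC offers beyond that limit is what makes die attach and
# packaging the remaining bottleneck.
```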
Beginning with the Tesla Model 3, the inverters in the drive unit use 24 pairs of silicon carbide (SiC) MOSFET chips rated for 650 volts each. Silicon carbide in this instance gave Tesla a significant advantage over chips made of silicon in terms of size and weight. A number of automobile manufacturers are planning to incorporate silicon carbide into power electronic devices in their products. A significant increase in production of silicon carbide is projected, beginning with a large plant opened in 2022 by Wolfspeed in upstate New York. LEDs The phenomenon of electroluminescence was discovered in 1907 using silicon carbide and some of the first commercial LEDs were based on this material. When General Electric of America introduced its SSL-1 Solid State Lamp in March 1967, using a tiny chip of semi-conducting SiC to emit a point of yellow light, it was then the world's brightest LED. By 1970 it had been superseded by brighter red LEDs, but yellow LEDs made from 3C-SiC continued to be manufactured in the Soviet Union in the 1970s and blue LEDs (6H-SiC) worldwide in the 1980s. Carbide LED production soon stopped when a different material, gallium nitride, showed 10–100 times brighter emission. This difference in efficiency is due to the unfavorable indirect bandgap of SiC, whereas GaN has a direct bandgap which favors light emission. However, SiC is still one of the important LED components: It is a popular substrate for growing GaN devices, and it also serves as a heat spreader in high-power LEDs. Astronomy The low thermal expansion coefficient, high hardness, rigidity and thermal conductivity make silicon carbide a desirable mirror material for astronomical telescopes. The growth technology (chemical vapor deposition) has been scaled up to produce large-diameter disks of polycrystalline silicon carbide, and several telescopes like the Herschel Space Telescope are already equipped with SiC optics; the Gaia space observatory's spacecraft subsystems are likewise mounted on a rigid silicon carbide frame, which provides a stable structure that will not expand or contract due to heat. Thin-filament pyrometry Silicon carbide fibers are used to measure gas temperatures in an optical technique called thin-filament pyrometry. It involves the placement of a thin filament in a hot gas stream. Radiative emissions from the filament can be correlated with filament temperature. Filaments are SiC fibers with a diameter of 15 micrometers, about one fifth that of a human hair. Because the fibers are so thin, they do little to disturb the flame and their temperature remains close to that of the local gas. Temperatures of about 800–2,500 K can be measured. Heating elements References to silicon carbide heating elements exist from the early 20th century when they were produced by Acheson's Carborundum Co. in the U.S. and EKL in Berlin. Silicon carbide offered increased operating temperatures compared with metallic heaters. Silicon carbide elements are used today in the melting of glass and non-ferrous metal, heat treatment of metals, float glass production, production of ceramics and electronics components, igniters in pilot lights for gas heaters, etc. Heat shielding The outer thermal protection layer of NASA's LOFTID inflatable heat shield incorporates a woven ceramic made from silicon carbide, with fiber of such small diameter that it can be bundled and spun into a yarn.
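The thin-filament pyrometry technique described above works because the filament's spectral radiance rises very steeply with temperature, so a brightness measurement can be inverted to a temperature estimate. Below is a minimal sketch using Planck's law for a graybody; the detection wavelength and emissivity are illustrative assumptions.

```python
import numpy as np

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def spectral_radiance(T, wavelength=0.9e-6, emissivity=0.9):
    """Graybody spectral radiance (W sr^-1 m^-3) at one wavelength (illustrative values)."""
    x = h * c / (wavelength * k_B * T)
    return emissivity * (2 * h * c**2 / wavelength**5) / (np.exp(x) - 1.0)

for T in (800, 1500, 2500):   # kelvin, roughly the measurable range quoted above
    print(f"T = {T:4d} K  ->  radiance ~ {spectral_radiance(T):.3e} W sr^-1 m^-3")
# The radiance spans several orders of magnitude over this range, which is what
# lets the filament's measured brightness be mapped back to the local gas temperature.
```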
Nuclear applications Due to its low neutron absorption cross-section and its stability under irradiation, SiC is used as fuel cladding in nuclear reactors and as nuclear waste containment material. It is also used in producing radiation detectors for monitoring radiation levels in nuclear facilities, environmental monitoring, and medical imaging. In addition, SiC sensors and electronics for nuclear reactor applications are being developed, potentially for future Martian nuclear power and the emerging terrestrial micro nuclear power plants. Nuclear fuel particles and cladding Silicon carbide is an important material in TRISO-coated fuel particles, the type of nuclear fuel found in high temperature gas cooled reactors such as the Pebble Bed Reactor. A layer of silicon carbide gives coated fuel particles structural support and is the main diffusion barrier to the release of fission products. Silicon carbide composite material has been investigated for use as a replacement for Zircaloy cladding in light water reactors. One of the reasons for this investigation is that Zircaloy experiences hydrogen embrittlement as a consequence of the corrosion reaction with water. This produces a reduction in fracture toughness with increasing volumetric fraction of radial hydrides. This phenomenon increases drastically with increasing temperature to the detriment of the material. Silicon carbide cladding does not experience this same mechanical degradation, but instead retains strength properties with increasing temperature. The composite consists of SiC fibers wrapped around a SiC inner layer and surrounded by an SiC outer layer. Problems have been reported with the ability to join the pieces of the SiC composite. Jewelry As a gemstone used in jewelry, silicon carbide is called "synthetic moissanite" or just "moissanite" after the mineral name. Moissanite is similar to diamond in several important respects: it is transparent and hard (9–9.5 on the Mohs scale, compared to 10 for diamond), with a refractive index between 2.65 and 2.69 (compared to 2.42 for diamond). Moissanite is somewhat harder than common cubic zirconia. Unlike diamond, moissanite can be strongly birefringent. For this reason, moissanite jewels are cut along the optic axis of the crystal to minimize birefringent effects. It is lighter (density 3.21 g/cm3 vs. 3.53 g/cm3), and much more resistant to heat than diamond. This results in a stone of higher luster, sharper facets, and good resilience. Loose moissanite stones may be placed directly into wax ring moulds for lost-wax casting, as can diamond, as moissanite remains undamaged by the high temperatures involved in casting. Moissanite has become popular as a diamond substitute, and may be misidentified as diamond, since its thermal conductivity is closer to diamond than any other substitute. Many thermal diamond-testing devices cannot distinguish moissanite from diamond, but the gem is distinct in its birefringence and a very slight green or yellow fluorescence under ultraviolet light. Some moissanite stones also have curved, string-like inclusions, which diamonds never have. Steel production Silicon carbide, dissolved in a basic oxygen furnace used for making steel, acts as a fuel. The additional energy liberated allows the furnace to process more scrap with the same charge of hot metal. It can also be used to raise tap temperatures and adjust the carbon and silicon content.
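The "additional energy liberated" in the steelmaking use above can be estimated from standard enthalpies of formation. The figures below are rounded textbook values and the single overall reaction is a simplification, so treat the result as an order-of-magnitude sketch rather than a metallurgical calculation.

```python
# Approximate standard enthalpies of formation, kJ/mol (rounded textbook values)
dHf = {"SiC": -73.0, "SiO2": -911.0, "CO2": -393.5, "O2": 0.0}

# Overall oxidation assumed here:  SiC + 2 O2 -> SiO2 + CO2
dH_rxn = dHf["SiO2"] + dHf["CO2"] - (dHf["SiC"] + 2 * dHf["O2"])   # kJ per mol SiC
molar_mass_sic = 40.1                                              # g/mol
print(f"dH ~ {dH_rxn:.0f} kJ/mol  (~{dH_rxn / molar_mass_sic * 1000:.0f} kJ per kg of SiC)")
# Roughly -1.2 MJ per mole, or about -30 MJ per kilogram of SiC: strongly exothermic,
# which is the extra heat that lets the converter melt more scrap, as described above.
```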
Silicon carbide is cheaper than a combination of ferrosilicon and carbon, produces cleaner steel and lower emissions due to low levels of trace elements, has a low gas content, and does not lower the temperature of steel. Catalyst support The natural resistance to oxidation exhibited by silicon carbide, as well as the discovery of new ways to synthesize the cubic β-SiC form, with its larger surface area, has led to significant interest in its use as a heterogeneous catalyst support. This form has already been employed as a catalyst support for the oxidation of hydrocarbons, such as n-butane, to maleic anhydride. Carborundum printmaking Silicon carbide is used in carborundum printmaking – a collagraph printmaking technique. Carborundum grit is applied in a paste to the surface of an aluminium plate. When the paste is dry, ink is applied and trapped in its granular surface, then wiped from the bare areas of the plate. The inked plate is then printed onto paper in a rolling-bed press used for intaglio printmaking. The result is a print of painted marks embossed into the paper. Carborundum grit is also used in stone lithography. Its uniform particle size allows it to be used to "grain" a stone, which removes the previous image. In a process similar to sanding, coarser carborundum grit is applied to the stone and worked with a levigator, typically a round plate mounted eccentrically on a perpendicular shaft, and then gradually finer and finer grit is applied until the stone is clean. This creates a grease-sensitive surface. Graphene production Silicon carbide can be used in the production of graphene because of its chemical properties, which promote the growth of graphene on the surface of SiC nanostructures. In this application, silicon carbide is used primarily as a substrate on which the graphene is grown, and several methods can be used to grow the graphene on it. In the confinement controlled sublimation (CCS) growth method, a SiC chip is heated under vacuum with graphite, and the vacuum is then released very gradually to control the growth of the graphene. This method yields the highest-quality graphene layers, though other methods have been reported to yield the same product. Another way of growing graphene is to thermally decompose SiC at a high temperature within a vacuum, but this method yields graphene layers that contain smaller grains, so there have been efforts to improve the quality and yield of the graphene. One such method is to perform ex situ graphitization of silicon-terminated SiC in an argon atmosphere. This method has proved to yield graphene layers with larger domain sizes than are attainable via the other methods, and it may be a viable route to higher-quality graphene for a multitude of technological applications. Most of these methods grow the graphene on SiC within a growth-enabling environment at relatively high temperatures (such as 1,300 °C) because of the thermal properties of SiC. However, procedures have been studied that could allow graphene to be manufactured at lower temperatures; one such approach has been observed to produce graphene at around 750 °C.
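As a small carbon-budget sketch for the graphene-on-SiC growth discussed above: comparing the areal density of carbon in graphene with that of one Si–C bilayer of the SiC basal plane reproduces the commonly quoted rule of thumb that roughly three bilayers must decompose to supply one graphene layer. The lattice constants used are standard literature values, not figures from the text.

```python
import math

a_graphene = 2.46e-10   # m, graphene lattice constant (literature value)
a_sic      = 3.08e-10   # m, 4H/6H-SiC basal-plane lattice constant (literature value)

def hex_cell_area(a):
    """Area of a hexagonal surface unit cell with lattice constant a."""
    return math.sqrt(3) / 2 * a * a

n_C_graphene = 2 / hex_cell_area(a_graphene)   # 2 carbon atoms per graphene unit cell
n_C_bilayer  = 1 / hex_cell_area(a_sic)        # 1 carbon atom per SiC bilayer unit cell

print(f"C per m^2: graphene {n_C_graphene:.2e}, one SiC bilayer {n_C_bilayer:.2e}")
print(f"=> about {n_C_graphene / n_C_bilayer:.1f} SiC bilayers decompose per graphene layer")
```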
The lower-temperature approach mentioned above combines techniques such as chemical vapor deposition (CVD) and surface segregation. The procedure consists of coating a SiC substrate with a thin film of a transition metal; after rapid heat treatment, carbon atoms become more abundant at the surface interface of the transition-metal film, which then yields graphene. This process was found to yield graphene layers that were more continuous across the substrate surface. Quantum physics Silicon carbide can host point defects in the crystal lattice, which are known as color centers. These defects can produce single photons on demand and thus serve as a platform for a single-photon source. Such a device is a fundamental resource for many emerging applications of quantum information science. If one pumps a color center via an external optical source or electric current, the color center will be brought to the excited state and then relax with the emission of one photon. One well-known point defect in silicon carbide is the divacancy, which has a similar electronic structure to the nitrogen-vacancy center in diamond. In 4H-SiC, the divacancy has four different configurations which correspond to four zero-phonon lines (ZPL). These ZPL values are written using the notation VSi-VC and the unit eV: hh(1.095), kk(1.096), kh(1.119), and hk(1.150). Fishing rod guides Silicon carbide is used in the manufacturing of fishing rod guides because of its durability and wear resistance. Silicon carbide rings are fitted into a guide frame, typically made from stainless steel or titanium, which keeps the line from touching the rod blank. The rings provide a low-friction surface that improves casting distance while providing adequate hardness to prevent abrasion from braided fishing line. Pottery glazes Silicon carbide is used as a raw ingredient in some glazes applied to ceramics. At high temperatures it can reduce metal oxides, forming silica and carbon dioxide. This can be used to make the glaze foam and crater due to the evolved carbon dioxide gas, or to reduce the colorant oxides in an electric kiln and achieve colors, such as copper reds, that are otherwise only possible in a fuel-powered reduction firing. See also Carborundum Universal Globar Reaction bonded silicon carbide References External links Abrasives Carbides Ceramic materials Deoxidizers Diamond simulants Gemstones Group IV semiconductors Inorganic silicon compounds Refractory materials Semiconductors Superhard materials Synthetic minerals
Silicon carbide
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
6,405
[ "Physical quantities", "Inorganic compounds", "Metallurgy", "Materials", "Gemstones", "Ceramic engineering", "Inorganic silicon compounds", "Refractory materials", "Synthetic materials", "Semiconductor materials", "Group IV semiconductors", "Electronic engineering", "Ceramic materials", "E...
225,617
https://en.wikipedia.org/wiki/Third%20law%20of%20thermodynamics
The third law of thermodynamics states that the entropy of a closed system at thermodynamic equilibrium approaches a constant value when its temperature approaches absolute zero. This constant value cannot depend on any other parameters characterizing the system, such as pressure or applied magnetic field. At absolute zero (zero kelvins) the system must be in a state with the minimum possible energy. Entropy is related to the number of accessible microstates, and there is typically one unique state (called the ground state) with minimum energy. In such a case, the entropy at absolute zero will be exactly zero. If the system does not have a well-defined order (if its order is glassy, for example), then there may remain some finite entropy as the system is brought to very low temperatures, either because the system becomes locked into a configuration with non-minimal energy or because the minimum energy state is non-unique. The constant value is called the residual entropy of the system. Formulations The third law has many formulations, some more general than others, some equivalent, and some neither more general nor equivalent. The Planck statement applies only to perfect crystalline substances:As temperature falls to zero, the entropy of any pure crystalline substance tends to a universal constant. That is, , where is a universal constant that applies for all possible crystals, of all possible sizes, in all possible external constraints. So it can be taken as zero, giving . The Nernst statement concerns thermodynamic processes at a fixed, low temperature, for condensed systems, which are liquids and solids: The entropy change associated with any condensed system undergoing a reversible isothermal process approaches zero as the temperature at which it is performed approaches 0 K. That is, . Or equivalently, At absolute zero, the entropy change becomes independent of the process path. That is, where represents a change in the state variable . The unattainability principle of Nernst: It is impossible for any process, no matter how idealized, to reduce the entropy of a system to its absolute-zero value in a finite number of operations. This principle implies that cooling a system to absolute zero would require an infinite number of steps or an infinite amount of time. The statement in adiabatic accessibility: It is impossible to start from a state of positive temperature, and adiabatically reach a state with zero temperature. The Einstein statement: The entropy of any substance approaches a finite value as the temperature approaches absolute zero. That is, where is the entropy, the zero-point entropy is finite-valued, is the temperature, and represents other relevant state variables. This implies that the heat capacity of a substance must (uniformly) vanish at absolute zero, as otherwise the entropy would diverge. There is also a formulation as the impossibility of "perpetual motion machines of the third kind". History The third law was developed by chemist Walther Nernst during the years 1906 to 1912 and is therefore often referred to as the Nernst heat theorem, or sometimes the Nernst-Simon heat theorem to include the contribution of Nernst's doctoral student Francis Simon. The third law of thermodynamics states that the entropy of a system at absolute zero is a well-defined constant. This is because a system at zero temperature exists in its ground state, so that its entropy is determined only by the degeneracy of the ground state. 
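Several of the inline formulas in the statements above appear to have been lost in text extraction. In standard notation they read approximately as follows; this is a reconstruction of the usual textbook forms, not a quotation of the missing originals.

```latex
% Planck statement (S_0 the same constant for every perfect crystal, conventionally set to zero):
\lim_{T \to 0} S(T) = S_0 = 0
% Nernst statement, for a reversible isothermal process in a condensed system,
% together with its path-independence form with respect to a state variable X:
\lim_{T \to 0} \Delta S = 0, \qquad \lim_{T \to 0} \left(\frac{\partial S}{\partial X}\right)_{T} = 0
% Einstein statement (finite zero-point entropy S_0(X)):
\lim_{T \to 0} S(T, X) = S_0(X) < \infty
% Entropy at T = 0 set by the ground-state degeneracy g_0:
S(T{=}0) = k_{\mathrm{B}} \ln g_0
```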
In 1912 Nernst stated the law thus: "It is impossible for any procedure to lead to the isotherm in a finite number of steps." An alternative version of the third law of thermodynamics was enunciated by Gilbert N. Lewis and Merle Randall in 1923: If the entropy of each element in some (perfect) crystalline state be taken as zero at the absolute zero of temperature, every substance has a finite positive entropy; but at the absolute zero of temperature the entropy may become zero, and does so become in the case of perfect crystalline substances. This version states not only will reach zero at 0 K, but itself will also reach zero as long as the crystal has a ground state with only one configuration. Some crystals form defects which cause a residual entropy. This residual entropy disappears when the kinetic barriers to transitioning to one ground state are overcome. With the development of statistical mechanics, the third law of thermodynamics (like the other laws) changed from a fundamental law (justified by experiments) to a derived law (derived from even more basic laws). The basic law from which it is primarily derived is the statistical-mechanics definition of entropy for a large system: where is entropy, is the Boltzmann constant, and is the number of microstates consistent with the macroscopic configuration. The counting of states is from the reference state of absolute zero, which corresponds to the entropy of . Explanation In simple terms, the third law states that the entropy of a perfect crystal of a pure substance approaches zero as the temperature approaches zero. The alignment of a perfect crystal leaves no ambiguity as to the location and orientation of each part of the crystal. As the energy of the crystal is reduced, the vibrations of the individual atoms are reduced to nothing, and the crystal becomes the same everywhere. The third law provides an absolute reference point for the determination of entropy at any other temperature. The entropy of a closed system, determined relative to this zero point, is then the absolute entropy of that system. Mathematically, the absolute entropy of any system at zero temperature is the natural log of the number of ground states times the Boltzmann constant . The entropy of a perfect crystal lattice as defined by Nernst's theorem is zero provided that its ground state is unique, because . If the system is composed of one-billion atoms that are all alike and lie within the matrix of a perfect crystal, the number of combinations of one billion identical things taken one billion at a time is . Hence: The difference is zero; hence the initial entropy can be any selected value so long as all other such calculations include that as the initial entropy. As a result, the initial entropy value of zero is selected is used for convenience. Example: Entropy change of a crystal lattice heated by an incoming photon Suppose a system consisting of a crystal lattice with volume of identical atoms at , and an incoming photon of wavelength and energy . Initially, there is only one accessible microstate: Let us assume the crystal lattice absorbs the incoming photon. There is a unique atom in the lattice that interacts and absorbs this photon. So after absorption, there are possible microstates accessible by the system, each corresponding to one excited atom, while the other atoms remain at ground state. The entropy, energy, and temperature of the closed system rises and can be calculated. 
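The numerical values of the crystal-plus-photon example above were also lost in extraction. The following self-contained sketch redoes the estimate with stand-in numbers; the atom count and photon wavelength are illustrative assumptions, not the article's original figures.

```python
import math

k_B = 1.380649e-23        # J/K, Boltzmann constant
h, c = 6.62607e-34, 2.99792e8

N_atoms    = 1e22         # assumed number of lattice atoms (illustrative)
wavelength = 1e-6         # m, assumed photon wavelength (illustrative)

delta_S = k_B * math.log(N_atoms)   # entropy goes from k_B*ln(1) = 0 to k_B*ln(N)
delta_E = h * c / wavelength        # energy delivered by the single absorbed photon
T_char  = delta_E / delta_S         # characteristic temperature from dS = dQ/T

print(f"dS ~ {delta_S:.2e} J/K, dE ~ {delta_E:.2e} J, T ~ {T_char:.0f} K")
# With these stand-in numbers dS ~ 7e-22 J/K and dE ~ 2e-19 J, giving a
# characteristic temperature of order a few hundred kelvin for the whole system,
# in the spirit of the average-temperature interpretation described in the text.
```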
The entropy change is From the second law of thermodynamics: Hence Calculating entropy change: We assume and . The energy change of the system as a result of absorbing the single photon whose energy is : The temperature of the closed system rises by This can be interpreted as the average temperature of the system over the range from . A single atom is assumed to absorb the photon, but the temperature and entropy change characterizes the entire system. Systems with non-zero entropy at absolute zero An example of a system that does not have a unique ground state is one whose net spin is a half-integer, for which time-reversal symmetry gives two degenerate ground states. For such systems, the entropy at zero temperature is at least (which is negligible on a macroscopic scale). Some crystalline systems exhibit geometrical frustration, where the structure of the crystal lattice prevents the emergence of a unique ground state. Ground-state helium (unless under pressure) remains liquid. Glasses and solid solutions retain significant entropy at 0 K, because they are large collections of nearly degenerate states, in which they become trapped out of equilibrium. Another example of a solid with many nearly-degenerate ground states, trapped out of equilibrium, is ice Ih, which has "proton disorder". For the entropy at absolute zero to be zero, the magnetic moments of a perfectly ordered crystal must themselves be perfectly ordered; from an entropic perspective, this can be considered to be part of the definition of a "perfect crystal". Only ferromagnetic, antiferromagnetic, and diamagnetic materials can satisfy this condition. However, ferromagnetic materials do not, in fact, have zero entropy at zero temperature, because the spins of the unpaired electrons are all aligned and this gives a ground-state spin degeneracy. Materials that remain paramagnetic at 0 K, by contrast, may have many nearly degenerate ground states (for example, in a spin glass), or may retain dynamic disorder (a quantum spin liquid). Consequences Absolute zero The third law is equivalent to the statement that It is impossible by any procedure, no matter how idealized, to reduce the temperature of any closed system to zero temperature in a finite number of finite operations. The reason that cannot be reached according to the third law is explained as follows: Suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way. If there were an entropy difference at absolute zero, could be reached in a finite number of steps. However, at there is no entropy difference, so an infinite number of steps would be needed. The process is illustrated in Fig. 1. Example: magnetic refrigeration To be concrete, we imagine that we are refrigerating magnetic material. Suppose we have a large bulk of paramagnetic salt and an adjustable external magnetic field in the vertical direction. Let the parameter represent the external magnetic field. At the same temperature, if the external magnetic field is strong, then the internal atoms in the salt would strongly align with the field, so the disorder (entropy) would decrease. Therefore, in Fig. 1, the curve for is the curve for lower magnetic field, and the curve for is the curve for higher magnetic field. The refrigeration process repeats the following two steps: Isothermal process. 
Here, we have a chunk of salt in magnetic field and temperature . We divide the chunk into two parts: a large part playing the role of "environment", and a small part playing the role of "system". We slowly increase the magnetic field on the system to , but keep the magnetic field constant on the environment. The atoms in the system would lose directional degrees of freedom (DOF), and the energy in the directional DOF would be squeezed out into the vibrational DOF. This makes it slightly hotter, and then it would lose thermal energy to the environment, to remain in the same temperature . (The environment is now discarded.) Isentropic cooling. Here, the system is wrapped in adiathermal covering, and the external magnetic field is slowly lowered to . This frees up the direction DOF, absorbing some energy from the vibrational DOF. The effect is that the system has the same entropy, but reaches a lower temperature . At every two-step of the process, the mass of the system decreases, as we discard more and more salt as the "environment". However, if the equations of state for this salt is as shown in Fig. 1 (left), then we can start with a large but finite amount of salt, and end up with a small piece of salt that has . Specific heat A non-quantitative description of his third law that Nernst gave at the very beginning was simply that the specific heat of a material can always be made zero by cooling it down far enough. A modern, quantitative analysis follows. Suppose that the heat capacity of a sample in the low temperature region has the form of a power law asymptotically as , and we wish to find which values of are compatible with the third law. We have By the discussion of third law above, this integral must be bounded as , which is only possible if . So the heat capacity must go to zero at absolute zero if it has the form of a power law. The same argument shows that it cannot be bounded below by a positive constant, even if we drop the power-law assumption. On the other hand, the molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by with the molar ideal gas constant. But clearly a constant heat capacity does not satisfy Eq. (). That is, a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. We can verify this more fundamentally by substituting in Eq. (), which yields In the limit this expression diverges, again contradicting the third law of thermodynamics. The conflict is resolved as follows: At a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases with the Fermi temperature TF given by Here is the Avogadro constant, the molar volume, and the molar mass. For Bose gases with given by The specific heats given by Eq. () and () both satisfy Eq. (). Indeed, they are power laws with and respectively. Even within a purely classical setting, the density of a classical ideal gas at fixed particle number becomes arbitrarily high as goes to zero, so the interparticle spacing goes to zero. The assumption of non-interacting particles presumably breaks down when they are sufficiently close together, so the value of gets modified away from its ideal constant value. Vapor pressure The only liquids near absolute zero are 3He and 4He. 
Their heat of evaporation has a limiting value given by with and constant. If we consider a container partly filled with liquid and partly gas, the entropy of the liquid–gas mixture is where is the entropy of the liquid and is the gas fraction. Clearly the entropy change during the liquid–gas transition ( from 0 to 1) diverges in the limit of T→0. This violates Eq. (). Nature solves this paradox as follows: at temperatures below about 100 mK, the vapor pressure is so low that the gas density is lower than the best vacuum in the universe. In other words, below 100 mK there is simply no gas above the liquid. Miscibility If liquid helium with mixed 3He and 4He were cooled to absolute zero, the liquid must have zero entropy. This either means they are ordered perfectly as a mixed liquid, which is impossible for a liquid, or that they fully separate out into two layers of pure liquid. This is precisely what happens. For example, if a solution with 3 3He to 2 4He atoms were cooled, it would start the separation at 0.9 K, purifying more and more, until at absolute zero, when the upper layer becomes purely 3He, and the lower layer becomes purely 4He. Surface tension Let be the surface tension of liquid, then the entropy per area is . So if a liquid can exist down to absolute zero, then since its entropy is constant no matter its shape at absolute zero, its entropy per area must converge to zero. That is, its surface tension would become constant at low temperatures. In particular, the surface tension of 3He is well-approximated by for some parameters . Latent heat of melting The melting curves of 3He and 4He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at . As a result, the latent heat of melting is zero, and the slope of the melting curve extrapolates to zero as a result of the Clausius–Clapeyron equation. Thermal expansion coefficient The thermal expansion coefficient is defined as With the Maxwell relation and Eq. () with it is shown that So the thermal expansion coefficient of all materials must go to zero at zero kelvin. See also Adiabatic process Ground state Laws of thermodynamics Quantum thermodynamics Residual entropy Thermodynamic entropy Timeline of thermodynamics, statistical mechanics, and random processes Quantum heat engines and refrigerators References Further reading Goldstein, Martin & Inge F. (1993) The Refrigerator and the Universe. Cambridge MA: Harvard University Press. . Chpt. 14 is a nontechnical discussion of the Third Law, one including the requisite elementary quantum mechanics. 3
Third law of thermodynamics
[ "Physics", "Chemistry" ]
3,454
[ "Thermodynamics", "Laws of thermodynamics" ]
7,864,709
https://en.wikipedia.org/wiki/Green%27s%20function%20%28many-body%20theory%29
In many-body theory, the term Green's function (or Green function) is sometimes used interchangeably with correlation function, but refers specifically to correlators of field operators or creation and annihilation operators. The name comes from the Green's functions used to solve inhomogeneous differential equations, to which they are loosely related. (Specifically, only two-point "Green's functions" in the case of a non-interacting system are Green's functions in the mathematical sense; the linear operator that they invert is the Hamiltonian operator, which in the non-interacting case is quadratic in the fields.) Spatially uniform case Basic definitions We consider a many-body theory with field operator (annihilation operator written in the position basis) . The Heisenberg operators can be written in terms of Schrödinger operators as and the creation operator is , where is the grand-canonical Hamiltonian. Similarly, for the imaginary-time operators, [Note that the imaginary-time creation operator is not the Hermitian conjugate of the annihilation operator .] In real time, the -point Green function is defined by where we have used a condensed notation in which signifies and signifies . The operator denotes time ordering, and indicates that the field operators that follow it are to be ordered so that their time arguments increase from right to left. In imaginary time, the corresponding definition is where signifies . (The imaginary-time variables are restricted to the range from to the inverse temperature .) Note regarding signs and normalization used in these definitions: The signs of the Green functions have been chosen so that Fourier transform of the two-point () thermal Green function for a free particle is and the retarded Green function is where is the Matsubara frequency. Throughout, is for bosons and for fermions and denotes either a commutator or anticommutator as appropriate. (See below for details.) Two-point functions The Green function with a single pair of arguments () is referred to as the two-point function, or propagator. In the presence of both spatial and temporal translational symmetry, it depends only on the difference of its arguments. Taking the Fourier transform with respect to both space and time gives where the sum is over the appropriate Matsubara frequencies (and the integral involves an implicit factor of , as usual). In real time, we will explicitly indicate the time-ordered function with a superscript T: The real-time two-point Green function can be written in terms of 'retarded' and 'advanced' Green functions, which will turn out to have simpler analyticity properties. The retarded and advanced Green functions are defined by and respectively. They are related to the time-ordered Green function by where is the Bose–Einstein or Fermi–Dirac distribution function. Imaginary-time ordering and β-periodicity The thermal Green functions are defined only when both imaginary-time arguments are within the range to . The two-point Green function has the following properties. (The position or momentum arguments are suppressed in this section.) Firstly, it depends only on the difference of the imaginary times: The argument is allowed to run from to . Secondly, is (anti)periodic under shifts of . Because of the small domain within which the function is defined, this means just for . Time ordering is crucial for this property, which can be proved straightforwardly, using the cyclicity of the trace operation. 
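Since the inline formulas are missing here as well, the (anti)periodicity argument can be written out explicitly in the common convention 𝒢(τ) = −⟨T_τ ψ(τ) ψ†(0)⟩ with ψ(τ) = e^{τH} ψ e^{−τH}; this is a reconstruction of the standard textbook derivation, with ζ = +1 for bosons and ζ = −1 for fermions.

```latex
% For 0 < \tau < \beta, the argument \tau - \beta is negative, so time ordering gives
\mathcal{G}(\tau-\beta)
  = -\zeta\,\frac{1}{Z}\,\mathrm{Tr}\!\left[ e^{-\beta H}\,\psi^\dagger\, e^{(\tau-\beta)H}\,\psi\, e^{-(\tau-\beta)H} \right]
% two uses of the cyclic property of the trace regroup the exponentials:
  = -\zeta\,\frac{1}{Z}\,\mathrm{Tr}\!\left[ e^{-\beta H}\, e^{\tau H}\,\psi\, e^{-\tau H}\,\psi^\dagger \right]
  = \zeta\,\mathcal{G}(\tau).
```

So bosonic Green functions are β-periodic and fermionic ones β-antiperiodic, which is what restricts the Fourier sums to even or odd Matsubara frequencies, respectively.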
These two properties allow for the Fourier transform representation and its inverse, Finally, note that has a discontinuity at ; this is consistent with a long-distance behaviour of . Spectral representation The propagators in real and imaginary time can both be related to the spectral density (or spectral weight), given by where refers to a (many-body) eigenstate of the grand-canonical Hamiltonian , with eigenvalue . The imaginary-time propagator is then given by and the retarded propagator by where the limit as is implied. The advanced propagator is given by the same expression, but with in the denominator. The time-ordered function can be found in terms of and . As claimed above, and have simple analyticity properties: the former (latter) has all its poles and discontinuities in the lower (upper) half-plane. The thermal propagator has all its poles and discontinuities on the imaginary axis. The spectral density can be found very straightforwardly from , using the Sokhatsky–Weierstrass theorem where denotes the Cauchy principal part. This gives This furthermore implies that obeys the following relationship between its real and imaginary parts: where denotes the principal value of the integral. The spectral density obeys a sum rule, which gives as . Hilbert transform The similarity of the spectral representations of the imaginary- and real-time Green functions allows us to define the function which is related to and by and A similar expression obviously holds for . The relation between and is referred to as a Hilbert transform. Proof of spectral representation We demonstrate the proof of the spectral representation of the propagator in the case of the thermal Green function, defined as Due to translational symmetry, it is only necessary to consider for , given by Inserting a complete set of eigenstates gives Since and are eigenstates of , the Heisenberg operators can be rewritten in terms of Schrödinger operators, giving Performing the Fourier transform then gives Momentum conservation allows the final term to be written as (up to possible factors of the volume) which confirms the expressions for the Green functions in the spectral representation. The sum rule can be proved by considering the expectation value of the commutator, and then inserting a complete set of eigenstates into both terms of the commutator: Swapping the labels in the first term then gives which is exactly the result of the integration of . Non-interacting case In the non-interacting case, is an eigenstate with (grand-canonical) energy , where is the single-particle dispersion relation measured with respect to the chemical potential. The spectral density therefore becomes From the commutation relations, with possible factors of the volume again. The sum, which involves the thermal average of the number operator, then gives simply , leaving The imaginary-time propagator is thus and the retarded propagator is Zero-temperature limit As , the spectral density becomes where corresponds to the ground state. Note that only the first (second) term contributes when is positive (negative). General case Basic definitions We can use 'field operators' as above, or creation and annihilation operators associated with other single-particle states, perhaps eigenstates of the (noninteracting) kinetic energy. We then use where is the annihilation operator for the single-particle state and is that state's wavefunction in the position basis. This gives with a similar expression for . 
Two-point functions These depend only on the difference of their time arguments, so that and We can again define retarded and advanced functions in the obvious way; these are related to the time-ordered function in the same way as above. The same periodicity properties as described in above apply to . Specifically, and for . Spectral representation In this case, where and are many-body states. The expressions for the Green functions are modified in the obvious ways: and Their analyticity properties are identical to those of and defined in the translationally invariant case. The proof follows exactly the same steps, except that the two matrix elements are no longer complex conjugates. Noninteracting case If the particular single-particle states that are chosen are 'single-particle energy eigenstates', i.e. then for an eigenstate: so is : and so is : We therefore have We then rewrite therefore use and the fact that the thermal average of the number operator gives the Bose–Einstein or Fermi–Dirac distribution function. Finally, the spectral density simplifies to give so that the thermal Green function is and the retarded Green function is Note that the noninteracting Green function is diagonal, but this will not be true in the interacting case. See also Fluctuation theorem Green–Kubo relations Linear response function Lindblad equation Propagator Correlation function (quantum field theory) Numerical analytic continuation References Books Bonch-Bruevich V. L., Tyablikov S. V. (1962): The Green Function Method in Statistical Mechanics. North Holland Publishing Co. Abrikosov, A. A., Gorkov, L. P. and Dzyaloshinski, I. E. (1963): Methods of Quantum Field Theory in Statistical Physics Englewood Cliffs: Prentice-Hall. Negele, J. W. and Orland, H. (1988): Quantum Many-Particle Systems AddisonWesley. Zubarev D. N., Morozov V., Ropke G. (1996): Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory (Vol. 1). John Wiley & Sons. . Mattuck Richard D. (1992), A Guide to Feynman Diagrams in the Many-Body Problem, Dover Publications, . Papers Bogolyubov N. N., Tyablikov S. V. Retarded and advanced Green functions in statistical physics, Soviet Physics Doklady, Vol. 4, p. 589 (1959). Zubarev D. N., Double-time Green functions in statistical physics, Soviet Physics Uspekhi 3(3), 320–345 (1960). External links Linear Response Functions in Eva Pavarini, Erik Koch, Dieter Vollhardt, and Alexander Lichtenstein (eds.): DMFT at 25: Infinite Dimensions, Verlag des Forschungszentrum Jülich, 2014 Quantum field theory Statistical mechanics Mathematical physics
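A further numerical sketch of the spectral representation discussed above: for a single non-interacting mode the retarded propagator is a simple pole, the spectral density is a narrow Lorentzian, and the sum rule integrates to one. The convention A(ω) = −2 Im G^R(ω) and the pole form 1/(ω − ξ + iη) are assumptions of this sketch, not quoted from the article.

```python
import numpy as np

# Retarded propagator of a free particle, G_R(w) = 1/(w - xi + i*eta),
# and spectral density A(w) = -2 Im G_R(w) (one common sign convention).
xi, eta = 0.3, 1e-3                         # single-particle energy, small broadening
w = np.linspace(-50.0, 50.0, 2_000_001)     # wide, dense frequency grid
g_ret = 1.0 / (w - xi + 1j * eta)
spectral = -2.0 * g_ret.imag                # Lorentzian of width ~eta centred at xi

# Sum rule: integral dw/(2*pi) A(w) approaches 1 for a single mode.
print(np.trapz(spectral, w) / (2.0 * np.pi))   # ~= 1.0
```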
Green's function (many-body theory)
[ "Physics", "Mathematics" ]
2,055
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Statistical mechanics", "Mathematical physics" ]
7,864,782
https://en.wikipedia.org/wiki/Congruent%20melting
Congruent melting occurs during the melting of a compound when the composition of the liquid that forms is the same as the composition of the solid. It can be contrasted with incongruent melting. This generally happens in two-component systems. To take a general case, let A and B be the two components and AB a stable solid compound formed by their chemical combination. If we draw a phase diagram for the system, we notice that there are three solid phases, namely A, B and the compound AB. Accordingly, there will be three fusion or freezing point curves AC, BE and CDE for the three solid phases. In the phase diagram, the top point D is the congruent melting point of the compound AB, because the solid and liquid phases there have the same composition. Evidently, at this temperature, the two-component system has effectively become a one-component system, because both the solid and the liquid phase contain only the compound AB. The congruent melting point represents a definite temperature, just like the melting points of the pure components. In some phase diagrams, the congruent melting point of a compound AB may lie above the melting points of the pure components A and B, but this is not always the case. Systems are also known in which the congruent melting point lies below the melting points of the pure components. This happens for inter-metallic compounds, for example, for . See also Congruent transition Incongruent melting Phase diagram Phase rule References Phase transitions Materials science
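The "definite temperature" statement above can be made concrete with the Gibbs phase rule (a standard result, used here as a worked sketch rather than quoted from the article). At the congruent melting point the compound AB behaves as a single component, and solid AB coexists with a liquid of identical composition:

```latex
F = C - P + 2 = 1 - 2 + 2 = 1
% one degree of freedom in (T, p); at fixed pressure:
F' = C - P + 1 = 1 - 2 + 1 = 0
```

With pressure fixed there is no remaining degree of freedom, so melting occurs at one invariant temperature, exactly as for a pure component.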
Congruent melting
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
310
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Materials science stubs", "Phases of matter", "Critical phenomena", "Materials science", "nan", "Statistical mechanics", "Matter" ]
7,867,682
https://en.wikipedia.org/wiki/Christopher%20Longuet-Higgins
Hugh Christopher Longuet-Higgins (11 April 1923 – 27 March 2004) was a British theoretical chemist and cognitive scientist. He was the Professor of Theoretical Chemistry at the University of Cambridge for 13 years until 1967 when he moved to the University of Edinburgh to work in the developing field of cognitive science. He made many significant contributions to our understanding of molecular science. He was also a gifted amateur musician, both as performer and composer, and was keen to advance the scientific understanding of this art. He was the founding editor of the journal Molecular Physics. Education and early life Longuet-Higgins was born on 11 April 1923 at The Vicarage, Lenham, Kent, England, the elder son and second of the three children of Henry Hugh Longuet-Higgins (1886-1966), vicar of Lenham, and his wife, Albinia Cecil Bazeley. He was educated at The Pilgrims' School, Winchester, and Winchester College. At Winchester College he was one of the "gang of four" consisting of himself, his brother Michael, Freeman Dyson and James Lighthill. In 1941, he won a scholarship to Balliol College, Oxford. He read chemistry, but also took Part I of a degree in Music. He was a Balliol organ scholar. As an undergraduate he proposed the correct bridged structure of the chemical compound diborane (B2H6), whose structure was then unknown and turned out to be different from structures predicted by contemporary valence bond theory. This was published with his tutor, R. P. Bell. He completed a Doctor of Philosophy degree in 1947 at the University of Oxford under the supervision of Charles Coulson. Career and research After his D.Phil, Longuet-Higgins did postdoctoral research at the University of Chicago and the University of Manchester. In 1952, he was appointed Professor of Theoretical Physics at King's College London, and in 1954 was appointed John Humphrey Plummer Professor of Theoretical Chemistry at the University of Cambridge, and a Fellow of Corpus Christi College, Cambridge. He was the first warden of Leckhampton House, a Corpus Christi College residence for postgraduate students. While at Cambridge he made many original contributions in the field of theoretical chemistry, and he was perhaps unfortunate not to receive the Nobel prize for his work. Among the most important were his discovery of Geometric phase at the conical intersection of potential energy surfaces, his introduction of the correlation diagram approach to the study of Woodward-Hoffmann rules, and his introduction of nuclear permutation-inversion symmetry groups for the study of molecular symmetry. In his later years at Cambridge he became interested in the brain and the new field of artificial intelligence. As a consequence, in 1967, he made a major change in his career by moving to the University of Edinburgh to co-found the Department of Machine intelligence and perception, with Richard Gregory and Donald Michie. In 1974 he moved to the Centre for Research on Perception and Cognition (in the Department of Experimental Psychology) at Sussex University, Brighton, England. In 1981 he introduced the essential matrix to the computer vision community in a paper which also included the eight-point algorithm for the estimation of this matrix. He retired in 1988. Following his retirement he examined the problem of how to automate the process of performing music from a score. This work was never published, but his notebooks were meticulously kept and the research is available for reconstruction. 
The letters, papers and allied material are archived at the Royal Society. One of his latest publications on music cognition was published in Philosophical Transactions A. An example of Longuet-Higgins's writings, introducing the field of music cognition: His work on developing computational models of music understanding was recognized in the nineties by the award of an Honorary Doctorate of Music by the University of Sheffield. At the time of his death (in 2004) he was Professor Emeritus at the University of Sussex. Honours and awards Christopher Longuet-Higgins was elected a Fellow of the Royal Society in 1958, a Foreign Associate of the US National Academy of Sciences in 1968 a Fellow of the Royal Society of Edinburgh (FRSE) in 1969, and a Fellow of the Royal Society of Arts (FRSA) in 1970. He was a Fellow of the International Academy of Quantum Molecular Science. He had honorary doctorates from the universities of Bristol, Essex, Sheffield, Sussex and York. Among his notable prizes were the Jasper Ridley prize in music from Balliol College, Oxford, the Harrison memorial prize from the Chemical Society, and the Naylor prize from the London Mathematical Society. He was a governor of the BBC from 1979 to 1984. In 2005 the Longuet-Higgins Prize for "Fundamental Contributions in Computer Vision that Have Withstood the Test of Time" was created in his honor. The prize is awarded every year at the IEEE Computer Vision and Pattern Recognition Conference for up to two distinguished papers published at that same conference ten years earlier. Personal life Longuet-Higgins died on 27 March 2004, aged 80. Although he respected many of the features of the Church of England, he was an atheist. See also Geometric phase References 1923 births 2004 deaths People educated at Winchester College Computer vision researchers Alumni of Balliol College, Oxford Fellows of Corpus Christi College, Cambridge Academics of the University of Edinburgh Academics of King's College London Academics of the University of Sussex Fellows of Wolfson College, Cambridge Fellows of the Royal Society Fellows of the Royal Society of Edinburgh Foreign associates of the National Academy of Sciences Members of the International Academy of Quantum Molecular Science Theoretical chemists Cognitive musicology Computational chemists People from Lenham John Humphrey Plummer Professors
Christopher Longuet-Higgins
[ "Chemistry" ]
1,124
[ "Theoretical chemistry", "Quantum chemistry", "Physical chemists", "Theoretical chemists" ]
7,867,823
https://en.wikipedia.org/wiki/Potential%20determining%20ion
When placed into solution, salts begin to dissolve and form ions. This dissolution is not always in equal proportion, because an ion may be preferentially dissolved in a given solution. An ion that preferentially dissolves (as a result of unequal activities) relative to its counterion is classified as a potential determining ion. The properties of this ion are strongly related to the surface potential present on the corresponding solid. This unequal behaviour of the corresponding ions results in a net surface charge. In some cases this arises because one of the ions freely leaves the corresponding solid and the other does not, or is bound to the solid by some other means. Adsorption of an ion to the solid may result in the solid acting as an electrode (e.g., H+ and OH− on the surfaces of clays). In a colloidally dispersed system, ion dissolution arises where the dispersed particles exist in equilibrium with their saturated counterpart, for example: NaCl(s) ⇌ Na+(aq) + Cl−(aq) The behavior of this system is characterised by the components' activity coefficients and the solubility product: aNa+ · aCl− = Ksp In clay-aqueous systems the potential of the surface is determined by the activity of ions which react with the mineral surface. Frequently this is the hydrogen ion H+, in which case the important activity is determined by pH. The simultaneous adsorption of protons and hydroxyls, as well as of other potential determining cations and anions, leads to the concept of the point of zero charge or PZC, where the total charge from the cations and anions at the surface is equal to zero. The charge must be zero, but this does not necessarily mean that the numbers of cations and anions in the solution are equal. For clay minerals the potential determining ions are H+ and OH− and complex ions formed by bonding with H+ and OH−. References Further reading Colloidal chemistry
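When H+ is the potential determining ion, the surface potential is often estimated from a Nernst-type relation, ψ0 = (RT ln 10 / F)(pH_pzc − pH). This idealized relation and the example PZC value below are assumptions of the sketch, not statements from the article; real oxide and clay surfaces deviate from Nernstian behaviour.

```python
import numpy as np

R, T, F = 8.314, 298.15, 96_485.0      # gas constant, temperature (K), Faraday constant

def nernstian_surface_potential(ph, ph_pzc):
    """Idealized (Nernstian) surface potential in volts for an H+/OH-
    potential-determining-ion surface."""
    return (R * T * np.log(10.0) / F) * (ph_pzc - ph)

# Hypothetical point of zero charge at pH 6.5 (illustrative value only)
for ph in (4.0, 6.5, 9.0):
    print(ph, round(1e3 * nernstian_surface_potential(ph, 6.5), 1), "mV")
# Positive below the PZC, zero at it, negative above it,
# changing by about 59 mV per pH unit at 25 degrees C.
```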
Potential determining ion
[ "Chemistry" ]
404
[ "Colloidal chemistry", "Surface science", "Colloids" ]
7,868,623
https://en.wikipedia.org/wiki/Insensitive%20nuclei%20enhanced%20by%20polarization%20transfer
Insensitive nuclei enhancement by polarization transfer (INEPT) is a signal enhancement method used in NMR spectroscopy. It involves the transfer of nuclear spin polarization from spins with large Boltzmann population differences to nuclear spins of interest with lower Boltzmann population differences. INEPT uses J-coupling for the polarization transfer in contrast to Nuclear Overhauser effect (NOE), which arises from dipolar cross-relaxation. This method of signal enhancement was introduced by Ray Freeman in 1979. Due to its usefulness in signal enhancement, pulse sequences used in heteronuclear NMR experiments often contain blocks of INEPT or INEPT-like sequences. Background The sensitivity of NMR signal detection depends on the gyromagnetic ratio (γ) of the nucleus. In general, the signal intensity produced from a nucleus with a gyromagnetic ratio of γ is proportional to γ3 because the magnetic moment, the Boltzmann populations, and the nuclear precession frequency all increase in proportion to the gyromagnetic ratio γ. For example, the gyromagnetic ratio of 13C is 4 times lower than that of 1H, so the signal intensity it produces will be 64 times lower than one produced by a proton. However, since noise also increases as the square root of the frequency, the sensitivity is roughly proportional to γ5/2. A 13C nucleus would be 32 times less sensitive than a proton, and 15N around 300 times less sensitive. Sensitivity enhancement techniques are therefore desirable when recording an NMR signal from an insensitive nucleus. The sensitivity can be enhanced artificially by increasing the Boltzmann factors. One method may be through NOE; for example, for 13C signal, the signal-to-noise ratio can be improved three-fold when the attached protons are saturated. However, for NOE, a negative value of K, the ratio of gyromagnetic ratios of the nuclei, may result in a reduction in signal intensity. Since15N has a negative gyromagnetic ratio, the observed 15N signal can be near zero if the dipolar relaxation has to compete with other mechanisms. Alternative methods are therefore necessary for nuclei with a negative gyromagnetic ratio. One such method using the INEPT pulse sequence was proposed by Ray Freeman in 1979, which became widely adopted. Signal enhancement via the INEPT technique The INEPT signal enhancement has two sources: The spin population effect increases the signal by a factor of K = ratio of gyromagnetic ratios γI/γS of the nuclei, where γI and γS are the gyromagnetic ratio of the proton (the I spins) and the low-sensitivity nuclei (the S spins) respectively. Nuclei with higher magnetogyric ratio generally relax more quickly. Since the rate at which the INEPT transfer can be repeated is limited by the relaxation of these spins (rather than the low sensitivity spins), the INEPT experiment can be repeated more frequently, increasing the signal-to-noise ratio. As a result, INEPT can enhance the NMR signal by a factor larger than K, while the maximum enhancement via NOE is by a factor of 1+K/2. Unlike with NOE, no penalty is incurred by a negative gyromagnetic ratio in INEPT. It is therefore a useful method for enhancing the signal from nuclei with negative gyromagnetic ratio such as 15N or 29Si. The 15N signal may be enhanced by a factor of 10 via INEPT. Pulse sequence The pulse sequence of INEPT, as represented in the diagram, can be read as a combination of a spin echo and selective population inversion (SPI). 
The spin echo is a 90° pulse followed by a 180° pulse after a time period τ and is applied on the proton, the sensitive nucleus (designated, perhaps counter-intuitively, as the I spin, while the insensitive nucleus is the S spin; note, however, that the original paper on INEPT used the opposite designations). Spin Echo 90°I (X) — τ — 180°I (X) The first 90° pulse flips the proton magnetization onto the +y axis of the rotating frame and, due to inhomogeneity of the static magnetic field, the isochromats fan out at slightly different frequencies. After a time period, a 180° pulse is applied along the x axis, rotating the isochromats towards the -y axis. As each individual isochromat still precesses at the same frequency as before, all the isochromats converge and become refocused, thereby regenerating the signal, i.e. the echo. The chemical shifts are also refocused at the same time as the field inhomogeneity, and this property allows the magnetization to be manipulated independent of the chemical shifts. The refocusing allows all the proton chemical shifts to undergo population inversion in the SPI step without its undesirable selectivity. Selective Population Inversion 180°S — τ — 90°I (Y), 90°S — Acquisition As shown in the diagram, a 180° pulse is applied on the insensitive nucleus simultaneously with the 180° pulse on the proton. This is the population inversion part of the scheme, where a further 90° pulse after a time period on both the sensitive and insensitive nuclei rotate the magnetization onto the z-axis. This has the effect of producing an antiphase alignment of magnetization on the z axis, an important step during which the polarization is transferred from the sensitive nucleus to the insensitive one. Variations There are a number of variations of the experiments, for example, a symmetric refocusing step or an extra 90° 1H pulse may be added, and there are also reverse INEPT pulse sequences. References Nuclear magnetic resonance
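As a rough numerical illustration of the enhancement factors discussed in the signal-enhancement section above, the sketch below compares the INEPT factor K = γI/γS with the maximum NOE factor 1 + K/2 for 13C and 15N. The gyromagnetic-ratio magnitudes are standard literature values assumed here, not taken from the article; the negative sign of γ(15N) is what makes NOE unreliable and INEPT attractive for that nucleus.

```python
# Gyromagnetic ratios in 10^7 rad s^-1 T^-1 (approximate literature values).
gamma = {"1H": 26.752, "13C": 6.728, "15N": -2.713}

for nucleus in ("13C", "15N"):
    k = gamma["1H"] / gamma[nucleus]          # K = gamma_I / gamma_S
    inept_gain = abs(k)                        # INEPT enhancement ~ |K|
    noe_max = 1 + k / 2                        # maximum NOE enhancement, 1 + K/2
    print(f"{nucleus}: K = {k:6.2f}  INEPT ~ {inept_gain:4.1f}  NOE max = {noe_max:5.2f}")

# 13C: K ~ 4, so INEPT gives ~4x while the NOE maximum is ~3x.
# 15N: |K| ~ 10 (the 'factor of 10' quoted above), while the NOE maximum
#      is negative (~ -3.9), i.e. the signal can be reduced or even nulled.
```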
Insensitive nuclei enhanced by polarization transfer
[ "Physics", "Chemistry" ]
1,210
[ "Nuclear magnetic resonance", "Nuclear physics" ]
7,869,764
https://en.wikipedia.org/wiki/Transverse%20relaxation-optimized%20spectroscopy
Transverse relaxation optimized spectroscopy (TROSY) is an experiment in protein NMR spectroscopy that allows studies of large molecules or complexes. The application of NMR to large molecules is normally limited by the fact that the line widths generally increase with molecular mass. Larger molecules have longer rotational correlation times and consequently shorter transverse relaxation times (T2). In other words, the NMR signal from larger molecules decays more rapidly, leading to line broadening in the NMR spectrum and poor resolution. In an HSQC spectrum in which decoupling has not been applied, peaks appear as multiplets due to J-coupling. Crucially the different multiplet components have different widths. This is due to constructive or destructive interaction between different relaxation mechanisms. Typically for large proteins at high magnetic field strengths, the transverse (T2) relaxation is dominated by the dipole-dipole (DD) mechanism and the chemical shift anisotropy (CSA) mechanism. As the relaxation mechanisms are generally correlated but contribute to the overall relaxation rate of a given component with different signs, the multiplet components relax with very different overall rates. The TROSY experiment is designed to select the component for which the different relaxation mechanisms have almost cancelled, leading to a single, sharp peak in the spectrum. This significantly increases both spectral resolution and sensitivity, both of which are at a premium when studying large and complex biomolecules. This approach significantly extends the molecular mass range that can be studied by NMR, but it generally requires high magnetic fields to achieve the necessary balance between the CSA and DD relaxation mechanisms; CSAs scale with field strength, while dipole-dipole couplings are field-independent. References Nuclear magnetic resonance experiments
Transverse relaxation-optimized spectroscopy
[ "Chemistry" ]
351
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance experiments", "Nuclear magnetic resonance stubs" ]
7,869,937
https://en.wikipedia.org/wiki/Radio%20software
Almost all radio stations today use some form of broadcast automation. Although some only use small scripts in audio players, a more robust solution is a full radio automation suite. There are many commercial and free radio automation packages available. Radio software history Radio software allows AM and FM broadcasters to play music and voice from a computer's hard disk instead of using CDs, MiniDiscs, tape recorders or the old cartridge tape (see Fidelipac). Radio stations usually store all advertising campaigns and most of their music on hard disk. Instant replay of all the recorded material is then available from a keyboard or with a click of the mouse. The PC is now part of virtually every AM and FM broadcasting, webcasting or podcasting system around the world. Radio software does more than play audio. It is possible to create a "playlist" that plays out a complete radio program automatically, without a board operator, including weather announcements, advertising campaigns, music, satellite network connections, etc. This makes 24-hour radio stations possible, even in small towns that cannot afford operators and announcers around the clock. Standard PCs are connected over a LAN for use in master control, production, news, administration, etc. This technology is claimed to have been invented in Buenos Aires by Oscar Bonello in 1989. The first radio automation software using lossy compressed digital audio codecs was named Audicom and was internationally introduced at the 1990 National Association of Broadcasters Convention in Atlanta, USA. The world's first radio station to use it was one in San Francisco, California. The basis of Audicom was the first application of audio bit-compression technology, used to reduce the amount of data, to radio automation. Bonello delivered the first working radio automation technology using the masking curves published by Richard Ehmer. See also earlier developments of music scheduling systems, such as that of the US company Radio Computing Services in 1979. Nowadays, with the advent of MP3 bit compression technology and standard audio cards, there are many automation software providers on the market. Some systems include administration facilities for the traffic department, disc jockey schedules, live-assist windows and even artificial-intelligence automation control. Main components Media database The basic thing that a radio station does is to broadcast audio to its listeners. Audio can range from a simple talk-over, song or jingle to a sophisticated program with authored content. For-profit radio stations rely on advertisements, or commercials, to generate revenue and sustain their operations. The media database stores details of media files, typically MP3-encoded files. Attributes include, but are not limited to, the file name, the name of the song or audio item, the type of media and its duration. Media records can have different types: Song Commercial Jingle Promotion Recorded program Media editing Media files can be edited prior to playback or broadcast. Typical audio editing features exist in most radio software solutions. In addition, radio software allows users to provide metadata for audio files, such as intro and outro positions within the file. Some radio software contains multi-track editors that allow users to set the mix between two songs as well as audio volume levels. Scheduling Radio scheduling starts with a grid. A grid will contain one or more schedules.
Grids span a long time period, usually no less than three months. Schedules will in turn contain a list of programs. Schedules span a short time period, typically one day. Programs are usually one to four hours in duration, typically one hour. Programs will contain a list of program elements. This list of program elements becomes the playlist that the radio software loads and automates. Scheduling songs, external audio, or live shows differs from scheduling commercials. The scheduler is used to define program schedules but not to schedule commercials; there is a special module for commercial scheduling. Commercials The commercials module in radio software allows users to link commercials to campaigns and set their schedules for playback at future pre-defined times. The more advanced radio software solutions allow users to schedule commercials in bulk or through integration. Media playback This is the component of radio software that allows users to play back media files in full automation or manually on demand. These modules typically contain audio players, a visual representation of the currently playing audio file, a queue of songs to be played in sequence, timer and clock displays, as well as a browser for audio files in the media database with search capabilities. News News modules are used to create news sessions and news articles which can be read on air by newsreaders. These modules allow users to attach audio files to a news article for playback during a live news session. Broadcast recorder This module records audio 24/7 in a compressed format and splits, names, and archives the files according to a user-defined structure. Radio stations usually maintain at least three years' worth of 24/7 recorded broadcasts. Broadcast streaming This module of radio software allows users to publish an audio stream of the playback to a pre-defined streaming server. Reports Reports provide users with capabilities to analyze and summarize events generated by users or the radio software itself. See also List of amateur radio software List of free software for audio List of music software Notes Broadcast engineering
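A minimal sketch of the grid, schedule, program and playlist hierarchy described in the Scheduling section above. The class and field names are illustrative only and do not correspond to any particular automation product.

```python
from dataclasses import dataclass, field
from datetime import date, time

@dataclass
class ProgramElement:
    kind: str          # "Song", "Commercial", "Jingle", "Promotion", "Recorded program"
    title: str
    duration_s: float

@dataclass
class Program:                      # typically one to four hours long
    start: time
    elements: list[ProgramElement] = field(default_factory=list)

    def playlist(self) -> list[ProgramElement]:
        # The list of program elements becomes the playlist that is automated.
        return list(self.elements)

@dataclass
class Schedule:                     # typically covers one day
    day: date
    programs: list[Program] = field(default_factory=list)

@dataclass
class Grid:                         # typically covers three months or more
    name: str
    schedules: list[Schedule] = field(default_factory=list)

morning = Program(start=time(7, 0), elements=[
    ProgramElement("Jingle", "Station ID", 8.0),
    ProgramElement("Song", "Opening track", 215.0),
    ProgramElement("Commercial", "Local advertiser", 30.0),
])
grid = Grid("Autumn", [Schedule(date(2024, 9, 2), [morning])])
print(sum(e.duration_s for e in morning.playlist()))   # total seconds queued
```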
Radio software
[ "Engineering" ]
1,036
[ "Broadcast engineering", "Electronic engineering" ]
7,870,034
https://en.wikipedia.org/wiki/Algebraic%20logic
In mathematical logic, algebraic logic is the reasoning obtained by manipulating equations with free variables. What is now usually called classical algebraic logic focuses on the identification and algebraic description of models appropriate for the study of various logics (in the form of classes of algebras that constitute the algebraic semantics for these deductive systems) and connected problems like representation and duality. Well known results like the representation theorem for Boolean algebras and Stone duality fall under the umbrella of classical algebraic logic . Works in the more recent abstract algebraic logic (AAL) focus on the process of algebraization itself, like classifying various forms of algebraizability using the Leibniz operator . Calculus of relations A homogeneous binary relation is found in the power set of for some set X, while a heterogeneous relation is found in the power set of , where . Whether a given relation holds for two individuals is one bit of information, so relations are studied with Boolean arithmetic. Elements of the power set are partially ordered by inclusion, and lattice of these sets becomes an algebra through relative multiplication or composition of relations. "The basic operations are set-theoretic union, intersection and complementation, the relative multiplication, and conversion." The conversion refers to the converse relation that always exists, contrary to function theory. A given relation may be represented by a logical matrix; then the converse relation is represented by the transpose matrix. A relation obtained as the composition of two others is then represented by the logical matrix obtained by matrix multiplication using Boolean arithmetic. Example An example of calculus of relations arises in erotetics, the theory of questions. In the universe of utterances there are statements S and questions Q. There are two relations and α from Q to S: q α a holds when a is a direct answer to question q. The other relation, q p holds when p is a presupposition of question q. The converse relation T runs from S to Q so that the composition Tα is a homogeneous relation on S. The art of putting the right question to elicit a sufficient answer is recognized in Socratic method dialogue. Functions The description of the key binary relation properties has been formulated with the calculus of relations. The univalence property of functions describes a relation that satisfies the formula where is the identity relation on the range of . The injective property corresponds to univalence of , or the formula where this time is the identity on the domain of . But a univalent relation is only a partial function, while a univalent total relation is a function. The formula for totality is Charles Loewner and Gunther Schmidt use the term mapping for a total, univalent relation. The facility of complementary relations inspired Augustus De Morgan and Ernst Schröder to introduce equivalences using for the complement of relation . These equivalences provide alternative formulas for univalent relations (), and total relations (). Therefore, mappings satisfy the formula Schmidt uses this principle as "slipping below negation from the left". For a mapping Abstraction The relation algebra structure, based in set theory, was transcended by Tarski with axioms describing it. Then he asked if every algebra satisfying the axioms could be represented by a set relation. 
The negative answer opened the frontier of abstract algebraic logic.<ref>Roger Maddux (1991) "The Origin of Relation Algebras in the Development and Axiomatization of the Calculus of Relations", Studia Logica 50: 421-55</ref> Algebras as models of logics Algebraic logic treats algebraic structures, often bounded lattices, as models (interpretations) of certain logics, making logic a branch of order theory. In algebraic logic: Variables are tacitly universally quantified over some universe of discourse. There are no existentially quantified variables or open formulas; Terms are built up from variables using primitive and defined operations. There are no connectives; Formulas, built from terms in the usual way, can be equated if they are logically equivalent. To express a tautology, equate a formula with a truth value; The rules of proof are the substitution of equals for equals, and uniform replacement. Modus ponens remains valid, but is seldom employed. In the table below, the left column contains one or more logical or mathematical systems, and the algebraic structure which are its models are shown on the right in the same row. Some of these structures are either Boolean algebras or proper extensions thereof. Modal and other nonclassical logics are typically modeled by what are called "Boolean algebras with operators." Algebraic formalisms going beyond first-order logic in at least some respects include: Combinatory logic, having the expressive power of set theory; Relation algebra, arguably the paradigmatic algebraic logic, can express Peano arithmetic and most axiomatic set theories, including the canonical ZFC. History Algebraic logic is, perhaps, the oldest approach to formal logic, arguably beginning with a number of memoranda Leibniz wrote in the 1680s, some of which were published in the 19th century and translated into English by Clarence Lewis in 1918. But nearly all of Leibniz's known work on algebraic logic was published only in 1903 after Louis Couturat discovered it in Leibniz's Nachlass. and translated selections from Couturat's volume into English. Modern mathematical logic began in 1847, with two pamphlets whose respective authors were George Boole and Augustus De Morgan. In 1870 Charles Sanders Peirce published the first of several works on the logic of relatives. Alexander Macfarlane published his Principles of the Algebra of Logic in 1879, and in 1883, Christine Ladd, a student of Peirce at Johns Hopkins University, published "On the Algebra of Logic". Logic turned more algebraic when binary relations were combined with composition of relations. For sets A and B, a relation over A and B is represented as a member of the power set of A×B with properties described by Boolean algebra. The "calculus of relations" is arguably the culmination of Leibniz's approach to logic. At the Hochschule Karlsruhe the calculus of relations was described by Ernst Schröder. In particular he formulated Schröder rules, though De Morgan had anticipated them with his Theorem K. In 1903 Bertrand Russell developed the calculus of relations and logicism as his version of pure mathematics based on the operations of the calculus as primitive notions. The "Boole–Schröder algebra of logic" was developed at University of California, Berkeley in a textbook by Clarence Lewis in 1918. He treated the logic of relations as derived from the propositional functions of two or more variables. Hugh MacColl, Gottlob Frege, Giuseppe Peano, and A. N. 
Whitehead all shared Leibniz's dream of combining symbolic logic, mathematics, and philosophy. Some writings by Leopold Löwenheim and Thoralf Skolem on algebraic logic appeared after the 1910–13 publication of Principia Mathematica, and Tarski revived interest in relations with his 1941 essay "On the Calculus of Relations". According to Helena Rasiowa, "The years 1920-40 saw, in particular in the Polish school of logic, researches on non-classical propositional calculi conducted by what is termed the logical matrix method. Since logical matrices are certain abstract algebras, this led to the use of an algebraic method in logic." discusses the rich historical connections between algebraic logic and model theory. The founders of model theory, Ernst Schröder and Leopold Loewenheim, were logicians in the algebraic tradition. Alfred Tarski, the founder of set theoretic model theory as a major branch of contemporary mathematical logic, also: Initiated abstract algebraic logic with relation algebras Invented cylindric algebra Co-discovered Lindenbaum–Tarski algebra. In the practice of the calculus of relations, Jacques Riguet used the algebraic logic to advance useful concepts: he extended the concept of an equivalence relation (on a set) to the heterogeneous case with the notion of a difunctional relation. Riguet also extended ordering to the heterogeneous context by his note that a staircase logical matrix has a complement that is also a staircase, and that the theorem of N. M. Ferrers follows from interpretation of the transpose of a staircase. Riguet generated rectangular relations by taking the outer product of logical vectors; these contribute to the non-enlargeable rectangles of formal concept analysis. Leibniz had no influence on the rise of algebraic logic because his logical writings were little studied before the Parkinson and Loemker translations. Our present understanding of Leibniz as a logician stems mainly from the work of Wolfgang Lenzen, summarized in . To see how present-day work in logic and metaphysics can draw inspiration from, and shed light on, Leibniz's thought, see . See also Boolean algebra Codd's theorem Computer algebra Universal algebra References Sources Further reading Good introduction for readers with prior exposure to non-classical logics but without much background in order theory and/or universal algebra; the book covers these prerequisites at length. This book however has been criticized for poor and sometimes incorrect presentation of AAL results. Review by Janusz Czelakowski Draft. Ramon Jansana (2011), "Propositional Consequence Relations and Algebraic Logic". Stanford Encyclopedia of Philosophy. Mainly about abstract algebraic logic. Stanley Burris (2015), "The Algebra of Logic Tradition". Stanford Encyclopedia of Philosophy. Willard Quine, 1976, "Algebraic Logic and Predicate Functors" pages 283 to 307 in The Ways of Paradox, Harvard University Press. Historical perspective Ivor Grattan-Guinness, 2000. The Search for Mathematical Roots. Princeton University Press. Irving Anellis & N. Houser (1991) "Nineteenth Century Roots of Algebraic Logic and Universal Algebra", pages 1–36 in Algebraic Logic'', Colloquia Mathematica Societatis János Bolyai # 54, János Bolyai Mathematical Society & Elsevier External links Algebraic logic at PhilPapers History of logic
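The logical-matrix representation described in the calculus-of-relations section can be made concrete in a few lines: composition of relations is Boolean matrix multiplication and the converse relation is the transpose. This is a small illustrative sketch, not drawn from any source cited in the article.

```python
import numpy as np

# Relations as Boolean matrices: R relates X (rows) to Y (columns),
# S relates Y to Z.  The composition R;S relates X to Z.
R = np.array([[1, 0, 1],
              [0, 1, 0]], dtype=bool)          # 2x3: X = {x1, x2}, Y = {y1, y2, y3}
S = np.array([[1, 0],
              [0, 0],
              [0, 1]], dtype=bool)             # 3x2: Z = {z1, z2}

compose = (R.astype(int) @ S.astype(int)) > 0  # Boolean matrix product
converse = R.T                                  # converse relation = transpose

print(compose.astype(int))   # x (R;S) z  iff  there is some y with x R y and y S z
print(converse.astype(int))  # y (R^T) x  iff  x R y
```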
Algebraic logic
[ "Mathematics" ]
2,084
[ "Fields of abstract algebra", "Mathematical logic", "Algebraic logic" ]
19,220,750
https://en.wikipedia.org/wiki/22-Dihydroergocalciferol
22-Dihydroergocalciferol is a form of vitamin D, also known as vitamin D4. It has the systematic name (5Z,7E)-(3S)-9,10-seco-5,7,10(19)-ergostatrien-3-ol. Vitamin D4 is found in certain mushrooms, being produced from ergosta-5,7-dienol (22,23-dihydroergosterol) instead of ergosterol. See also Forms of vitamin D, the five known forms of vitamin D Lumisterol, a constituent of vitamin D1 References External links Dihydroergocalciferols at lipidmaps.org Vitamin D
22-Dihydroergocalciferol
[ "Chemistry", "Biology" ]
157
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
19,220,980
https://en.wikipedia.org/wiki/Fermentative%20hydrogen%20production
Fermentative hydrogen production is the fermentative conversion of organic substrates to H2. Hydrogen produced in this manner is often called biohydrogen. The conversion is effected by bacteria and protozoa, which employ enzymes. Fermentative hydrogen production is one of several anaerobic conversions. Dark vs photofermentation Dark fermentation reactions do not require light energy. These are capable of constantly producing hydrogen from organic compounds throughout the day and night. Typically these reactions are coupled to the formation of carbon dioxide or formate. Important reactions that result in hydrogen production start with glucose, which is converted to acetic acid: C6H12O6 + 2 H2O → 2 CH3CO2H + 2 CO2 + 4 H2 A related reaction gives formate instead of carbon dioxide: C6H12O6 + 2 H2O → 2 CH3CO2H + 2 HCO2H + 2 H2 These reactions are exergonic by 216 and 209 kcal/mol, respectively. Using synthetic biology, bacteria can be genetically altered to enhance this reaction. Photofermentation differs from dark fermentation, because it only proceeds in the presence of light. Electrohydrogenesis is used in microbial fuel cells. Bacteria strains For example, photo-fermentation with Rhodobacter sphaeroides SH2C can be employed to convert small molecular fatty acids into hydrogen. Enterobacter aerogenes is an outstanding hydrogen producer. It is an anaerobic facultative and mesophilic bacterium that is able to consume different sugars and in contrast to cultivation of strict anaerobes, no special operation is required to remove all oxygen from the fermenter. E. aerogenes has a short doubling time and high hydrogen productivity and evolution rate. Furthermore, hydrogen production by this bacterium is not inhibited at high hydrogen partial pressures; however, its yield is lower compared to strict anaerobes like Clostridia. A theoretical maximum of 4 mol H2/mol glucose can be produced by strict anaerobic bacteria. Facultative anaerobic bacteria such as E. aerogenes have a theoretical maximum yield of 2 mol H2/mol glucose. See also Biohydrogen Fermentation (biochemistry) Hydrogen production Synthetic biology Single cell protein References External links HYDROGEN PRODUCTION VIA DIRECT FERMENTATION Developments and constraints in fermentative hydrogen production Biofuels technology Catalysis Environmental engineering Fermentation Hydrogen biology Hydrogen economy Hydrogen production
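The theoretical molar yields quoted above translate into mass terms as follows. This is a back-of-the-envelope sketch using standard molar masses; real fermentations fall well short of these ceilings.

```python
M_GLUCOSE, M_H2 = 180.16, 2.016      # g/mol

def h2_yield_per_kg_glucose(mol_h2_per_mol_glucose: float) -> float:
    """Grams of H2 obtainable per kilogram of glucose at a given molar yield."""
    mol_glucose = 1000.0 / M_GLUCOSE
    return mol_glucose * mol_h2_per_mol_glucose * M_H2

print(h2_yield_per_kg_glucose(4.0))  # strict anaerobes (dark fermentation ceiling), ~44.8 g
print(h2_yield_per_kg_glucose(2.0))  # facultative anaerobes such as E. aerogenes, ~22.4 g
```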
Fermentative hydrogen production
[ "Chemistry", "Engineering", "Biology" ]
526
[ "Catalysis", "Biofuels technology", "Cellular respiration", "Chemical engineering", "Civil engineering", "Environmental engineering", "Biochemistry", "Chemical kinetics", "Fermentation" ]
19,221,718
https://en.wikipedia.org/wiki/Subhelic%20arc
A subhelic arc is a rare halo, formed by internal reflection through ice crystals, that curves upwards from the horizon and touches the tricker arc above the anthelic point. Subhelic arcs result from ray entrance and exit through prism end faces with two intermediate internal reflections. Formation A subhelic arc is formed when sun rays enter one end face of an ice crystal in singly oriented columns and Parry columns, reflect off two of the crystals side faces, and exits the crystal through the opposite end face. The ray leaves the crystal at the exact opposite angle, resulting in a net deviation angle of 120°, the angle for the formation of 120° parhelia. The subhelic arc touches the top of the tricker arc, an indication the two have closely related ray paths. The subhelic arc crosses the parhelic circle at an acute angle, and at a sun elevation of 27° it passes exactly through the 120° parhelion. See also Infralateral arc Wegener arc Notes References External links (Including a drawing from a halo observation in Oulunsalo, Finland, in April 1996.) (A halo display including a subhelic arc, observed in Oulu, Finland, May 2008.) Atmospheric optical phenomena
Subhelic arc
[ "Physics" ]
258
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
19,228,918
https://en.wikipedia.org/wiki/Tolaasin
Tolaasin, a toxic secretion by Pseudomonas tolaasii, is the cause of bacterial brown blotch disease of edible mushrooms. Tolaasin is composed of 18 amino acids, including a beta-hydroxy-octanoic acid chain located at the N terminus. Tolaasin is a 1985 Da lipodepsipeptide non-host specific toxin. In addition to forming an amphipathic left handed alpha-helix in a hydrophobic environment, the toxin has been shown to form Zn2+-sensitive voltage-gated ion channels in planar lipid bilayers and to catalyze erythrocyte lysis by a colloid osmotic mechanism. At high concentrations, tolaasin acts as a detergent that is able to directly dissolve eukaryotic membranes. The fungal cell membranes are disrupted by the lipopeptides through the formation of trans-membrane pores. Tolaasin pores disrupt the cellular osmotic pressure, leading to membrane collapse. Compounds that inhibit the toxicity of tolaasin have been identified from varying food additives. Tolaasin cytotoxicity can be effectively inhibited by food detergents, as well as sucrose and polyglycerol esters of fatty acids. References Peptides Bacterial toxins
Tolaasin
[ "Chemistry", "Biology" ]
278
[ "Biomolecules by chemical classification", "Biotechnology stubs", "Biochemistry stubs", "Molecular biology", "Biochemistry", "Peptides" ]
19,230,473
https://en.wikipedia.org/wiki/PROX
PROX is an acronym for PReferential OXidation, that refers to the preferential oxidation of carbon monoxide in a gas mixture by a catalyst. It is intended to remove trace amounts of CO from H2/CO/CO2 mixtures produced by steam reforming and water-gas shift. An ideal PROX catalyst preferentially oxidizes carbon monoxide (CO) using a heterogeneous catalyst placed upon a ceramic support. Catalysts include metals such as platinum, platinum/iron, platinum/ruthenium, gold nanoparticles as well as novel copper oxide/ceramic conglomerate catalysts. Motivation This reaction is a considerable subject area of research with implications for fuel cell design. Its main utility lies in the removal of carbon monoxide (CO) from the fuel cell's feed gas. CO poisons the catalyst of most low-temperature fuel cells. Carbon monoxide is often produced as a by-product from steam reforming of hydrocarbons, which produces hydrogen and CO. It is possible to consume most of the CO by reacting it with steam in the water-gas shift reaction: CO + H2O H2 + CO2 The water-gas shift reaction can reduce CO to 1% of the feed, with the added benefit of producing more hydrogen, but not eliminate it completely. To be used in a fuel cell, feed gas must have CO below 10 ppm. Description The PROX process allows for the reaction of CO with oxygen, reducing CO concentration from approximately 0.5–1.5% in the feed gas to less than 10 ppm. 2CO + O2 → 2CO2 Due to the prevalent presence of hydrogen in the feed gas, the competing, undesired combustion of hydrogen will also occur to some degree: 2H2 + O2 → 2H2O The selectivity of the process is a measure of the quality of the reactor, and is defined as the ratio of consumed carbon monoxide to the total of consumed hydrogen and carbon monoxide. The disadvantage of this technology is its very strong exothermic nature, coupled with a very narrow optimal operation temperature window, and is best operated between 353 and 450 K, yielding a hydrogen loss of around one percent. Effective cooling is therefore required. In order to minimize steam generation, excessive dilution with nitrogen is used. Additionally the reaction is interrupted with an intermediary cooler before proceeding to a second stage. In the first reaction an excess of oxygen is provided, at around a factor of two, and about 90% of the CO is transformed. In the second step a substantially higher oxygen excess is used, at approximately a factor of 4, which is then processed with the remaining CO, in order to reduce the CO concentration to less than 10 ppm. To also avoid excess CO-fraction loading, the transient operation of a CO adsorber may be important. The instrumentation and process control complexity requirements are relatively high. The advantage of this technique over selective methanation is the higher space velocity, which reduces the required reactor size. For the case of strong temperature rises, the feed of air can simply be broken. The technical origins for CO-PROX lies in the synthesis of ammonia (Haber process). Ammonia synthesis also has a strict requirement of CO-free hydrogen, as CO is a strong catalyst poison for the usual catalysts used in this process. See also Methanol reformer Steam reforming Partial oxidation References Bibliography Peters et al.: Gasaufbereitung für Brennstoffzellen Chemie Ingenieur Technik 76/10 (2004) 1555-1558 Catalysis Inorganic reactions Chemical reaction engineering Hydrogen production Fuel cells
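The selectivity defined above can be checked with a short calculation based on an oxygen balance over the two competing oxidation reactions. The inlet and outlet mole fractions below are invented for illustration and are not taken from the article.

```python
def prox_selectivity(co_in, co_out, o2_in, o2_out):
    """Selectivity = consumed CO / (consumed CO + consumed H2).

    Oxygen balance: each mole of O2 consumed oxidizes 2 CO or 2 H2, so
    consumed H2 = 2*(O2 consumed) - (CO consumed).  All quantities are
    mole fractions of the feed (any consistent basis works)."""
    d_co = co_in - co_out
    d_o2 = o2_in - o2_out
    d_h2 = 2.0 * d_o2 - d_co
    return d_co / (d_co + d_h2)

# Illustrative first PROX stage: 1.0% CO in, 0.1% CO out, oxygen fed at
# twice the stoichiometric amount (factor of two excess) and fully consumed.
print(round(prox_selectivity(co_in=0.010, co_out=0.001, o2_in=0.010, o2_out=0.0), 2))
# -> 0.45: roughly half of the consumed oxygen went to burning hydrogen.
```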
PROX
[ "Chemistry", "Engineering" ]
744
[ "Catalysis", "Chemical reaction engineering", "Chemical engineering", "Inorganic reactions", "Chemical kinetics" ]
2,449,329
https://en.wikipedia.org/wiki/Twyman%E2%80%93Green%20interferometer
A Twyman–Green interferometer is a variant of the Michelson interferometer principally used to test optical components. It was introduced in 1918 by Frank Twyman and Arthur Green. Fig. 1 illustrates a Twyman–Green interferometer set up to test a lens. Light from a laser is expanded by a diverging lens (not shown), then collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis. The fixed mirror in the Michelson interferometer is rotatable in the Twyman–Green interferometer, and while the light source is usually an extended source (although it can also be a laser) in a Michelson interferometer, the light source is always a point-like source in the Twyman–Green interferometer. Rotating one mirror produces straight fringes in the interference pattern; these fringes are used to test the quality of optical components by observing how the fringe pattern changes when the component is placed in one arm of the interferometer. References Interferometers
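The straight tilt fringes mentioned above follow from elementary two-beam interference: tilting one mirror by an angle θ deviates the returned beam by 2θ, so the two wavefronts cross at that angle and the fringe spacing is roughly λ/(2θ). The sketch below evaluates that textbook estimate; the relation and the He-Ne wavelength are standard optics assumptions, not taken from the article.

```python
wavelength = 633e-9          # He-Ne laser, metres (illustrative choice)

def fringe_spacing(tilt_rad: float, lam: float = wavelength) -> float:
    """Approximate spacing of straight tilt fringes for a small mirror tilt."""
    return lam / (2.0 * tilt_rad)

for tilt_urad in (10, 50, 200):
    s = fringe_spacing(tilt_urad * 1e-6)
    print(f"{tilt_urad:4d} microradian tilt -> fringe spacing ~ {s * 1e3:.1f} mm")
```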
Twyman–Green interferometer
[ "Technology", "Engineering" ]
251
[ "Interferometers", "Measuring instruments" ]
2,449,393
https://en.wikipedia.org/wiki/Range%20of%20motion
Range of motion (or ROM) is the linear or angular distance that a moving object may normally travel while properly attached to another. In biomechanics and strength training, ROM refers to the angular distance and direction a joint can move between the flexed position and the extended position. The act of attempting to increase this distance through therapeutic exercises (range of motion therapy—stretching from flexion to extension for physiological gain) is also sometimes called range of motion. In mechanical engineering, it is (also called range of travel or ROT) used particularly when talking about mechanical devices, such as a sound volume control knob. In biomechanics Measuring range of motion Each specific joint has a normal range of motion that is expressed in degrees. The reference values for the normal ROM in individuals differ slightly depending on age and sex. For example, as an individual ages, they typically lose a small amount of ROM. Analog and traditional devices to measure range of motion in the joints of the body include the goniometer and inclinometer which use a stationary arm, protractor, fulcrum, and movement arm to measure angle from axis of the joint. As measurement results will vary by the degree of resistance, two levels of range of motion results are recorded in most cases. Recent technological advances in 3D motion capture technology allow for the measurement of joints concurrently, which can be used to measure a patient's active range of motion. Limited range of motion Limited range of motion refers to a joint that has a reduction in its ability to move. The reduced motion may be a problem with the specific joint or it may be caused by injury or diseases such as osteoarthritis, rheumatoid arthritis, or other types of arthritis. Pain, swelling, and stiffness associated with arthritis can limit the range of motion of a particular joint and impair function and the ability to perform usual daily activities. Limited range of motion can affect extension or flexion. If there is limited range of extension, it is called "flexion contracture" or "flexion deformity". If the flexion is deficient, it is called "limited range of flexion" or "limited flexion range". Range of motion exercises Physical and occupational therapy can help to improve joint function by focusing on range of motion exercises. The goal of these exercises is to gently increase range of motion while decreasing pain, swelling, and stiffness. There are three types of range of motion exercises: Passive range of motion (or PROM) – Therapist or equipment moves the joint through the range of motion with no effort from the patient. Active assisted range of motion (or AAROM) – Patient uses the muscles surrounding the joint to perform the exercise but requires some help from the therapist or equipment (such as a strap). Active range of motion (or AROM) – Patient performs the exercise to move the joint without any assistance to the muscles surrounding the joint. See also Joint locking (symptom) Hypermobility References https://www.physio-pedia.com/Range_of_Motion Mechanical engineering Massage therapy
Range of motion
[ "Physics", "Engineering" ]
631
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
1,767,692
https://en.wikipedia.org/wiki/Neutron%20activation
Neutron activation is the process in which neutron radiation induces radioactivity in materials, and occurs when atomic nuclei capture free neutrons, becoming heavier and entering excited states. The excited nucleus decays immediately by emitting gamma rays, or particles such as beta particles, alpha particles, fission products, and neutrons (in nuclear fission). Thus, the process of neutron capture, even after any intermediate decay, often results in the formation of an unstable activation product. Such radioactive nuclei can exhibit half-lives ranging from small fractions of a second to many years. Neutron activation is the only common way that a stable material can be induced into becoming intrinsically radioactive. All naturally occurring materials, including air, water, and soil, can be induced (activated) by neutron capture into some amount of radioactivity in varying degrees, as a result of the production of neutron-rich radioisotopes. Some atoms require more than one neutron to become unstable, which makes them harder to activate because the probability of a double or triple capture by a nucleus is below that of single capture. Water, for example, is made up of hydrogen and oxygen. Hydrogen requires a double capture to attain instability as tritium (hydrogen-3), while natural oxygen (oxygen-16) requires three captures to become unstable oxygen-19. Thus water is relatively difficult to activate, as compared to sodium chloride (NaCl), in which both the sodium and chlorine atoms become unstable with a single capture each. These facts were experienced at the Operation Crossroads atomic test series in 1946. Examples An example of this kind of a nuclear reaction occurs in the production of cobalt-60 within a nuclear reactor: 59Co + n → 60Co The cobalt-60 then decays by the emission of a beta particle plus gamma rays into nickel-60. This reaction has a half-life of about 5.27 years, and due to the availability of cobalt-59 (100% of its natural abundance), this neutron bombarded isotope of cobalt is a valuable source of nuclear radiation (namely gamma radiation) for radiotherapy. 60Co → 60Ni + e− + ν̄e + gamma rays In other cases, and depending on the kinetic energy of the neutron, the capture of a neutron can cause nuclear fission—the splitting of the atomic nucleus into two smaller nuclei. If the fission requires an input of energy, that comes from the kinetic energy of the neutron. An example of this kind of fission in a light element can occur when the stable isotope of lithium, lithium-7, is bombarded with fast neutrons and undergoes the following nuclear reaction: 7Li + n → 4He + 3H + n + gamma rays + kinetic energy In other words, the capture of a neutron by lithium-7 causes it to split into an energetic helium nucleus (alpha particle), a hydrogen-3 (tritium) nucleus and a free neutron. The Castle Bravo accident, in which the thermonuclear bomb test at Bikini Atoll in 1954 exploded with 2.5 times the expected yield, was caused by the unexpectedly high probability of this reaction. In the area around a pressurized water reactor or boiling water reactor during normal operation, a significant amount of radiation is produced due to the fast neutron activation of coolant water oxygen via a (n,p) reaction. The activated oxygen-16 nucleus emits a proton (hydrogen nucleus), and transmutes to nitrogen-16, which has a very short life (7.13 seconds) before decaying back to oxygen-16 (emitting 10.4 MeV beta particles and 6.13 MeV gamma radiations).
16O + n → 16N + p The nitrogen-16 decays rapidly: 16N → 16O + β− + γ This activation of the coolant water requires extra biological shielding around the nuclear reactor plant. It is the high-energy gamma ray in the second reaction that causes the major concern. This is why water that has recently been inside a nuclear reactor core must be shielded until this radiation subsides. One to two minutes is generally sufficient. In facilities that housed a cyclotron, the reinforced concrete foundation can become radioactive due to neutron activation. Six important long-lived radioactive isotopes (54Mn, 55Fe, 60Co, 65Zn, 133Ba, and 152Eu) can be found within concrete that has been exposed to neutrons. The residual radioactivity is predominantly due to trace elements present in the concrete, and thus the amount of radioactivity derived from cyclotron activation is minuscule, i.e., pCi/g or Bq/g. The release limit for facilities with residual radioactivity is 25 mrem/year. An example of 55Fe production from the activation of iron in reinforcement bars found in concrete is shown below: 54Fe + n → 55Fe Occurrence Neutron activation is the only common way that a stable material can be induced into becoming intrinsically radioactive. Activation is inherently different from contamination. Neutrons are only free in quantity in the microseconds of a nuclear weapon's explosion, in an active nuclear reactor, or in a spallation neutron source. In an atomic weapon, neutrons are generated for only between 1 and 50 microseconds, but in huge numbers. Most are absorbed by the metallic bomb casing, which is only just starting to be affected by the explosion within it. The neutron activation of the soon-to-be vaporized metal is responsible for a significant portion of the nuclear fallout in nuclear bursts high in the atmosphere. In other situations, neutrons may irradiate soil that is dispersed in a mushroom cloud at or near the Earth's surface, resulting in fallout from activation of soil chemical elements. Effects on materials over time In any location with high neutron fluxes, such as within the cores of nuclear reactors, neutron activation contributes to material erosion, and the lining materials themselves must periodically be disposed of as low-level radioactive waste. Some materials are more subject to neutron activation than others, so a suitably chosen low-activation material can significantly reduce this problem (see International Fusion Materials Irradiation Facility). For example, chromium-51 will form by neutron activation in chrome steel (which contains Cr-50) that is exposed to a typical reactor neutron flux. Carbon-14 is most frequently, but not solely, generated by the neutron activation of atmospheric nitrogen-14 with a thermal neutron. In addition to this dominant natural production pathway from cosmic ray–air interactions and to historical production from atmospheric nuclear testing, carbon-14 is also generated in comparatively minute amounts inside many designs of nuclear reactor, which contain nitrogen gas impurities in their fuel cladding and coolant water, and by neutron activation of the oxygen contained in the water itself. Fast breeder reactors (FBR) produce about an order of magnitude less C-14 than the most common reactor type, the pressurized water reactor, as FBRs do not use water as a primary coolant. Uses Radiation safety For physicians and radiation safety officers, activation of sodium in the human body to sodium-24, and of phosphorus to phosphorus-32, can give a good immediate estimate of acute accidental neutron exposure.
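Both the nitrogen-16 in reactor coolant water discussed above and activation products such as sodium-24 decay exponentially. As a rough illustration (a toy calculation, not taken from the article's sources), the fraction of nitrogen-16 remaining after a given delay follows directly from its 7.13-second half-life:

```python
import math

half_life = 7.13  # nitrogen-16 half-life in seconds, as quoted above
decay_constant = math.log(2) / half_life

for delay in (30, 60, 120):  # waiting times in seconds
    remaining = math.exp(-decay_constant * delay)
    print(f"after {delay:>3d} s, a fraction {remaining:.1e} of the nitrogen-16 remains")
```

After one to two minutes the remaining fraction is far below one percent, which is why that waiting period is generally considered sufficient.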
Neutron detection One way to demonstrate that nuclear fusion has occurred inside a fusor device is to use a Geiger counter to measure the gamma-ray radioactivity that is induced in a sheet of aluminium foil. In the ICF fusion approach, the fusion yield of the experiment (directly proportional to neutron production) is usually determined by measuring the gamma-ray emissions of aluminium or copper neutron activation targets. Aluminium can capture a neutron and generate radioactive sodium-24, which has a half-life of 15 hours and a beta decay energy of 5.514 MeV. The activation of a number of test target elements such as sulfur, copper, tantalum, and gold has been used to determine the yield of both pure fission and thermonuclear weapons. Materials analysis Neutron activation analysis is one of the most sensitive and precise methods of trace element analysis. It requires no sample preparation or solubilization and can therefore be applied to objects that need to be kept intact, such as a valuable piece of art. Although the activation induces radioactivity in the object, its level is typically low and its lifetime may be short, so that its effects soon disappear. In this sense, neutron activation is a non-destructive analysis method. Neutron activation analysis can be done in situ. For example, aluminium (Al-27) can be activated by capturing relatively low-energy neutrons to produce the isotope Al-28, which decays with a half-life of 2.3 minutes and a decay energy of 4.642 MeV. This activated isotope is used in oil drilling to determine the clay content (clay is generally an alumino-silicate) of the underground area under exploration. Historians can use accidental neutron activation to authenticate atomic artifacts and materials subjected to neutron fluxes from fission incidents. For example, one of the rare isotopes found in trinitite (its absence therefore likely signifying a fake sample of the mineral) is a barium neutron activation product; the barium in the Trinity device came from the slow explosive lens employed in the device, known as Baratol. Semiconductor production Neutron irradiation of float-zone silicon slices (wafers) may be used to transmute a small fraction of the Si atoms into phosphorus (P), thereby doping the silicon n-type. See also Induced radioactivity Neutron activation analysis Neutron embrittlement Phosphorus-32, produced when sulfur captures a neutron. Salted bomb Table of nuclides References External links Neutron Activation Analysis web Handbook on Nuclear Activation Cross-Sections, IAEA, 1974 Decay Data in MIRD Format from the National Nuclear Data Center at Brookhaven National Laboratory Neutron capture as it relates to nucleosynthesis Neutron capture and the Chart of the nuclides The chart of the Nuclides Discovery of the Chromium isotopes, Chromium-55 by Cr-54 neutron capture ORILL: 1D transmutation, fuel depletion, and radiological protection code Further reading Activation Radiation effects Radiation
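The amount of activity induced in a target, which underlies both the yield measurements and the activation analysis described above, is commonly estimated with the standard thin-target activation relation A(t) = N σ φ (1 − e^(−λt)), where N is the number of target nuclei, σ the capture cross-section, φ the neutron flux, and λ the decay constant of the product. This relation is not written out in the article, and the numerical values in the sketch below are invented placeholders used only for illustration:

```python
import math

def induced_activity(n_targets, cross_section_cm2, flux_n_per_cm2_s, half_life_s, irradiation_s):
    """Standard thin-target activation build-up: A(t) = N * sigma * phi * (1 - exp(-lambda * t))."""
    lam = math.log(2) / half_life_s
    saturation = n_targets * cross_section_cm2 * flux_n_per_cm2_s  # activity as t -> infinity [Bq]
    return saturation * (1.0 - math.exp(-lam * irradiation_s))

# Placeholder numbers purely for illustration (not data from the article):
activity = induced_activity(
    n_targets=1e20,                           # number of target nuclei in the sample
    cross_section_cm2=37e-24,                 # capture cross-section (37 barns, illustrative)
    flux_n_per_cm2_s=1e13,                    # thermal neutron flux
    half_life_s=5.27 * 365.25 * 24 * 3600,    # a Co-60-like product half-life
    irradiation_s=30 * 24 * 3600,             # 30 days in the flux
)
print(f"induced activity after irradiation: {activity:.3e} Bq")
```

For irradiation times short compared with the product half-life the bracket is approximately λt, so the induced activity grows roughly linearly with exposure before levelling off at the saturation value.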
Neutron activation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,022
[ "Transport phenomena", "Physical phenomena", "Materials science", "Waves", "Radiation", "Condensed matter physics", "Radiation effects" ]
1,769,024
https://en.wikipedia.org/wiki/Condeep
Condeep is a make of gravity-based structure for oil platforms, invented and patented by engineer Olav Mo in 1972 and fabricated by Norwegian Contractors in Stavanger, Norway. Condeep is an abbreviation for concrete deep water structure. A Condeep usually consists of a base of concrete oil storage tanks from which one, three or four concrete shafts rise. The Condeep base always rests on the sea floor, and the shafts rise to about 30 meters above sea level. The platform deck itself is not a part of the construction. The Condeep design is used for a series of production platforms introduced for crude oil and natural gas production in the North Sea and on the Norwegian continental shelf. Following the success of the concrete oil storage tank on the Ekofisk field, Norwegian Contractors introduced the Condeep production platform concept in 1973. This gravity-based structure was unique in that it was built from reinforced concrete instead of steel, which was the norm up to that point. This platform type was designed for the heavy weather conditions and the great water depths often found in the North Sea. Condeep has the advantage that it allows for storage of oil at sea within its own construction. It further allows equipment to be installed in the hollow legs, well protected from the sea. In contrast, one of the challenges with steel platforms is that they only allow for limited weight on the deck, whereas for a Condeep the weight allowance for production equipment and living quarters is seldom a problem. Troll A The Troll A platform is the tallest Condeep to date. It was built over a period of four years, using a workforce of 2,000, and deployed in 1995 to produce gas from the enormous Troll field. With a total height of , Troll A was the tallest object that has ever been moved to another position, relative to the surface of the Earth. Many sources incorrectly state that it was also the largest structure of any kind to be moved, but the Gullfaks C was in fact heavier. The total weight of the Troll A Condeep at launch was 1.2 million tons. 245,000 m³ of concrete and 100,000 tons of steel for reinforcement were used. The amount of steel corresponds to about 15 Eiffel Towers. The platform is placed at a water depth of 300 meters. For stability, its base is dug 35 meters into the sea floor. Gullfaks C Gullfaks C rests below the sea surface and has a total height of . Gullfaks C is the heaviest object that has ever been moved to another position, relative to the surface of the Earth, with a total displacement between 1.4 and 1.5 million tons. Condeep platforms The original concrete structure of Sleipner A sank during trials in the Gandsfjord on August 23, 1991. A new structure was built and deployed in 1993. Sources References Oil platforms
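As a rough arithmetic check of the Eiffel Tower comparison above (the tower's iron mass, commonly quoted as roughly 7,300 tonnes, is an assumed figure that does not appear in the article):

```python
# Rough sanity check of the "15 Eiffel Towers" comparison for Troll A.
reinforcement_steel_tonnes = 100_000   # figure quoted above for Troll A
eiffel_tower_iron_tonnes = 7_300       # commonly quoted value (assumption, not from the article)
ratio = reinforcement_steel_tonnes / eiffel_tower_iron_tonnes
print(f"about {ratio:.1f} Eiffel Towers' worth of metal")
```

This gives a figure of roughly 14, in the same ballpark as the 15 towers quoted above.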
Condeep
[ "Chemistry", "Engineering" ]
575
[ "Oil platforms", "Petroleum technology", "Natural gas technology", "Structural engineering" ]
1,770,001
https://en.wikipedia.org/wiki/Active%20Cylinder%20Control
Daimler AG's Active Cylinder Control (ACC) is a variable displacement technology. It debuted in 2001 on the 5.8 L V12 in the CL600 and S600. Like Chrysler's later Multi-Displacement System, General Motors' Active Fuel Management, and Honda's Variable Cylinder Management, it deactivates one bank of the engine's cylinders when the throttle is closed. In order to preserve the sound of the engines, DaimlerChrysler worked with Eberspächer to design a special exhaust system for ACC-equipped vehicles. The system uses an active valve to divert exhaust between two different exhaust systems. It also has a variable length intake manifold system to optimize output in the two modes. Applications M138 V12 2001–2002 See also Chrysler's Multi-Displacement System (MDS) General Motors' Active Fuel Management (AFM) Honda's Variable Cylinder Management (VCM) Variable displacement References External links Ward's article Mercedes-Benz Group Engine technology Automotive technology tradenames
Active Cylinder Control
[ "Technology" ]
213
[ "Engine technology", "Engines" ]
18,208,818
https://en.wikipedia.org/wiki/Rubroboletus%20legaliae
Rubroboletus legaliae, previously known as Boletus splendidus, B. satanoides, and B. legaliae, is a basidiomycete fungus of the family Boletaceae. It is poisonous, with predominantly gastrointestinal symptoms, and is related to Rubroboletus satanas. Boletus legaliae was described by Czech mycologist Albert Pilát in 1968. It is named after the French mycologist Marcelle Le Gal. It is uncommon in southern England and elsewhere in Europe, and grows with oak (Quercus) and beech (Fagus), often on neutral to acid soils. It is considered vulnerable in the Czech Republic. In Britain, all of the boletes in the Satanas group are either very rare, endangered, or extinct. Description The cap is initially off-white, or coffee-coloured at the button stage. In mid-life it often (but not always) turns a pale mouse grey. In old age the cap turns reddish, or what has been described as 'old rose'. It may reach in diameter. The stipe is stocky, with a narrow red reticulation (net pattern) on an orange ground at the apex. This orange ground colour fades gradually towards the midsection, making the red reticulation more pronounced. At the base the reticulation is absent, and the stipe turns dark vinaceous. Sometimes the stipe detail can be faint, or even absent when covered with earth or leaf litter. The pores are initially red, but have an overall orange colour when mature, and they bruise blue. The flesh turns pale blue on cutting, and dark vinaceous in the stipe base. Often this blueing process is very slow, sometimes taking a minute or so for the flesh to turn a light blue; in other situations, blueing is near-instant. The flesh is said to smell of chicory. Boletus splendidus as described by Charles-Édouard Martín in 1894 is a synonym. The description of Boletus satanoides was too vague to be ascribed to any actual species. Boletus legaliae was transferred to the genus Rubroboletus in 2015 by Marco Della Maggiora and Renzo Trassinelli. Occurrence in the UK R. legaliae is an uncommon to rare species in the UK, typically occurring in open woodland or parkland with plenty of sun on neutral-to-acidic soil. Assessed as 'Vulnerable' by the JNCC, it has a population of around 560 mature specimens. An unusual morph with bright yellow pores has been recorded from Windsor Great Park, Berkshire, sometimes growing alongside normal-pored variants. Similar species Rubroboletus satanas, found in broad-leaved woodland on calcareous soil, has a whiter cap that turns brownish-ochre, lacking the overall reddish tones in maturity. It has a more nauseating smell. Molecular study of the holotype of Rubroboletus spinari has demonstrated its conspecificity with Rubroboletus legaliae. References Poisonous fungi legaliae Fungi of Europe Fungi described in 1968 Fungus species
Rubroboletus legaliae
[ "Biology", "Environmental_science" ]
644
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
18,208,899
https://en.wikipedia.org/wiki/MIKE%20BASIN
MIKE BASIN is an extension of ArcMap (ESRI) for integrated water resources management and planning. It provides a framework for managers and stakeholders to address multi-sectoral allocation and environmental issues in river basins. It is designed to investigate water sharing issues at international or interstate level, and between competing groups of water users, including the environment. MIKE BASIN is developed by DHI. As of September 2014, MIKE BASIN is no longer available for order or download from DHI. It has been replaced by the application named MIKE HYDRO Basin. Applications MIKE BASIN can be used for providing solutions and alternatives to water allocation and water shortage problems, improving and optimizing reservoir and hydropower operations, exploring conjunctive use of groundwater and surface water, evaluating and improving irrigation performance, solving multi-criteria optimization problems, establishing cost-effective measures for water quality compliance. External links Integrated hydrologic modelling Hydraulic engineering Environmental engineering Physical geography
MIKE BASIN
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
186
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
18,209,139
https://en.wikipedia.org/wiki/MIKE%20FLOOD
MIKE FLOOD is a computer program that simulates inundation of rivers, flood plains and urban drainage systems. It dynamically couples 1D (MIKE 11 and MOUSE) and 2D (MIKE 21) modeling techniques into one single tool. MIKE FLOOD is developed by DHI. MIKE FLOOD is accepted by the US Federal Emergency Management Agency (FEMA) for use in the National Flood Insurance Program (NFIP). MIKE FLOOD can be expanded with a range of modules and methods, including a flexible mesh overland flow solver, MIKE URBAN, rainfall-runoff modeling and dynamic operation of structures. Applications MIKE FLOOD can be used for river-flood plain interaction, integrated urban drainage and river modeling, urban flood analysis and detailed dam break studies. Integrated hydrologic modelling Hydraulic engineering Environmental engineering Physical geography
MIKE FLOOD
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
156
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
18,209,184
https://en.wikipedia.org/wiki/ALL%20%28complexity%29
In computability and complexity theory, ALL is the class of all decision problems. Relations to other classes ALL contains all complexity classes of decision problems, including RE and co-RE, and uncountably many languages that are neither RE nor co-RE. It is the largest complexity class, containing all other complexity classes. External links Complexity classes Undecidable problems
ALL (complexity)
[ "Mathematics" ]
79
[ "Mathematical problems", "Undecidable problems", "Computational problems" ]
18,210,373
https://en.wikipedia.org/wiki/Canadian%20traveller%20problem
In computer science and graph theory, the Canadian traveller problem (CTP) is a generalization of the shortest path problem to graphs that are partially observable. In other words, a "traveller" on a given point on the graph cannot see the full graph, rather only adjacent nodes or a certain "realization restriction." This optimization problem was introduced by Christos Papadimitriou and Mihalis Yannakakis in 1989 and a number of variants of the problem have been studied since. The name supposedly originates from conversations of the authors who learned of a difficulty Canadian drivers had: traveling a network of cities with snowfall randomly blocking roads. The stochastic version, where each edge is associated with a probability of independently being in the graph, has been given considerable attention in operations research under the name "the Stochastic Shortest Path Problem with Recourse" (SSPPR). Problem description For a given instance, there are a number of possibilities, or realizations, of how the hidden graph may look. Given an instance, a description of how to follow the instance in the best way is called a policy. The CTP task is to compute the expected cost of the optimal policies. To compute an actual description of an optimal policy may be a harder problem. Given an instance and policy for the instance, every realization produces its own (deterministic) walk in the graph. Note that the walk is not necessarily a path since the best strategy may be to, e.g., visit every vertex of a cycle and return to the start. This differs from the shortest path problem (with strictly positive weights), where repetitions in a walk implies that a better solution exists. Variants There are primarily five parameters distinguishing the number of variants of the Canadian Traveller Problem. The first parameter is how to value the walk produced by a policy for a given instance and realization. In the Stochastic Shortest Path Problem with Recourse, the goal is simply to minimize the cost of the walk (defined as the sum over all edges of the cost of the edge times the number of times that edge was taken). For the Canadian Traveller Problem, the task is to minimize the competitive ratio of the walk; i.e., to minimize the number of times longer the produced walk is to the shortest path in the realization. The second parameter is how to evaluate a policy with respect to different realizations consistent with the instance under consideration. In the Canadian Traveller Problem, one wishes to study the worst case and in SSPPR, the average case. For average case analysis, one must furthermore specify an a priori distribution over the realizations. The third parameter is restricted to the stochastic versions and is about what assumptions we can make about the distribution of the realizations and how the distribution is represented in the input. In the Stochastic Canadian Traveller Problem and in the Edge-independent Stochastic Shortest Path Problem (i-SSPPR), each uncertain edge (or cost) has an associated probability of being in the realization and the event that an edge is in the graph is independent of which other edges are in the realization. Even though this is a considerable simplification, the problem is still #P-hard. Another variant is to make no assumption on the distribution but require that each realization with non-zero probability be explicitly stated (such as “Probability 0.1 of edge set { {3,4},{1,2} }, probability 0.2 of...”). 
This is called the Distribution Stochastic Shortest Path Problem (d-SSPPR or R-SSPPR) and is NP-complete. The first variant is harder than the second because the former can represent in logarithmic space some distributions that the latter represents in linear space. The fourth and final parameter is how the graph changes over time. In CTP and SSPPR, the realization is fixed but not known. In the Stochastic Shortest Path Problem with Recourse and Resets or the Expected Shortest Path problem, a new realization is chosen from the distribution after each step taken by the policy. This problem can be solved in polynomial time by reducing it to a Markov decision process with polynomial horizon. The Markov generalization, where the realization of the graph may influence the next realization, is known to be much harder. An additional parameter is how new knowledge is being discovered on the realization. In traditional variants of CTP, the agent uncovers the exact weight (or status) of an edge upon reaching an adjacent vertex. A new variant was recently suggested where an agent also has the ability to perform remote sensing from any location on the realization. In this variant, the task is to minimize the travel cost plus the cost of sensing operations. Formal definition We define the variant studied in the paper from 1989. That is, the goal is to minimize the competitive ratio in the worst case. It is necessary that we begin by introducing certain terms. Consider a given graph and the family of undirected graphs that can be constructed by adding one or more edges from a given set. Formally, let where we think of E as the edges that must be in the graph and of F as the edges that may be in the graph. We say that is a realization of the graph family. Furthermore, let W be an associated cost matrix where is the cost of going from vertex i to vertex j, assuming that this edge is in the realization. For any vertex v in V, we call its incident edges with respect to the edge set B on V. Furthermore, for a realization , let be the cost of the shortest path in the graph from s to t. This is called the off-line problem because an algorithm for such a problem would have complete information of the graph. We say that a strategy to navigate such a graph is a mapping from to , where denotes the powerset of X. We define the cost of a strategy with respect to a particular realization as follows. Let and . For , define , , and . If there exists a T such that , then ; otherwise let . In other words, we evaluate the policy based on the edges we currently know are in the graph () and the edges we know might be in the graph (). When we take a step in the graph, the edges incident to our new location become known to us. Those edges that are in the graph are added to , and regardless of whether the edges are in the graph or not, they are removed from the set of unknown edges, . If the goal is never reached, we say that we have an infinite cost. If the goal is reached, we define the cost of the walk as the sum of the costs of all of the edges traversed, with cardinality. Finally, we define the Canadian traveller problem. Given a CTP instance , decide whether there exists a policy such that for every realization , the cost of the policy is no more than r times the off-line optimal, . Papadimitriou and Yannakakis noted that this defines a two-player game, where the players compete over the cost of their respective paths and the edge set is chosen by the second player (nature). 
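Because computing optimal CTP policies is intractable in general (see the complexity results below), the model is perhaps easiest to grasp on a toy instance. The following sketch uses an invented four-vertex graph with made-up costs and probabilities; it enumerates all realizations of an edge-independent stochastic instance (i-SSPPR style) and compares a simple "optimistic re-planning" policy, which assumes every unobserved uncertain edge is present, against the expected off-line optimum. It is an illustration of the definitions above, not an algorithm from the literature:

```python
import heapq
from itertools import product

# Toy instance (all vertices, costs and probabilities are invented for illustration).
# Vertices 0..3, start s = 0, goal t = 3.  "certain" edges are always present;
# "uncertain" edges exist independently with the given probability, and their
# status is revealed only when the traveller reaches one of their endpoints.
certain = {(0, 1): 1.0, (1, 3): 4.0, (0, 2): 2.0}
uncertain = {(1, 2): 0.5, (2, 3): 1.0}        # edge -> traversal cost
presence_prob = {(1, 2): 0.7, (2, 3): 0.6}    # edge -> probability of existing
s, t = 0, 3

def shortest_path(edges, source, target):
    """Plain Dijkstra on an undirected graph given as {edge: cost}; returns (distance, path)."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev, queue = {source: 0.0}, {}, [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(queue, (d + w, v))
    if target not in dist:
        return float("inf"), None
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

def optimistic_policy_cost(realization):
    """Walk from s to t, re-planning at each vertex while assuming every uncertain
    edge whose status is still unknown is present."""
    known_absent, position, cost = set(), s, 0.0
    while position != t:
        for e in uncertain:                     # observe uncertain edges incident to here
            if position in e and e not in realization:
                known_absent.add(e)
        believed = dict(certain)
        believed.update({e: c for e, c in uncertain.items() if e not in known_absent})
        _, path = shortest_path(believed, position, t)
        if path is None:
            return float("inf")                 # goal unreachable in this realization
        nxt = path[1]
        step = (position, nxt) if (position, nxt) in believed else (nxt, position)
        cost += believed[step]                  # first step is incident, so its status is known
        position = nxt
    return cost

expected_online = expected_offline = 0.0
for present in product([True, False], repeat=len(uncertain)):
    realization = {e for e, there in zip(uncertain, present) if there}
    prob = 1.0
    for e, there in zip(uncertain, present):
        prob *= presence_prob[e] if there else 1.0 - presence_prob[e]
    real_edges = dict(certain)
    real_edges.update({e: uncertain[e] for e in realization})
    expected_offline += prob * shortest_path(real_edges, s, t)[0]
    expected_online += prob * optimistic_policy_cost(realization)

print(f"expected cost of the optimistic re-planning policy: {expected_online:.3f}")
print(f"expected cost of the off-line (clairvoyant) optimum: {expected_offline:.3f}")
```

On this instance the re-planning policy is measurably worse in expectation than the clairvoyant optimum, which is exactly the gap that the competitive-ratio and expected-cost objectives described above are designed to quantify.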
Complexity The original paper analysed the complexity of the problem and reported it to be PSPACE-complete. It was also shown that finding an optimal path in the case where each edge has an associated probability of being in the graph (i-SSPPR) is a PSPACE-easy but ♯P-hard problem. It was an open problem to bridge this gap, but since then both the directed and undirected versions were shown to be PSPACE-hard. The directed version of the stochastic problem is known in operations research as the Stochastic Shortest Path Problem with Recourse. Applications The problem is said to have applications in operations research, transportation planning, artificial intelligence, machine learning, communication networks, and routing. A variant of the problem has been studied for robot navigation with probabilistic landmark recognition. Open problems Despite the age of the problem and its many potential applications, many natural questions still remain open. Is there a constant-factor approximation or is the problem APX-hard? Is i-SSPPR #P-complete? An even more fundamental question has been left unanswered: is there a polynomial-size description of an optimal policy, setting aside for a moment the time necessary to compute the description? See also Graph traversal Hitting time Shortest path problem Notes References PSPACE-complete problems Travelling salesman problem Computational problems in graph theory NP-complete problems
Canadian traveller problem
[ "Mathematics" ]
1,753
[ "Computational problems in graph theory", "PSPACE-complete problems", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
18,212,581
https://en.wikipedia.org/wiki/MIKE%2021
MIKE 21 is a computer program that simulates flows, waves, sediments and ecology in rivers, lakes, estuaries, bays, coastal areas and seas in two dimensions. It was developed by DHI. Simulation engines MIKE 21 comprises three simulation engines: Single Grid: the time-dependent non-linear equations of continuity and conservation of momentum are solved by implicit finite difference techniques with the variables defined on a space-staggered rectangular grid. Multiple Grids: the Multiple Grids version uses the same simulation engine and numerical approach as the single grid version. However, it provides the possibility of refining areas of special interest within the model area (nesting). All domains within the model area are dynamically linked. Flexible Mesh: uses an unstructured mesh and a cell-centred finite volume solution technique. The mesh is based on linear triangular elements. Applications MIKE 21 can be used for design data assessment for coastal and offshore structures, optimization of port layout and coastal protection measures, cooling water, desalination and recirculation analysis, environmental impact assessment of marine infrastructures, water forecasts for safe marine operations and navigation, coastal flooding and storm surge warnings, and inland flooding and overland flow modeling. Integrated hydrologic modelling Hydraulic engineering Environmental engineering Physical geography
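MIKE 21's rectangular-grid engine solves the full non-linear equations implicitly on a space-staggered grid, as described above. Purely as an illustration of what a space-staggered arrangement means, the toy sketch below advances the linearised 1D shallow-water equations with surface elevation stored at cell centres and velocity at cell faces; the scheme, channel geometry and parameter values are invented for the example and are not DHI's formulation:

```python
import numpy as np

# Illustrative 1D channel (all values invented; MIKE 21 itself uses an implicit
# scheme for the full non-linear 2D equations).
g, depth = 9.81, 10.0                  # gravity [m/s^2], still-water depth [m]
length, nx = 10_000.0, 200             # channel length [m], number of cells
dx = length / nx
dt = 0.5 * dx / np.sqrt(g * depth)     # explicit stability (CFL) limit for this toy scheme

eta = np.zeros(nx)                     # surface elevation at cell centres
u = np.zeros(nx + 1)                   # velocity at cell faces (staggered relative to eta)
eta[nx // 2 - 5 : nx // 2 + 5] = 1.0   # initial hump of water in mid-channel

for step in range(500):
    # momentum: du/dt = -g * d(eta)/dx at interior faces, using the current elevation
    u[1:-1] -= dt * g * (eta[1:] - eta[:-1]) / dx
    u[0] = u[-1] = 0.0                 # closed ends: no flow through the boundaries
    # continuity: d(eta)/dt = -depth * du/dx at cell centres, using the updated velocity
    eta -= dt * depth * (u[1:] - u[:-1]) / dx

print(f"maximum surface elevation after {500 * dt:.0f} s: {eta.max():.3f} m")
```

Staggering the two variables in this way lets each spatial derivative be taken over a single grid interval, which is the general property a space-staggered grid exploits.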
MIKE 21
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
249
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydrology stubs", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
18,212,880
https://en.wikipedia.org/wiki/MIKE%2021C
MIKE 21C is a computer program that simulates the development of the river bed and channel planform in two dimensions. MIKE 21C was developed by DHI. MIKE 21C uses curvilinear finite difference grids. Simulated processes with MIKE 21C include bank erosion, scouring and shoaling brought about by activities such as construction and dredging, seasonal fluctuations in flow, etc. Applications MIKE 21C can be used for designing protection schemes against bank erosion, evaluating measures to reduce or manage shoaling, analyzing alignments and dimensions of navigation channels for minimizing capital and maintenance dredging, predicting the impact of bridge, tunnel and pipeline crossings on river channel hydraulics and morphology, optimizing restoration plans for habitat environments in channel-floodplain systems, and designing monitoring networks based on morphological forecasting. Due to its accurate descriptions of the physical processes, MIKE 21C can simulate a braided river developing from a plane bed, as illustrated by Enggrob & Tjerry (1998). Theory Like most other models made by DHI, MIKE 21C applies an add-on concept in which the overall time-loop can contain processes to be simulated, selected by the user. In its basic form the model is a 2-dimensional hydrodynamic model that can simulate dynamic as well as quasi-steady or steady-state hydrodynamic solutions. The hydrodynamic model solves the Saint-Venant equations in two dimensions with the water depth defined in cell centers and a staggered velocity field (internally the code solves the flux field, i.e. the water depth multiplied by the velocity vector) whose components are directed along the local grid base vectors. The model is computationally a parallel code (written in Fortran) with parallelizations in all modules, which allows for simulations of morphological developments on fine grids over long periods of time. The model is typically applied with as many as 25,000 computational points over periods of several years or even decades. The most important secondary flow in rivers is the so-called helical flow, named for its corkscrew-like, helix-shaped pattern. The flow arises because water in the lower portions of the water column flows towards the local center of curvature, while water near the surface flows away from the local center of curvature. This has only a minor impact on the hydrodynamics, usually only pronounced on a laboratory scale, but it has profound impacts on the sediment transport and morphology because the helical flow influences the otherwise zero transverse sediment component. MIKE 21C applies standard theory for the helical flow, which can be found in e.g. Rozowsky (1957). Standard helical flow theory provides a secondary flow velocity profile that is fully characterised by friction and the deviation angle between the main flow direction and the direction of the shear stress at the river bed. MIKE 21C uses the traditional division of sediment transport into bedload and suspended load, and the model can simulate both non-cohesive and cohesive sediment in a mixture. The bed-load model accounts for the impacts of secondary flow (bed shear stress direction) and local bed slope (gravity). Suspended load is calculated with an advection-dispersion equation for each fraction, which includes adaptation in time and space as well as the 2-dimensional depth-integrated effects of the 3-dimensional flow pattern through profile functions (Galappatti & Vreugdenhil, 1985). Citations I.L.
Rozowsky (1957) "Flow of Water in bends of open channels", English Translation, Israel Progr. For Scientific Transl., Jerusalem R. Galappatti and C.B. Vreugdenhil (1985) "A depth-integrated model for suspended transport", Journal of Hydraulic Research, Vol.23, No.4 H.G. Enggrob and S. Tjerry (1998) "Simulation of Morphological Characteristics of a Braided River", Proc IAHR-Symp on River, Coastal and Estuarine morphodynamics, University of Genova, Dept Environmental Eng., Genova, 585–594. Integrated hydrologic modelling Hydraulic engineering Environmental engineering Physical geography
MIKE 21C
[ "Physics", "Chemistry", "Engineering", "Environmental_science" ]
868
[ "Hydrology", "Chemical engineering", "Physical systems", "Hydraulics", "Civil engineering", "Environmental engineering", "Hydraulic engineering" ]
18,213,407
https://en.wikipedia.org/wiki/Kinetic%20capillary%20electrophoresis
Kinetic capillary electrophoresis (KCE) is capillary electrophoresis of molecules that interact during electrophoresis. KCE was introduced and developed by Professor Sergey Krylov and his research group at York University, Toronto, Canada. It serves as a conceptual platform for the development of homogeneous chemical affinity methods for studies of molecular interactions (measurements of binding and rate constants) and affinity purification (purification of known molecules and search for unknown molecules). Different KCE methods are designed by varying initial and boundary conditions – the way interacting molecules enter and exit the capillary. Several KCE methods have been described: non-equilibrium capillary electrophoresis of equilibrium mixtures (NECEEM), sweeping capillary electrophoresis (SweepCE), and plug-plug KCE (ppKCE). External links More detailed description and several applications of KCE methods (measuring equilibrium and rate constants of molecular interactions, quantitative affinity analysis of proteins, thermochemistry of protein–ligand interactions, selection of aptamers, determination of temperature inside a capillary) can be found in the PDF presentation "KCE is a conceptual platform for kinetic homogeneous affinity methods". References Electrophoresis Chemical kinetics
Kinetic capillary electrophoresis
[ "Chemistry", "Biology" ]
257
[ "Chemical reaction engineering", "Instrumental analysis", "Biochemical separation processes", "Molecular biology techniques", "Chemical kinetics", "Electrophoresis" ]
5,985,207
https://en.wikipedia.org/wiki/Expansion%20of%20the%20universe
The expansion of the universe is the increase in distance between gravitationally unbound parts of the observable universe with time. It is an intrinsic expansion, so it does not mean that the universe expands "into" anything or that space exists "outside" it. To any observer in the universe, it appears that all but the nearest galaxies (which are bound to each other by gravity) move away at speeds that are proportional to their distance from the observer, on average. While objects cannot move faster than light, this limitation applies only with respect to local reference frames and does not limit the recession rates of cosmologically distant objects. Cosmic expansion is a key feature of Big Bang cosmology. It can be modeled mathematically with the Friedmann–Lemaître–Robertson–Walker metric (FLRW), where it corresponds to an increase in the scale of the spatial part of the universe's spacetime metric tensor (which governs the size and geometry of spacetime). Within this framework, the separation of objects over time is associated with the expansion of space itself. However, this is not a generally covariant description but rather only a choice of coordinates. Contrary to common misconception, it is equally valid to adopt a description in which space does not expand and objects simply move apart while under the influence of their mutual gravity. Although cosmic expansion is often framed as a consequence of general relativity, it is also predicted by Newtonian gravity. According to inflation theory, the universe suddenly expanded during the inflationary epoch (about 10^−32 of a second after the Big Bang), and its volume increased by a factor of at least 10^78 (an expansion of distance by a factor of at least 10^26 in each of the three dimensions). This would be equivalent to expanding an object 1 nanometer across (about half the width of a molecule of DNA) to one approximately 10.6 light-years across (about 62 trillion miles). Cosmic expansion subsequently decelerated to much slower rates, until around 9.8 billion years after the Big Bang (4 billion years ago) it began to gradually expand more quickly, and is still doing so. Physicists have postulated the existence of dark energy, appearing as a cosmological constant in the simplest gravitational models, as a way to explain this late-time acceleration. According to the simplest extrapolation of the currently favored cosmological model, the Lambda-CDM model, this acceleration becomes dominant in the future. History In 1912–1914, Vesto Slipher discovered that light from remote galaxies was redshifted, a phenomenon later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used the Einstein field equations to provide theoretical evidence that the universe is expanding. Swedish astronomer Knut Lundmark was the first person to find observational evidence for expansion, in 1924. According to Ian Steer of the NASA/IPAC Extragalactic Database of Galaxy Distances, "Lundmark's extragalactic distance estimates were far more accurate than Hubble's, consistent with an expansion rate (Hubble constant) that was within 1% of the best measurements today." In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented observational evidence for a linear relationship between distance to galaxies and their recessional velocity. Edwin Hubble observationally confirmed Lundmark's and Lemaître's findings in 1929.
Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other. Astronomer Walter Baade recalculated the size of the known universe in the 1940s, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of the Hubble constant was estimated to be between . On 13 January 1994, NASA formally announced a completion of its repairs related to the main mirror of the Hubble Space Telescope, allowing for sharper images and, consequently, more accurate analyses of its observations. Shortly after the repairs were made, Wendy Freedman's 1994 Key Project analyzed the recession velocity of M100 from the core of the Virgo Cluster, offering a Hubble constant measurement of . Later the same year, Adam Riess et al. used an empirical method of visual-band light-curve shapes to more finely estimate the luminosity of Type Ia supernovae. This further minimized the systematic measurement errors of the Hubble constant, to . Riess's measurements on the recession velocity of the nearby Virgo Cluster more closely agree with subsequent and independent analyses of Cepheid variable calibrations of Type Ia supernovae, which estimate a Hubble constant of . In 2003, David Spergel's analysis of the cosmic microwave background during the first-year observations of the Wilkinson Microwave Anisotropy Probe satellite (WMAP) further agreed with the estimated expansion rates for local galaxies, . Structure of cosmic expansion The universe at the largest scales is observed to be homogeneous (the same everywhere) and isotropic (the same in all directions), consistent with the cosmological principle. These constraints demand that any expansion of the universe accord with Hubble's law, in which objects recede from each observer with velocities proportional to their positions with respect to that observer. That is, recession velocities scale with (observer-centered) positions according to v = Hr, where the Hubble rate H quantifies the rate of expansion; H is a function of cosmic time. Dynamics of cosmic expansion Mathematically, the expansion of the universe is quantified by the scale factor, a, which is proportional to the average separation between objects, such as galaxies. The scale factor is a function of time and is conventionally set to be a = 1 at the present time. Because the universe is expanding, a is smaller in the past and larger in the future. Extrapolating back in time with certain cosmological models will yield a moment when the scale factor was zero; our current understanding of cosmology sets this time at 13.787 ± 0.020 billion years ago. If the universe continues to expand forever, the scale factor will approach infinity in the future. It is also possible in principle for the universe to stop expanding and begin to contract, which corresponds to the scale factor decreasing in time. The scale factor is a parameter of the FLRW metric, and its time evolution is governed by the Friedmann equations. The second Friedmann equation, ä/a = −(4πG/3c²)(ρ + 3p) + Λc²/3, shows how the contents of the universe influence its expansion rate. Here, ä is the second time derivative of the scale factor a, G is the gravitational constant, ρ is the energy density within the universe, p is the pressure, c is the speed of light, and Λ is the cosmological constant. A positive energy density leads to deceleration of the expansion (ä < 0), and a positive pressure further decelerates expansion.
On the other hand, sufficiently negative pressure, with p < −ρ/3, leads to accelerated expansion, and the cosmological constant also accelerates expansion. Nonrelativistic matter is essentially pressureless, with p = 0, while a gas of ultrarelativistic particles (such as a photon gas) has positive pressure p = ρ/3. Negative-pressure fluids, like dark energy, are not experimentally confirmed, but the existence of dark energy is inferred from astronomical observations. Distances in the expanding universe Comoving coordinates In an expanding universe, it is often useful to study the evolution of structure with the expansion of the universe factored out. This motivates the use of comoving coordinates, which are defined to grow proportionally with the scale factor. If an object is moving only with the Hubble flow of the expanding universe, with no other motion, then it remains stationary in comoving coordinates. The comoving coordinates are the spatial coordinates in the FLRW metric. Shape of the universe The universe is a four-dimensional spacetime, but within a universe that obeys the cosmological principle, there is a natural choice of three-dimensional spatial surface. These are the surfaces on which observers who are stationary in comoving coordinates agree on the age of the universe. In a universe governed by special relativity, such surfaces would be hyperboloids, because relativistic time dilation means that rapidly receding distant observers' clocks are slowed, so that spatial surfaces must bend "into the future" over long distances. However, within general relativity, the shape of these comoving synchronous spatial surfaces is affected by gravity. Current observations are consistent with these spatial surfaces being geometrically flat (so that, for example, the angles of a triangle add up to 180 degrees). Cosmological horizons An expanding universe typically has a finite age. Light, and other particles, can have propagated only a finite distance. The comoving distance that such particles can have covered over the age of the universe is known as the particle horizon, and the region of the universe that lies within our particle horizon is known as the observable universe. If the dark energy that is inferred to dominate the universe today is a cosmological constant, then the particle horizon converges to a finite value in the infinite future. This implies that the amount of the universe that we will ever be able to observe is limited. Many systems exist whose light can never reach us, because there is a cosmic event horizon induced by the repulsive gravity of the dark energy. Within the study of the evolution of structure within the universe, a natural scale emerges, known as the Hubble horizon. Cosmological perturbations much larger than the Hubble horizon are not dynamical, because gravitational influences do not have time to propagate across them, while perturbations much smaller than the Hubble horizon are straightforwardly governed by Newtonian gravitational dynamics. Consequences of cosmic expansion Velocities and redshifts An object's peculiar velocity is its velocity with respect to the comoving coordinate grid, i.e., with respect to the average expansion-associated motion of the surrounding material. It is a measure of how a particle's motion deviates from the Hubble flow of the expanding universe. The peculiar velocities of nonrelativistic particles decay as the universe expands, in inverse proportion with the cosmic scale factor. This can be understood as a self-sorting effect.
A particle that is moving in some direction gradually overtakes the Hubble flow of cosmic expansion in that direction, asymptotically approaching material with the same velocity as its own. More generally, the peculiar momenta of both relativistic and nonrelativistic particles decay in inverse proportion with the scale factor. For photons, this leads to the cosmological redshift. While the cosmological redshift is often explained as the stretching of photon wavelengths due to "expansion of space", it is more naturally viewed as a consequence of the Doppler effect. Temperature The universe cools as it expands. This follows from the decay of particles' peculiar momenta, as discussed above. It can also be understood as adiabatic cooling. The temperature of ultrarelativistic fluids, often called "radiation" and including the cosmic microwave background, scales inversely with the scale factor (i.e. T ∝ 1/a). The temperature of nonrelativistic matter drops more sharply, scaling as the inverse square of the scale factor (i.e. T ∝ 1/a^2). Density The contents of the universe dilute as it expands. The number of particles within a comoving volume remains fixed (on average), while the volume expands. For nonrelativistic matter, this implies that the energy density drops as a^−3, where a is the scale factor. For ultrarelativistic particles ("radiation"), the energy density drops more sharply, as a^−4. This is because in addition to the volume dilution of the particle count, the energy of each particle (including the rest mass energy) also drops significantly due to the decay of peculiar momenta. In general, we can consider a perfect fluid with pressure p = wρ, where ρ is the energy density. The parameter w is the equation of state parameter. The energy density of such a fluid drops as ρ ∝ a^(−3(1+w)). Nonrelativistic matter has w = 0 while radiation has w = 1/3. For an exotic fluid with negative pressure, like dark energy, the energy density drops more slowly; if w = −1 it remains constant in time. If w < −1, corresponding to phantom energy, the energy density grows as the universe expands. Expansion history Cosmic inflation Inflation is a period of accelerated expansion hypothesized to have occurred at a time of around 10^−32 seconds. It would have been driven by the inflaton, a field that has a positive-energy false vacuum state. Inflation was originally proposed to explain the absence of exotic relics predicted by grand unified theories, such as magnetic monopoles, because the rapid expansion would have diluted such relics. It was subsequently realized that the accelerated expansion would also solve the horizon problem and the flatness problem. Additionally, quantum fluctuations during inflation would have created initial variations in the density of the universe, which gravity later amplified to yield the observed spectrum of matter density variations. During inflation, the cosmic scale factor grew exponentially in time. In order to solve the horizon and flatness problems, inflation must have lasted long enough that the scale factor grew by at least a factor of e^60 (about 10^26). Radiation epoch The history of the universe after inflation but before a time of about 1 second is largely unknown. However, the universe is known to have been dominated by ultrarelativistic Standard Model particles, conventionally called radiation, by the time of neutrino decoupling at about 1 second. During radiation domination, cosmic expansion decelerated, with the scale factor growing proportionally with the square root of the time.
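The scalings quoted in this section can be reproduced numerically from the Friedmann expansion rate H(a)^2 = H0^2 (Ω_r a^−4 + Ω_m a^−3 + Ω_Λ) for a spatially flat universe, the companion of the acceleration equation given earlier. The density parameters and Hubble constant in the sketch below are round, assumed values chosen for illustration, not the measured figures discussed later in the article:

```python
import numpy as np

# Round, assumed parameters for a flat universe (not the measured values cited in the article).
H0 = 70.0 / 3.086e19          # Hubble constant: 70 km/s/Mpc expressed in 1/s
omega_r, omega_m = 9e-5, 0.3  # radiation and matter density parameters today
omega_l = 1.0 - omega_r - omega_m
seconds_per_gyr = 3.156e16

def hubble_rate(a):
    """Expansion rate H(a) from the flat-universe Friedmann equation."""
    return H0 * np.sqrt(omega_r / a**4 + omega_m / a**3 + omega_l)

# Integrate dt = d(ln a) / H(a) from a tiny scale factor up to a = 1 (today).
a, t, dlna, history = 1e-8, 0.0, 1e-4, []
while a < 1.0:
    t += dlna / hubble_rate(a)
    a *= np.exp(dlna)
    history.append((t, a))

times = np.array([h[0] for h in history])
scales = np.array([h[1] for h in history])
print(f"age of this model universe: {times[-1] / seconds_per_gyr:.2f} Gyr")

# Effective expansion law a ∝ t^n, where n = d(ln a) / d(ln t)
for label, probe_a in (("radiation-dominated era", 1e-6), ("matter-dominated era", 1e-2)):
    i = np.searchsorted(scales, probe_a)
    n = (np.log(scales[i + 1]) - np.log(scales[i - 1])) / (np.log(times[i + 1]) - np.log(times[i - 1]))
    print(f"{label}: a grows roughly as t^{n:.2f}")
```

For these assumed parameters the early-time slope comes out near 1/2 and the later one near 2/3, matching the radiation-era behaviour above and the matter-era behaviour described next, while the present-day age lands near 13.5 billion years.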
Matter epoch Since radiation redshifts as the universe expands, eventually nonrelativistic matter came to dominate the energy density of the universe. This transition happened at a time of about 50 thousand years after the Big Bang. During the matter-dominated epoch, cosmic expansion also decelerated, with the scale factor growing as the 2/3 power of the time (a ∝ t^(2/3)). Also, gravitational structure formation is most efficient when nonrelativistic matter dominates, and this epoch is responsible for the formation of galaxies and the large-scale structure of the universe. Dark energy Around 3 billion years ago, at a time of about 11 billion years, dark energy is believed to have begun to dominate the energy density of the universe. This transition came about because dark energy does not dilute as the universe expands, instead maintaining a constant energy density. Similarly to inflation, dark energy drives accelerated expansion, such that the scale factor grows exponentially in time. Measuring the expansion rate The most direct way to measure the expansion rate is to independently measure the recession velocities and the distances of distant objects, such as galaxies. The ratio between these quantities gives the Hubble rate, in accordance with Hubble's law. Typically, the distance is measured using a standard candle, which is an object or event for which the intrinsic brightness is known. The object's distance can then be inferred from the observed apparent brightness. Meanwhile, the recession speed is measured through the redshift. Hubble used this approach for his original measurement of the expansion rate, by measuring the brightness of Cepheid variable stars and the redshifts of their host galaxies. More recently, using Type Ia supernovae, the expansion rate was measured to be H0 = . This means that for every million parsecs of distance from the observer, recessional velocity of objects at that distance increases by about . Supernovae are observable at such great distances that the light travel time therefrom can approach the age of the universe. Consequently, they can be used to measure not only the present-day expansion rate but also the expansion history. In work that was awarded the 2011 Nobel Prize in Physics, supernova observations were used to determine that cosmic expansion is accelerating in the present epoch. By assuming a cosmological model, e.g. the Lambda-CDM model, another possibility is to infer the present-day expansion rate from the sizes of the largest fluctuations seen in the cosmic microwave background. A higher expansion rate would imply a smaller characteristic size of CMB fluctuations, and vice versa. The Planck collaboration measured the expansion rate this way and determined H0 = . There is a disagreement between this measurement and the supernova-based measurements, known as the Hubble tension. A third option proposed recently is to use information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817) to measure the expansion rate. Such measurements do not yet have the precision to resolve the Hubble tension. In principle, the cosmic expansion history can also be measured by studying how redshifts, distances, fluxes, angular positions, and angular sizes of astronomical objects change over the course of the time that they are being observed. These effects are too small to have yet been detected.
However, changes in redshift or flux could be observed by the Square Kilometre Array or Extremely Large Telescope in the mid-2030s. Conceptual considerations and misconceptions Measuring distances in expanding space At cosmological scales, the present universe conforms to Euclidean space, what cosmologists describe as geometrically flat, to within experimental error. Consequently, the rules of Euclidean geometry associated with Euclid's fifth postulate hold in the present universe in 3D space. It is, however, possible that the geometry of past 3D space could have been highly curved. The curvature of space is often modeled using a non-zero Riemann curvature tensor in curvature of Riemannian manifolds. Euclidean "geometrically flat" space has a Riemann curvature tensor of zero. "Geometrically flat" space has three dimensions and is consistent with Euclidean space. However, spacetime has four dimensions; it is not flat according to Einstein's general theory of relativity. Einstein's theory postulates that "matter and energy curve spacetime, and there is enough matter and energy to provide for curvature." In part to accommodate such different geometries, the expansion of the universe is inherently general-relativistic. It cannot be modeled with special relativity alone: Though such models exist, they may be at fundamental odds with the observed interaction between matter and spacetime seen in the universe. The images to the right show two views of spacetime diagrams that show the large-scale geometry of the universe according to the ΛCDM cosmological model. Two of the dimensions of space are omitted, leaving one dimension of space (the dimension that grows as the cone gets larger) and one of time (the dimension that proceeds "up" the cone's surface). The narrow circular end of the diagram corresponds to a cosmological time of 700 million years after the Big Bang, while the wide end is a cosmological time of 18 billion years, where one can see the beginning of the accelerating expansion as a splaying outward of the spacetime, a feature that eventually dominates in this model. The purple grid lines mark cosmological time at intervals of one billion years from the Big Bang. The cyan grid lines mark comoving distance at intervals of one billion light-years in the present era (less in the past and more in the future). The circular curling of the surface is an artifact of the embedding with no physical significance and is done for illustrative purposes; a flat universe does not curl back onto itself. (A similar effect can be seen in the tubular shape of the pseudosphere.) The brown line on the diagram is the worldline of Earth (or more precisely its location in space, even before it was formed). The yellow line is the worldline of the most distant known quasar. The red line is the path of a light beam emitted by the quasar about 13 billion years ago and reaching Earth at the present day. The orange line shows the present-day distance between the quasar and Earth, about 28 billion light-years, which is a larger distance than the age of the universe multiplied by the speed of light, ct. According to the equivalence principle of general relativity, the rules of special relativity are locally valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed c; in the diagram, this means, according to the convention of constructing spacetime diagrams, that light beams always make an angle of 45° with the local grid lines. 
It does not follow, however, that light travels a distance ct in a time t, as the red worldline illustrates. While it always moves locally at c, its time in transit (about 13 billion years) is not related to the distance traveled in any simple way, since the universe expands as the light beam traverses space and time. The distance traveled is thus inherently ambiguous because of the changing scale of the universe. Nevertheless, there are two distances that appear to be physically meaningful: the distance between Earth and the quasar when the light was emitted, and the distance between them in the present era (taking a slice of the cone along the dimension defined as the spatial dimension). The former distance is about 4 billion light-years, much smaller than ct, whereas the latter distance (shown by the orange line) is about 28 billion light-years, much larger than ct. In other words, if space were not expanding today, it would take 28 billion years for light to travel between Earth and the quasar, while if the expansion had stopped at the earlier time, it would have taken only 4 billion years. The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light-years away. In fact, the light emitted towards Earth was actually moving away from Earth when it was first emitted; the metric distance to Earth increased with cosmological time for the first few billion years of its travel time, also indicating that the expansion of space between Earth and the quasar at the early time was faster than the speed of light. None of this behavior originates from a special property of metric expansion, but rather from local principles of special relativity integrated over a curved surface. Topology of expanding space Over time, the space that makes up the universe is expanding. The words 'space' and 'universe', sometimes used interchangeably, have distinct meanings in this context. Here 'space' is a mathematical concept that stands for the three-dimensional manifold into which our respective positions are embedded, while 'universe' refers to everything that exists, including the matter and energy in space, the extra dimensions that may be wrapped up in various strings, and the time through which various events take place. The expansion of space is in reference to this 3D manifold only; that is, the description involves no structures such as extra dimensions or an exterior universe. The ultimate topology of space is a posteriori – something that in principle must be observed – as there are no constraints that can simply be reasoned out (in other words there cannot be any a priori constraints) on how the space in which we live is connected or whether it wraps around on itself as a compact space. Though certain cosmological models such as Gödel's universe even permit bizarre worldlines that intersect with themselves, ultimately the question as to whether we are in something like a "Pac-Man universe", where if traveling far enough in one direction would allow one to simply end up back in the same place like going all the way around the surface of a balloon (or a planet like the Earth), is an observational question that is constrained as measurable or non-measurable by the universe's global geometry. At present, observations are consistent with the universe having infinite extent and being a simply connected space, though cosmological horizons limit our ability to distinguish between simple and more complicated proposals. 
The universe could be infinite in extent or it could be finite; but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe. Thus any edges or exotic geometries or topologies would not be directly observable, since light has not reached scales on which such aspects of the universe, if they exist, are still allowed. For all intents and purposes, it is safe to assume that the universe is infinite in spatial extent, without edge or strange connectedness. Regardless of the overall shape of the universe, the question of what the universe is expanding into is one that does not require an answer, according to the theories that describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand, since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No "outside" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything "outside" the expanding universe into which the universe expands. Even if the overall spatial extent is infinite and thus the universe cannot get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite. Effects of expansion on small scales The expansion of space is sometimes described as a force that acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general. In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they "drop out" of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so. There is no difference between the inertial expansion of the universe and the inertial separation of nearby objects in a vacuum; the former is simply a large-scale extrapolation of the latter. Once objects are bound by gravity, they no longer recede from each other. Thus, the Andromeda Galaxy, which is bound to the Milky Way Galaxy, is actually falling towards us and is not expanding away. Within the Local Group, the gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. Such future events are predicted by knowing the precise way the Hubble Flow is changing as well as the masses of the objects to which we are being gravitationally pulled. 
Currently, the Local Group is being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which we would eventually merge if dark energy were not acting. A consequence of metric expansion being due to inertial motion is that a uniform local "explosion" of matter into a vacuum can be locally described by the FLRW geometry, the same geometry that describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe, which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed c with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging. The situation changes somewhat with the introduction of dark energy or a cosmological constant. A cosmological constant due to a vacuum energy density has the effect of adding a repulsive force between objects that is proportional (not inversely proportional) to distance. Unlike inertia it actively "pulls" on objects that have clumped together under the influence of gravity, and even on individual atoms. However, this does not cause the objects to grow steadily or to disintegrate; unless they are very weakly bound, they will simply settle into an equilibrium state that is slightly (undetectably) larger than it would otherwise have been. As the universe expands and the matter in it thins, the gravitational attraction decreases (since it is proportional to the density), while the cosmological repulsion increases. Thus, the ultimate fate of the ΛCDM universe is a near-vacuum expanding at an ever-increasing rate under the influence of the cosmological constant. However, gravitationally bound objects like the Milky Way do not expand, and the Andromeda Galaxy is moving fast enough towards us that it will still merge with the Milky Way in around 3 billion years. Metric expansion and speed of light At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity. This is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field). While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "distance" here is somewhat different from that used in an inertial frame. The definition of distance used here is the summation or integration of local comoving distances, all done at constant local proper time. For example, galaxies that are farther than the Hubble radius, approximately 4.5 gigaparsecs or 14.7 billion light-years, away from us have a recession speed that is faster than the speed of light. Visibility of these objects depends on the exact expansion history of the universe. Light that is emitted today from galaxies beyond the more-distant cosmological event horizon, about 5 gigaparsecs or 16 billion light-years, will never reach us, although we can still see the light that these galaxies emitted in the past. 
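As a rough numerical companion to the figures just quoted (a sketch, not part of the article, assuming a Hubble constant of about 67.7 km/s/Mpc), the Hubble radius c/H0 and the recession speed implied by Hubble's law v = H0·D can be computed directly:

# Illustrative sketch only: the Hubble constant below is an assumed value, and the
# simple v = H0 * D relation is the present-day proper-distance law described in the text.
H0_KM_S_MPC = 67.7                  # assumed Hubble constant, km/s per megaparsec
MPC_KM = 3.0857e19                  # kilometres per megaparsec
GYR_S = 3.156e16                    # seconds per gigayear
C_KM_S = 299_792.458                # speed of light, km/s

H0_per_s = H0_KM_S_MPC / MPC_KM     # Hubble constant in 1/s

# Hubble radius c/H0: the proper distance at which the recession speed equals c.
hubble_radius_km = C_KM_S / H0_per_s
hubble_radius_gly = hubble_radius_km / (C_KM_S * GYR_S)   # in billions of light-years
print(f"Hubble radius ~ {hubble_radius_gly:.1f} billion light-years")   # ~14

def recession_speed_over_c(distance_gly: float) -> float:
    """Recession speed (in units of c) of an object at the given proper distance."""
    distance_km = distance_gly * C_KM_S * GYR_S
    return H0_per_s * distance_km / C_KM_S

print(f"v/c at the quasar's present-day distance of 28 Gly ~ {recession_speed_over_c(28):.1f}")   # ~1.9

Objects beyond roughly 14 billion light-years therefore recede faster than light in this sense, which is the statement made above; whether their light can still reach us depends on the full expansion history, not on this instantaneous speed.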
Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists. Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion within the fields of education and communication of scientific concepts. Common analogies for cosmic expansion The expansion of the universe is often illustrated with conceptual models where an expanding object is taken to represent expanding space. These models can be misleading to the extent that they give the false impression that expanding space must carry objects with it. In reality, the expansion of the universe can alternatively be thought of as corresponding only to the inertial motion of objects away from one another. In the "ant on a rubber rope model" one imagines an ant (idealized as pointlike) crawling at a constant speed on a perfectly elastic rope that is constantly stretching. If we stretch the rope in accordance with the ΛCDM scale factor and think of the ant's speed as the speed of light, then this analogy is conceptually accurate – the ant's position over time will match the path of the red line on the embedding diagram above. In the "rubber sheet model", one replaces the rope with a flat two-dimensional rubber sheet that expands uniformly in all directions. The addition of a second spatial dimension allows for the possibility of showing local perturbations of the spatial geometry by local curvature in the sheet. In the "balloon model" the flat sheet is replaced by a spherical balloon that is inflated from an initial size of zero (representing the Big Bang). A balloon has positive Gaussian curvature, even though observations suggest that the real universe is spatially flat, but this inconsistency can be eliminated by making the balloon very large so that it is locally flat within the limits of observation. This analogy is potentially confusing since it could wrongly suggest that the Big Bang took place at the center of the balloon. In fact points off the surface of the balloon have no meaning, even if they were occupied by the balloon at an earlier time or will be occupied later. In the "raisin bread model", one imagines a loaf of raisin bread expanding in an oven. The loaf (space) expands as a whole, but the raisins (gravitationally bound objects) do not expand; they merely move farther away from each other. This analogy has the disadvantage of wrongly implying that the expansion has a center and an edge. See also Comoving and proper distances Great Attractor References Printed references Eddington, Arthur. The Expanding Universe: Astronomy's 'Great Debate', 1900–1931. Press Syndicate of the University of Cambridge, 1933. Liddle, Andrew R. and Lyth, David H. Cosmological Inflation and Large-Scale Structure. Cambridge University Press, 2000. Lineweaver, Charles H. and Davis, Tamara M. "Misconceptions about the Big Bang", Scientific American, March 2005 (non-free content). Mook, Delo E. and Thomas Vargish. Inside Relativity. Princeton University Press, 1991. External links Swenson, Jim, Answer to a question about the expanding universe Felder, Gary, "The Expanding universe". 
NASA's WMAP team offers an "Explanation of the universal expansion" at an elementary level. Hubble Tutorial from the University of Wisconsin Physics Department Expanding raisin bread from the University of Winnipeg: an illustration, but no explanation "Ant on a balloon" analogy to explain the expanding universe at "Ask an Astronomer" (the astronomer who provides this explanation is not specified). General relativity Big Bang Concepts in astronomy Physical cosmological concepts
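The "ant on a rubber rope model" described in the analogies section above can also be made quantitative with a toy calculation. The sketch below is illustrative only and uses made-up numbers (a 1 m rope stretching at 1 m/s and an ant crawling at 0.2 m/s) rather than the ΛCDM scale factor. Tracking the ant's position as a fraction of the rope's current length (the 'comoving' coordinate that stretching leaves unchanged) shows that the ant still reaches the far end in finite time, for the same reason that light from many receding galaxies eventually reaches us:

import math

def time_to_reach_end(rope_m=1.0, stretch_m_s=1.0, ant_m_s=0.2, dt=1e-3):
    """Numerically integrate the ant's fractional (comoving) position along a
    uniformly stretching rope and return the time at which it reaches the end."""
    frac, t = 0.0, 0.0
    while frac < 1.0:
        t += dt
        frac += ant_m_s * dt / (rope_m + stretch_m_s * t)   # crawling advances the fraction
    return t

t_numerical = time_to_reach_end()
t_exact = (1.0 / 1.0) * (math.exp(1.0 / 0.2) - 1.0)   # closed form: (L0/u) * (e**(u/v) - 1)
print(f"numerical: {t_numerical:.1f} s, exact: {t_exact:.1f} s")   # both ~147 s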
Expansion of the universe
[ "Physics", "Astronomy" ]
7,077
[ "Physical cosmological concepts", "Cosmogony", "Concepts in astrophysics", "Concepts in astronomy", "Big Bang", "General relativity", "Theory of relativity" ]
5,987,201
https://en.wikipedia.org/wiki/Palmitoylcarnitine
Palmitoylcarnitine is an ester derivative of carnitine involved in the metabolism of fatty acids. Fatty acids undergo a process known as β-oxidation, which produces the acetyl-CoA that feeds the tricarboxylic acid (TCA) cycle and ultimately yields energy in the form of ATP. β-oxidation occurs primarily within mitochondria; however, the mitochondrial membrane prevents the entry of long-chain fatty acids (>C10), so the conversion of fatty acids such as palmitic acid is key. Palmitic acid is brought to the cell and, once inside the cytoplasm, is first converted to palmitoyl-CoA. Palmitoyl-CoA can freely pass the outer mitochondrial membrane, but the inner membrane is impermeable to the acyl-CoA and thioester forms of various long-chain fatty acids such as palmitic acid. The palmitoyl-CoA is therefore enzymatically transformed into palmitoylcarnitine by the carnitine O-palmitoyltransferase family. The palmitoylcarnitine is then actively transported across the inner mitochondrial membrane by the carnitine–acylcarnitine translocase. Once inside the mitochondrial matrix, the same carnitine O-palmitoyltransferase family is responsible for transforming the palmitoylcarnitine back to the palmitoyl-CoA form.
Structure
Palmitoylcarnitine contains the saturated fatty acid palmitic acid (C16:0) bound to the β-hydroxy group of the carnitine. The core carnitine structure, consisting of butanoate with a quaternary ammonium group attached to C4 and a hydroxy group at C3, is a common molecular backbone for the transfer of many long-chain fatty acids into the mitochondria.
Function
Energy Generation
Palmitoylcarnitine is one molecule in a family of ester derivatives of carnitine that shuttle fatty acids into the mitochondria, where they are oxidized to generate energy. Beta oxidation of the palmitoyl group yields 7 NADH, 7 FADH2, and 8 acetyl-CoA molecules. Each acetyl-CoA in turn generates 3 NADH, 1 FADH2, and 1 GTP in the TCA cycle. Each NADH generates 2.5 ATP in the electron transport chain and each FADH2 generates 1.5 ATP. This totals 108 ATP, but 2 ATP are consumed to generate the initial palmitoyl-CoA, leaving a net gain of 106 ATP.
Clinical Significance
Palmitoylcarnitine has demonstrated potential as a diagnostic marker in newborns for primary carnitine deficiency. Levels of palmitoylcarnitine (palcar) showed significant correlation with dihydrotestosterone (DHT) and its effects in prostate cancer models, suggesting that the two molecules may play similar roles.
See also
Carnitine O-palmitoyltransferase
References
Metabolism Lipids
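The ATP bookkeeping in the Function section above can be reproduced with a short calculation. This is an illustrative sketch only (the function name is made up); it assumes the textbook conversion factors quoted in the article, 2.5 ATP per NADH and 1.5 ATP per FADH2, and counts each GTP as one ATP equivalent:

def palmitate_atp_yield():
    """Gross and net ATP from complete oxidation of one palmitoyl-CoA (C16)."""
    rounds = 7                       # beta-oxidation rounds for a C16 acyl-CoA
    acetyl_coa = 8                   # 2-carbon units produced
    nadh = rounds + 3 * acetyl_coa   # 7 from beta-oxidation + 24 from the TCA cycle
    fadh2 = rounds + 1 * acetyl_coa  # 7 from beta-oxidation + 8 from the TCA cycle
    gtp = acetyl_coa                 # one GTP per turn of the TCA cycle
    gross = 2.5 * nadh + 1.5 * fadh2 + gtp
    return gross, gross - 2          # 2 ATP equivalents spent forming palmitoyl-CoA

print(palmitate_atp_yield())         # (108.0, 106.0)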
Palmitoylcarnitine
[ "Chemistry", "Biology" ]
609
[ "Biomolecules by chemical classification", "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "Cellular processes", "Metabolism", "Biochemistry", "Lipids" ]
5,987,626
https://en.wikipedia.org/wiki/Aryl%20hydrocarbon%20receptor%20nuclear%20translocator
The ARNT gene encodes the aryl hydrocarbon receptor nuclear translocator protein, which forms a complex with the ligand-bound aryl hydrocarbon receptor (AhR) and is required for receptor function. The encoded protein has also been identified as the beta subunit of a heterodimeric transcription factor, hypoxia-inducible factor 1 (HIF1). A t(1;12)(q21;p13) translocation, which results in a TEL–ARNT fusion protein, is associated with acute myeloblastic leukemia. Three alternatively spliced variants encoding different isoforms have been described for this gene. The aryl hydrocarbon receptor (AhR) is involved in the induction of several enzymes that participate in xenobiotic metabolism. The ligand-free, cytosolic form of the aryl hydrocarbon receptor is complexed to heat shock protein 90. Binding of a ligand, such as dioxin or a polycyclic aromatic hydrocarbon, results in translocation of only the ligand-binding subunit into the nucleus. Induction of enzymes involved in xenobiotic metabolism occurs through binding of the ligand-bound AhR to xenobiotic responsive elements in the promoters of genes for these enzymes.
Interactions
Aryl hydrocarbon receptor nuclear translocator has been shown to interact with: AIP, AHR, EPAS1, HIF1A, NCOA2, SIM1, and SIM2.
References
Further reading
External links
Transcription factors PAS-domain-containing proteins Oncogenes
Aryl hydrocarbon receptor nuclear translocator
[ "Chemistry", "Biology" ]
317
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
5,988,464
https://en.wikipedia.org/wiki/Lipid%20metabolism
Lipid metabolism is the synthesis and degradation of lipids in cells, involving the breakdown and storage of fats for energy and the synthesis of structural and functional lipids, such as those involved in the construction of cell membranes. In animals, these fats are obtained from food or are synthesized by the liver. Lipogenesis is the process of synthesizing these fats. The majority of lipids found in the human body from ingesting food are triglycerides and cholesterol. Other types of lipids found in the body are fatty acids and membrane lipids. Lipid metabolism is often considered the digestion and absorption process of dietary fat; however, there are two sources of fats that organisms can use to obtain energy: consumed dietary fats and stored fat. Vertebrates (including humans) use both sources of fat to produce energy for organs such as the heart to function. Since lipids are hydrophobic molecules, they need to be solubilized before their metabolism can begin. Lipid metabolism often begins with hydrolysis, which occurs with the help of various enzymes in the digestive system. Lipid metabolism also occurs in plants, though the processes differ in some ways when compared to animals. The second step after hydrolysis is the absorption of the fatty acids into the epithelial cells of the intestinal wall. In the epithelial cells, fatty acids are packaged and transported to the rest of the body. Metabolic processes include lipid digestion, lipid absorption, lipid transportation, lipid storage, lipid catabolism, and lipid biosynthesis. Lipid catabolism is accomplished by a process known as beta oxidation, which takes place in the mitochondria and peroxisomes.
Lipid digestion
Digestion is the first step of lipid metabolism; it is the process of breaking triglycerides down into smaller monoglyceride units with the help of lipase enzymes. Digestion of fats begins in the mouth through chemical digestion by lingual lipase. Ingested cholesterol is not broken down by the lipases and stays intact until it enters the epithelial cells of the small intestine. Lipids then continue to the stomach, where chemical digestion continues by gastric lipase and mechanical digestion begins (peristalsis). The majority of lipid digestion and absorption, however, occurs once the fats reach the small intestine. Chemicals from the pancreas (the pancreatic lipase family and bile salt-dependent lipase) are secreted into the small intestine to help break down the triglycerides, along with further mechanical digestion, until they are individual fatty acid units able to be absorbed into the small intestine's epithelial cells. Pancreatic lipase is the enzyme chiefly responsible for hydrolyzing the triglycerides into separate free fatty acid and glycerol units.
Lipid absorption
The second step in lipid metabolism is absorption of fats. Short-chain fatty acids can be absorbed in the stomach, while most absorption of fats occurs only in the small intestine. Once the triglycerides are broken down into individual fatty acids and glycerols, along with cholesterol, they aggregate into structures called micelles. Fatty acids and monoglycerides leave the micelles and diffuse across the membrane to enter the intestinal epithelial cells. In the cytosol of the epithelial cells, fatty acids and monoglycerides are recombined back into triglycerides.
In the cytosol of epithelial cells, triglycerides and cholesterol are packaged into bigger particles called chylomicrons, which are amphipathic structures that transport digested lipids. Chylomicrons travel through the bloodstream to enter adipose and other tissues in the body.
Lipid transportation
Due to the hydrophobic nature of membrane lipids, triglycerides, and cholesterol, these lipids require special transport proteins known as lipoproteins. The amphipathic structure of lipoproteins allows the triglycerides and cholesterol to be transported through the blood. Chylomicrons are one sub-group of lipoproteins that carry the digested lipids from the small intestine to the rest of the body. The varying densities of the different types of lipoproteins are characteristic of the types of fats they transport. For example, very-low-density lipoproteins (VLDL) carry the triglycerides synthesized by our body, and low-density lipoproteins (LDL) transport cholesterol to our peripheral tissues. A number of these lipoproteins are synthesized in the liver, but not all of them originate from this organ.
Lipid storage
Lipids are stored in white adipose tissue as triglycerides. In a lean young adult human, the mass of triglycerides stored represents about 10–20 kilograms. Triglycerides are formed from a backbone of glycerol with three fatty acids. Free fatty acids are activated into acyl-CoA and esterified to finally reach the triglyceride droplet. Lipoprotein lipase, which releases fatty acids from circulating lipoproteins for uptake into adipocytes, has an important role in this process.
Lipid catabolism
Once the chylomicrons (or other lipoproteins) travel through the tissues, these particles are broken down by lipoprotein lipase on the luminal surface of endothelial cells in capillaries to release triglycerides. The triglycerides are broken down into fatty acids and glycerol before entering cells, and the remaining cholesterol travels through the blood back to the liver. In the cytosol of the cell (for example a muscle cell), the glycerol is converted to glyceraldehyde 3-phosphate, an intermediate in glycolysis, to be further oxidized and produce energy. However, the main steps of fatty acid catabolism occur in the mitochondria. Long-chain fatty acids (more than 14 carbons) need to be converted to fatty acyl-CoA in order to pass across the mitochondrial membrane. Fatty acid catabolism begins in the cytoplasm of cells as acyl-CoA synthetase uses the energy from cleavage of an ATP to catalyze the addition of coenzyme A to the fatty acid. The resulting acyl-CoA crosses the mitochondrial membrane and enters the process of beta oxidation. The main products of the beta oxidation pathway are acetyl-CoA (which is used in the citric acid cycle to produce energy), NADH, and FADH2. The process of beta oxidation requires the following enzymes: acyl-CoA dehydrogenase, enoyl-CoA hydratase, 3-hydroxyacyl-CoA dehydrogenase, and 3-ketoacyl-CoA thiolase. The overall net reaction, using palmitoyl-CoA (16:0) as a model substrate, is:
7 FAD + 7 NAD+ + 7 CoASH + 7 H2O + H(CH2CH2)7CH2CO-SCoA → 8 CH3CO-SCoA + 7 FADH2 + 7 NADH + 7 H+
Lipid biosynthesis
In addition to dietary fats, storage lipids stored in the adipose tissues are one of the main sources of energy for living organisms. Triacylglycerols, membrane lipids, and cholesterol can be synthesized by organisms through various pathways.
Membrane lipid biosynthesis
There are two major classes of membrane lipids: glycerophospholipids and sphingolipids.
Although many different membrane lipids are synthesized in our body, the pathways share the same pattern. The first step is synthesizing the backbone (sphingosine or glycerol); the second step is the addition of fatty acids to the backbone to make phosphatidic acid. Phosphatidic acid is further modified by the attachment of different hydrophilic head groups to the backbone. Membrane lipid biosynthesis occurs in the endoplasmic reticulum membrane.
Triglyceride biosynthesis
Phosphatidic acid is also a precursor for triglyceride biosynthesis. Phosphatidic acid phosphatase catalyzes the conversion of phosphatidic acid to diacylglycerol, which is converted to triglyceride by acyltransferase. Triglyceride biosynthesis occurs in the cytosol.
Fatty acid biosynthesis
The precursor for fatty acids is acetyl-CoA, and fatty acid biosynthesis occurs in the cytosol of the cell. The overall net reaction, using palmitate (16:0) as a model product, is:
8 acetyl-CoA + 7 ATP + 14 NADPH + 6 H+ → palmitate + 8 CoA + 14 NADP+ + 6 H2O + 7 ADP + 7 Pi
Cholesterol biosynthesis
Cholesterol can be made from acetyl-CoA through a multiple-step pathway known as the isoprenoid pathway. Cholesterol is essential because it can be modified to form different hormones in the body, such as progesterone. 70% of cholesterol biosynthesis occurs in the cytosol of liver cells.
Hormonal regulation of lipid metabolism
Lipid metabolism is tightly regulated by hormones to ensure a balance between energy storage and utilization.
Insulin: promotes lipid synthesis, inhibits lipid breakdown, and facilitates glucose transport and its conversion into fatty acids.
Glucagon: stimulates fatty acid oxidation and inhibits de novo fatty acid synthesis, reducing VLDL release and hepatic steatosis.
Thyroid Hormone: promotes hepatic triglyceride synthesis, enhances lipolysis, stimulates mitochondrial fatty acid β-oxidation, and regulates cholesterol levels through various mechanisms, including LDL receptor expression and bile acid excretion.
Sex Hormones:
Estrogen: decreases triglyceride synthesis and enhances HDL cholesterol levels, potentially through promoting fatty acid oxidation and inhibiting lipogenesis.
Testosterone: stimulates de novo lipogenesis and fat accumulation; the newly made fatty acids are incorporated into triglycerides for energy storage.
Adrenaline: stimulates lipolysis and inhibits lipogenesis via AMPK phosphorylation, influencing lipid turnover and accumulation in adipose tissue.
Lipid metabolism disorders
Lipid metabolism disorders (including inborn errors of lipid metabolism) are illnesses in which trouble occurs in breaking down or synthesizing fats (or fat-like substances). Lipid metabolism disorders are associated with an increase in the concentrations of plasma lipids in the blood, such as LDL cholesterol, VLDL, and triglycerides, which most commonly leads to cardiovascular disease. A good deal of the time these disorders are hereditary, meaning they are conditions passed from parent to child through their genes. Gaucher's disease (types I, II, and III), Niemann–Pick disease, Tay–Sachs disease, and Fabry's disease are all diseases in which those afflicted can have a disorder of their body's lipid metabolism. Rarer diseases concerning a disorder of lipid metabolism are sitosterolemia, Wolman's disease, Refsum's disease, and cerebrotendinous xanthomatosis.
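The input counts in the fatty acid biosynthesis reaction above follow a simple pattern that generalizes to other even-numbered saturated chains. The sketch below is illustrative only (the function name is made up) and assumes the usual textbook accounting of one ATP per malonyl-CoA and two NADPH per elongation cycle; it reproduces the palmitate numbers quoted above:

def synthesis_inputs(n_carbons: int):
    """Acetyl-CoA, ATP and NADPH needed to build a saturated fatty acid with n carbons."""
    assert n_carbons >= 4 and n_carbons % 2 == 0
    acetyl_coa = n_carbons // 2          # one 2-carbon unit per pair of carbons
    atp = acetyl_coa - 1                 # one ATP per malonyl-CoA (every unit except the primer)
    nadph = 2 * (acetyl_coa - 1)         # two reductions per elongation cycle
    return acetyl_coa, atp, nadph

print(synthesis_inputs(16))              # (8, 7, 14), matching the palmitate equation above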
Types of lipids
The types of lipids involved in lipid metabolism include:
Membrane lipids:
Phospholipids: Phospholipids are a major component of the lipid bilayer of the cell membrane and are found in many parts of the body.
Sphingolipids: Sphingolipids are mostly found in the cell membrane of neural tissue.
Glycolipids: The main role of glycolipids is to maintain lipid bilayer stability and facilitate cell recognition.
Glycerophospholipids: Neural tissue (including the brain) contains high amounts of glycerophospholipids.
Other types of lipids:
Cholesterol: Cholesterol is the main precursor for different hormones in our body, such as progesterone and testosterone. The main function of cholesterol is to control cell membrane fluidity.
Steroids – see also steroidogenesis: Steroids are important cell signaling molecules.
Triacylglycerols (fats) – see also lipolysis and lipogenesis: Triacylglycerols are the major form of energy storage in the human body.
Fatty acids – see also fatty acid metabolism: Fatty acids are precursors used for membrane lipid and cholesterol biosynthesis. They are also used for energy.
Bile salts: Bile salts are secreted from the liver and facilitate lipid digestion in the small intestine.
Eicosanoids: Eicosanoids are made from fatty acids in the body and are used for cell signaling.
Ketone bodies: Ketone bodies are made from fatty acids in the liver. They serve as an energy source during periods of starvation or low food intake.
References
Lipid metabolism
Lipid metabolism
[ "Chemistry" ]
2,807
[ "Lipid biochemistry", "Lipid metabolism", "Metabolism" ]
12,510,214
https://en.wikipedia.org/wiki/Carbochemistry
Carbochemistry is the branch of chemistry that studies the transformation of coal (bituminous coal, coal tar, anthracite, lignite, graphite, and charcoal) into useful products and raw materials. The processes used in carbochemistry include degasification processes such as carbonization and coking, gasification processes, and liquefaction processes.
History
The beginnings of carbochemistry go back to the 16th century. At that time, large quantities of charcoal were needed for the smelting of iron ores. Since the production of charcoal required large amounts of slowly regenerating wood, the use of coal was studied. The use of pure coal was difficult because of the amount of liquid and solid by-products that were generated. To improve its handling, the coal was initially treated like wood in kilns to produce coke. Around 1684, John Clayton discovered that coal gas generated from coal was combustible. He described his discovery in the Philosophical Transactions of the Royal Society.
See also
Bergius process
Clean coal technology
Coal tar
Fischer–Tropsch process
Petrochemistry
Synthetic Liquid Fuels Program
Coking factory
References
Chemical engineering
Carbochemistry
[ "Chemistry", "Engineering" ]
243
[ "Chemical engineering", "nan" ]
12,511,846
https://en.wikipedia.org/wiki/Bunyakovsky%20conjecture
The Bunyakovsky conjecture (or Bouniakowsky conjecture) gives a criterion for a polynomial f(x) in one variable with integer coefficients to give infinitely many prime values in the sequence f(1), f(2), f(3), …. It was stated in 1857 by the Russian mathematician Viktor Bunyakovsky. The following three conditions are necessary for f(x) to have the desired prime-producing property:
1. The leading coefficient of f(x) is positive,
2. the polynomial is irreducible over the rationals (and integers), and
3. the values f(1), f(2), f(3), … have no common factor greater than 1. (In particular, the coefficients of f(x) should be relatively prime. It is not necessary for the values f(n) to be pairwise relatively prime.)
Bunyakovsky's conjecture is that these conditions are sufficient: if f(x) satisfies (1)–(3), then f(n) is prime for infinitely many positive integers n. A seemingly weaker yet equivalent statement to Bunyakovsky's conjecture is that for every integer polynomial f(x) that satisfies (1)–(3), f(n) is prime for at least one positive integer n: for then, since the translated polynomial f(x + n) still satisfies (1)–(3), the weaker statement yields a further positive integer m with f(n + m) prime, and repeating the argument shows that f(n) is indeed prime for infinitely many positive integers n. Bunyakovsky's conjecture is a special case of Schinzel's hypothesis H, one of the most famous open problems in number theory.
Discussion of three conditions
The first condition is necessary because if the leading coefficient is negative then f(x) < 0 for all large x, and thus f(n) is not a (positive) prime number for large positive integers n. (This merely satisfies the sign convention that primes are positive.) The second condition is necessary because if f(x) = g(x)h(x) where the polynomials g(x) and h(x) have integer coefficients, then we have f(n) = g(n)h(n) for all integers n; but g(x) and h(x) take the values 0 and ±1 only finitely many times, so f(n) is composite for all large n. The second condition also fails for polynomials that are reducible over the rationals even when no factorization with integer coefficients exists: such an integer-valued polynomial can be written as a product of two lower-degree integer polynomials divided by a fixed integer constant, and for its value to be prime one of those two factors must divide that constant, which can happen for only finitely many arguments, so once again only finitely many prime values occur.
The third condition, that the numbers f(n) have gcd 1, is obviously necessary, but is somewhat subtle, and is best understood by a counterexample. Consider f(x) = x^2 + x + 2, which has positive leading coefficient and is irreducible, and the coefficients are relatively prime; however f(n) is even for all integers n, and so is prime only finitely many times (namely where it takes the value 2, at n = 0 and n = −1). In practice, the easiest way to verify the third condition is to find one pair of positive integers m and n such that f(m) and f(n) are relatively prime. In general, for any integer-valued polynomial of degree d, the gcd of all its values is determined by the values at any d + 1 consecutive integers: it equals gcd(f(m), f(m + 1), …, f(m + d)) for any integer m. In the example above, gcd(f(0), f(1), f(2)) = gcd(2, 4, 8) = 2, which implies that x^2 + x + 2 has even values on the integers. Alternatively, when an integer-valued polynomial is written in the basis of binomial coefficient polynomials, f(x) = c0 + c1·C(x, 1) + c2·C(x, 2) + … + cd·C(x, d) (where C(x, k) denotes the polynomial x(x − 1)…(x − k + 1)/k!), each coefficient ck is an integer, and the gcd of all the values of f equals gcd(c0, c1, …, cd). In the example above this reads x^2 + x + 2 = 2·C(x, 2) + 2·C(x, 1) + 2, and the coefficients on the right side of the equation have gcd 2. Using this gcd formula, it can be proved that the gcd of the values is 1 if and only if there are positive integers m and n such that f(m) and f(n) are relatively prime.
Examples
A simple quadratic polynomial
Some prime values of the polynomial f(x) = x^2 + 1 occur at x = 1, 2, 4, 6, 10, 14, 16, 20, 24, 26, …, giving the primes 2, 5, 17, 37, 101, 197, 257, 401, 577, 677, …. (Both the values of x and the resulting primes are catalogued as sequences in the OEIS.)
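The gcd test and the x^2 + 1 example above are easy to check by direct computation. The sketch below is illustrative only (the helper names are made up) and uses plain trial division for primality:

from math import gcd

def f(x):
    return x * x + 1

def is_prime(m):
    """Simple trial-division primality test, sufficient for small values."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# f has degree 2, so the gcd of all its values equals the gcd at any 3 consecutive integers.
print(gcd(gcd(f(1), f(2)), f(3)))        # 1, so the third condition holds

print([(x, f(x)) for x in range(1, 30) if is_prime(f(x))])
# [(1, 2), (2, 5), (4, 17), (6, 37), (10, 101), (14, 197), (16, 257), (20, 401), (24, 577), (26, 677)]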
That n^2 + 1 should be prime infinitely often is a problem first raised by Euler, and it is also the fifth Hardy–Littlewood conjecture and the fourth of Landau's problems. Despite the extensive numerical evidence, it is not known that this sequence extends indefinitely.
Cyclotomic polynomials
The cyclotomic polynomials Φk(x) for k = 1, 2, 3, … satisfy the three conditions of Bunyakovsky's conjecture, so for all k, there should be infinitely many natural numbers n such that Φk(n) is prime. It can be shown that if for all k there exists an integer n > 1 with Φk(n) prime, then for all k there are infinitely many natural numbers n with Φk(n) prime. The following sequence gives the smallest natural number n > 1 such that Φk(n) is prime, for k = 1, 2, 3, …:
3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 2, 2, 6, 2, 4, 3, 2, 10, 2, 22, 2, 2, 4, 6, 2, 2, 2, 2, 2, 14, 3, 61, 2, 10, 2, 14, 2, 15, 25, 11, 2, 5, 5, 2, 6, 30, 11, 24, 7, 7, 2, 5, 7, 19, 3, 2, 2, 3, 30, 2, 9, 46, 85, 2, 3, 3, 3, 11, 16, 59, 7, 2, 2, 22, 2, 21, 61, 41, 7, 2, 2, 8, 5, 2, 2, ...
This sequence is known to contain some large terms: the 545th term is 2706, the 601st is 2061, and the 943rd is 2042. This case of Bunyakovsky's conjecture is widely believed, but again it is not known that the sequence extends indefinitely. Usually, there is an integer n between 2 and φ(k) (where φ is Euler's totient function, so φ(k) is the degree of Φk(x)) such that Φk(n) is prime, but there are exceptions; the first few exceptional k are:
1, 2, 25, 37, 44, 68, 75, 82, 99, 115, 119, 125, 128, 159, 162, 179, 183, 188, 203, 213, 216, 229, 233, 243, 277, 289, 292, ...
Partial results: only Dirichlet's theorem
To date, the only case of Bunyakovsky's conjecture that has been proved is that of polynomials of degree 1. This is Dirichlet's theorem, which states that when a and m ≥ 1 are relatively prime integers, there are infinitely many prime numbers of the form a + mn. This is Bunyakovsky's conjecture for the linear polynomial f(x) = mx + a. The third condition in Bunyakovsky's conjecture for a linear polynomial mx + a is equivalent to m and a being relatively prime. No single case of Bunyakovsky's conjecture for degree greater than 1 is proved, although numerical evidence in higher degree is consistent with the conjecture.
Generalized Bunyakovsky conjecture
Given k polynomials f1(x), …, fk(x) with positive degrees and integer coefficients, each satisfying the three conditions, assume that for any prime p there is an n such that none of the values f1(n), …, fk(n) is divisible by p. Given these assumptions, it is conjectured that there are infinitely many positive integers n such that all values of these polynomials at n are prime. This conjecture is equivalent to the generalized Dickson conjecture and Schinzel's hypothesis H.
See also
Integer-valued polynomial
Cohn's irreducibility criterion
Schinzel's hypothesis H
Bateman–Horn conjecture
Hardy and Littlewood's conjecture F
References
Bibliography
Conjectures about prime numbers Unsolved problems in number theory
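The cyclotomic-polynomial sequence discussed above (the smallest n > 1 making the k-th cyclotomic polynomial prime at n) can be recomputed directly. This sketch is illustrative only: it assumes SymPy is available for the cyclotomic polynomials and primality testing, and the search limit is an arbitrary cutoff:

from sympy import cyclotomic_poly, isprime
from sympy.abc import x

def smallest_prime_argument(k: int, limit: int = 10_000):
    """Smallest n > 1 such that the k-th cyclotomic polynomial is prime at n, or None."""
    poly = cyclotomic_poly(k, x)
    for n in range(2, limit):
        if isprime(int(poly.subs(x, n))):
            return n
    return None

print([smallest_prime_argument(k) for k in range(1, 13)])
# [3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2], matching the first terms of the sequence above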
Bunyakovsky conjecture
[ "Mathematics" ]
1,495
[ "Unsolved problems in mathematics", "Mathematical problems", "Unsolved problems in number theory", "Number theory" ]
1,162,543
https://en.wikipedia.org/wiki/Quantum%20efficiency
The term quantum efficiency (QE) may apply to incident photon to converted electron (IPCE) ratio of a photosensitive device, or it may refer to the TMR effect of a magnetic tunnel junction. This article deals with the term as a measurement of a device's electrical sensitivity to light. In a charge-coupled device (CCD) or other photodetector, it is the ratio between the number of charge carriers collected at either terminal and the number of photons hitting the device's photoreactive surface. As a ratio, QE is dimensionless, but it is closely related to the responsivity, which is expressed in amps per watt. Since the energy of a photon is inversely proportional to its wavelength, QE is often measured over a range of different wavelengths to characterize a device's efficiency at each photon energy level. For typical semiconductor photodetectors, QE drops to zero for photons whose energy is below the band gap. A photographic film typically has a QE of much less than 10%, while CCDs can have a QE of well over 90% at some wavelengths. QE of solar cells A solar cell's quantum efficiency value indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. If the cell's quantum efficiency is integrated over the whole solar electromagnetic spectrum, one can evaluate the amount of current that the cell will produce when exposed to sunlight. The ratio between this energy-production value and the highest possible energy-production value for the cell (i.e., if the QE were 100% over the whole spectrum) gives the cell's overall energy conversion efficiency value. Note that in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon. Types Two types of quantum efficiency of a solar cell are often considered: External quantum efficiency (EQE) is the ratio of the number of charge carriers collected by the solar cell to the number of photons of a given energy shining on the solar cell from outside (incident photons). Internal quantum efficiency (IQE) is the ratio of the number of charge carriers collected by the solar cell to the number of photons of a given energy that shine on the solar cell from outside and are absorbed by the cell. The IQE is always larger than the EQE in the visible spectrum. A low IQE indicates that the active layer of the solar cell is unable to make good use of the photons, most likely due to poor carrier collection efficiency. To measure the IQE, one first measures the EQE of the solar device, then measures its transmission and reflection, and combines these data to infer the IQE. The external quantum efficiency therefore depends on both the absorption of light and the collection of charges. Once a photon has been absorbed and has generated an electron-hole pair, these charges must be separated and collected at the junction. A "good" material avoids charge recombination. Charge recombination causes a drop in the external quantum efficiency. The ideal quantum efficiency graph has a square shape, where the QE value is fairly constant across the entire spectrum of wavelengths measured. However, the QE for most solar cells is reduced because of the effects of recombination, where charge carriers are not able to move into an external circuit. The same mechanisms that affect the collection probability also affect the QE. 
For example, modifying the front surface can affect carriers generated near the surface. Highly doped front surface layers can also cause 'free carrier absorption', which reduces QE at longer wavelengths. And because high-energy (blue) light is absorbed very close to the surface, considerable recombination at the front surface will affect the "blue" portion of the QE. Similarly, lower-energy (green) light is absorbed in the bulk of a solar cell, and a low diffusion length will affect the collection probability from the solar cell bulk, reducing the QE in the green portion of the spectrum. Generally, solar cells on the market today do not produce much electricity from ultraviolet and infrared light (<400 nm and >1100 nm wavelengths, respectively); these wavelengths of light are either filtered out or are absorbed by the cell, thus heating the cell. That heat is wasted energy, and could damage the cell.
QE of image sensors
Quantum efficiency (QE) is the fraction of photon flux that contributes to the photocurrent in a photodetector or a pixel. Quantum efficiency is one of the most important parameters used to evaluate the quality of a detector and is often called the spectral response to reflect its wavelength dependence. It is defined as the number of signal electrons created per incident photon. In some cases it can exceed 100% (i.e. when more than one electron is created per incident photon).
EQE mapping
Conventional measurement of the EQE gives the efficiency of the overall device. However, it is often useful to have a map of the EQE over a large area of the device. This mapping provides an efficient way to visualize the homogeneity and/or the defects in the sample. It was realized by researchers from the Institute of Research and Development on Photovoltaic Energy (IRDEP), who calculated the EQE mapping from electroluminescence measurements taken with a hyperspectral imager.
Spectral responsivity
Spectral responsivity is a similar measurement, but it has different units: amperes per watt (A/W), i.e. how much current comes out of the device per unit of incident light power. Responsivity is ordinarily specified for monochromatic light (i.e. light of a single wavelength). Both the quantum efficiency and the responsivity are functions of the photons' wavelength (indicated by the subscript λ). To convert from responsivity (Rλ, in A/W) to QEλ (on a scale 0 to 1):
QEλ = Rλ · h·c / (λ · e) ≈ (1240 W·nm/A) · Rλ / λ,
where λ is the wavelength in nm (SI units are used in the exact expression), h is the Planck constant, c is the speed of light in vacuum, and e is the elementary charge. Note that the unit W/A (watts per ampere) is equivalent to V (volts).
Determination
QE = Ne / Nph, where Ne = number of electrons produced and Nph = number of photons absorbed. Assuming each photon absorbed in the depletion layer produces a viable electron-hole pair, and all other photons do not,
Nph = Pabs · t · λ / (h·c), so that QE = Ne · h·c / (Pabs · t · λ),
where t is the measurement time (in seconds), Pinc = incident optical power in watts, and Pabs = optical power absorbed in the depletion layer, also in watts; using Pinc in place of Pabs gives the external rather than the internal quantum efficiency.
See also
Counting efficiency
DQE (imaging)
Solar-cell efficiency
References
Engineering ratios Photodetectors Physical quantities Quantum electronics Spectroscopy
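The responsivity-to-quantum-efficiency conversion given above is straightforward to apply numerically. The sketch below is illustrative only (the function names are made up); it implements QEλ = Rλ·h·c/(λ·e) with standard physical constants:

H = 6.626_070_15e-34      # Planck constant, J*s
C = 2.997_924_58e8        # speed of light in vacuum, m/s
E = 1.602_176_634e-19     # elementary charge, C

def responsivity_to_qe(responsivity_a_per_w: float, wavelength_nm: float) -> float:
    """Convert responsivity (A/W) at the given wavelength (nm) to quantum efficiency (0 to 1)."""
    return responsivity_a_per_w * H * C / (wavelength_nm * 1e-9 * E)

def qe_to_responsivity(qe: float, wavelength_nm: float) -> float:
    """Inverse conversion: quantum efficiency to responsivity in A/W."""
    return qe * wavelength_nm * 1e-9 * E / (H * C)

# e.g. a detector with a responsivity of 0.5 A/W at 850 nm has a quantum efficiency of about 73%
print(round(responsivity_to_qe(0.5, 850.0), 2))   # 0.73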
Quantum efficiency
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,423
[ "Physical phenomena", "Molecular physics", "Physical quantities", "Quantum electronics", "Spectrum (physical sciences)", "Metrics", "Instrumental analysis", "Quantity", "Engineering ratios", "Quantum mechanics", "Condensed matter physics", "Nanotechnology", "Spectroscopy", "Physical proper...